MS SQL SERVER: Microsoft Neural Network and Logistic Regression


- 1. Microsoft Neural Network and Logistic Regression
- 2. Overview
  Microsoft Neural Network and Logistic Regression overview
  DMX Queries
  Model Content
  Principles of the Microsoft Neural Network Algorithm
  Algorithm Parameters
- 3. Microsoft Neural Network Overview
  The Microsoft Neural Network algorithm bases its analysis on two factors:
  Any or all of the inputs may be related somehow to any or all of the outputs, and the network must consider this in training.
  Different combinations of inputs may be related differently to the outputs.
- 4. Microsoft Neural Network Overview
  The relationships detected by the Microsoft Neural Network algorithm may span up to two levels.
  In the single-level case, input facts are connected directly to the outputs.
  In the two-level case, input combinations effectively become new inputs, which are then connected to the outputs.
  The level that transforms certain input combinations into new inputs is referred to as a hidden layer.
- 5. Microsoft Logistic Regression Overview
  The Microsoft Logistic Regression algorithm uses a single level of relationships to predict the probability of events based on inputs.
  It is implemented by forcing the hidden layer of a neural network to have zero nodes; this constraint is manifest only in the internal structure of the algorithm.
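The zero-hidden-node idea can be sketched in a few lines of Python. This is an illustration of the general principle, not Microsoft's implementation; the function name and bias term are assumptions. With no hidden layer, each prediction reduces to a sigmoid applied to a weighted sum of the inputs, which is exactly logistic regression:

```python
import math

def logistic_predict(inputs, weights, bias):
    """Single-level network: inputs connect directly to the output.

    With the hidden layer forced to zero nodes, the prediction reduces
    to logistic regression: a sigmoid of a weighted sum of the inputs.
    """
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-z))
```

For example, `logistic_predict([1.0, 0.0], [0.8, -0.4], 0.0)` returns a probability strictly between 0 and 1, and a weighted sum of zero yields exactly 0.5.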
- 6. DMX Queries
  The Microsoft Neural Network algorithm supports most of the tasks that Microsoft Decision Trees can perform, including classification and regression.
  The next slide shows queries to create and train a mining structure for employee information data.
- 7. DMX Queries
  A mining structure holding employee data and technology-usage information:

      CREATE MINING STRUCTURE EmployeeStructure (
          EmployeeID LONG KEY,
          Gender TEXT DISCRETE,
          [Marital Status] TEXT DISCRETE,
          Age LONG CONTINUOUS,
          [Education Level] TEXT DISCRETE,
          [Home Ownership] TEXT DISCRETE,
          TechnologyUsage TABLE
          (
              [Technology] TEXT KEY
          )
      )
      GO
- 8. DMX Queries
  Query that trains the mining structure with employee data and technology-usage information:

      INSERT INTO MINING STRUCTURE [EmployeeStructure]
      (
          [EmployeeID], [Gender], [Marital Status], [Age],
          [Education Level], [Home Ownership],
          [TechnologyUsage]( SKIP, [Technology] )
      )
      SHAPE
      {
          OPENQUERY ([Chapter 12],
              'SELECT [EmployeeID], [Gender], [Marital Status], [Age],
                      [Education Level], [Home Ownership]
               FROM [Customers] ORDER BY [EmployeeID]')
      }
      APPEND
      (
          {
              OPENQUERY ([Chapter 12],
                  'SELECT [EmployeeID], [Technology]
                   FROM [Technology] ORDER BY [EmployeeID]')
          }
          RELATE [EmployeeID] TO [EmployeeID]
      )
      AS [TechUsage]
      GO
- 9. DMX Queries
  Query to build a neural network mining model that predicts both discrete targets (Education Level, Home Ownership) and a continuous target (Age):

      ALTER MINING STRUCTURE EmployeeStructure
      ADD MINING MODEL VariousPredictions (
          EmployeeID,
          Gender,
          [Marital Status],
          [Age] PREDICT,
          [Education Level] PREDICT,
          [Home Ownership] PREDICT
      )
      USING Microsoft_Neural_Network
      GO

      INSERT INTO VariousPredictions
      GO
- 10. DMX Queries
  You can also include a nested table in a neural network model, as long as it is not marked as predictable.
  Query to predict Age based on the employee's demographic data, as well as the technology items that the employee is currently using:

      ALTER MINING STRUCTURE EmployeeStructure
      ADD MINING MODEL NestedTableInput (
          EmployeeID,
          Gender,
          [Marital Status],
          [Age] PREDICT,
          [Education Level],
          [Home Ownership],
          TechnologyUsage
          (
              Technology
          )
      )
      USING Microsoft_Neural_Network
      GO

      INSERT INTO NestedTableInput
      GO
- 11. Model Content
  A neural network model has one or more subnets.
  The model content describes the topologies of these subnets.
  It also stores the weight of each edge of the neural network.
- 12. Model Content
- 13. Understanding the Structure of a Neural Network Model
  Each neural network model has a single parent node that represents the model and its metadata, and a marginal statistics node that provides descriptive statistics about the input attributes.
  Underneath these two nodes there are at least two more nodes, and there might be many more, depending on how many predictable attributes the model has.
  The first node always represents the top of the input layer. Beneath it are input nodes that contain the actual input attributes and their values.
  Each successive node contains a different subnetwork. Each subnetwork always contains a hidden layer and an output layer for that subnetwork.
- 14. Principles of the Microsoft Neural Network Algorithm
  The origin of the neural network algorithm can be traced to the 1940s, when two researchers, Warren McCulloch and Walter Pitts, tried to build a model to simulate how biological neurons work.
  Neural networks mainly address the classification and regression tasks of data mining. Like decision trees, neural networks can find nonlinear relationships between input attributes and predictable attributes.
  Neural networks support both discrete and continuous outputs.
- 15. How the Algorithm Works
  The Microsoft Neural Network algorithm creates a network that is composed of up to three layers of neurons.
  Input layer: Input neurons define all the input attribute values for the data mining model, and their probabilities.
  Hidden layer: Hidden neurons receive inputs from input neurons and provide outputs to output neurons. The hidden layer is where the various probabilities of the inputs are assigned weights. The greater the weight assigned to an input, the more important the value of that input is.
  Output layer: Output neurons represent predictable attribute values for the data mining model.
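A minimal forward pass through these three layers can be sketched in Python. The sigmoid activation and function names here are illustrative assumptions (the slide does not specify the activation functions used internally); only the layer-to-layer wiring comes from the slide:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: each hidden neuron combines every input value;
    # a larger weight means that input matters more to that neuron.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: each output neuron combines every hidden activation.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]
```

Each inner list of weights belongs to one neuron, so `hidden_weights` has one row per hidden neuron and `output_weights` one row per output neuron.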
- 16. Neural Network Model
- 17. Backpropagation
  Backpropagation, the core process of the algorithm, involves the following steps:
  1. Randomly assign values to all the weights in the network at the initial stage (usually ranging from -1.0 to 1.0).
  2. For each training example, calculate the outputs based on the current weights in the network.
  3. Calculate the errors for each output and hidden neuron in the network, then update the weights.
  4. Repeat step 2 until a stopping condition is satisfied.
- 18. Algorithm Parameters
  The Microsoft Neural Network algorithm supports several parameters that affect the behavior, performance, and accuracy of the resulting mining model.
  - MAXIMUM_INPUT_ATTRIBUTES determines the maximum number of input attributes that can be supplied to the algorithm before feature selection is employed. Setting this value to 0 disables feature selection for input attributes. The default value is 255.
  - MAXIMUM_OUTPUT_ATTRIBUTES determines the maximum number of output attributes that can be supplied to the algorithm before feature selection is employed. Setting this value to 0 disables feature selection for output attributes. The default value is 255.
- 19. Algorithm Parameters
  - MAXIMUM_STATES specifies the maximum number of attribute states that the algorithm supports. If an attribute has more states than this maximum, the algorithm uses the attribute's most popular states and treats the remaining states as Missing. The default value is 100.
  - SAMPLE_SIZE is the upper limit on the number of cases used for training. The default value is 10000.
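The MAXIMUM_STATES behavior can be illustrated with a small Python sketch. The function name and tie-breaking between equally popular states are assumptions; only the keep-most-popular, map-the-rest-to-Missing behavior comes from the slide:

```python
from collections import Counter

def cap_states(values, maximum_states=100):
    # Keep only the most popular attribute states; any value outside
    # that set is treated as the Missing state.
    keep = {state for state, _ in Counter(values).most_common(maximum_states)}
    return [v if v in keep else "Missing" for v in values]
```

For example, with `maximum_states=2`, the values `["a", "a", "a", "b", "b", "c"]` keep `a` and `b`, and `c` is replaced by `"Missing"`.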
- 20. Algorithm Parameters
  - HOLDOUT_PERCENTAGE specifies the percentage of holdout data. The holdout data is used to validate accuracy during training. The default value is 0.1.
  - HOLDOUT_SEED is an integer that specifies the seed for selecting the holdout data set.
- 21. Algorithm Parameters
  - HIDDEN_NODE_RATIO specifies the ratio of hidden neurons to input and output neurons. The following formula determines the initial number of neurons in the hidden layer:
    HIDDEN_NODE_RATIO * SQRT(Total input neurons * Total output neurons)
    The default value is 4.0.
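The HIDDEN_NODE_RATIO formula can be evaluated directly. Whether the product is rounded, truncated, or rounded up is not stated on the slide, so the rounding below is an assumption:

```python
import math

def initial_hidden_neurons(total_inputs, total_outputs, hidden_node_ratio=4.0):
    # HIDDEN_NODE_RATIO * SQRT(Total input neurons * Total output neurons),
    # per the formula on the slide; the default ratio is 4.0.
    return round(hidden_node_ratio * math.sqrt(total_inputs * total_outputs))
```

For example, with 4 input neurons, 1 output neuron, and the default ratio, the initial hidden layer has 4.0 * sqrt(4 * 1) = 8 neurons.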
- 22. Summary
  Microsoft Neural Network and Logistic Regression overview
  DMX Queries
  Model Content
  Principles of the Microsoft Neural Network Algorithm
  Algorithm Parameters
- 23. Visit More Self-Help Tutorials
  Pick a tutorial of your choice and browse through it at your own pace.
  The tutorials section is free and self-guiding and does not include additional support.
  Visit us at www.dataminingtools.net
