2. Topics Covered
We will see how knowledge can be represented:
- Decision tables
- Decision trees
- Classification and association rules
- Dealing with complex rules involving exceptions and relations
- Trees for numeric prediction
- Instance-based representation
- Clustering
3. Decision Tables
- The simplest way to represent the output is to use the same form in which the input was represented.
- Selection of attributes is crucial: only attributes that contribute to the result should be part of the table (see the sketch below).
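A minimal sketch of the idea (the weather attributes outlook and humidity are assumed for illustration, not prescribed by the deck): the learned output is just a lookup table over the contributing attributes.

```python
# Decision table: output represented the same way as the input,
# keyed only on the attributes that contribute to the result.
decision_table = {
    ("sunny", "high"):    "no",
    ("sunny", "normal"):  "yes",
    ("overcast", "high"): "yes",
    ("rainy", "high"):    "no",
}

def classify(outlook, humidity):
    return decision_table.get((outlook, humidity), "unknown")

print(classify("sunny", "normal"))  # yes
```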
4. Decision Trees
The divide-and-conquer approach gives us results in the form of a decision tree.
5. Nodes in a decision tree test a particular attribute.
- Leaf nodes give a classification that applies to all instances that reach the leaf.
- The number of children emerging from a node depends on the type of attribute being tested at that node.
- For a nominal attribute, the number of splits is generally the number of distinct values of the attribute; for example, outlook yields three splits because it has three possible values.
- For a numeric attribute, we generally have a two-way split representing values less than or greater than some threshold; the attribute humidity in the previous example is treated this way (sketched below).
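A minimal sketch of such a tree, assuming the classic weather data (outlook, humidity, windy); the threshold and class labels are illustrative rather than taken from the deck:

```python
# Decision tree as nested attribute tests: a three-way split on the
# nominal attribute outlook, a two-way split on numeric humidity.
def classify(outlook, humidity, windy):
    if outlook == "sunny":                        # nominal: one branch per value
        return "no" if humidity > 75 else "yes"   # numeric: two-way split
    if outlook == "overcast":
        return "yes"                              # leaf: applies to every instance reaching it
    if outlook == "rainy":
        return "no" if windy else "yes"
    return "unknown"

print(classify("sunny", 70, False))  # yes
```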
6. Classification Rules
- A popular alternative to decision trees.
- The antecedent, or precondition, of a rule is a series of tests (like the tests at the nodes of a decision tree).
- The consequent, or conclusion, gives the class or classes that apply to instances covered by that rule (see the sketch below).
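As a hedged sketch of this antecedent/consequent structure (the specific rules are illustrative assumptions), each rule can be stored as a list of attribute tests paired with a class:

```python
# Each rule: (antecedent = list of attribute tests, consequent = class).
rules = [
    ([("outlook", "sunny"), ("humidity", "high")], "no"),
    ([("outlook", "overcast")], "yes"),
]

def classify(instance):
    for antecedent, consequent in rules:
        if all(instance.get(attr) == value for attr, value in antecedent):
            return consequent
    return None  # no rule covers this instance

print(classify({"outlook": "sunny", "humidity": "high"}))  # no
```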
7. Rules vs. Trees: The Replicated Subtree Problem
Sometimes the transformation of rules into a tree is impractical. Consider the following classification rules and the corresponding decision tree (sketched below):
- If a and b then x
- If c and d then x
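A sketch of why the tree replicates work (boolean attributes a..d and a default class are assumptions for illustration): a tree must test one attribute first, so the subtree testing c and d has to be copied into both branches of the test on a.

```python
# The two rules above, forced into tree form: the c/d subtree is
# replicated because "if c and d then x" holds whatever a's value is.
def tree(a, b, c, d):
    if a:
        if b:
            return "x"
        if c and d:        # replicated subtree
            return "x"
        return "default"
    if c and d:            # the same subtree again
        return "x"
    return "default"
```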
8. Advantages of rules over trees
- Rules are usually more compact than trees, as we observed with the replicated subtree problem.
- New rules can be added to an existing rule set without disturbing the ones already there, whereas a tree may require complete reshaping.
Advantages of trees over rules
- Because of the redundancy present in a tree, ambiguity is avoided.
- An instance might be encountered that the rules fail to classify; this is usually not the case with trees.
9. Disjunctive Normal Form
- A rule in disjunctive normal form follows the closed-world assumption.
- The closed-world assumption avoids ambiguity.
- Such rules are written as logical expressions combining disjunctive (OR) and conjunctive (AND) conditions, as in the sketch below.
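For instance, the two rules from the replicated-subtree example collapse into a single expression in disjunctive normal form, an OR of ANDs (reusing the illustrative attributes a..d):

```python
# Disjunctive normal form: under the closed-world assumption,
# x holds exactly when this expression is true, and not otherwise.
def is_x(a, b, c, d):
    return (a and b) or (c and d)
```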
10. Association Rules
- Association rules can predict any attribute, not just the class, and they can predict combinations of attributes.
- To select association rules that apply to a large number of instances and have high accuracy, we use the following parameters (computed in the sketch below):
- Coverage/support: the number of instances the rule predicts correctly.
- Accuracy/confidence: the number of instances it predicts correctly, as a proportion of all instances to which it applies.
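A minimal sketch of computing both measures for the hypothetical rule "if humidity = normal then play = yes" (the instances are made up for illustration):

```python
instances = [
    {"humidity": "normal", "play": "yes"},
    {"humidity": "normal", "play": "yes"},
    {"humidity": "normal", "play": "no"},
    {"humidity": "high",   "play": "no"},
]

applies = [i for i in instances if i["humidity"] == "normal"]
correct = [i for i in applies if i["play"] == "yes"]

support = len(correct)                     # coverage: instances predicted correctly
confidence = len(correct) / len(applies)   # accuracy: correct / applicable
print(support, confidence)                 # 2 0.666...
```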
11. Rules with Exceptions
- For classification rules, exceptions can be expressed using the 'except' keyword (a sketch follows).
- We can have exceptions to exceptions, and so on.
- Exceptions allow a rule set to scale up well.
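A hedged illustration (the specific rule is an assumption reusing the weather attributes, not the deck's original example):

```python
# "if outlook = sunny then play = no,
#    except if humidity = normal then play = yes"
def play(outlook, humidity):
    if outlook == "sunny":
        if humidity == "normal":   # the exception overrides the default
            return "yes"
        return "no"                # the base rule
    return "unknown"               # other rules would cover remaining cases
```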
12. Rules with Relations
- We generally use propositional rules, where we compare an attribute with a constant.
- Relational rules are those which express a relationship between attributes.
- Both forms are sketched after the list of standard relations below.
13. Standard Relations:
- Equality (=) and inequality (!=) for nominal attributes
- Comparison operators such as < and > for numeric attributes
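A sketch contrasting the two kinds of rule (humidity, width, and height are illustrative attribute names, not taken from the deck):

```python
def propositional(instance):
    # compares an attribute with a constant
    return instance["humidity"] > 75

def relational(instance):
    # compares one attribute with another
    return instance["width"] > instance["height"]
```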
14. Trees for Numeric Prediction
- Decision trees can also be used for numeric prediction.
- The right-hand side of the rule, or the leaf of the tree, contains a numeric value that is the average of all the training-set values to which the rule or leaf applies.
- Prediction of numeric quantities is called regression, so trees for numeric prediction are called regression trees (see the sketch below).
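A minimal sketch using scikit-learn (the library choice and the synthetic data are assumptions, not part of the deck):

```python
from sklearn.tree import DecisionTreeRegressor

X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = [1.1, 0.9, 1.0, 5.2, 4.8, 5.0]

# Each leaf predicts the mean of the training values that reach it.
tree = DecisionTreeRegressor(max_depth=1).fit(X, y)
print(tree.predict([[2.5], [11.5]]))  # approximately [1.0, 5.0]
```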
15. Instance-Based Learning
- In instance-based learning we do not create rules; the stored instances themselves are used directly.
- All the real work is done when a new instance is classified; there is no pre-processing of the training set.
- The new instance is compared with the existing ones using a distance metric, and the closest existing instance is used to assign its class to the new one.
16.
- Sometimes more than one nearest neighbor is used: the majority class of the closest k neighbors is assigned to the new instance. This technique is called the k-nearest-neighbor method.
- The distance metric should suit the data set; the most popular is Euclidean distance.
- For nominal attributes the distance metric has to be defined manually, for example: if two attribute values are equal the distance is 0, otherwise 1 (see the sketch below).
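A minimal k-nearest-neighbor sketch with a mixed metric, Euclidean on numeric attributes and 0/1 on nominal ones (the training data are hypothetical):

```python
from collections import Counter
import math

def distance(a, b):
    total = 0.0
    for x, y in zip(a, b):
        if isinstance(x, str):               # nominal: 0 if equal, else 1
            total += 0.0 if x == y else 1.0
        else:                                # numeric: squared difference
            total += (x - y) ** 2
    return math.sqrt(total)

def knn_classify(train, new_instance, k=3):
    # train is a list of (attribute_tuple, class_label) pairs
    nearest = sorted(train, key=lambda pair: distance(pair[0], new_instance))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((70, "sunny"), "no"), ((65, "rainy"), "yes"), ((68, "overcast"), "yes")]
print(knn_classify(train, (67, "sunny")))  # yes
```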
17. Clusters
When clusters rather than a classifier are learned, the output takes the form of a diagram showing how the instances fall into clusters. The output can be of four types:
- A clear demarcation of instances into different clusters.
- An instance can be part of more than one cluster, represented by a Venn diagram.
- A probability of the instance falling in each cluster, for all the clusters.
- A hierarchical, tree-like structure dividing clusters into sub-clusters, and so on.
Two of these forms are sketched below.
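A hedged sketch of the first and third output forms using scikit-learn (the library and the synthetic data are assumptions):

```python
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.2, 7.9]]

# Clear demarcation: each instance is assigned to exactly one cluster.
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))

# Probabilistic: a membership probability for every cluster.
gmm = GaussianMixture(n_components=2).fit(X)
print(gmm.predict_proba(X))
```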