WEKA: Algorithms: The Basic Methods


Transcript

  • 1. Algorithms: The Basic Methods
  • 2. 1-rule Algorithm (1R)
    A way to find very simple classification rules: 1R generates a one-level decision tree that tests just one attribute.
    Steps:
    - Consider each attribute in turn.
    - Make one branch in the decision tree for each value of that attribute.
    - Assign the majority class to each branch.
    - Repeat for every attribute and choose the one with the minimum error rate.
  • 3. 1R Pseudo Code
    Pseudocode for 1R (a sketch follows).
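    The 1R pseudocode itself was shown as an image on this slide. As a stand-in, here is a minimal Python sketch of 1R for nominal attributes; the dataset format (a list of dicts plus a class key) is an assumption for illustration, not anything defined in the slides.

```python
from collections import Counter, defaultdict

def one_r(instances, attributes, class_key):
    """1R: for each attribute, build a one-level tree (value -> majority class),
    count its errors on the training data, and keep the attribute with fewest errors."""
    best = None  # (error_count, attribute, rules)
    for attr in attributes:
        # Count how often each class occurs for each value of this attribute.
        counts = defaultdict(Counter)
        for inst in instances:
            counts[inst[attr]][inst[class_key]] += 1
        # Assign the majority class to each value and total the errors.
        rules, errors = {}, 0
        for value, class_counts in counts.items():
            majority_class, majority_n = class_counts.most_common(1)[0]
            rules[value] = majority_class
            errors += sum(class_counts.values()) - majority_n
        if best is None or errors < best[0]:
            best = (errors, attr, rules)
    # e.g. (4, 'outlook', {'sunny': 'no', 'overcast': 'yes', 'rainy': 'yes'})
    return best
```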
  • 4. 1R in action
    Consider the problem of weather's effect on play (attributes Outlook, Temperature, Humidity, Windy; class Play). The data is:
  • 5. 1R in action
    Let us consider the Outlook attribute first. Its one-level tree misclassifies 4 of the 14 instances:
    Total error = 4/14
  • 6. 1R in action
    Consolidated table for all the attributes; '*' marks an arbitrary choice among equivalent options:
  • 7. 1R in action
    From this table we can see that one-level decision trees on Outlook and on Humidity give the minimum error.
    We can choose either of these two attributes, and the corresponding rules, as our classification rule.
    A missing value is treated as just another attribute value, with one branch in the decision tree dedicated to it like any other value.
  • 8. Numeric attributes and 1R
    To deal with numeric attributes, we discretize them.
    The steps are:
    - Sort the instances by the attribute's value.
    - Place breakpoints wherever the class changes.
    - These breakpoints give us discrete numeric ranges.
    - The majority class of each range becomes its label.
  • 9. Numeric attributes and 1R
    We have the following data for the weather example:
  • 10. Numeric attributes and 1R
    Applying the steps we get:
    The problem with this approach is that it can produce a large number of divisions, i.e., overfitting.
    Therefore we enforce a minimum number of instances per division; for example, taking min = 3 in the example above, we get:
  • 11. Numeric attributes and 1R
    When two adjacent divisions have the same majority class, we can merge them.
    After merging we get:
    which gives the following classification rules (the whole procedure is sketched below):
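    A hedged sketch of the numeric discretization described on slides 8-11: sort by the attribute, break where the class changes, enforce a minimum bucket size (3 in the example), and merge adjacent partitions with the same majority class. The function signature and return format are illustrative assumptions.

```python
from collections import Counter

def one_r_discretize(values, classes, min_bucket=3):
    """values: numeric attribute values; classes: corresponding class labels.
    Returns a list of (upper_value, majority_class) partitions for a 1R rule."""
    pairs = sorted(zip(values, classes))
    partitions, current = [], []
    for i, (v, c) in enumerate(pairs):
        current.append((v, c))
        next_differs = i + 1 < len(pairs) and pairs[i + 1][1] != c
        # Only break where the class changes AND the bucket is big enough (avoids overfitting).
        if next_differs and len(current) >= min_bucket:
            partitions.append(current)
            current = []
    if current:
        partitions.append(current)
    # Majority class of each partition, then merge adjacent partitions with equal majority.
    labelled = [(max(v for v, _ in p), Counter(cls for _, cls in p).most_common(1)[0][0])
                for p in partitions]
    merged = [labelled[0]]
    for upper, cls in labelled[1:]:
        if cls == merged[-1][1]:
            merged[-1] = (upper, cls)   # extend the previous range
        else:
            merged.append((upper, cls))
    return merged
```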
  • 12. Statistical Modeling
    Another classification technique.
    Assumptions (given the class):
    - All attributes contribute equally to the decision.
    - All attributes are independent of each other.
  • 13. Statistical Modeling: An example
    Given data:
  • 14. Statistical Modeling: An example
    Data description:
    - The upper half shows how many times each value of an attribute occurs with each class.
    - The lower half shows the same data as fractions.
    For example, the class is yes 9 times; for class = yes, outlook = sunny occurs 2 times; so under outlook = sunny and class = yes we have 2/9.
  • 15. Statistical Modeling
    Problem at hand: classify a new day with outlook = sunny, temperature = cool, humidity = high, windy = true.
    Solution, taking into consideration that all attributes contribute equally and are independent:
    Likelihood of yes = 2/9 x 3/9 x 3/9 x 3/9 x 9/14 = 0.0053
    Likelihood of no = 3/5 x 1/5 x 4/5 x 3/5 x 5/14 = 0.0206
  • 16. Statistical Modeling: An example
    Solution continued:
    As can be observed, the likelihood of no is higher.
    Using normalization, we can convert the likelihoods into probabilities:
    Probability of yes = 0.0053 / (0.0053 + 0.0206) = 20.5%
    Probability of no = 0.0206 / (0.0053 + 0.0206) = 79.5%
  • 17. Statistical Modeling: An example
    Derivation using Bayes' rule:
    According to Bayes' rule, for a hypothesis H and evidence E that bears on that hypothesis,
    P[H|E] = (P[E|H] x P[H]) / P[E]
    For our example the hypothesis H is that play will be, say, yes, and E is the particular combination of attribute values at hand:
    Outlook = sunny (E1), Temperature = cool (E2), Humidity = high (E3), Windy = true (E4)
  • 18. Statistical Modeling: An example
    Derivation using Bayes' rule:
    Since E1, E2, E3 and E4 are assumed independent given the class, we have
    P[H|E] = (P[E1|H] x P[E2|H] x P[E3|H] x P[E4|H] x P[H]) / P[E]
    Replacing values from the table we get
    P[yes|E] = (2/9 x 3/9 x 3/9 x 3/9 x 9/14) / P[E]
    P[E] is taken care of during the normalization of P[yes|E] and P[no|E].
    This method is called Naive Bayes (a numeric check follows).
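    The arithmetic of this worked example can be checked with a few lines of Python, using only the fractions quoted on the slides; this is a verification of the example, not a general Naive Bayes implementation.

```python
# Conditional fractions quoted on the slides for the new day:
# outlook = sunny, temperature = cool, humidity = high, windy = true.
p_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # likelihood of play = yes
p_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # likelihood of play = no

print(round(p_yes, 4), round(p_no, 4))            # 0.0053 0.0206
# Normalize so the two probabilities sum to 1 (this absorbs P[E]).
print(round(p_yes / (p_yes + p_no), 3))           # 0.205 -> about 20.5% for yes
print(round(p_no  / (p_yes + p_no), 3))           # 0.795 -> about 79.5% for no
```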
  • 19. Problem and Solution for Naive Bayes
    Problem: if there is an attribute value Ea for which P[Ea|H] = 0, then irrespective of the other attributes P[H|E] = 0.
    Solution: add a small constant to the numerator and denominator, a technique called the Laplace estimator; for an attribute with three values it shares the constant out with weights p1 + p2 + p3 = 1 (the standard form is written out below).
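    The slide showed the Laplace-estimator formula as an image; the following is the standard form consistent with the slide's p1 + p2 + p3 = 1 setup, and should be read as an assumption about what the slide displayed.

```latex
% Laplace-corrected estimate for value v_i of an attribute with k values, given
% class H (n_i = count of v_i with class H, N = total count for class H):
P[E_a = v_i \mid H] \;=\; \frac{n_i + \mu\, p_i}{N + \mu},
\qquad p_1 + p_2 + \dots + p_k = 1 .
% With p_i = 1/k and \mu = k this reduces to add-one smoothing,
% (n_i + 1)/(N + k), which can never be zero.
```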
  • 20. Statistical Modeling: Dealing with missing attributes
    If a value is missing for attribute Ea in the training data, we simply do not count that instance when calculating P[Ea|H].
    If an attribute is missing in the instance to be classified, its factor is omitted from the expression for P[H|E]. For example, if outlook is missing:
    Likelihood of yes = 3/9 x 3/9 x 3/9 x 9/14 = 0.0238
    Likelihood of no = 1/5 x 4/5 x 3/5 x 5/14 = 0.0343
  • 21. Statistical Modeling: Dealing with numerical attributes
    Numeric values are handled by assuming that they follow a normal (Gaussian) probability distribution.
    For a normal distribution the contribution of a value x to the likelihood figures is
    f(x) = 1/(sqrt(2*pi)*sigma) * exp(-(x - mu)^2 / (2*sigma^2))
    where mu = mean, sigma = standard deviation, and x = the attribute value in the instance under consideration.
  • 22. Statistical Modeling: Dealing with numerical attributes
    An example; we have the data:
  • 23. Statistical Modeling: Dealing with numerical attributes
    Here we have calculated the mean and standard deviation of the numeric attributes (temperature and humidity) separately for each class.
    For temperature = 66 the density formula gives f(66) = 0.0340, so the contribution of temperature = 66 to P[yes|E] is 0.0340.
    We do the same for the other numeric attributes (a check of this figure follows).
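    A quick check of the 0.0340 figure. The mean and standard deviation of temperature for class = yes are not in this transcript (they were in the slide's table); the values 73 and 6.2 below are assumptions chosen because they reproduce the quoted result.

```python
import math

def gaussian_density(x, mu, sigma):
    """Contribution of a numeric attribute value x to the class likelihood,
    assuming a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Assumed statistics for temperature given play = yes: mean ~73, std dev ~6.2.
print(round(gaussian_density(66, 73, 6.2), 4))   # ~0.034, matching the slide
```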
  • 24. Divide-and-Conquer: Constructing Decision Trees
    Steps to construct a decision tree recursively:
    - Select an attribute to place at the root node and make one branch for each of its possible values.
    - Repeat the process recursively at each branch, using only those instances that reach the branch.
    - If at any time all instances at a node have the same classification, stop developing that part of the tree.
    Problem: how to decide which attribute to split on.
  • 25. Divide-and-Conquer: Constructing Decision Trees
    Steps to find the attribute to split on:
    - Consider every attribute as a candidate and branch it according to its possible values.
    - For each candidate compute the information of the resulting branches and then the information gain of the split.
    - Select for the split the attribute that gives the maximum information gain.
    - Repeat until each branch terminates at a node whose information is 0.
  • 26. Divide-and-Conquer: Constructing Decision Trees
    Calculation of information and gain:
    For class proportions (P1, P2, ..., Pn) such that P1 + P2 + ... + Pn = 1,
    Information(P1, P2, ..., Pn) = -P1 log P1 - P2 log P2 - ... - Pn log Pn (logarithms to base 2, so the result is in bits)
    Gain = information before the split - weighted information after the split
  • 27. Divide-and-Conquer: Constructing Decision Trees
    Example: here we consider each attribute individually; each is divided into branches according to its possible values, and below each branch the number of instances of each class is marked.
  • 28. Divide-and-Conquer: Constructing Decision Trees
    Calculations: using the formula for information, initially we have 9 instances with class = yes and 5 instances with class = no, so P1 = 9/14 and P2 = 5/14 and
    info([9,5]) = -9/14 log(9/14) - 5/14 log(5/14) = 0.940 bits
    Now, for example, consider the Outlook attribute; we observe the following:
  • 29. Divide-and-Conquer: Constructing Decision Trees
    Example contd.
    Gain from splitting on Outlook = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits
    Gain(outlook) = 0.247 bits, Gain(temperature) = 0.029 bits, Gain(humidity) = 0.152 bits, Gain(windy) = 0.048 bits
    Since Outlook gives the maximum gain, we use it for the split, repeat the steps for Outlook = sunny and Outlook = rainy, and stop for Outlook = overcast, for which the information is 0 (the calculation is sketched below).
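    The information and gain figures above can be reproduced with a small helper; this is an illustrative calculation, not WEKA code.

```python
import math

def info(counts):
    """Information (entropy) in bits of a list of class counts, e.g. [9, 5]."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def gain(parent_counts, branch_counts):
    """Information gain of a split: info before minus the weighted info after."""
    total = sum(parent_counts)
    after = sum(sum(b) / total * info(b) for b in branch_counts)
    return info(parent_counts) - after

print(round(info([9, 5]), 3))                               # 0.940 bits
print(round(gain([9, 5], [[2, 3], [4, 0], [3, 2]]), 3))     # 0.247 bits for Outlook
```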
  • 30. Divide-and-Conquer: Constructing Decision Trees
    Highly branching attributes: the problem
    If we follow the method described above, it will tend to favour the attribute with the largest number of branches.
    In the extreme case it favours an attribute that has a different value for every instance: an identification code.
  • 31. Divide-and-Conquer: Constructing Decision Trees
    Highly branching attributes: the problem
    The information after splitting on such an attribute is 0:
    info([0,1]) + info([0,1]) + info([0,1]) + ... + info([0,1]) = 0
    It therefore has the maximum gain and would be chosen for branching.
    But such an attribute is no good for predicting the class of an unknown instance, nor does it tell us anything about the structure of the division.
    So we use the gain ratio to compensate for this.
  • 32. Divide-and-Conquer: Constructing Decision Trees
    Highly branching attributes: gain ratio
    Gain ratio = gain / split info
    To calculate the split info we consider only the number of instances that go down each branch, irrespective of their class.
    For the identification code with 14 different values:
    info([1,1,...,1]) = 14 x (-1/14 x log(1/14)) = 3.807
    For Outlook the split info is:
    info([5,4,5]) = -5/14 x log(5/14) - 4/14 x log(4/14) - 5/14 x log(5/14) = 1.577
  • 33. Divide-and-Conquer: Constructing Decision Trees
    Highly branching attributes: gain ratio
    So we have, for example, gain ratio(outlook) = 0.247 / 1.577 = 0.157,
    and for the highly branched identification code, gain ratio = 0.940 / 3.807 = 0.247.
  • 34. Divide-and-Conquer: Constructing Decision Trees
    Highly branching attributes: gain ratio
    Though the identification code still has the maximum gain ratio, its advantage is greatly reduced.
    Problem with using the gain ratio: in some situations the gain-ratio modification overcompensates and can lead to preferring an attribute just because its intrinsic (split) information is much lower than that of the other attributes.
    A standard fix is to choose the attribute that maximizes the gain ratio, provided that the information gain of that attribute is at least as great as the average information gain over all the attributes examined (a worked calculation follows).
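    Continuing the illustrative calculation (the info helper is repeated so the block stands alone), the split info and gain ratios for Outlook and for a 14-value identification code come out as quoted on the slides.

```python
import math

def info(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# Split info ignores the classes and uses only how many instances go down each branch.
split_outlook = info([5, 4, 5])          # 1.577 bits
split_id_code = info([1] * 14)           # 3.807 bits

print(round(0.247 / split_outlook, 3))   # gain ratio for Outlook, ~0.157
print(round(0.940 / split_id_code, 3))   # gain ratio for the ID code, ~0.247
```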
  • 35. Covering Algorithms: Constructing rules
    Approach:
    - Consider each class in turn.
    - Seek a way of covering all instances of that class while excluding instances that do not belong to it.
    - Express this as a rule.
    This is called a covering approach because at each stage we identify a rule that covers some of the instances.
  • 36. Covering Algorithms: Constructing rules
    Visualization: rules for class = a:
    If x > 1.2 then class = a
  • 37. If x > 1.2 and y > 2.6 then class = a
  • 38. If x > 1.2 and y > 2.6 then class = a
    If x > 1.4 and y < 2.4 then class = a
  • 39. Covering Algorithms: Constructing rules
    Rules vs. trees:
    A covering algorithm covers only a single class at a time, whereas a decision-tree split takes all the classes into account, since a tree creates a combined concept description.
    The problem of replicated subtrees is avoided with rules.
    The tree for the previous problem:
  • 40. Covering Algorithms: Constructing rules
    PRISM algorithm: a simple covering algorithm
    Instance space after addition of rules:
  • 41. Covering Algorithms: Constructing rules
    PRISM algorithm: criterion for selecting an attribute-value test
    Include as many instances of the desired class, and exclude as many instances of the other classes, as possible.
    If a new rule covers t instances of which p are positive examples of the class and t - p are instances of other classes (i.e., errors), then choose the test that maximizes p/t.
  • 42. Covering Algorithms: Constructing rules
    PRISM algorithm: example data
  • 43. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    We start with the class recommendation = hard and the rule
    If ? then recommendation = hard
    where ? represents an as-yet-unknown condition.
    For the unknown condition we have nine choices:
  • 44. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    Here the maximum p/t ratio is for astigmatism = yes (choosing randomly between equivalent options when their coverage is also the same).
    So we get the rule:
    If astigmatism = yes then recommendation = hard
    We do not stop at this rule, as it gives only 4 correct results out of the 12 instances it covers.
    We therefore restrict attention to the instances covered by the rule so far and refine it, starting from:
    If astigmatism = yes and ? then recommendation = hard
  • 45. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    Now we have the data as:
  • 46. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    The choices for this data are:
    We choose tear production rate = normal, which has the highest p/t.
  • 47. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    So we have the rule:
    If astigmatism = yes and tear production rate = normal then recommendation = hard
    Again we restrict attention to the instances this rule covers; now we have the data:
  • 48. Covering Algorithms: Constructing rules
    PRISM algorithm in action:
    Using p/t again (breaking ties by maximum coverage) we finally arrive at the rule:
    If astigmatism = yes and tear production rate = normal and spectacle prescription = myope then recommendation = hard
    And so on...
  • 49. Covering Algorithms: Constructing rules
    PRISM algorithm: pseudocode (a sketch follows)
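    The PRISM pseudocode was shown as an image; below is a compact, hedged Python rendering of the same idea: grow a rule by repeatedly adding the attribute-value test with the best p/t (ties broken by coverage), restrict to the covered instances, and once the rule is finished remove the instances it covers and start the next rule. The data format is an assumption.

```python
def prism_rules_for_class(instances, attributes, class_key, target):
    """Return a list of rules (each a dict of attribute -> value) covering `target`."""
    remaining = list(instances)
    rules = []
    while any(i[class_key] == target for i in remaining):
        covered, conditions = remaining, {}
        # Grow one rule: keep adding the test with the best p/t until the rule is
        # perfect (or no attributes are left to test).
        while any(i[class_key] != target for i in covered) and len(conditions) < len(attributes):
            best = None                              # ((p/t, p), attr, value)
            for attr in attributes:
                if attr in conditions:
                    continue
                for value in {i[attr] for i in covered}:
                    subset = [i for i in covered if i[attr] == value]
                    p = sum(1 for i in subset if i[class_key] == target)
                    t = len(subset)
                    score = (p / t, p)               # ties broken by larger coverage p
                    if p > 0 and (best is None or score > best[0]):
                        best = (score, attr, value)
            if best is None:
                break
            _, attr, value = best
            conditions[attr] = value
            covered = [i for i in covered if i[attr] == value]
        rules.append(conditions)
        # Remove the instances covered by the finished rule and repeat.
        remaining = [i for i in remaining
                     if not all(i.get(a) == v for a, v in conditions.items())]
    return rules
```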
  • 50. Covering Algorithms: Constructing rules
    Rules vs. decision lists:
    The rules produced, for example by the PRISM algorithm, are not necessarily meant to be interpreted in order, unlike a decision list.
    There is no particular order in which the classes are considered when the rules are generated.
    When such rules are used for classification, an instance may receive multiple classifications or no classification at all.
    In these cases, fall back on the rule with the greatest coverage, or on the most frequent class in the training examples, respectively.
    These difficulties do not arise with decision lists, which are interpreted in order and have a default rule at the end.
  • 51. Mining Association Rules
    Definition: an association rule can predict any attribute, and also any combination of attributes, not just the class.
    Parameters for selecting an association rule:
    - Coverage: the number of instances the rule predicts correctly.
    - Accuracy: the ratio of the coverage to the total number of instances the rule applies to.
    We want association rules with high coverage and at least a minimum specified accuracy.
  • 52. Mining Association Rules
    Terminology:
    - Item: an attribute-value pair.
    - Item set: a combination of items.
    An example: for the weather data we have a table in which each column contains item sets with a different number of items, with the coverage given for each entry. The table is not complete; it just gives a good idea.
  • 53. Mining Association Rules
  • 54. Mining Association Rules
    Generating association rules: we need to specify a minimum coverage and accuracy beforehand.
    Steps:
    - Generate the item sets.
    - Each item set can be turned into a number of candidate rules.
    - For each rule, check whether its coverage and accuracy meet the thresholds.
    This is how we generate association rules.
  • 55. Mining Association Rules
    Generating association rules: for example, the item set
    humidity = normal, windy = false, play = yes
    gives seven potential rules (with their accuracies); the enumeration is sketched below:
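    A minimal sketch of the item-set-to-rules step: every non-empty subset of the item set can serve as the consequent, with the remaining items as the antecedent, which for a three-item set gives 2^3 - 1 = 7 candidate rules. The data format and function are illustrative assumptions.

```python
from itertools import combinations

def rules_from_itemset(itemset, instances):
    """itemset: dict of attribute -> value. Yields (antecedent, consequent, coverage, accuracy)."""
    items = list(itemset.items())
    for r in range(1, len(items) + 1):
        for consequent in combinations(items, r):
            antecedent = [it for it in items if it not in consequent]
            applicable = [i for i in instances
                          if all(i.get(a) == v for a, v in antecedent)]
            correct = [i for i in applicable
                       if all(i.get(a) == v for a, v in consequent)]
            coverage = len(correct)                              # instances predicted correctly
            accuracy = coverage / len(applicable) if applicable else 0.0
            yield dict(antecedent), dict(consequent), coverage, accuracy

# Example item set from the slide; for a three-item set this enumerates 7 candidate rules.
itemset = {"humidity": "normal", "windy": "false", "play": "yes"}
print(sum(1 for _ in rules_from_itemset(itemset, [])))           # 7
```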
  • 56. Linear models
    We now look at methods for predicting numeric quantities, and at how to use these numeric methods for classification.
  • 57. Linear models
    Numeric prediction: linear regression
    Linear regression is a technique for predicting a numeric quantity.
    We express the class (a numeric quantity) as a linear combination of the attributes with weights learned from the data.
    For attributes a1, a2, ..., ak:
    x = w0 + w1*a1 + w2*a2 + ... + wk*ak
    Here x is the predicted class value and w0, w1, ..., wk are the weights.
  • 58. Linear models
    Numeric prediction: linear regression
    The weights are calculated from the training set.
    To choose optimal weights we select those that minimize the sum of squared differences between the predicted and actual class values (a least-squares sketch follows).
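    A minimal least-squares sketch of the weight calculation, using NumPy's lstsq on a design matrix with a leading column of ones for w0; the tiny dataset is made up purely for illustration.

```python
import numpy as np

def fit_linear_regression(X, y):
    """X: (n, k) attribute matrix, y: (n,) numeric class. Returns weights [w0, w1, ..., wk]
    minimizing the sum of squared prediction errors."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])       # prepend a bias column for w0
    weights, *_ = np.linalg.lstsq(A, y, rcond=None)
    return weights

# Made-up data: y is exactly 1 + 2*a1 - a2, so the learned weights should match.
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 3.0]])
y = np.array([0.0, 3.0, 4.0, 4.0])
print(fit_linear_regression(X, y))                     # approximately [1, 2, -1]
```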
  • 59. Linear models
    Linear classification: multi-response linear regression
    For each class we perform a linear regression to get a linear expression, using an output of 1 for training instances that belong to the class and 0 otherwise.
    For an unclassified instance we evaluate the expression of each class and predict the class whose expression gives the maximum output.
    A drawback of this method is that the values produced are not proper probabilities.
  • 60. Linear models
    Linear classification: logistic regression
    To get outputs that are proper probabilities in the range 0 to 1 we use logistic regression.
    Here the output y is defined as
    y = 1 / (1 + e^(-x)), where x = w0 + w1*a1 + w2*a2 + ... + wk*ak
    so the output y lies in the range (0, 1).
  • 61. Linear models
    Linear classification: logistic regression
    To select appropriate weights for the expression for x, we maximize the log-likelihood of the training data (a gradient-based sketch follows).
    To generalize logistic regression to several classes we can proceed as in multi-response linear regression.
    Again, the problem with this approach is that the probabilities of the different classes do not sum to 1.
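    A hedged sketch of two-class logistic regression trained by plain gradient ascent on the log-likelihood; this is one simple way to do the maximization the slide mentions, not WEKA's optimizer.

```python
import numpy as np

def fit_logistic_regression(X, y, lr=0.1, iters=2000):
    """X: (n, k) attributes, y: (n,) labels in {0, 1}. Returns weights [w0, ..., wk]."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])        # bias column for w0
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))                 # predicted P(class = 1)
        w += lr * A.T @ (y - p) / len(y)                 # gradient of the log-likelihood
    return w

def predict_proba(X, w):
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    return 1.0 / (1.0 + np.exp(-A @ w))

# Toy usage: one attribute, class 1 for larger values.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w = fit_logistic_regression(X, y)
print(predict_proba(np.array([[0.5], [2.5]]), w))        # low, then high probability
```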
  • 62. Linear models
    Linear classification using the perceptron
    If the instances belonging to different classes can be separated in instance space by a hyperplane, they are called linearly separable.
    If the instances are linearly separable, we can use the perceptron learning rule for classification.
    Steps: assume that there are only 2 classes; the equation of the hyperplane is (with a0 = 1):
    w0*a0 + w1*a1 + w2*a2 + ... + wk*ak = 0
  • 63. Linear models
    Linear classification using the perceptron
    Steps (contd.): if the sum above is greater than 0 we predict the first class, otherwise the second.
    The algorithm that learns the weights, and hence the equation of the separating hyperplane (the perceptron), is sketched below.
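    The perceptron learning rule was shown as a figure; a minimal sketch, assuming the two classes are coded as +1 and -1, is: while any instance falls on the wrong side of the hyperplane, add that instance's attribute vector (multiplied by its class sign) to the weight vector.

```python
import numpy as np

def train_perceptron(X, y, max_epochs=100):
    """X: (n, k) attributes, y: (n,) labels in {+1, -1}. Assumes the data is
    linearly separable; returns weights [w0, ..., wk] of the separating hyperplane."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])   # a0 = 1 handles the bias w0
    w = np.zeros(A.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for a, label in zip(A, y):
            if label * (w @ a) <= 0:               # misclassified (or on the plane)
                w += label * a                     # nudge the hyperplane toward it
                errors += 1
        if errors == 0:                            # converged: everything classified
            break
    return w

X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(train_perceptron(X, y))
```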
  • 64. Instance-based learning
    General steps:
    - No preprocessing of the training set; just store the training instances as they are.
    - To classify a new instance, calculate its distance to every stored training instance.
    - The unclassified instance is assigned the class of the training instance that has the minimum distance from it.
  • 65. Instance-based learning
    The distance function:
    The distance function we use depends on the application.
    Popular distance functions include the Euclidean distance and the Manhattan metric.
    The most popular is the Euclidean distance between two instances: the square root of the sum of the squared attribute differences, where k is the number of attributes.
  • 66. Instance-based learning
    Normalization of data: we normalize attribute values so that they lie in the range [0, 1], using
    a_normalized = (a - min a) / (max a - min a)
    Missing attributes:
    - For nominal attributes, if either of the two values is missing, or if the two values differ, the difference is taken as 1.
    - For numeric attributes, if both values are missing the difference is 1; if only one is missing, the difference is taken as either the normalized value of the one that is present or one minus that value, whichever is bigger.
    (A small nearest-neighbour sketch follows.)
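    A minimal sketch of nearest-neighbour classification with min-max normalization and Euclidean distance over numeric attributes; nominal attributes and missing values are left out to keep it short, and the numbers are made up.

```python
import math

def normalize(instances):
    """Min-max normalize each numeric attribute into [0, 1]; also return mins and maxes."""
    k = len(instances[0])
    lo = [min(inst[j] for inst in instances) for j in range(k)]
    hi = [max(inst[j] for inst in instances) for j in range(k)]
    scaled = [[(inst[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
               for j in range(k)] for inst in instances]
    return scaled, lo, hi

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_1nn(train_X, train_y, query):
    """Return the class of the stored instance closest to `query` (already normalized)."""
    best = min(range(len(train_X)), key=lambda i: euclidean(train_X[i], query))
    return train_y[best]

X, lo, hi = normalize([[85.0, 85.0], [80.0, 90.0], [65.0, 70.0], [72.0, 95.0]])
y = ["no", "no", "yes", "no"]
query = [68.0, 75.0]
q = [(query[j] - lo[j]) / (hi[j] - lo[j]) for j in range(2)]   # scale the query the same way
print(classify_1nn(X, y, q))                                   # "yes": closest to [65, 70]
```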
  • 67. Instance-based learning
    Finding nearest neighbours efficiently:
    Finding the nearest neighbour by calculating the distance to every training instance is linear in the number of instances.
    We can make this faster by using kD-trees.
    kD-trees are binary trees that divide the input space with a hyperplane and then split each partition again, recursively.
    A kD-tree stores points in k-dimensional space, k being the number of attributes.
  • 68. Instance-based learning
    Finding nearest neighbours efficiently:
  • 69. Instance-based learning
    Finding nearest neighbours efficiently:
    Here we see a kD-tree, and the corresponding instances and splits, with k = 2.
    As you can see, not all child nodes are developed to the same depth.
    The axis along which each division has been made (v or h in this case) is marked.
    Steps to find the nearest neighbour:
    - Construct the kD-tree (explained later).
    - Starting from the root node, compare the appropriate attribute (the one for the axis along which that node's division was made) and move to the left or right subtree.
  • 70. Instance-based learning
    Steps to find the nearest neighbour (contd.):
    - Repeat this step recursively until you reach a leaf node (or a node with no appropriate child).
    - You have now found the region to which the new instance belongs, and a probable nearest neighbour in the form of that region's leaf node.
    - Calculate the distance between the instance and this probable nearest neighbour; any closer instance must lie within a circle of that radius around the new instance.
  • 71. Instance-based learning
    Finding nearest neighbours efficiently:
    Steps to find the nearest neighbour (contd.):
    - Now we retrace our recursive descent, looking for an instance that is closer to the unclassified instance than the probable nearest neighbour we have.
    - We start with the immediate sibling subtree: if its region intersects the circle we must consider it and all its child nodes (if any).
    - If not, we move up and check the sibling of the parent of our probable nearest neighbour, and so on.
    - We repeat these steps until we reach the root.
    - Whenever we find a closer instance, we update the nearest neighbour.
  • 72. Instance-based learning
    Steps to find the nearest neighbour (contd.):
  • 73. Instance-based learning
    Construction of a kD-tree: we need to decide two things:
    - Along which dimension to make the cut.
    - Which instance to use to make the cut.
    Deciding the dimension: calculate the variance of the data along each axis and make the division perpendicular to the axis with the greatest variance.
    Deciding the instance: take the median point along that axis as the point of division.
    We repeat these steps recursively until all the points are exhausted (a sketch follows).
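    A compact kD-tree sketch following the construction rule above (cut perpendicular to the axis of greatest variance, at the median point), with the backtracking nearest-neighbour search from slides 69-71. It illustrates the idea; it is not WEKA's implementation.

```python
import math

class KDNode:
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def build_kd_tree(points):
    if not points:
        return None
    k = len(points[0])
    # Cut perpendicular to the axis with the greatest spread (variance).
    axis = max(range(k), key=lambda d: variance([p[d] for p in points]))
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                        # the median point becomes the node
    return KDNode(points[mid], axis,
                  build_kd_tree(points[:mid]),
                  build_kd_tree(points[mid + 1:]))

def nearest(node, query, best=None):
    """Return (best_point, best_distance). Descend to the query's region first, then
    backtrack, exploring the far subtree only if the splitting plane cuts the circle."""
    if node is None:
        return best
    d = math.dist(query, node.point)
    if best is None or d < best[1]:
        best = (node.point, d)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < best[1]:                       # the circle crosses the splitting plane
        best = nearest(far, query, best)
    return best

tree = build_kd_tree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (6, 3)))                      # closest stored point and its distance
```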
  • 74. Clustering
    Clustering techniques apply when, rather than predicting the class, we just want the instances to be divided into natural groups.
    Iterative distance-based clustering: k-means
    - Here k represents the number of clusters.
    - The instance space is divided into k clusters.
    - k-means forms the clusters so that the sum of squared distances of the instances from their cluster centres is minimized.
  • 75. Clustering
    Steps:
    - Decide the number of clusters, k, manually.
    - From the instance set to be clustered, randomly select k points; these are the initial centres of the k clusters.
    - Take each instance in turn, calculate its distance to every cluster centre, and assign it to the cluster whose centre is closest.
    - Once all the instances have been assigned, take the centroid of the points in each cluster; this centroid gives the new cluster centre.
    - Re-cluster all the instances with the new centres, then recompute the centroids, and so on.
    - Repeat until the cluster centres no longer change; at that point we have our k clusters (a sketch follows).
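    A minimal k-means sketch following these steps: random initial centres, assign every instance to its nearest centre, recompute the centroids, and repeat until the centres stop changing. Initialization and data are illustrative.

```python
import math
import random

def k_means(points, k, seed=0):
    """points: list of numeric tuples. Returns (centres, assignments)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)                 # k random instances as initial centres
    while True:
        # Assign each instance to the nearest cluster centre.
        assign = [min(range(k), key=lambda c: math.dist(p, centres[c])) for p in points]
        # Recompute each centre as the centroid of its cluster.
        new_centres = []
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                new_centres.append(tuple(sum(xs) / len(members) for xs in zip(*members)))
            else:
                new_centres.append(centres[c])      # keep an empty cluster's old centre
        if new_centres == centres:                  # centres no longer change: done
            return centres, assign
        centres = new_centres

data = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5), (0.5, 1.5)]
print(k_means(data, k=2))
```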
  • 76. Visit more self help tutorials
    Pick a tutorial of your choice and browse through it at your own pace.
    The tutorials section is free, self-guiding and will not involve any additional support.
    Visit us at www.dataminingtools.net
