Bayes Theorem and KNN Algorithm
Bayes Theorem
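In its standard form, Bayes' theorem gives the posterior probability of a hypothesis A given observed evidence B in terms of the prior and the likelihood:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Here P(A) is the prior probability of A, P(B | A) is the likelihood of the evidence under A, and P(B) is the overall probability of the evidence.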
KNN algorithm
• The K-Nearest Neighbors (KNN) algorithm is a supervised machine learning algorithm that can be used to solve both classification and regression problems.
• The output depends on whether the k nearest neighbors are used for classification or regression. The main idea behind KNN is to find the K nearest data points (neighbors) to a given data point and then predict the label or value of that point from the labels or values of its K nearest neighbors, as sketched in the example below.
• K can be any positive integer, but in practice K is often small.
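As a minimal illustration of this idea (the points and labels below are made up purely for demonstration), scikit-learn's KNeighborsClassifier can fit a small labeled set and predict the class of a new point from its K = 3 nearest neighbors:

from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: 2-D feature vectors with binary class labels.
X_train = [[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0], [3.5, 5.0], [4.5, 5.0]]
y_train = [0, 0, 0, 1, 1, 1]

# K = 3: a query point is assigned the majority class of its 3 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

print(knn.predict([[4.0, 5.0]]))  # [1] -- two of the three nearest neighbors are class 1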
Distance Metrics
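KNN needs a way to measure how close two points are. The two most common choices are Euclidean (straight-line) and Manhattan (city-block) distance; a minimal sketch, with function names of my own choosing:

import math

def euclidean_distance(a, b):
    # Square root of the summed squared coordinate differences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan_distance(a, b):
    # Sum of the absolute coordinate differences.
    return sum(abs(x - y) for x, y in zip(a, b))

print(euclidean_distance([0, 0], [3, 4]))  # 5.0
print(manhattan_distance([0, 0], [3, 4]))  # 7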
Predict the answer
Key aspects of K-nearest neighbors
• In k-nearest neighbors classification, the output is a class membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor.
• In k-nearest neighbors regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.
• K-nearest neighbors is a non-parametric method: it makes no assumptions about the underlying data distribution.
• This can be an advantage over parametric methods, which do make such assumptions. A KNN model does not learn parameters from the training data to build a discriminative function; instead, it predicts for test (unseen) data directly from the stored training examples, as the from-scratch sketch below shows.
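A from-scratch sketch tying these aspects together: neighbors are found by distance alone (no parameters are learned), and the prediction is either a majority vote (classification) or an average (regression). All names and data here are illustrative:

from collections import Counter
import math

def k_nearest(train, query, k):
    # Sort the stored training points by Euclidean distance to the query; keep the k closest.
    return sorted(train, key=lambda p: math.dist(p[0], query))[:k]

def knn_classify(train, query, k):
    # Classification: majority vote over the labels of the k nearest neighbors.
    votes = Counter(label for _, label in k_nearest(train, query, k))
    return votes.most_common(1)[0][0]

def knn_regress(train, query, k):
    # Regression: average of the target values of the k nearest neighbors.
    return sum(value for _, value in k_nearest(train, query, k)) / k

labeled = [((1.0, 1.0), 'A'), ((1.2, 0.8), 'A'), ((4.0, 4.2), 'B'), ((4.5, 4.0), 'B'), ((4.2, 3.9), 'B')]
print(knn_classify(labeled, (4.1, 4.0), k=3))  # 'B'

valued = [((1.0,), 10.0), ((2.0,), 12.0), ((3.0,), 15.0), ((8.0,), 40.0)]
print(knn_regress(valued, (2.5,), k=3))        # (10 + 12 + 15) / 3 = 12.33...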
Predict the output for K = 3 and K = 5
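Because the vote is taken over different neighborhood sizes, K = 3 and K = 5 can give different answers for the same query point. A hypothetical illustration (the data is invented for this example, not taken from the slide):

from collections import Counter
import math

# Two nearby points of class 'X', three slightly farther points of class 'O'.
points = [((1.0, 0.0), 'X'), ((0.0, 1.0), 'X'), ((2.0, 0.0), 'O'), ((0.0, 2.0), 'O'), ((2.0, 2.0), 'O')]
query = (0.0, 0.0)

for k in (3, 5):
    nearest = sorted(points, key=lambda p: math.dist(p[0], query))[:k]
    vote = Counter(label for _, label in nearest).most_common(1)[0][0]
    print(k, vote)  # K = 3 -> 'X' (2 of 3), K = 5 -> 'O' (3 of 5)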
What is the most appropriate value of K?
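There is no single best K; it is usually chosen empirically, for example by cross-validating a few small odd values (odd K avoids ties in binary voting). A sketch using scikit-learn's bundled Iris data set:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score a few small odd K values with 5-fold cross-validation and keep the best one.
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in (1, 3, 5, 7, 9, 11)}

best_k = max(scores, key=scores.get)
print(scores)
print('best K:', best_k)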
