KNN algorithm
The K-Nearest Neighbors (KNN) algorithm is a supervised machine
learning algorithm that can be used to solve both classification
and regression problems.
The output depends on whether k-nearest neighbors are used for
classification or regression. The main idea behind K-NN is to find the K
nearest data points, or neighbors, to a given data point and then
predict the label or value of the given data point based on the labels or
values of its K nearest neighbors.
K can be any positive integer, but in practice, K is often small.
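The core step described above, finding the K nearest data points to a query, can be sketched as a brute-force search. The function name `k_nearest` and the toy 2-D data are illustrative choices, not part of the original text:

```python
import math

def k_nearest(points, query, k):
    """Return the k training points closest to the query point,
    measured by Euclidean distance (brute-force search)."""
    # Sort every point by its distance to the query, keep the first k.
    return sorted(points, key=lambda p: math.dist(p, query))[:k]

# Toy 2-D data set: the first three points cluster near the origin.
data = [(0, 1), (1, 0), (1, 1), (5, 5), (6, 5)]
print(k_nearest(data, (0, 0), k=3))  # → [(0, 1), (1, 0), (1, 1)]
```

Real implementations usually replace the full sort with a spatial index such as a k-d tree, but the brute-force version makes the idea explicit.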
Key aspects of K-nearest neighbors
In the k-nearest neighbor’s classification, the output is a class
membership. An object is classified by a majority vote of its
neighbors, with the object being assigned to the class most
common among its k nearest neighbors (k is a positive integer,
typically small). If k = 1, then the object is simply assigned to
the class of that single nearest neighbor.
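The majority-vote classification rule above can be sketched in a few lines. The helper name `knn_classify` and the two toy clusters labeled "A" and "B" are assumptions for illustration:

```python
import math
from collections import Counter

def knn_classify(train, query, k):
    """Classify `query` by a majority vote among its k nearest
    neighbors. `train` is a list of ((features...), label) pairs."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    # Count the labels of the k neighbors and return the most common one.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Two clusters: "A" near the origin, "B" near (5, 5).
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify(train, (0.5, 0.5), k=3))  # → A
print(knn_classify(train, (5.5, 5.5), k=1))  # → B
```

With k = 1 the second call reduces to the single-nearest-neighbor case mentioned above: the query simply takes the label of its closest point.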
In the K-nearest neighbors regression, the output is the property
value for the object. This value is the average of the values of its
k nearest neighbors.
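The regression rule, averaging the values of the k nearest neighbors, can be sketched the same way. The name `knn_regress` and the one-dimensional toy data are illustrative assumptions:

```python
import math

def knn_regress(train, query, k):
    """Predict a value for `query` as the mean of the values of its
    k nearest neighbors. `train` is a list of ((features...), value) pairs."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return sum(value for _, value in neighbors) / k

# 1-D toy data: the two closest points to 2.5 carry values 20 and 30.
train = [((1,), 10.0), ((2,), 20.0), ((3,), 30.0), ((10,), 100.0)]
print(knn_regress(train, (2.5,), k=2))  # → 25.0
```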
K-nearest neighbors is a non-parametric method, which means
that it makes no assumptions about the underlying data
distribution. This is an advantage over parametric methods,
which do make such assumptions. The model does not learn
parameters from the training data set to build a discriminative
function; instead, it classifies the test or unseen data directly
from the stored training examples.