K-Nearest Neighbors (k-NN) is a simple machine learning algorithm that classifies unlabeled examples based on their similarity to labeled examples in a feature space. It finds the k closest training examples and assigns the label by majority vote among those k neighbors. k-NN is a lazy learner: it builds no explicit model during training, instead storing the entire training set and deferring all computation until prediction time. It treats each feature as a coordinate axis and measures the distance between points (commonly Euclidean distance) to determine similarity. Choosing an appropriate value of k and normalizing the features so that no single feature dominates the distance calculation are both important for the model's accuracy. Applications of k-NN include agriculture, finance, and medicine.
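As a minimal sketch of the procedure described above, the following (hypothetical, toy-data) Python function stores the training set as-is, computes Euclidean distances to a query point, and takes a majority vote among the k nearest neighbors:

```python
from collections import Counter
from math import dist  # Euclidean distance between two points (Python 3.8+)

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (features, label) pairs, where features are
    numeric tuples assumed to be on comparable (e.g. normalized) scales.
    """
    # Sort the stored training examples by distance to the query point
    # and keep only the k closest ones -- no model is fit in advance.
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    # Majority vote over the labels of those k neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy example with two made-up clusters and hypothetical labels.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]
print(knn_classify(train, (1.1, 0.9), k=3))  # prints "A"
```

Because distances drive the vote, features with large numeric ranges would dominate unless the data is first rescaled (for example, min-max normalization to [0, 1]), which is why the preparation step matters.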