Harris corner detection extracts local features from images. It works by (1) computing the image gradient at each point, (2) accumulating the gradients over a local window into a second moment matrix, and (3) scoring how "corner-like" each point is using the eigenvalues of this matrix: a corner has two large eigenvalues, because the intensity changes sharply in every direction. Points whose score is large and a local maximum are detected as corners. In practice the Harris response, R = det(M) − k·(trace M)² with k typically around 0.04–0.06, is used instead of the eigenvalues themselves; since it is computed from the determinant and trace of the matrix, no eigen-decomposition is needed, which makes it efficient. Corners provide distinctive local features that can be matched between images.
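
To make the three steps concrete, here is a minimal NumPy/SciPy sketch of the pipeline. The function name `harris_corners` and the parameter choices (`sigma`, `k`, `threshold`, and the 7×7 non-maximum-suppression window) are illustrative assumptions, not part of the description above:

```python
import numpy as np
from scipy import ndimage

def harris_corners(image, sigma=1.0, k=0.05, threshold=0.01):
    """Detect Harris corners in a 2D grayscale float image.

    Returns an array of (row, col) corner coordinates. Parameter
    defaults are illustrative, not fixed by the algorithm.
    """
    # (1) Image gradients via Gaussian derivative filters.
    Ix = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    Iy = ndimage.gaussian_filter(image, sigma, order=(1, 0))

    # (2) Second moment matrix entries, accumulated over a
    #     local window by Gaussian smoothing.
    Sxx = ndimage.gaussian_filter(Ix * Ix, 2 * sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, 2 * sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, 2 * sigma)

    # (3) Harris response from determinant and trace:
    #     R = det(M) - k * trace(M)^2  (no eigen-decomposition).
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    R = det - k * trace ** 2

    # Keep points that are both strong and local maxima of R.
    local_max = R == ndimage.maximum_filter(R, size=7)
    corners = local_max & (R > threshold * R.max())
    return np.argwhere(corners)
```

Given an image loaded as a 2D float array, `harris_corners(img)` returns candidate corner locations; in a matching pipeline these coordinates would then be passed to a descriptor stage for comparison across images.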