This paper presents a novel feature subset selection algorithm for high-dimensional data that eliminates both irrelevant and redundant features. The method uses symmetric uncertainty to measure feature relevance and a minimum spanning tree, constructed with Prim's algorithm, to group correlated features into clusters, then selects a representative feature from each cluster. Experimental results show that the algorithm substantially reduces dimensionality while maintaining classification accuracy, outperforming existing feature selection techniques.
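The pipeline described above can be sketched in Python. This is a minimal illustration under assumptions not stated in the abstract: the relevance threshold, the edge-cutting rule, and all helper names (`symmetric_uncertainty`, `prim_mst`, `select_features`) are hypothetical choices for the sketch, not the paper's actual parameters or interface.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    hxy = entropy(list(zip(x, y)))          # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 2 * (hx + hy - hxy) / (hx + hy)

def prim_mst(cost, n):
    """Prim's algorithm on a dense cost matrix; returns the MST edge list."""
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: cost[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges

def select_features(features, target, relevance_threshold=0.1):
    """Sketch of the pipeline: drop irrelevant features, build an MST over
    1 - SU edge costs, cut weak edges into clusters, and keep the most
    target-relevant feature per cluster. Thresholds here are illustrative."""
    # 1. Drop features with negligible SU with the class (irrelevance filter).
    relevant = [i for i, f in enumerate(features)
                if symmetric_uncertainty(f, target) > relevance_threshold]
    n = len(relevant)
    if n <= 1:
        return relevant
    # 2. MST over pairwise 1 - SU costs (high correlation = short edge).
    cost = [[1 - symmetric_uncertainty(features[a], features[b])
             for b in relevant] for a in relevant]
    edges = prim_mst(cost, n)
    # 3. Cut edges between weakly correlated features, forming clusters.
    kept = [(i, j) for i, j in edges if cost[i][j] < 1 - relevance_threshold]
    parent = list(range(n))                  # union-find for components
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in kept:
        parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    # 4. One representative per cluster: maximum SU with the target class.
    return sorted(relevant[max(c, key=lambda i: symmetric_uncertainty(
        features[relevant[i]], target))] for c in clusters.values())
```

For example, given a binary target, a copy of it, its inverse (redundant), and an independent alternating feature (irrelevant), `select_features` returns a single representative feature: the irrelevant column is filtered by the SU threshold and the redundant pair collapses into one cluster.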