This document presents a comprehensive overview of performance measures for machine-learning classification and highlights their critical role in model development and evaluation. It proposes a novel evaluation metric, based on the voting results of three existing measures, to address the limitations of traditional metrics such as accuracy, which can be misleading on imbalanced datasets. It also emphasizes the need for a balanced approach to evaluating classifiers, outlines common pitfalls, and proposes a framework for constructing better evaluation metrics.
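The voting idea can be sketched as follows. This is a minimal illustrative example, not the paper's actual design: the choice of the three measures (accuracy, precision, recall), the pass/fail threshold, and the majority rule are all assumptions made here for illustration.

```python
# Hypothetical sketch of a vote-based evaluation metric.
# The three constituent measures and the majority-vote rule are
# illustrative assumptions, not the paper's exact construction.

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def vote_metric(tp, fp, tn, fn, threshold=0.5):
    """Each measure casts a pass/fail vote against a threshold;
    the classifier is accepted only if a majority of measures pass."""
    scores = [accuracy(tp, fp, tn, fn), precision(tp, fp), recall(tp, fn)]
    votes = sum(score >= threshold for score in scores)
    return votes >= 2  # majority of the three measures

# On an imbalanced dataset (5% positives), a classifier that predicts
# "negative" for everything scores 0.95 accuracy but is rejected by the
# vote, because precision and recall are both zero.
print(vote_metric(0, 0, 95, 5))    # False: only accuracy votes "pass"
print(vote_metric(45, 5, 45, 5))   # True: all three measures vote "pass"
```

Combining measures by voting illustrates why a single metric like accuracy is insufficient on imbalanced data: the majority-negative classifier above passes the accuracy vote alone and is overruled.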