This paper surveys current methods of machine translation (MT) evaluation, categorizing them into manual and automatic approaches. It discusses traditional human judgment criteria such as intelligibility and fidelity, more advanced assessment methods, and automatic evaluation techniques based on lexical similarity and linguistic features. The paper also traces the evolution of MT since the 1950s and emphasizes the importance of robust evaluation metrics for the development of effective MT systems.
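To make the "lexical similarity" family of automatic metrics concrete, here is a minimal sketch of clipped unigram precision, the n=1 component underlying BLEU-style metrics. The function name and tokenization (lowercased whitespace splitting) are illustrative choices, not taken from the paper.

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the fraction of candidate tokens
    that also appear in the reference, with each token's count
    clipped to its count in the reference (as in BLEU for n=1)."""
    cand_tokens = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    if not cand_tokens:
        return 0.0
    cand_counts = Counter(cand_tokens)
    # Clip each candidate token's count to the reference count,
    # so repeating a word cannot inflate the score.
    matched = sum(min(count, ref_counts[word])
                  for word, count in cand_counts.items())
    return matched / len(cand_tokens)

# Example: 5 of the 6 candidate tokens match the reference.
score = unigram_precision("the cat sat on the mat",
                          "the cat is on the mat")
print(round(score, 4))  # → 0.8333
```

Real metrics extend this idea with higher-order n-grams, a brevity penalty, and multiple references, but the core comparison is this kind of surface-level token overlap, which is exactly why such metrics can miss paraphrases and other linguistically valid variation.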