This document summarizes a research paper on speech-based emotion recognition. The speech signal is first preprocessed with techniques such as preemphasis filtering and framing, after which acoustic features are extracted, notably Mel-frequency cepstral coefficients (MFCCs), which capture characteristics of the vocal tract. Support vector machines (SVMs) serve as the pattern classifier: a model is built for each emotion, and the features of a test utterance are scored against these models to recognize the expressed emotion. The paper supports its audio-based approach with experimental results and discusses future work on improving efficiency and on recognizing the intensity of emotions.
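As an illustration of the front end described above, here is a minimal Python sketch of preemphasis, framing, and MFCC extraction using NumPy and librosa. The parameter values (preemphasis coefficient 0.97, 25 ms frames with a 10 ms hop, 13 coefficients) are common defaults in speech processing, not values taken from the paper.

```python
import numpy as np
import librosa

def extract_mfcc(path, sr=16000, preemph=0.97, n_mfcc=13):
    """Preemphasis -> framing/windowing -> MFCC extraction.

    All parameter values here are common defaults, not the paper's.
    """
    y, sr = librosa.load(path, sr=sr)

    # Preemphasis: boost high frequencies via y[t] - a * y[t-1].
    y = np.append(y[0], y[1:] - preemph * y[:-1])

    # 25 ms frames with a 10 ms hop; librosa performs the framing
    # and applies a Hann window internally.
    frame_len = int(0.025 * sr)  # 400 samples at 16 kHz
    hop_len = int(0.010 * sr)    # 160 samples at 16 kHz

    # MFCCs summarize the smoothed spectral envelope of the vocal tract.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame_len, hop_length=hop_len)
    return mfcc  # shape: (n_mfcc, n_frames)
```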
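For the classification stage, one common way to realize per-emotion SVM models is a one-vs-rest scheme: a binary SVM is trained for each emotion on utterance-level feature vectors (for example, mean MFCCs), and a test utterance is assigned the emotion whose model scores highest. The scikit-learn sketch below uses randomly generated placeholder features; the RBF kernel and the label set are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # example label set

# Placeholder data: one 13-dim vector per utterance (e.g., mean MFCCs).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 13))
y_train = rng.integers(0, len(EMOTIONS), size=200)

# One binary RBF-kernel SVM per emotion; prediction picks the emotion
# whose model yields the highest decision score for the test features.
clf = make_pipeline(
    StandardScaler(),
    OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale")),
)
clf.fit(X_train, y_train)

x_test = rng.normal(size=(1, 13))  # features of one test utterance
pred = clf.predict(x_test)[0]
print("predicted emotion:", EMOTIONS[pred])
```

This mirrors the comparison step described in the summary: each per-emotion model scores the test features, and the highest-scoring model determines the recognized emotion.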