This document examines deep learning techniques for recognizing music genres from audio files. It evaluates three approaches: extracting Mel-spectrograms, MFCC plots, and chroma STFT features from the audio and feeding each as image input to CNN models. A CNN architecture with five convolutional layers performed best on Mel-spectrograms, achieving over 90% accuracy; MFCC plots reached over 70% accuracy, while chroma STFT features performed worst at around 57%. In conclusion, Mel-spectrograms were found to be the most effective audio feature for music genre classification with deep learning.
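
To illustrate the kind of pipeline described above, the sketch below assumes librosa for extracting the three audio features and TensorFlow/Keras for a five-convolutional-layer CNN. The filter counts, kernel sizes, input resolution, sample rate, and number of genres are placeholder assumptions, not the exact configuration evaluated in the document.

```python
# Hypothetical sketch of the feature-extraction + CNN pipeline.
# Library choices (librosa, TensorFlow/Keras) and all hyperparameters
# are assumptions; the document does not specify an implementation.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def mel_spectrogram(path, sr=22050, n_mels=128, duration=30.0):
    """Load an audio clip and return a log-scaled Mel-spectrogram (dB)."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def mfcc_features(path, sr=22050, n_mfcc=20, duration=30.0):
    """Return MFCCs for the same clip (alternative input feature)."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def chroma_features(path, sr=22050, duration=30.0):
    """Return chroma STFT features (the third compared input)."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    return librosa.feature.chroma_stft(y=y, sr=sr)

def build_cnn(input_shape=(128, 128, 1), num_genres=10):
    """Five convolutional blocks followed by a dense softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_genres, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In practice, each feature matrix would be resized or padded to the CNN's fixed input shape (for example with `tf.image.resize`), stacked into a training batch with integer genre labels, and passed to `model.fit`; the same CNN can then be trained separately on each of the three feature types to compare their accuracy, as the document does.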