This document discusses automatic chord recognition from music recordings using deep neural networks: the task of assigning a chord label to each time segment of an audio signal. The model combines convolutional and recurrent layers and takes chromagram features extracted from the audio as input. In evaluation, the CNN+LSTM model achieves over 50% accuracy on a dataset of 180 annotated songs. Future work includes improving segmentation and exploring additional neural network architectures.
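As a rough illustration of the kind of architecture described, the sketch below shows a CNN+LSTM classifier over chromagram frames in PyTorch. It is an assumption-laden minimal example, not the document's actual model: the layer sizes, kernel widths, the 12-dimensional chroma input, and the 25-class chord vocabulary (24 major/minor chords plus "no chord") are illustrative choices, and the chroma features are assumed to be extracted upstream (e.g., with a library such as librosa).

```python
import torch
import torch.nn as nn


class ChordRecognizer(nn.Module):
    """Minimal CNN + LSTM chord classifier over chromagram frames (a sketch).

    Assumed shapes: input is (batch, time, 12) chroma features;
    output is per-frame logits over `num_chords` chord classes.
    """

    def __init__(self, num_chords=25, hidden_size=128):
        super().__init__()
        # 1-D convolutions over time combine neighbouring chroma frames
        # into local harmonic features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels=12, out_channels=64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A bidirectional LSTM models longer-range chord context across the song.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        # Per-frame classification over the chord vocabulary.
        self.classifier = nn.Linear(2 * hidden_size, num_chords)

    def forward(self, chroma):
        # chroma: (batch, time, 12); Conv1d expects (batch, channels, time).
        x = self.conv(chroma.transpose(1, 2))
        x, _ = self.lstm(x.transpose(1, 2))  # back to (batch, time, features)
        return self.classifier(x)            # (batch, time, num_chords)


if __name__ == "__main__":
    model = ChordRecognizer()
    dummy = torch.randn(2, 400, 12)   # 2 clips, 400 chroma frames each
    logits = model(dummy)
    print(logits.shape)               # torch.Size([2, 400, 25])
```

Training such a model would typically minimize per-frame cross-entropy against the annotated chord labels; the convolution/LSTM split mirrors the paper's idea of local feature extraction followed by temporal smoothing.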