The document describes MOODetector, a system that classifies music by mood and generates mood-based playlists. The research goals include evaluating different mood models and extracting features from audio, MIDI, and lyrics. Current work automatically creates playlists based on the Thayer mood model, using audio features and support vector regression. Future work includes improving the ground truth, extracting more relevant features, and applying techniques such as genetic algorithms and neuro-fuzzy systems.
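The MOODetector implementation itself is not shown in the document, so the following is only a minimal sketch of the playlist idea under stated assumptions: each song already has predicted valence and arousal values in [-1, 1] (as two per-dimension regressors, such as the support vector regression mentioned above, would produce), the Thayer-plane quadrant labels are the ones commonly used in music information retrieval, and `mood_playlist` ranks songs by distance to a target mood point. All function and field names are hypothetical.

```python
import math

# Hypothetical data model: predicted (valence, arousal) per song in [-1, 1].
# The real MOODetector feature extraction and SVR step are assumed here.

def thayer_quadrant(valence, arousal):
    """Map a (valence, arousal) point to a Thayer-plane quadrant label."""
    if arousal >= 0:
        return "exuberance" if valence >= 0 else "anxious"
    return "contentment" if valence >= 0 else "depression"

def mood_playlist(songs, target, size=3):
    """Rank songs by Euclidean distance to a target mood point."""
    def dist(song):
        return math.hypot(song["valence"] - target[0],
                          song["arousal"] - target[1])
    return [s["title"] for s in sorted(songs, key=dist)[:size]]

songs = [
    {"title": "A", "valence": 0.8, "arousal": 0.7},
    {"title": "B", "valence": -0.6, "arousal": -0.5},
    {"title": "C", "valence": 0.5, "arousal": -0.3},
    {"title": "D", "valence": -0.4, "arousal": 0.6},
]

print(thayer_quadrant(0.8, 0.7))            # quadrant of a happy, energetic song
print(mood_playlist(songs, (0.7, 0.6), 2))  # two songs nearest the target mood
```

A real system would replace the hard-coded valence/arousal values with regressor outputs computed from the extracted audio features.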