This document describes a mood-based music player application that uses facial expression recognition to determine a user's mood and generate an appropriate music playlist. The application first scans the user's device, analyzing each audio file and extracting features to categorize songs by genre or mood. It then applies the Viola-Jones algorithm to a captured photo to locate the user's face, classifies the detected facial expression, and deduces the current mood. The recognized mood is used to select the pre-generated playlist that matches it. The goal is to automate playlist generation from the user's current emotional state, providing a more personalized music listening experience.
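A minimal sketch of the song-categorization step is shown below, in Python and assuming the librosa library. The chosen features (tempo and RMS energy), the thresholds, and the mood labels are illustrative assumptions, since the document does not specify the exact analysis.

```python
import librosa
import numpy as np

def categorize_song(path: str) -> str:
    """Bucket one audio file into a coarse mood category."""
    y, sr = librosa.load(path, mono=True, duration=60.0)  # analyze first 60 s
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)        # estimated BPM
    tempo = float(np.atleast_1d(tempo)[0])                # force a scalar
    energy = float(np.mean(librosa.feature.rms(y=y)))     # loudness proxy
    # Illustrative thresholds only: fast and loud reads as "happy",
    # slow and quiet as "sad", everything else as "calm".
    if tempo >= 120 and energy >= 0.10:
        return "happy"
    if tempo < 90 and energy < 0.05:
        return "sad"
    return "calm"
```

In a scheme like this, the scan step would run `categorize_song` over every audio file on the device and group the results into the pre-generated playlists that the mood-matching step later selects from.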
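For the face-detection step, OpenCV's Haar cascade classifier is the standard implementation of the Viola-Jones framework. The sketch below covers detection only; the expression classifier that maps the detected face to a mood is not specified in the document, and the `MOOD_PLAYLISTS` mapping and function names here are hypothetical.

```python
import cv2

# Hypothetical mapping from a recognized mood to a pre-generated playlist.
MOOD_PLAYLISTS = {"happy": "happy_mix", "sad": "sad_mix", "neutral": "calm_mix"}

def detect_face(image_path: str):
    """Return the largest face (x, y, w, h) found by the Haar cascade, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: int(f[2]) * int(f[3]))  # largest by area
```

A downstream expression model would crop the returned region, predict a label such as "happy", and look that label up in `MOOD_PLAYLISTS` to select the matching playlist.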