This document describes a project that builds a convolutional neural network (CNN) to recognize six basic human emotions (angry, fear, happy, sad, surprise, neutral) from facial expressions. The CNN architecture consists of convolutional, max-pooling, and fully connected layers. Models are trained on two datasets, FERC and RaFD. Experimental results show that Model C achieves the best test accuracy: 71.15% on FERC and 63.34% on RaFD. Visualizations of activation maps and a prediction matrix are provided to analyze the model's performance and the confusions between emotions. A live demo application built with OpenCV demonstrates real-time emotion recognition from video frames.
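The exact layer configuration of the models is not given here. As a rough illustration of the conv → max-pool → fully-connected pipeline described above, the following NumPy sketch runs a forward pass over a single grayscale face crop and produces a six-way probability distribution. The kernel count, layer sizes, 48x48 input, and random weights are all assumptions for illustration, not the project's actual Model C.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels, bias):
    """Valid 2-D convolution of a single-channel image with K kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i+kh, j:j+kw] * kernels[k]) + bias[k]
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over each feature map."""
    K, H, W = x.shape
    H2, W2 = H // size, W // size
    x = x[:, :H2 * size, :W2 * size].reshape(K, H2, size, W2, size)
    return x.max(axis=(2, 4))

def forward(img, params):
    """conv + ReLU -> max pool -> flatten -> fully connected -> softmax."""
    h = np.maximum(conv2d(img, params["conv_w"], params["conv_b"]), 0.0)
    h = max_pool(h)
    logits = params["fc_w"] @ h.reshape(-1) + params["fc_b"]
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

EMOTIONS = ["angry", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical weights: 8 kernels of 5x5; a 48x48 input gives 44x44 maps,
# pooled to 22x22, so the flattened feature vector has 8 * 22 * 22 entries.
params = {
    "conv_w": rng.standard_normal((8, 5, 5)) * 0.1,
    "conv_b": np.zeros(8),
    "fc_w":   rng.standard_normal((6, 8 * 22 * 22)) * 0.01,
    "fc_b":   np.zeros(6),
}

img = rng.standard_normal((48, 48))          # stand-in for a face crop
probs = forward(img, params)
print(EMOTIONS[int(np.argmax(probs))])
```

A trained model would learn these weights by gradient descent on the labeled datasets; in a live demo, each video frame's detected face region would be cropped, resized to the network's input size, and passed through `forward` to get per-emotion probabilities.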