B8_Mini project_Final review ppt.pptx

  1. A MINI PROJECT On EMOTION-BASED-MUSIC-PLAYER BACHELOR OF TECHNOLOGY IN COMPUTER SCIENCE AND ENGINEERING BY Shamala Tejaswini (18VE1A05A8) Marpaka Shivani Reddy (18VE1A0589) Poddaturi Vishal (18VE1A05A0) Gangula Bani Vishwas (18VE1A0575) Under the Guidance of M. Sudhakar, Asst. Prof. ACADEMIC BATCH: 2018-2022
  2.  Abstract  Problem Statement  Literature Survey  Existing System  Drawbacks  Proposed System  Advantages  Implementation  Software and Hardware requirements  Design and Analysis  Architecture Diagram  Class Diagram  Use case Diagram  Sequence Diagram  Activity Diagram  Sample code  Test Case and Expected results  Testing and analysis  Results  Conclusion
  3.  A novel approach that provides the user with an automatically created playlist of songs based on the user's mood.  Music plays a very important role in people's daily lives and in modern advanced technologies.  The difficulties in creating large playlists can be overcome here.  The music player itself selects the songs based on the current mood of the user.
  4.  Existing methods for automating the playlist generation process are computationally slow, less accurate, and sometimes even require additional hardware such as sensors.  The proposed system, based on extraction of facial expressions, generates a playlist automatically, thereby reducing time and effort.  The accuracy of the real-time algorithm is 85-90%, while for static images it is 98-100%. (Contd.)
  5. Many factors contribute to conveying the emotions of an individual. Humans can recognize emotions with considerable accuracy. Knowledge already established in computer science can be used effectively and efficiently to find practical solutions for automatic recognition of facial emotions.
  6.  Various techniques and approaches have been proposed and developed to classify the human emotional state.  Facial feature extraction methods fall into two major categories: appearance-based feature extraction and geometry-based feature extraction.
  7.  Current music players have features like play, pause, shuffle, play next, play previous.
  8.  The detector is most effective only on frontal images of faces.  It is sensitive to lighting conditions.  Multiple detections of the same face may occur due to overlapping sub-windows.  Multiple faces in a single image are not handled.
  9.  The foremost concept of this project is to automatically play songs based on the emotions of the user.  It aims to provide user-preferred music with respect to the emotions detected. In the existing system, the user has to manually select the songs; randomly played songs may not match the mood of the user; and the user has to classify the songs into multiple emotions and then manually select a particular emotion before playing them.
  10.  Fast feature computation.  Efficient feature selection.  Ease of use.  Mixed mood detection.  Improved accuracy.  Reduced computational time.
  11. • Hardware requirements:  Device enabled with internet  2 GB RAM  4 GB Internal storage memory. • Software requirements:  OS : Windows 7 and above  Platform : OpenCV-Python
  12.  Our project detects the mood of the user and plays a song or playlist according to their mood.  The project uses a web camera to capture an image of the user, classifies the facial expression as happy, sad, neutral, or angry, and then plays a song matching the detected emotion (a sketch of this emotion-to-song step follows below).
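As a rough illustration of this last step, the sketch below maps each detected emotion to a folder of songs and opens a random track with the default Windows player. The folder layout, the EMOTION_DIRS mapping, and the play_song_for helper are illustrative assumptions, not part of the project's code.

```python
import os
import random

# Hypothetical mapping from a detected emotion to a folder of songs
# (folder names are illustrative assumptions, not taken from the project).
EMOTION_DIRS = {
    "happy":   r"C:\Music\happy",
    "sad":     r"C:\Music\sad",
    "angry":   r"C:\Music\angry",
    "neutral": r"C:\Music\neutral",
}

def play_song_for(emotion):
    """Pick a random track for the detected emotion and open it with the
    system's default music player (os.startfile is Windows-only)."""
    folder = EMOTION_DIRS.get(emotion, EMOTION_DIRS["neutral"])
    tracks = [f for f in os.listdir(folder)
              if f.lower().endswith((".mp3", ".wav"))]
    if tracks:
        os.startfile(os.path.join(folder, random.choice(tracks)))

# Example usage: play_song_for("happy")
```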
  13.  This study proposes a music recommendation system which works on an image of the user captured with the help of a camera attached to the computing platform. Once the picture has been captured, the frame from the webcam feed is converted to a grayscale image to improve the performance of the classifier used to identify the face in the picture. The grayscale image is then passed to the classifier algorithm which, with the help of feature extraction techniques, extracts the face from the frame of the web camera feed. Once the face is extracted, individual features are extracted from it and sent to the trained network to detect the emotion expressed by the user.
  14. (Contd.) The overall idea behind the system is to enhance the experience of the user and ultimately relieve some stress or lighten the user's mood. The user does not have to waste any time searching or looking up songs; the best track matching the user's mood is detected and played automatically by the music player. The image of the user is captured with the help of a webcam, and as per the mood/emotion of the user an appropriate song from the user's playlist is played, matching the user's requirement. A sketch of the capture-and-detect stage follows below.
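A minimal sketch of the capture-and-detect stage described above, assuming OpenCV's bundled frontal-face Haar cascade and a 350x350 face crop size; the exact classifier and crop size used in the project are not stated, so these are assumptions.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (assumed classifier choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # open the default webcam
ret, frame = cap.read()     # grab a single frame from the feed
cap.release()

if ret:
    # Convert the frame to grayscale to improve the classifier's performance.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect face regions; each detection is returned as (x, y, w, h).
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Crop and resize the face so it matches the input size expected by
        # the trained emotion model (350x350 is an assumed size).
        face = cv2.resize(gray[y:y + h, x:x + w], (350, 350))
```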
  15. For training we have used the FisherFace method: a train step to fit the model on labelled face images, steps to save and load the trained model, and a prediction step that returns the detected emotion together with a confidence value (see the sketch below).
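A rough sketch of how these FisherFace steps typically look with OpenCV (the FisherFace recognizer requires the opencv-contrib-python package); the placeholder training data, label encoding, and file name are assumptions for illustration.

```python
import cv2
import numpy as np

# Placeholder training data: in practice these would be equally sized grayscale
# face crops with integer emotion labels (e.g. 0=happy, 1=sad, 2=angry, 3=neutral).
training_images = [np.random.randint(0, 256, (350, 350), dtype=np.uint8)
                   for _ in range(8)]
training_labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Create the FisherFace recognizer.
model = cv2.face.FisherFaceRecognizer_create()

# Train the model on the labelled face images.
model.train(training_images, training_labels)

# Save the trained model to disk, then load it back.
model.write("emotion_model.xml")
model.read("emotion_model.xml")

# Predict the emotion of a face crop; predict() returns (label, confidence),
# where a lower confidence (distance) indicates a closer match.
label, confidence = model.predict(training_images[0])
print(label, confidence)
```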
  16. Test Case               Result
      Face Scanning           Success
      Feature Extraction      Success
      Emotion Recognition     Success
      Multiple Emotions       Failure
      Bad Light Conditions    Failure
  17.  System testing was carried out by the user once the development of the system was complete.  The purpose of this testing is to check the functionalities of the system and whether it is usable and well-functioning. (Sample detections are shown for Happy, Sad, Angry, and Neutral.)
  18. If the face detected is – Angry If the face detected is - Sad
  19. If the face detected is – Happy If the face detected is - Neutral
  20.  The overall time taken by the proposed system is about one second, which is very low, thus helping achieve better real-time performance and efficiency.
      Module                               Time taken (sec)
      Face Detection                       0.8126
      Facial Feature Extraction            0.9216
      Emotion Extraction Module            0.9994
      Emotion-Audio Integration Module     0.0006
      Proposed System                      1.0000
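The per-module times above could be collected with simple wall-clock timing around each stage. A minimal sketch follows; the detect_face, extract_features, predict_emotion, and play_song_for functions are hypothetical stand-ins for the project's modules.

```python
import time

def timed(stage_name, func, *args):
    """Run one pipeline stage and report its wall-clock time in seconds."""
    start = time.perf_counter()
    result = func(*args)
    print(f"{stage_name}: {time.perf_counter() - start:.4f} sec")
    return result

# Hypothetical stage functions standing in for the project's modules:
# face     = timed("Face detection",            detect_face, frame)
# features = timed("Facial feature extraction", extract_features, face)
# emotion  = timed("Emotion extraction",        predict_emotion, features)
# timed("Emotion-audio integration", play_song_for, emotion)
```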
  21.  The future scope of the system is to design a mechanism that would be helpful in music therapy and provide the music therapist with the help needed to treat patients suffering from disorders like mental stress, anxiety, acute depression, and trauma.  The proposed system also aims, in future work, to avoid the unpredictable results produced under extremely bad lighting conditions and very poor camera resolution.
  22.  https://www.researchgate.net/publication/267229317_Human_Emotion_Recognition_System  http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/  http://www.paulvangent.com/2016/06/30/making-an-emotion-aware-music-player/  https://www.geeksforgeeks/emotion-recognition