0907008
Presentation Transcript

  • Prediction of Time-Varying Musical Mood Distributions Using Kalman Filtering. Research by Erik M. Schmidt and Youngmoo E. Kim, Music and Entertainment Technology Laboratory (MET-lab). Presented by Sanjoy Dutta, Roll: 0907008, Department of Computer Science & Engineering, Khulna University of Engineering & Technology.
  • Objective: Enable more robust emotion-based music recommendation by modeling how the emotion expressed in music varies over time.
  • Keywords: Emotion recognition, Audio features, Regression, Linear dynamical systems, Kalman filtering
  • Emotion Recognition
  • Block diagram of modeling and predicting emotion in music through machine learning
  • Audio Feature Extraction
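This slide names the feature-extraction stage without listing the exact feature set. As one hedged illustration (not the paper's actual pipeline), a frame-wise spectral centroid, a common "brightness" feature in music emotion work, can be computed from a raw signal with NumPy alone; the function name, frame parameters, and test tone below are all illustrative:

```python
import numpy as np

def spectral_centroid(signal, sr=22050, frame_len=1024, hop=512):
    """Frame-wise spectral centroid (Hz): the magnitude-weighted mean
    frequency of each windowed frame, a simple brightness feature."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    centroids = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        centroids[i] = np.sum(freqs * mag) / (np.sum(mag) + 1e-10)
    return centroids

# Sanity check: for a pure 440 Hz tone the centroid sits near 440 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
c = spectral_centroid(tone, sr)
```

In practice such frame-level features would be aggregated over the same 1-second intervals as the A-V labels before regression.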
  • Ground Truth Data Collection: Participants use a graphical interface in the MoodSwings game to indicate a dynamic position within the A-V (arousal-valence) space, annotating five 30-second music clips. Each subject provides a check against the others, reducing the probability of nonsense labels.
  • MoodSwings Lite Corpus: A reduced dataset of 15-second music clips from 240 songs, selected using the original label set to approximate an even distribution across the four primary quadrants of the A-V space. These clips received intense focus within the game, yielding a corpus, referred to here as MoodSwings Lite, with significantly more labels per song clip; it is the dataset used in this analysis.
  • Data Preprocessing: Time-varying emotion distribution regression results for three example 15-second music clips (markers darken as time advances): second-by-second labels per song (gray bullets), standard deviation of the collected labels over 1-second intervals (red ellipse), and standard deviation of the distribution projected from acoustic features in 1-second intervals (blue ellipse). The covariance ellipses carry a heavy amount of noise, so the labels are preprocessed using a Kalman/Rauch-Tung-Striebel (RTS) smoother.
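A minimal sketch of the Kalman/RTS smoothing step named above, assuming for illustration a 1-D random-walk state model rather than the paper's actual model of the A-V label distributions; the noise variances `q` and `r` are invented values:

```python
import numpy as np

def rts_smooth(y, q=0.01, r=0.1):
    """Kalman filter forward pass + Rauch-Tung-Striebel backward pass
    for a 1-D random walk: x_t = x_{t-1} + w_t (var q), y_t = x_t + v_t (var r)."""
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); pp = np.zeros(n)   # predicted mean / variance
    x, p = y[0], 1.0
    for t in range(n):
        xp[t], pp[t] = x, p + q                 # predict
        k = pp[t] / (pp[t] + r)                 # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])          # update mean
        p = (1.0 - k) * pp[t]                   # update variance
        xf[t], pf[t] = x, p
    xs = xf.copy()                              # backward (RTS) pass
    for t in range(n - 2, -1, -1):
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

# Demo: smoothing a noisy random walk recovers the trajectory more closely.
rng = np.random.default_rng(0)
true = np.cumsum(rng.normal(0.0, 0.05, 200))
noisy = true + rng.normal(0.0, 0.3, 200)
smoothed = rts_smooth(noisy, q=0.05**2, r=0.3**2)
```

Because the backward pass uses future observations, the RTS smoother suppresses the label noise that the forward Kalman filter alone would leave in the covariance ellipses.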
  • Experiment Results
  • Experiment Analysis: The Kalman mixture provides the best result of any system; using only four clusters, it achieves an average KL divergence of 2.881, a significant improvement over the MLR system at 4.576 and the MLR mixture at 3.179. In terms of mean error, however, the Kalman and MLR mixtures produce nearly identical results, with normalized distances of about 0.109.
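The average KL figures above compare predicted and ground-truth A-V distributions. For Gaussians, KL divergence has a closed form; the sketch below computes it, with made-up example means and covariances:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D(N0 || N1) between two
    multivariate Gaussians N0=(mu0, cov0) and N1=(mu1, cov1)."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)          # scale mismatch
                  + diff @ inv1 @ diff           # mean mismatch
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Identical distributions give (numerically) zero divergence;
# shifting the mean makes it strictly positive.
mu = np.array([0.2, -0.1])
cov = np.array([[0.05, 0.0], [0.0, 0.08]])
same = gaussian_kl(mu, cov, mu, cov)
shifted = gaussian_kl(mu, cov, mu + 0.5, cov)
```

Averaging this quantity over 1-second intervals would yield per-system scores comparable in spirit to the 2.881 / 3.179 / 4.576 figures reported above.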
  • Conclusion: The state of musical emotion is analyzed in terms of a mathematical representation built on the A-V space. The authors propose that the most accurate representation of their ground truth is a distribution. Using a Kalman filtering approach, they are able to form robust estimates of this distribution and of how it evolves over time.
  • Thank You