SEASR Audio

Pathway to SEASR Workshop in March 2009 in North Carolina

Published in: Education, Technology

Transcript

  • 1. Pathways to SEASR: Audio Analysis (NEMA, NESTER). National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. The SEASR project and its Meandre infrastructure are sponsored by The Andrew W. Mellon Foundation.
  • 2. Defining Music Information Retrieval
      •  Music Information Retrieval (MIR) is the process of searching for, and finding, music objects, or parts of music objects, via a query framed musically and/or in musical terms
      •  Music objects: Scores, Parts, Recordings (WAV, MP3, etc.), etc.
      •  Musically framed query: Singing, Humming, Keyboard, Notation-based, MIDI file, Sound file, etc.
      •  Musical terms: Genre, Style, Tempo, etc.
  • 3. NEMA: Networked Environment for Music Analysis
      –  UIUC, McGill (CA), Goldsmiths (UK), Queen Mary (UK), Southampton (UK), Waikato (NZ)
      –  Multiple geographically distributed locations with access to different audio collections
      –  Distributed computation to extract a set of features and/or build and apply models
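The distributed pattern on this slide can be pictured as each partner site extracting features from its own local collection and returning only the results. The sketch below is illustrative, not NEMA's actual interface: the site URLs and the /extract endpoint are invented for the example, and a Python thread pool stands in for the real Meandre-based orchestration.

```python
# Hypothetical sketch: fan out feature extraction to partner sites and merge results.
import concurrent.futures
import requests

# Placeholder endpoints; the real NEMA services are not exposed like this.
SITES = {
    "uiuc": "https://uiuc.example.org/extract",
    "mcgill": "https://mcgill.example.org/extract",
}

def extract_at_site(url, collection_id):
    """Ask one remote site to extract features from its local audio collection."""
    resp = requests.post(url, json={"collection": collection_id}, timeout=600)
    resp.raise_for_status()
    return resp.json()  # e.g. {"track_id": [feature values], ...}

def gather_features(collection_id):
    """Run extraction at every site in parallel and merge the per-site results."""
    merged = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(extract_at_site, url, collection_id): name
                   for name, url in SITES.items()}
        for fut in concurrent.futures.as_completed(futures):
            merged[futures[fut]] = fut.result()
    return merged
```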
  • 4. SEASR @ Work – NEMA: executes a SEASR flow for each run
      –  Loads audio data
      –  Extracts features from every 10-second moving window of audio
      –  Loads models
      –  Applies the models
      –  Sends results back to the WebUI
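As a concrete illustration of that flow, the sketch below extracts features over consecutive 10-second windows and then applies a pre-trained model. It assumes librosa for audio loading and MFCCs as the example feature; the actual SEASR components, feature set, and models used by NEMA are not specified here.

```python
# Minimal sketch of the per-run flow: load audio, window it, featurize, apply a model.
import numpy as np
import librosa

WINDOW_SECONDS = 10

def extract_windowed_features(path, hop_seconds=10):
    """Mean MFCC vector for each 10-second window of the recording (assumed feature)."""
    audio, sr = librosa.load(path, sr=None, mono=True)
    window, hop = int(WINDOW_SECONDS * sr), int(hop_seconds * sr)
    vectors = []
    for start in range(0, max(len(audio) - window, 1), hop):
        chunk = audio[start:start + window]
        mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13)
        vectors.append(mfcc.mean(axis=1))  # one feature vector per window
    return np.vstack(vectors)

def run_flow(audio_paths, model):
    """Apply a pre-trained, scikit-learn-style model window by window."""
    results = {}
    for path in audio_paths:
        X = extract_windowed_features(path)
        results[path] = model.predict(X)  # e.g. a label per window
    return results  # in the real flow these results go back to the WebUI
```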
  • 5. NEMA Flow – Blinkie
  • 6. NEMA Vision
      •  Enable researchers at Lab A to easily build a virtual collection from Library B and Lab C,
      •  acquire the necessary ground truth from Lab D,
      •  incorporate a feature extractor from Lab E and combine the extracted features with those provided by Lab F,
      •  build a set of models based on a pair of classifiers from Labs G and H, and
      •  validate the results against another virtual collection taken from Lab I and Library J.
      •  Once completed, the results and newly created feature sets would, in turn, be made available for others to build upon.
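The core of that vision, combining feature sets from different sources and then training and validating a pair of classifiers, can be sketched with scikit-learn as a stand-in. The classifier choices and the accuracy metric below are placeholders for illustration, not what NEMA prescribes.

```python
# Illustrative sketch: merge two labs' features, train two classifiers, validate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def combine_feature_sets(features_e, features_f):
    """Concatenate per-track feature vectors from two sources (e.g. Lab E and Lab F)."""
    return np.hstack([features_e, features_f])

def build_and_validate(X_train, y_train, X_test, y_test):
    """Train a pair of classifiers and score them on a held-out virtual collection."""
    models = {"classifier_g": SVC(), "classifier_h": RandomForestClassifier()}
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        scores[name] = accuracy_score(y_test, model.predict(X_test))
    return scores
```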
  • 7. Do It Yourself (DIY) 1
  • 8. DIY Options
  • 9. DIY Job List
  • 10. DIY Job View
  • 11. Nester: Cardinal Annotation
      •  Audio tagging environment
      •  Green boxes indicate a tag made by a researcher
      •  Given these tags, automated approaches learn the tagged pattern and are applied to find occurrences that have not yet been tagged
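One simple way to realize that "learn from the researcher's tags" step is to treat tagged segments as training examples and score the untagged ones. The slide does not say which method Nester actually uses, so the classifier and threshold below are assumptions for illustration.

```python
# Hypothetical sketch: learn from tagged segments, then flag likely untagged matches.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propose_candidates(tagged_X, tagged_y, untagged_X, threshold=0.8):
    """Fit on researcher-tagged segments; return indices of untagged segments that match."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(tagged_X, tagged_y)  # tagged_y: 1 = the tagged pattern, 0 = anything else
    probs = clf.predict_proba(untagged_X)[:, 1]
    return np.flatnonzero(probs >= threshold)  # candidate segments for review
```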
  • 12. Nester: Cardinal Catalog View
  • 13. Examining the Audio Collection
      •  Tagged a set of examples as Male and Female
  • 14. Pathways to SEASR Audio. National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. The SEASR project and its Meandre infrastructure are sponsored by The Andrew W. Mellon Foundation.