Working Notes for the Placing Task at MediaEval 2012

  1. Placing Task. Organisers: Adam Rae (Yahoo! Research), Pascal Kelm (Technische Universität Berlin)
  2. Task Description
     • Given a video and its metadata, how accurately can it be placed on a map, i.e. assigned latitude and longitude coordinates?
  3. Task Overview
     • Automatic location annotation of online videos
     • 7 teams submitted results (up 17% on last year): 5 veterans, 2 new participants
     • First year of code sharing, via GitHub (currently)
  4. Data
     • Provided
       – Textual metadata: tags, titles, descriptions
       – Visual: 9 visual features extracted from key frames sampled every 4 seconds
       – Additional media: images with textual and visual feature data
     • Available (external): up to the participant, but controlled according to run submission
  5. Data
     • Training: 15,563 videos (a combination of last year's training and test data) plus 3,185,258 additional Flickr images
     • Test: 4,182 videos
  6. Evaluation
     • Take the latitude and longitude suggested by participants for each video
     • Compute the Haversine distance between that point and the 'true' location
     • Group results into buckets of increasing radii, e.g. 1 km, 10 km, 20 km, etc. (a minimal sketch of this procedure follows the slide list)
  7. Overall Best Results
     [Bar chart: percentage of correct locations @ 1 km per team – TUD, ICSI, TUB, GENT, UNICAMP, IRISA, CEA LIST, with organiser-connected teams marked; axis from 0% to 30%]
  8. Only Restriction: No new material, gazetteer permitted
     [Cumulative curves: correct test videos (0 to 4,500) against distance from ground truth (1 to 100,000, log scale) for ICSI, TUD, UG-CU, UNICAMP, CEA_LIST and the London Baseline]
  9. Restriction: Visual Only
     [Cumulative curves: correct test videos (0 to 4,500) against distance from ground truth (1 to 100,000, log scale) for CEA_LIST, ICSI, IRISA, UG-CU, UNICAMP and TUB]
  10. Detected trends and activity of note
      • What classes of approaches were taken (has this changed since last year?)
        – Textual, visual
        – Graph modelling
        – User modelling
        – ...and combinations of the above (a rough sketch of a simple textual approach follows the slide list)
      • Challenging assumptions: does spatial locality imply visual stability?
      • Absolute performance lower than last year, but:
        – Different data set
        – Less textual metadata in general
  11. Future of the task
      • Still room for improvement
      • Still a valuable task?
      • Standard of science is improving
      • New organisers needed! Talk to Pascal and me
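The deck does not include the scoring code, but slide 6 describes the procedure closely enough for a minimal Python sketch: compute the Haversine distance between the submitted and true coordinates, then report the fraction of test videos falling within buckets of increasing radius. The function names, the exact set of radii beyond the examples on the slide, and the decision to count unplaced videos as incorrect are assumptions, not part of the official evaluation tool.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in degrees."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def bucket_accuracy(predictions, ground_truth, radii_km=(1, 10, 20, 100, 1000, 10000)):
    """Fraction of test videos whose prediction lies within each radius of the
    true location. Both arguments map video id -> (lat, lon). Videos with no
    prediction count as incorrect at every radius (an assumption)."""
    hits = {r: 0 for r in radii_km}
    for video_id, true_loc in ground_truth.items():
        pred_loc = predictions.get(video_id)
        if pred_loc is None:
            continue
        d = haversine_km(pred_loc[0], pred_loc[1], true_loc[0], true_loc[1])
        for r in radii_km:
            if d <= r:
                hits[r] += 1
    total = len(ground_truth)
    return {r: hits[r] / total for r in radii_km}

# Toy check: one prediction ~130 m off, one on the wrong continent.
truth = {"v1": (52.5200, 13.4050), "v2": (51.5074, -0.1278)}
preds = {"v1": (52.5210, 13.4060), "v2": (40.7128, -74.0060)}
print(bucket_accuracy(preds, truth, radii_km=(1, 10, 100, 1000, 10000)))
# -> {1: 0.5, 10: 0.5, 100: 0.5, 1000: 0.5, 10000: 1.0}
```

The log-spaced radii mirror the x-axis of the cumulative curves on slides 8 and 9.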
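Slide 10 only names the classes of approaches taken. As a rough illustration of the simplest class, the textual one, here is a hedged sketch of a tag-based predictor; it is not any participant's actual method. The rarest-tag heuristic, the naive centroid aggregation and the fixed fallback prior (London, echoing the 'London Baseline' curve on slide 8) are all assumptions chosen for brevity.

```python
from collections import defaultdict

def build_tag_index(training_items):
    """Map each tag to the (lat, lon) pairs of the training items that carry it.
    training_items: iterable of (tags, (lat, lon)) pairs from training videos/images."""
    index = defaultdict(list)
    for tags, location in training_items:
        for tag in tags:
            index[tag].append(location)
    return index

def predict_location(tags, index, prior=(51.5074, -0.1278)):
    """Treat the rarest query tag seen in training as the most location-specific
    cue and return the centroid of its training locations; fall back to a fixed
    prior (London, an assumption) when no tag matches. Averaging degrees ignores
    the antimeridian, a deliberate simplification."""
    seen = [(len(index[t]), t) for t in tags if t in index]
    if not seen:
        return prior
    _, rarest = min(seen)
    locations = index[rarest]
    lat = sum(p[0] for p in locations) / len(locations)
    lon = sum(p[1] for p in locations) / len(locations)
    return (lat, lon)

# Toy usage: two geotagged training items, one test video tagged "brandenburgertor".
index = build_tag_index([
    (["berlin", "brandenburgertor"], (52.5163, 13.3777)),
    (["berlin", "fernsehturm"], (52.5208, 13.4094)),
])
print(predict_location(["holiday", "brandenburgertor"], index))  # -> (52.5163, 13.3777)
```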
