1. A METRIC FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT FOR HD TV DELIVERY BASED ON SALIENCY MAPS
H. BOUJUT*, J. BENOIS-PINEAU*, T. AHMED*, O. HADAR** & P. BONNET***
*LaBRI UMR CNRS 5800, University of Bordeaux, France
**Communication Systems Engineering Dept., Ben Gurion University of the Negev, Israel
***AudematWorldCast Systems Group, France
ICME 2011 – Workshop on Hot Topics in Multimedia Delivery (HotMD’11)
2011-07-11
2. Overview
- Introduction
- Focus of Attention and Saliency Maps
- Our approach: the Weighted Macro Block Error Rate (WMBER), a no-reference video quality metric based on saliency maps
- Prediction of subjective quality metrics from objective quality metrics
- Evaluation and results
- Conclusion and future work
3. Introduction
Motivation:
- VQA for HD broadcast applications
- Measure the influence of transmission loss on perceived quality
Video quality assessment protocols:
- Full Reference (FR)
  - SSIM (Z. Wang, A. Bovik)
  - A novel perceptual metric for video compression (A. Bhat, I. Richardson), PCS’09
  - Evaluation of temporal variation of video quality in packet loss networks (C. Yim, A. C. Bovik), Image Communication 26 (2011)
- Reduced Reference (RR)
  - A Convolutional Neural Network Approach for Objective Video Quality Assessment (P. Le Callet, C. Viard-Gaudin, D. Barba), IEEE Transactions on Neural Networks 17
- No Reference (NR)
  - No-reference image and video quality estimation: Applications and human-motivated design (S. Hemami, A. Reibman), Image Communication 25 (2010)
In this work:
- NR VQA with visual saliency in the H.264/AVC framework
Contributions:
- Visual saliency map extraction during the compression process
- WMBER NR quality metric
- Prediction of subjective quality metrics from objective quality metrics
4. Focus of Attention and Saliency maps
- FOA is mostly attracted by salient areas which stand out from the visual scene.
- FOA shifts sequentially over the salient areas.
- Salient stimuli are mainly due to:
  - High color contrast
  - Motion
  - Edge orientation
[Figure: original frame and its saliency map, Tractor sequence (TUM/VQEG)]
5. Saliency maps (1/2)
Several methods for saliency map extraction already exist in the literature.
All methods work in the same way [O. Brouard, V. Ricordel and D. Barba, 2009], [S. Marat et al., 2009]:
- Extraction of the spatial saliency map (static pathway)
- Extraction of the temporal saliency map (dynamic pathway)
- Fusion of the spatial and the temporal saliency maps (fusion)
[Figure: spatial, temporal, and spatio-temporal saliency maps]
6. Saliency maps (2/2)
In this work we re-used the saliency map extraction method published at IS&T Electronic Imaging 2011:
- Based on the saliency model of O. Brouard, V. Ricordel and D. Barba.
- Uses partial decoding of the H.264 stream to achieve real-time performance.
- A fusion method to combine spatial and temporal saliency maps was proposed there.
We propose a new fusion method; a rough sketch of the two extraction pathways follows.
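As an illustration of the two pathways, here is a minimal sketch: the spatial map is approximated by luma contrast and the temporal map by residual motion after global-motion compensation. The function names, the gradient-based spatial proxy, and the assumption that macro-block motion vectors are upsampled to pixel resolution are ours, not the authors' exact model.

```python
import numpy as np

def spatial_saliency(frame_yuv):
    """Static pathway (sketch): spatial saliency from luma contrast.

    frame_yuv: (H, W, 3) float array.  The real model uses a perceptually
    motivated color/contrast analysis; the gradient magnitude used here
    is only a crude stand-in.
    """
    y = frame_yuv[..., 0]
    gy, gx = np.gradient(y)
    s = np.hypot(gx, gy)
    return s / (s.max() + 1e-12)  # normalize to [0, 1]

def temporal_saliency(mv_field, global_motion):
    """Dynamic pathway (sketch): residual motion after removing the
    estimated camera motion, as in Brouard et al.

    mv_field: (H, W, 2) motion-vector field, assumed upsampled from the
    H.264 macro-block motion vectors (reusing coded motion vectors is
    what makes the partial-decoding approach fast).
    global_motion: (H, W, 2) estimated camera-motion field.
    """
    residual = mv_field - global_motion
    s = np.linalg.norm(residual, axis=-1)
    return s / (s.max() + 1e-12)
```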
7. Saliency map fusion (1/2)
We use the multiplication fusion method and the logarithm fusion method, both weighted with a 5-visual-degree 2D Gaussian 2DGauss(s), as baselines to compare with our proposed fusion method.
[Figure: spatio-temporal saliency map]
8. Saliency map fusion (2/2)
To produce the spatio-temporal saliency map, we also propose a new fusion method, the square-sum fusion (see the sketch after this list):
- Similar fusion properties to the existing fusion methods
- Gives more weight to regions which have both:
  - High spatial saliency
  - High temporal saliency
- Does not produce a null spatio-temporal saliency when the temporal saliency is very low.
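The fusion formulas appeared as rendered equations on the original slides and are not recoverable from this transcript, so the forms below are assumptions chosen only to match the properties stated above; `gauss2d` stands in for the 2DGauss(s) center weighting, with sigma left as a parameter since the pixel size of 5 visual degrees depends on viewing distance.

```python
import numpy as np

def gauss2d(shape, sigma):
    """Center-weighted 2D Gaussian (the '2DGauss(s)' center bias)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return np.exp(-(((xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2)
                    / (2.0 * sigma ** 2)))

def fuse_mul(s_sp, s_t, g):
    # Multiplication fusion: vanishes wherever either map is zero.
    return g * s_sp * s_t

def fuse_log(s_sp, s_t, g):
    # Logarithm fusion (assumed form): compresses the product's
    # dynamic range.
    return g * np.log1p(s_sp * s_t)

def fuse_square(s_sp, s_t, g):
    # Square-sum fusion (assumed form, chosen to match the stated
    # properties): high where both maps are high, and still non-zero
    # when the temporal saliency is very low.
    return g * (s_sp ** 2 + s_t ** 2 + s_sp * s_t)
```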
9. WMBER Vq metric based on saliency maps (1/3)
- The Weighted Macro Block Error Rate (WMBER) is a No Reference metric.
- Visual attention is concentrated on the areas highlighted by the saliency map.
- Video transmission artifacts may change the saliency map.
- We therefore propose to extract the saliency maps from the broadcast (disturbed) video stream itself.
- WMBER also relies on MB error detection in the bit stream:
  - DC/AC and MV error detection
  - Error propagation according to the H.264 decoding process
- WMBER is based on:
  - MB error detection
  - Weighting by saliency maps
[Figure: original transmission errors and their propagation]
10. WMBER Vq metric based on saliency maps (2/3)
[Block diagram: the decoder outputs the MB error map and the decoded frame; the gradient energy computed from the decoded frame is combined with the MB error map, multiplied by the saliency map, and summed; this sum is normalized by the gradient map energy (GME) to yield the WMBER.]
11. WMBER Vq metric based on saliency maps (3/3)
- When MB errors cover the whole frame and the energy of the gradient is high:
  - WMBER is high (near 1.0)
- When there are no MB errors or the energy of the gradient is low:
  - WMBER is low (near 0.0)
- The WMBER of a video sequence is the average WMBER of its frames; a sketch of the computation follows.
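A minimal sketch of the computation described by the slide-10 block diagram, assuming the exact weighting: errors are weighted by saliency and gradient energy, then normalized by the total saliency-weighted gradient energy (GME), which keeps the value in [0, 1] as slide 11 describes.

```python
import numpy as np

def wmber_frame(mb_error_map, grad_energy, saliency):
    """Per-frame WMBER (sketch; the exact weighting is an assumption).

    mb_error_map: (H, W) binary map, 1 on macro-blocks hit by a
    detected or propagated transmission error.
    grad_energy:  (H, W) gradient energy of the decoded frame.
    saliency:     (H, W) spatio-temporal saliency map.
    """
    weighted = saliency * grad_energy
    return np.sum(mb_error_map * weighted) / (np.sum(weighted) + 1e-12)

def wmber_sequence(frames):
    """Sequence-level WMBER: the average of the per-frame values.
    frames is an iterable of (mb_error_map, grad_energy, saliency)."""
    return float(np.mean([wmber_frame(*f) for f in frames]))
```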
12. Subjective Experiment
Conducted according to:
- VQEG Report on Validation of the Video Quality Models for High Definition Video Content (June 2010)
- ITU-R Rec. BT.500-11
20 HDTV (1920x1080 pixels) video sources (SRC) from:
- The Open Video Project: www.open-video.org
- NTIA/ITS
- TUM/Taurus Media Technik
- French HDTV
Goal: measure the influence of transmission loss on perceived quality.
2 loss models:
- IP model (ITU-T Rec. G.1050)
- RF (Radio Frequency) model
8 loss profiles were compared, giving 160 Processed Video Streams (PVS).
35 participants took part.
MOS values were computed for each SRC and PVS.
[Photo: experiment room]
13. Subjective experiment results
[Figure: MOS results chart]
14. Prediction of subjective quality metrics from objective quality metrics
- We propose to use a supervised learning method to predict MOS values from WMBER or MSE.
- This prediction method is called the similarity-weighted average.
- It requires a training data set of n known pairs (xi, yi) to predict y from x.
- Here the (xi, yi) pairs are WMBER or MSE values associated with MOS values.
- y is the predicted MOS for a given WMBER/MSE value x.
- The prediction is performed with a weighted mean over the training pairs (a weighted-mean classifier), as in the sketch below.
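The prediction formula itself was an equation on the slide; below is a minimal sketch of a similarity-weighted average in the Nadaraya-Watson style, where each training MOS is weighted by the similarity between the query metric value and the training value. The Gaussian kernel and the bandwidth `h` are our assumptions.

```python
import numpy as np

def predict_mos(x, x_train, y_train, h=0.05):
    """Similarity-weighted average (sketch): predict the MOS for an
    objective metric value x from training pairs (x_train, y_train)."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    # Gaussian similarity between x and each training value (assumed kernel).
    w = np.exp(-((x - x_train) ** 2) / (2.0 * h ** 2))
    # Weighted mean of the training MOS values.
    return float(np.sum(w * y_train) / (np.sum(w) + 1e-12))
```

For example, `predict_mos(0.12, wmber_train, mos_train)` returns the MOS of training samples whose WMBER is close to 0.12, smoothly interpolated.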
15. Evaluation and results
We compare 6 objective video quality metrics:
- MSE
- WMBER using the 5 v/deg 2D Gaussian (WMBER2DGauss)
- WMBER using the multiplication fusion (WMBERmul)
- WMBER using the log sum fusion (WMBERlog)
- WMBER using the square sum fusion (WMBERsquare)
- WMBER using the spatial saliency map (WMBERsp)
All metrics are computed for each of the 160 PVS + 20 SRC.
6 data sets are built, each with 180 Objective Metric/MOS pairs.
Each data set is split into 2 equal parts: a training set and an evaluation set.
The Pearson Correlation Coefficient (PCC) is used for the evaluation, with cross-validation (see the sketch below).
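A sketch of one evaluation fold under the protocol above; the randomized split, the single fold, and the `evaluate_metric` name are our simplifications, since the slide does not spell out the exact cross-validation procedure.

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate_metric(metric_vals, mos_vals, predictor, seed=0):
    """Split the metric/MOS pairs into equal training and evaluation
    halves, predict MOS on the held-out half with `predictor` (e.g. the
    predict_mos sketch above), and return the PCC between predicted and
    measured MOS."""
    x = np.asarray(metric_vals, dtype=float)
    y = np.asarray(mos_vals, dtype=float)
    idx = np.random.default_rng(seed).permutation(len(x))
    tr, ev = idx[: len(x) // 2], idx[len(x) // 2:]
    preds = [predictor(v, x[tr], y[tr]) for v in x[ev]]
    pcc, _ = pearsonr(preds, y[ev])
    return float(pcc)
```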
16. Conclusion and future work
- We were interested in the problem of objective video quality assessment over lossy channels.
- We followed the recent trends in the definition of spatio-temporal saliency maps for FOA.
- New no-reference metric: the WMBER, based on saliency maps.
- We brought a new solution for saliency map fusion: the square-sum fusion.
- We proposed a supervised learning method to predict the subjective quality metric MOS from objective quality metrics:
  - The similarity-weighted average.
  - It gives better results than the conventional approach, polynomial fitting.
- We intend to improve the saliency model to better account for:
  - Transmission artifacts
  - The masking effect in the neighborhood of high-saliency areas.
- We plan to evaluate the WMBER on the IRCCyN/IVC Eyetracker SD 2009_12 Database.
17. Thank you for your attention. Any questions?
