Edge-Based Feature Extraction for Artifacts Detection and Error
Pattern Analysis from Broadcasted Videos
Supervised by
Prof. Oksam Chae
Md. Mehedi Hasan, 2010315443
Image Processing Lab,
Department of Computer Engineering
Kyung Hee University, Korea
2012.05.08
Presentation Outline
Introduction
• Objectives
• Challenges
Contributions
Related Works
Proposed Artifact Detection and Error Pattern Analysis
• The Proposed Video Artifacts Measure and Error Frame Detection
• The Proposed Spatial Error Block Analysis System
Experimental Results
Conclusion and Future Work
Introduction
Objectives
• To build a system that detects video artifacts arising not only from compression (block-based) but also during transmission or broadcasting.
• To reduce the time complexity of conventional pixel-based detection methods, which require large memory and long computation time.
• To select a lightweight human-vision measurement and a detection mechanism that identifies distorted frames in real time.
• To introduce an error block classification and analysis method that can be used in video restoration, error concealment, video retrieval and many other commercial applications.
Noise and error model for broadcasting and surveillance systems
Introduction
Challenges
Video artifact detection and distorted-pattern analysis is difficult:
 Videos are distorted by compression-based, wireless-transmission-based and broadcasting-based artifacts.
 In image and video communication the original image or video is not accessible; such no-reference assessment is a challenging research issue.
 Compression-based artifacts stay confined to a block-based pattern (typically 8 by 8), but wireless-transmission and broadcasting-related artifacts are not always block-aligned.
 A real-time application must not only report the quality measure but also detect the distorted frames in the video.
 The error patterns of defective frames must be classified and analyzed so that they can be used in video restoration, error concealment and retrieval.
Introduction
Sample Videos and Images
Courtesy: Samples provided by KBS
Proposed Method

Video Artifacts Measure and Error Frame Detection
• Contains edge magnitude and direction.
• Less sensitive to illumination variation and noise.
• Extracts frames from videos and analyzes them.
• Incorporates the Kirsch mask to detect edge pixels (see the sketch after this list).
• Can detect candidate frames with high disruption within the frame sequence (temporal information).
• More gradient directions are analyzed for complex environments.

Spatial Error Block Analysis (SEBA)
• Block classification is done in three steps.
• Edge blocks and texture-content blocks are analyzed.
• Error block analysis is incorporated for better accuracy and can be used in error concealment and restoration.
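As a rough illustration of the edge feature used here, below is a minimal Kirsch-mask sketch in Python/NumPy. The eight compass kernels are the standard Kirsch set; taking the maximum response as the magnitude and its index as the direction is a common convention, but the exact normalization and any thresholding used in this work are not shown on the slides and are assumptions.

```python
import numpy as np

# The 8 standard Kirsch compass kernels (N, NW, W, SW, S, SE, E, NE).
KIRSCH = [np.array(k, dtype=np.float32) for k in (
    [[ 5,  5,  5], [-3,  0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5,  0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5,  0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5,  0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3,  0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3,  0,  5], [-3,  5,  5]],   # SE
    [[-3, -3,  5], [-3,  0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3,  0,  5], [-3, -3, -3]],   # NE
)]

def kirsch_edges(gray):
    """Return per-pixel edge magnitude and dominant compass direction (0..7)."""
    h, w = gray.shape
    padded = np.pad(gray.astype(np.float32), 1, mode='edge')
    responses = np.empty((8, h, w), dtype=np.float32)
    for i, k in enumerate(KIRSCH):
        # Explicit 3x3 correlation so the sketch relies only on NumPy.
        acc = np.zeros((h, w), dtype=np.float32)
        for dy in range(3):
            for dx in range(3):
                acc += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
        responses[i] = acc
    magnitude = responses.max(axis=0)      # strongest compass response
    direction = responses.argmax(axis=0)   # index of that response
    return magnitude, direction
```

In this reading, the per-pixel magnitude would feed the distortion measure and the direction would feed the direction histograms used later in the block analysis.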
Related Works
Artifact Measure and Error Frame Detection
Generate Distortion Metric
 Further, the locations of the compression block boundaries may be detected by observing where the maximum correlation value occurs.
 The resulting correlation values are combined into a picture-quality rating for the image, which represents the amount of human-perceivable block degradation introduced into the video signal.
 Combining the results in a simple way yields a metric that shows promising performance with respect to practical reliability, prediction accuracy, and computational efficiency. (A simplified sketch of the boundary-localization idea follows.)
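The sketch below illustrates the boundary-localization idea in Python/NumPy: a column-gradient profile is folded onto the possible 8-pixel grid phases, the phase with the maximum accumulated value marks the block boundaries, and the ratio of boundary energy to mean energy acts as a crude blockiness score. The period of 8, the use of simple first differences, and the ratio score are illustrative assumptions, not the exact metric of this work.

```python
import numpy as np

def blockiness_profile(gray, period=8):
    """Locate the 8x8 block-grid phase and return a crude blockiness score."""
    g = gray.astype(np.float32)
    # Average absolute horizontal gradient at every column boundary.
    col_profile = np.mean(np.abs(np.diff(g, axis=1)), axis=0)   # length w-1
    # Accumulate the profile into the `period` possible grid phases.
    phase_energy = np.zeros(period, dtype=np.float32)
    for phase in range(period):
        phase_energy[phase] = col_profile[phase::period].mean()
    grid_phase = int(phase_energy.argmax())        # where the correlation peaks
    # Blockiness: boundary-phase energy relative to the overall mean energy.
    score = float(phase_energy[grid_phase] / (col_profile.mean() + 1e-6))
    return grid_phase, score
```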
Distortion Metric for Error Frames
Error Frame Detection
 To compute the distortion measure of every frame, we compare its deviation with that of the previous frame. If the value is within a certain threshold, the frame is considered a successful, undistorted frame; otherwise it is considered distorted and forwarded to the next report-results module. (A minimal sketch of this check follows the formula below.)
The frame-level measure accumulates the per-block measures of a frame:

$F_r = \sum_{n=0}^{N} B_{msr}(n)$

Processing flow: calculate mean of frames → deviation of frames → criteria function.
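Below is a minimal sketch of the frame-level check described above, assuming the per-block measures B_msr(n) have already been computed. The function names, the use of a single fixed threshold on the frame-to-frame deviation, and treating the first frame as undistorted are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def frame_measure(block_measures):
    """F_r: accumulate the per-block measures B_msr(n) of one frame."""
    return float(np.sum(block_measures))

def detect_error_frames(frame_measures, threshold):
    """Flag frames whose measure deviates from the previous frame by more
    than `threshold`; the first frame is assumed undistorted."""
    flags = [False]
    for prev, curr in zip(frame_measures[:-1], frame_measures[1:]):
        flags.append(abs(curr - prev) > threshold)
    return flags
```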
Spatial Error Block Analysis
Proposed System
Flowchart of Block Analysis
Spatial Error Block Analysis pipeline:
Detected error frame → Edge direction classification (Sobel mask in 60 gradient directions; magnitude and histogram accumulation) → Error block classification → Block shape and rotation formulation (convolution mask and shift matching) → Forward parameters for error concealment, restoration and retrieval.
Edge Direction & Error Block Classification
Edge Direction Classification
Uniform block: the gray level of the block may be constant or nearly so, i.e., there is no obvious edge in the block.
Edge block: a few edges pass through the block, and the direction of each edge, in general, changes little or not at all.
Texture block: both gray level and edge direction vary significantly in the block, so the edge magnitudes of many directions are very strong.
Figures: error block classification (step 1), histogram accumulation, error block classification (step 2). (A sketch of the per-block histogram accumulation follows.)
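A minimal sketch of the magnitude-weighted direction-histogram accumulation for one block, assuming 60 direction bins over 360° (the slides state 60 gradient directions but not the angular span) and using central differences as a stand-in for the Sobel masks:

```python
import numpy as np

def block_direction_histogram(gray, y, x, size=8, bins=60):
    """Magnitude-weighted histogram of quantized gradient directions for one block."""
    block = gray.astype(np.float32)[y:y + size, x:x + size]
    gx = np.zeros_like(block)
    gy = np.zeros_like(block)
    gx[:, 1:-1] = block[:, 2:] - block[:, :-2]   # central difference as a Sobel-x proxy
    gy[1:-1, :] = block[2:, :] - block[:-2, :]   # central difference as a Sobel-y proxy
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)
    bin_idx = np.minimum((angle / (360.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins, dtype=np.float32)
    np.add.at(hist, bin_idx.ravel(), magnitude.ravel())   # accumulate magnitudes per bin
    return hist
```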
Error Block Classification (2)
Bin Reduction:
 The bin reduction of the histogram of gradients is used for classifying the edge blocks and texture blocks.
 It can also be used to improve the speed and performance of our algorithm.
 Bins 59, 0, 1, 14, 15, 16, 29, 30, 31, 44, 45 and 46 contribute the most to texture blocks (see the sketch below).
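A heavily hedged sketch of how the bin reduction could drive the uniform/edge/texture decision: the texture-bin list comes from the slide, while the thresholds and the energy-ratio rule are illustrative guesses rather than the actual classifier.

```python
import numpy as np

# Bins the slide reports as most indicative of texture blocks.
TEXTURE_BINS = [59, 0, 1, 14, 15, 16, 29, 30, 31, 44, 45, 46]

def classify_block(hist, uniform_thresh=50.0, texture_ratio=0.6):
    """Classify an 8x8 block from its 60-bin direction histogram (thresholds are guesses)."""
    total = float(hist.sum())
    if total < uniform_thresh:                  # hardly any gradient energy: uniform
        return "uniform"
    reduced = float(hist[TEXTURE_BINS].sum())   # bin reduction: keep texture bins only
    if reduced / total > texture_ratio:         # energy spread over the texture bins
        return "texture"
    return "edge"
```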
Block Rotation and Shape Formulation
Histogram characteristics → Convolution mask → Phase offset calculation → Block matching and shifting.
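The phase-offset step can be pictured as finding the circular shift that best aligns a block's direction histogram with a reference histogram; the sketch below uses a plain circular correlation, since the actual convolution mask is not reproduced on the slides, and the mapping of bins to degrees assumes a full 360° span.

```python
import numpy as np

def phase_offset(hist, ref_hist):
    """Estimate rotation as the circular shift that best aligns `hist` to `ref_hist`."""
    hist = np.asarray(hist, dtype=np.float32)
    ref_hist = np.asarray(ref_hist, dtype=np.float32)
    bins = len(ref_hist)
    best_shift, best_score = 0, -np.inf
    for shift in range(bins):
        score = float(np.dot(np.roll(hist, -shift), ref_hist))  # circular correlation
        if score > best_score:
            best_shift, best_score = shift, score
    degrees_per_bin = 360.0 / bins      # assumes the bins span a full circle
    return best_shift * degrees_per_bin, best_score
```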
Experimental Results (1)
Pearson and Spearman correlation for the FUB database:
Algorithm     Pearson Correlation   Spearman Correlation
Block_msr     -.721                 .685
MGBIM [9]     -.597                 .584
S [63]        .614                  .570

Pearson and Spearman correlation for the LIVE database:
Algorithm     Pearson Correlation   Spearman Correlation
Block_msr     -.843                 .838
MGBIM [9]     -.727                 .925
S [63]        .944                  .937

Test results using different approaches on the MPEG-2 video dataset:
Approaches               Pearson Corr.   Spearman Corr.   RMSE
Wu and Yuen's [9]        .6344           .7365            7.1869
Vlachos' [65]            .5378           .7930            7.0183
Pan et al.'s [66]        .6231           .6684            8.4497
Perra et al.'s [67]      .6916           .6531            8.4357
Pan et al.'s [68]        .5008           .6718            8.1979
Muijs & Kirenko's [69]   .7875           .6939            7.9394
Proposed Method          .8627           .7104            7.0236
Experimental Results (2)
Comparison of different algorithms showing the detection rate of distorted frames (Recall / Precision, %):
Sequence            Wu et al.'s      Pan et al.'s     Muijs et al.'s   Proposed
LIVE: BlueSky       87.01 / 87.02    88.31 / 98.27    88.31 / 98.27    86.35 / 95.40
LIVE: Pedestrian    88.88 / 88.03    83.34 / 93.31    67.29 / 77.24    76.74 / 96.52
LIVE: RiverBed      76.58 / 86.50    87.57 / 97.57    64.28 / 74.89    75.54 / 92.26
LIVE: RushHour      77.64 / 87.54    86.83 / 96.83    68.80 / 78.02    77.63 / 90.60
LIVE: ParkRun       78.08 / 82.05    77.35 / 97.32    66.20 / 76.23    85.47 / 95.49
OCN: One            69.44 / 89.28    77.77 / 93.33    63.89 / 79.31    83.33 / 96.77
OCN: Mr.Big         70.23 / 88.67    79.41 / 90.94    68.56 / 75.42    85.58 / 98.11
OCN: Swim           66.87 / 85.72    84.56 / 95.24    65.55 / 78.56    88.23 / 95.46
Experimental Results (3)
Rotation Formulation and Bin Reduction (figures): pattern orientation calculation considering the histogram bins; bin reduction, i.e., selection of the significant histogram bins.
Histogram Accumulation of Match and Shifting (figure).
Experimental Results (4)
Discussion and Decision
High-priority bins for the decision: 32, (88, 90, 92) and 128 [matched case: high accumulation].
Second-priority bins for the decision: (14, 15, 16) and (44, 45, 46) [accumulation ranges from high (fully unmatched) down to zero (partially matched)]. A heavily hedged sketch of this decision rule follows.
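The sketch below only encodes the priority rule stated above, treating the match-and-shift accumulation as a mapping from bin index to accumulated value; the bin lists come from the slide, while the thresholds, the dictionary representation, and the label names are assumptions made for illustration.

```python
HIGH_PRIORITY = [32, 88, 90, 92, 128]        # high accumulation => matched
SECOND_PRIORITY = [14, 15, 16, 44, 45, 46]   # high => unmatched, near zero => partial match

def match_decision(acc, high_thresh, low_thresh):
    """Turn the match/shift accumulation `acc` (dict: bin -> value) into a label."""
    if all(acc.get(b, 0.0) >= high_thresh for b in HIGH_PRIORITY):
        return "matched"
    second = [acc.get(b, 0.0) for b in SECOND_PRIORITY]
    if all(v >= high_thresh for v in second):
        return "unmatched"
    if all(v <= low_thresh for v in second):
        return "partially matched"
    return "undecided"
```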
Conclusion and Future Work
Major Contribution 1: We have proposed an efficient video artifact measurement and error frame detection method that does not restrict itself to compression-based artifacts.
Major Contribution 2: Our error block analysis algorithm is less sensitive to illumination variation and noise. Moreover, it can deal not only with traditional artifacts but also with wireless-transmission and broadcasting-related artifacts.
Major Contribution 3: Our analysis method can formulate the distortion pattern's rotation and shape in a later stage, which can be used in video restoration, concealment and retrieval.
Future Work: We will use the analytical parameters for video error concealment. How to incorporate this information in the next step is a challenging research issue.
Publication List
SCI/SCIE Indexed Journals
1. Md. Mehedi Hasan, Kiok Ahn, Mahbub Murshed, Oksam Chae; "Hawkeye: A Cloud Architecture for Automated Video Error Detection in Real-time", INFORMATION Journal (Accepted: 12th April, 2012) (SCIE) [ISSN: 1343-4500, E-ISSN: 1344-8994].
2. Md. Mehedi Hasan, Kiok Ahn, JeongHeon Lee, SM Zahid Ishraque, Oksam Chae; "Fast and Reliable Structure-Oriented Distortion Measure for Video Processing", Advanced Science Letters (Accepted: 6th December, 2011) (SCIE, IF: 1.253) [ISSN: 1936-6612, E-ISSN: 1936-7317].
International Journals
1. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Faster Detection of Independent Lossy Compressed Block Errors in Images and Videos", International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 5, no. 1, pp. 151-164, March 2012 [ISSN: 2005-4254].
2. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Measuring Blockiness of Videos using Edge Enhancement Filtering", SIP, Communications in Computer and Information Science (CCIS), vol. 260, pp. 10-19, January 2012 (Springer-Verlag, Berlin-Heidelberg) [ISSN: 1865-0929, ISBN: 978-3-642-27182-3].
International Conference Papers
1. Md. Mehedi Hasan, Kiok Ahn, Md. Shariful Haque, Oksam Chae; "Blocking Artifact Detection by Analyzing the Distortions of Local Properties in Images", ICCIT 2011, 14th International Conference on Computer and Information Technology, IEEE Xplore, pp. 475-480, Dec. 22-24, 2012 [ISBN: 978-1-61284-907-2].
2. Md. Mehedi Hasan, Kiok Ahn, SM Zahid Ishraque, Oksam Chae; "Hawkeye: Real-time Video Error Detection Using Cloud Computing Platform", AIM 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, pp. 121-122, Seoul, Korea, Feb. 6-8, 2012.
3. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Measuring Artifacts of Broadcasted Videos by Accumulating Edge Gradient Magnitude", YSEC 2012, Proceedings of the 37th KIPS Spring Conference, Korea, April 26-28, 2012.
4. Md. Mehedi Hasan, Kiok Ahn, Mohammad Shoyaib, Oksam Chae; "Content-Based Error Detection and Concealment for Video Transmission over WLANS", AIM Summer 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, Jeju, Korea, July 10-12, 2012 [Accepted].
5. Mahbub Murshed, SM Zahid Ishraque, Md. Mehedi Hasan, Oksam Chae; "Cloud Architecture for Lossless Image Compression by Efficient Bit-Plane Similarity Coding", AIM 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, pp. 123-124, Seoul, Korea, Feb. 6-8, 2012.
6. Minsun Park, Md. Mehedi Hasan, Jaemyun Kim, Oksam Chae; "Hand Detection and Tracking Using Depth and Color Information", IPCV 2012, The 2012 International Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas, USA, July 16-19, 2012.
Questions and Comments