This document presents a method for edge-based feature extraction to detect artifacts and analyze error patterns in broadcast videos. The method uses edge magnitude and direction features, which are comparatively insensitive to noise. Detected error frames are analyzed to classify error blocks according to edge direction, texture content, and shape. Experimental results show that the method achieves high accuracy in detecting distorted frames and analyzing error patterns compared with other approaches. Future work will apply the error analysis to video error concealment.
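As a concrete illustration of the kind of features the method relies on, a minimal edge magnitude/direction extractor can be sketched with Sobel kernels. This is a generic from-scratch NumPy version, not the paper's implementation; the kernel choice and the synthetic test image are assumptions:

```python
import numpy as np

def sobel_edge_features(gray):
    """Compute per-pixel edge magnitude and direction with Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)                # edge magnitude
    ang = np.degrees(np.arctan2(gy, gx))  # edge direction in degrees
    return mag, ang

# A vertical step edge: magnitude peaks near the boundary column,
# and the gradient direction is horizontal (0 degrees).
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag, ang = sobel_edge_features(img)
```

Classifying blocks by dominant edge direction, as the paper does, would then amount to histogramming `ang` weighted by `mag` inside each block.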
The most common defects are flicker, dirt, dust, and line scratches. Here we consider line scratch detection. Line scratches appear as thin bright or dark lines, usually straight and vertical. The restoration of old videos is of primary interest because of the great quantity of old film records, but manual digital restoration of videos is a time-consuming process. Detecting scratches in film or video is a difficult task because of the varied characteristics of the defects. The main problems in line scratch detection are sensitivity to noise and texture, and occasional false detections caused by thin structures that belong to the scene. In this method, a robust and automatic algorithm for frame-level scratch detection in videos is combined with a temporal algorithm for filtering out false detections. Some of the constraints used during detection are relaxed, so that more candidate scratches are detected; effectiveness without external parameters is then achieved by combining an a-contrario methodology with local statistical estimation, which quickly suppresses spurious detections in textured regions. The temporal filtering algorithm eliminates false detections caused by vertical scene structures by exploiting the coherence of motion in the video.
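The a-contrario formulation in the paper is statistically more careful; as a rough sketch of why thin vertical scratches stand out, one can flag columns whose mean intensity deviates strongly from a local median background. The window size `k` and threshold `thresh` here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def detect_vertical_scratches(gray, thresh=30.0):
    """Flag columns whose mean intensity deviates strongly from a local
    horizontal median -- a crude stand-in for the a-contrario test."""
    col_profile = gray.mean(axis=0)  # average intensity of each column
    # Local background estimate: median over a sliding window of columns.
    k = 5
    padded = np.pad(col_profile, k, mode="edge")
    background = np.array([np.median(padded[i:i + 2 * k + 1])
                           for i in range(col_profile.size)])
    deviation = col_profile - background
    return np.where(np.abs(deviation) > thresh)[0]  # candidate scratch columns

# Synthetic frame: flat grey background with one bright 1-pixel-wide column.
frame = np.full((64, 64), 100.0)
frame[:, 20] = 220.0
cols = detect_vertical_scratches(frame)
```

The temporal filtering stage would then discard candidates that move coherently with the scene across frames, since true scratches stay fixed while scene structures follow the motion.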
Video Key-Frame Extraction using Unsupervised Clustering and Mutual Comparison - CSCJournals
Key-frame extraction is one of the important steps in semantic-concept-based video indexing and retrieval, and the accuracy of video concept detection depends highly on the effectiveness of the key-frame extraction method. Extracting key-frames efficiently and effectively from video shots is therefore considered a very challenging research problem in video retrieval systems. One of many approaches to extracting key-frames from a shot is to use unsupervised clustering: depending on the salient content of the shot and the results of clustering, key-frames can be extracted. But usually, because of the visual complexity and/or the content of the video shot, we tend to get near-duplicate or repetitive key-frames with the same semantic content in the output, and hence the accuracy of key-frame extraction decreases. In an attempt to improve accuracy, we proposed a novel key-frame extraction method based on unsupervised clustering and mutual comparison, in which we assigned 70% weightage to the color component (HSV histogram) and 30% to texture (GLCM) when computing a combined frame similarity index used for clustering. We then proposed a mutual comparison of the key-frames extracted from the clustering output, in which each key-frame is compared with every other one to remove near-duplicate key-frames. The proposed algorithm is computationally simple and able to detect non-redundant, unique key-frames for the shot, and as a result improves the concept detection rate. Its efficiency and effectiveness are validated on open-database videos.
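The 70/30 weighting can be sketched as follows. This is a simplified single-channel stand-in: the paper combines a full HSV histogram with GLCM texture features, whereas here a grayscale histogram intersection and a single GLCM statistic (contrast) are used for brevity, and the bin counts are assumptions:

```python
import numpy as np

def hist_similarity(a, b, bins=16):
    """Normalised-histogram intersection on one channel (HSV in the paper)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 256), density=True)
    return np.minimum(ha, hb).sum() / max(ha.sum(), 1e-9)

def glcm_contrast(gray, levels=8):
    """Contrast of a horizontal co-occurrence matrix (one GLCM feature)."""
    q = (gray.astype(float) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= max(glcm.sum(), 1e-9)
    idx = np.arange(levels)
    return float((glcm * (idx[:, None] - idx[None, :]) ** 2).sum())

def combined_similarity(f1, f2):
    color = hist_similarity(f1, f2)
    texture = 1.0 / (1.0 + abs(glcm_contrast(f1) - glcm_contrast(f2)))
    return 0.7 * color + 0.3 * texture  # weights from the paper

# Identical frames score 1.0; a very different frame scores lower.
a = np.tile(np.arange(64) * 4, (64, 1))
b = np.full((64, 64), 10)
s_same = combined_similarity(a, a)
s_diff = combined_similarity(a, b)
```

Clustering frames on this similarity, then mutually comparing the cluster representatives, reproduces the two stages the abstract describes.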
An Efficient Method For Gradual Transition Detection In Presence Of Camera Mo... - ijafrc
Gradual transition detection is one of the most important issues in the field of video indexing and retrieval. Among the various types of gradual transitions, fades and dissolves are the most common, but they are also the most difficult to detect. In most existing fade and dissolve detection algorithms, false detections caused by motion are a serious problem. In this paper we present a novel gradual transition detection algorithm that integrates local key-points with the twin-comparison method and can correctly distinguish fades and dissolves from object and camera motion.
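The twin-comparison part of the algorithm can be sketched independently of the key-point stage. This is the classic textbook form; the thresholds `t_low` and `t_high` are illustrative values:

```python
import numpy as np

def twin_comparison(frame_diffs, t_low=5.0, t_high=30.0):
    """Classic twin-comparison: a frame difference above t_low opens a
    candidate gradual transition; it is confirmed when the accumulated
    difference since the start exceeds t_high."""
    transitions, start, acc = [], None, 0.0
    for i, d in enumerate(frame_diffs):
        if start is None:
            if d >= t_low:
                start, acc = i, d
        else:
            if d >= t_low:
                acc += d
                if acc >= t_high:
                    transitions.append((start, i))
                    start, acc = None, 0.0
            else:
                start, acc = None, 0.0  # candidate aborted
    return transitions

# Ten small diffs, then a run of moderate diffs forming a dissolve.
diffs = [1.0] * 10 + [8.0] * 5 + [1.0] * 5
result = twin_comparison(diffs)
```

Motion produces the same moderate per-frame differences as a dissolve, which is why the paper adds local key-point matching on top of this rule to reject motion-induced candidates.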
Threshold based filtering technique for efficient moving object detection and... - eSAT Journals
Abstract: Detection and tracking of moving objects is an important research area in video surveillance. Object tracking is used in several applications such as video compression, surveillance, and robotics. Although much research on video object detection has been carried out recently, object detection accuracy and background object detection in video frames still pose demanding issues. In this paper, a novel framework called Threshold Filtered Video Object Detection and Tracking (TFVODT) is designed for effective detection and tracking of moving objects. The TFVODT framework takes a video file as input and segments the video frames using Median Filter-based Enhanced Laplacian Thresholding, improving video quality by reducing the mean square error. Next, a Color Histogram-based Particle Filter is applied to the segmented objects for video object tracking; it evaluates the likelihood function and the particle posterior and prior functions based on the Bayes sequential estimation model to improve tracking accuracy. Finally, object detection is performed with the help of an Improvisation of Enhanced Laplacian Threshold (IELT) to enhance detection accuracy and to recognize moving background objects. The TFVODT framework is evaluated on videos obtained from the Internet Archive 501(c)(3) and compared with existing object detection techniques using performance metrics such as object segmentation accuracy, Peak Signal to Noise Ratio, object tracking accuracy, Mean Square Error, and object detection accuracy on moving video object frames. Experimental analysis shows that the TFVODT framework improves video object detection accuracy by 18% and reduces the Peak Signal to Noise Ratio by 23% compared with state-of-the-art works.
Keywords: Object segmentation, Object tracking, Object Detection, Enhanced Laplacian Thresholding, Median
Filter, Color Histogram-based Particle Filter
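The particle-weighting step of a color-histogram particle filter can be sketched as follows. This is a generic textbook version (a Bhattacharyya likelihood over patch histograms), not the TFVODT implementation; the patch size, bin count, and synthetic frame are all assumptions:

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalised colour histograms."""
    return float(np.sum(np.sqrt(p * q)))

def particle_weights(frame, particles, ref_hist, box=8, bins=8):
    """Weight each particle (x, y) by how well the colour histogram of the
    patch around it matches the reference histogram of the tracked object."""
    weights = []
    for x, y in particles:
        patch = frame[y:y + box, x:x + box]
        h, _ = np.histogram(patch, bins=bins, range=(0, 256))
        h = h / max(h.sum(), 1e-9)
        weights.append(bhattacharyya(h, ref_hist))
    w = np.asarray(weights)
    return w / max(w.sum(), 1e-9)  # normalised posterior weights

# Bright 8x8 object at (16, 16) on a dark background.
frame = np.zeros((64, 64))
frame[16:24, 16:24] = 200.0
ref, _ = np.histogram(frame[16:24, 16:24], bins=8, range=(0, 256))
ref = ref / ref.sum()
particles = [(16, 16), (40, 40)]
w = particle_weights(frame, particles, ref)
```

In a full filter these weights drive resampling, and the state estimate is the weighted mean of the particle positions.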
VIDEO SEGMENTATION & SUMMARIZATION USING MODIFIED GENETIC ALGORITHM - ijcsa
Video summarization of segmented video is an essential process for video thumbnails, video surveillance, and video downloading. Summarization extracts a few frames from each scene and creates a summary video that conveys the full video's course of action within a short duration. The proposed research work discusses the segmentation and summarization of frames. A genetic algorithm (GA) for segmentation and summarization is used to view the highlights of an event by selecting the few important frames required. The GA is modified to select only key frames for summarization, and the modified GA is compared with the standard GA.
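The abstract does not spell out the GA modification; as a generic illustration of GA-based key-frame selection, one might evolve subsets of frame indices against a pairwise-dissimilarity fitness. The population size, mutation rate, elitist selection, and fitness choice below are all assumptions:

```python
import numpy as np

def ga_keyframe_selection(dissim, k, pop=30, gens=40, seed=0):
    """Toy GA: evolve subsets of k frame indices that maximise the summed
    pairwise dissimilarity among the selected frames."""
    rng = np.random.default_rng(seed)
    n = dissim.shape[0]

    def fitness(ind):
        return float(dissim[np.ix_(ind, ind)].sum())

    population = [sorted(rng.choice(n, size=k, replace=False).tolist())
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]  # elitist selection
        children = []
        while len(children) < pop - len(survivors):
            a = survivors[rng.integers(len(survivors))]
            b = survivors[rng.integers(len(survivors))]
            pool = sorted(set(a) | set(b))  # crossover gene pool
            if rng.random() < 0.2:          # mutation: inject a random gene
                pool = sorted(set(pool) | {int(rng.integers(n))})
            child = sorted(rng.choice(pool, size=k, replace=False).tolist())
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Frames spread along a line; dissimilarity = absolute distance.
vals = np.array([0, 1, 2, 3, 50, 51, 52, 100, 101], dtype=float)
dissim = np.abs(vals[:, None] - vals[None, :])
best = ga_keyframe_selection(dissim, k=3)
```

A dissimilarity-maximising fitness rewards summaries that spread across distinct scenes rather than picking several near-identical frames.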
Comparative Study of Various Algorithms for Detection of Fades in Video Seque... - theijes
In the multimedia environment, digital data has gained great importance in daily routine. Large volumes of video, such as entertainment, news, cartoon, and sports video, are accessed by the masses to meet their different needs. Within video processing, shot boundary detection is a current research area: it has a strong impact on effective browsing, retrieval, and searching of video, and serves as the starting point for constructing the content structure of videos. Video processing technology has the crucial job of extracting valid information from videos without loss. This paper surveys various novel algorithms for detecting fade-in and fade-out, proposed by renowned researchers using different methods, and also emphasizes the core concepts underlying the different detection schemes for the most common video transition effect: fades.
Robust image processing algorithms, involving tools from digital geometry and... - Antoine Vacavant
A recurrent problem in image analysis is the presence of image data uncertainties, generically called noise. In the literature, the capacity of an algorithm to withstand such data alteration is named robustness, but without any clear formalism for image processing. During this talk, I first recall the original and foundational definition of robustness for image processing algorithms presented in my last CBA talk (fall 2016), which considers multiple scales of noise. Then, I present robust techniques supported by tools from digital geometry (DG) and mathematical morphology (MM); this part deals respectively with skeletonization and Reeb graph calculation, and with smoothed shock denoising and enhancement filtering. The next part is devoted to recent advances in defining robustness more accurately and to combining the DG and MM approaches for applications in liver biomedical image analysis. The talk finishes with research leads on robust image processing approaches oriented towards biomedical applications, involving numerical simulation and machine learning.
TARGET DETECTION AND CLASSIFICATION PERFORMANCE ENHANCEMENT USING SUPERRESOLU... - sipij
Long-range infrared videos, such as the Defense Systems Information Analysis Center (DSIAC) videos, usually do not have high resolution. In recent years there have been significant advances in video super-resolution algorithms. Here, we summarize our study on the use of super-resolution videos for target detection and classification. We observed that super-resolution videos can significantly improve detection and classification performance. For example, for 3000 m range videos, we were able to improve the average precision of target detection from 11% (without super-resolution) to 44% (with 4x super-resolution) and the overall accuracy of target classification from 10% (without super-resolution) to 44% (with 2x super-resolution).
PRACTICAL APPROACHES TO TARGET DETECTION IN LONG RANGE AND LOW QUALITY INFRAR... - sipij
It is challenging to detect vehicles in long-range, low-quality infrared videos using deep learning techniques such as You Only Look Once (YOLO), mainly due to small target size: small targets do not have detailed texture information. This paper focuses on practical approaches for target detection in infrared videos using deep learning techniques. We first investigated a newer version of You Only Look Once (YOLO v4). We then proposed a practical and effective approach of training the YOLO model using videos from longer ranges. Experimental results using real infrared videos ranging from 1000 m to 3500 m demonstrated huge performance improvements. In particular, the average detection percentage over the six ranges from 1000 m to 3500 m improved from 54% when the 1500 m videos were used for training to 95% when the 3000 m videos were used.
Low Level Feature Extraction:
Basic features that can be extracted automatically from an image without any shape information (information about spatial relationships):
- edge detection
- motion detection
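Both of these low-level features can be sketched in a few lines of NumPy. The thresholds and synthetic frames below are illustrative assumptions:

```python
import numpy as np

def edge_map(gray, thresh=50.0):
    """Tiny edge detector: gradient magnitude from forward differences."""
    gx = np.abs(np.diff(gray.astype(float), axis=1, append=gray[:, -1:]))
    gy = np.abs(np.diff(gray.astype(float), axis=0, append=gray[-1:, :]))
    return (gx + gy) > thresh

def motion_map(prev_gray, cur_gray, thresh=25.0):
    """Frame differencing: pixels whose intensity changed between frames."""
    return np.abs(cur_gray.astype(float) - prev_gray.astype(float)) > thresh

frame1 = np.zeros((32, 32))
frame1[8:16, 8:16] = 255.0   # a bright square
frame2 = np.zeros((32, 32))
frame2[8:16, 12:20] = 255.0  # the square moved right by 4 pixels
edges = edge_map(frame1)
moving = motion_map(frame1, frame2)
```

Note that neither feature needs any shape information: both are purely local pixel operations, which is exactly what makes them "low level".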
Feature Extraction from Video Data for Indexing and Retrieval - IRJET Journal
Amanpreet Kaur, Rimanpal Kaur, "Feature Extraction from Video Data for Indexing and Retrieval", International Research Journal of Engineering and Technology (IRJET), Vol. 2, Issue 01, March 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net
Abstract
In recent years, multimedia storage has grown and the cost of storing multimedia data has fallen, so huge numbers of videos are now available in video repositories. With the growth of multimedia data types and available bandwidth there is strong demand for video retrieval systems, as users shift from text-based to content-based retrieval. The selection of extracted features plays an important role in content-based video retrieval, regardless of which video attributes are under consideration. These features are intended for selecting, indexing, and ranking according to their potential interest to the user; good feature selection also reduces the time and space costs of the retrieval process. This survey reviews the interesting features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods.
Recognition and tracking moving objects using moving camera in complex scenes - IJCSEA Journal
In this paper, we propose a method for effectively tracking moving objects in videos captured by a moving camera in complex scenes. The video sequences may contain highly dynamic backgrounds and illumination changes. The proposed method involves four main steps. First, the video is stabilized using an affine transformation. Second, frames are selected intelligently so that only those with a considerable change in content are extracted; this step reduces complexity and computational time. Third, the moving object is tracked using a Kalman filter and a Gaussian mixture model. Finally, object recognition using a bag of features is performed in order to recognize the moving objects.
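The Kalman filter step can be sketched with a standard constant-velocity model. This is a generic 1-D textbook version, not the paper's tracker; the process and measurement noise parameters are assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 1-D position measurements.
    State is [position, velocity]; only position is observed."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # measurement model
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y                      # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Object moving at 2 px/frame with small alternating measurement noise.
true = [2.0 * t for t in range(20)]
noisy = [p + n for p, n in zip(true, [0.5, -0.4] * 10)]
est = kalman_track(noisy)
```

In a 2-D tracker the state would typically be [x, y, vx, vy], with the Gaussian mixture model supplying the foreground measurements that feed the update step.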
Key frame extraction is an essential technique in the computer vision field. The extracted key frames should summarize the salient events with excellent feasibility, great efficiency, and a high level of robustness. This is not an easy problem to solve because it involves many visual features. This paper addresses the problem by investigating the relationship between the detection of these features and the accuracy of key frame extraction techniques using TRIZ. An improved algorithm for key frame extraction is then proposed, based on accumulative optical flow with a self-adaptive threshold (AOF_ST) as recommended by the TRIZ inventive principles. Several video shots, including original and forged videos under complex conditions, are used to verify the experimental results. Comparison with state-of-the-art algorithms showed that the proposed extraction algorithm can accurately summarize the videos and generates a meaningful, compact number of key frames. Moreover, the proposed algorithm achieves compression rates of 124.4 and 31.4 for the best and worst cases of key frames extracted from the KTH dataset, while the state-of-the-art algorithms achieved 8.90 in the best case.
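The AOF_ST details are in the paper; as a rough sketch of the accumulate-and-threshold idea, frame differencing can stand in for optical flow magnitude. The `factor` multiplier and the synthetic frames are assumptions:

```python
import numpy as np

def accumulative_motion_keyframes(frames, factor=3.0):
    """Accumulate per-frame motion magnitude (frame differencing stands in
    for optical flow here) and emit a key frame each time the accumulator
    passes a self-adaptive threshold derived from the mean motion."""
    motion = [np.abs(b.astype(float) - a.astype(float)).mean()
              for a, b in zip(frames[:-1], frames[1:])]
    threshold = factor * float(np.mean(motion))  # adapts to the shot
    keys, acc = [], 0.0
    for i, m in enumerate(motion):
        acc += m
        if acc > threshold:
            keys.append(i + 1)  # frame following the accumulated motion
            acc = 0.0
    return keys

# Mostly static shot with two abrupt content changes.
vals = [0, 0, 0, 50, 50, 50, 100, 100, 100]
frames = [np.full((8, 8), float(v)) for v in vals]
keys = accumulative_motion_keyframes(frames)
```

Because the threshold is derived from the shot's own motion statistics, quiet shots and busy shots both yield a proportionate number of key frames, which is the point of the self-adaptive design.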
Key Frame Extraction in Video Stream using Two Stage Method with Colour and S... - ijtsrd
Key frame extraction, i.e. the summarization of videos for applications such as video object recognition and classification, video retrieval and archival, and surveillance, is an active research area in computer vision. This paper describes a new criterion for well-representative key frames and, correspondingly, a key frame selection algorithm based on a two-stage method. The two-stage method extracts accurate key frames that cover the content of the whole video sequence. First, an alternative sequence is obtained based on the color characteristic difference between adjacent frames of the original sequence. Second, by analyzing the structural characteristic difference between adjacent frames of the alternative sequence, the final key frame sequence is obtained. An optimization step is then added, based on the number of final key frames, to ensure the effectiveness of key frame extraction. Khaing Thazin Min | Wit Yee Swe | Yi Yi Aung | Khin Chan Myae Zin, "Key Frame Extraction in Video Stream using Two-Stage Method with Colour and Structure", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27971.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-processing/27971/key-frame-extraction-in-video-stream-using-two-stage-method-with-colour-and-structure/khaing-thazin-min
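The two-stage idea can be sketched as follows. The colour and structural measures here (a normalised histogram L1 difference and a crude edge-density signature) are simplified stand-ins for those in the paper, and both thresholds are assumptions:

```python
import numpy as np

def hist_diff(a, b, bins=16):
    """Normalised L1 histogram difference: 0 = identical, 1 = disjoint."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum() / (2 * max(a.size, 1))

def edge_density(gray, thresh=40.0):
    """Fraction of strong horizontal gradients -- a crude structural cue."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    return float((gx > thresh).mean())

def two_stage_keyframes(frames, t_color=0.3, t_struct=0.1):
    """Stage 1 keeps frames whose colour histogram departs from the last
    kept frame; stage 2 re-filters by change in the structural signature."""
    stage1 = [0]
    for i in range(1, len(frames)):
        if hist_diff(frames[stage1[-1]], frames[i]) > t_color:
            stage1.append(i)
    keys = [stage1[0]]
    for i in stage1[1:]:
        if abs(edge_density(frames[i]) - edge_density(frames[keys[-1]])) > t_struct:
            keys.append(i)
    return keys

# Five flat frames, then five striped frames (new colour and new structure).
flat = np.zeros((32, 32))
stripes = np.tile(np.array([0.0, 255.0]), (32, 16))
frames = [flat] * 5 + [stripes] * 5
keys = two_stage_keyframes(frames)
```

The structural second pass is what removes frames that changed colour statistics (e.g. a lighting shift) without any real change in scene content.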
Design and Analysis of Quantization Based Low Bit Rate Encoding System - ijtsrd
The objective of this paper is to develop low bit rate encoding for vector quantization (VQ) problems such as real-time image coding. The decision tree is generated by an offline process, and a new systolic architecture realizing the encoder of full-search VQ for high-speed applications is presented. Over the past decades, digital video compression technologies have become an integral part of many systems. A further aim is to improve image quality in remote cardiac pulse measurement using an adaptive filter, and the approach used for feature extraction from many images is described. The paper presents a real-time image compression application that can be efficiently interfaced with hardware; a Raspberry Pi is used for the compression. We have developed an algorithm for endoscopic images based on differential pulse code modulation. The compressor consists of a low-cost YEF colour space converter and a variable-length predictive algorithm for lossless compression. Mr. Nilesh Bodne | Dr. Sunil Kumar, "Design and Analysis of Quantization Based Low Bit Rate Encoding System", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29289.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/29289/design-and-analysis-of-quantization-based-low-bit-rate-encoding-system/mr-nilesh-bodne
Dynamic Threshold in Clip Analysis and Retrieval - CSCJournals
Key frame extraction can be helpful in video summarization, analysis, indexing, browsing, and retrieval, and clip analysis of key frame sequences is an open research issue. The paper deals with the identification and extraction of key frames using a dynamic threshold, followed by video retrieval. The number of key frames extracted for each shot depends on the activity level of the shot. The system uses statistics of comparisons between successive frames within a shot, computed from color histograms, together with a dynamic threshold. Two program interfaces are linked for clip analysis and for entropy-based video indexing and retrieval. The proposed system is tested on a few video sequences, and the extracted key frames and retrieval results are shown.
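The dynamic-threshold idea can be sketched as follows. The mean-plus-standard-deviation rule and the `alpha` scaling are assumptions standing in for the paper's exact statistic:

```python
import numpy as np

def dynamic_threshold_keyframes(frames, alpha=1.0, bins=16):
    """Pick frames whose histogram difference to the previous frame exceeds
    mean + alpha * std of all differences within the shot, so the threshold
    adapts to how active the shot is."""
    diffs = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        hp, _ = np.histogram(prev, bins=bins, range=(0, 256))
        hc, _ = np.histogram(cur, bins=bins, range=(0, 256))
        diffs.append(np.abs(hp - hc).sum())
    diffs = np.asarray(diffs, dtype=float)
    t = diffs.mean() + alpha * diffs.std()  # shot-adaptive threshold
    return [i + 1 for i, d in enumerate(diffs) if d > t]

# Eight identical frames, then an abrupt change in content.
frames = [np.full((16, 16), 50.0)] * 8 + [np.full((16, 16), 200.0)] * 2
keys = dynamic_threshold_keyframes(frames)
```

Because the threshold is computed from the shot's own difference statistics, a high-activity shot yields more key frames than a quiet one without any manual tuning, which matches the abstract's claim.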
Key frame extraction for video summarization using motion activity descriptors - eSAT Journals
Abstract: Summarization of a video involves providing a gist of the entire video without affecting its semantics. This has been implemented using motion activity descriptors, which capture the relative motion between consecutive frames. Correctly capturing the motion in a video leads to the identification of its key frames. The motion is obtained using block matching techniques, an important part of this process, implemented here with two methods, Diamond Search and Three Step Search, which have been studied and compared. The comparison is carried out across videos differing in category, content, and objects. It is found that there is a trade-off between summarization factor and precision during the summarization process. Keywords: Video Summarization, Motion Descriptors, Block Matching
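The Three Step Search variant can be sketched as follows. The block size, step schedule, and SAD cost are the usual textbook choices; the synthetic frames are illustrative:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return float(np.abs(a - b).sum())

def three_step_search(ref, cur, bx, by, block=8, step=4):
    """Three Step Search: test the 8 neighbours at the current step size,
    move to the best match, halve the step, and repeat.
    Returns the motion vector (dx, dy) for the block at (bx, by) in cur."""
    tgt = cur[by:by + block, bx:bx + block]
    mx, my = bx, by
    while step >= 1:
        best = (mx, my, sad(ref[my:my + block, mx:mx + block], tgt))
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                x, y = mx + dx, my + dy
                if 0 <= x <= ref.shape[1] - block and 0 <= y <= ref.shape[0] - block:
                    c = sad(ref[y:y + block, x:x + block], tgt)
                    if c < best[2]:
                        best = (x, y, c)
        mx, my = best[0], best[1]
        step //= 2
    return mx - bx, my - by

# A block at (8, 8) in the reference appears at (14, 8) in the current frame.
ref = np.zeros((32, 32))
ref[8:16, 8:16] = 255.0
cur = np.zeros((32, 32))
cur[8:16, 14:22] = 255.0
mv = three_step_search(ref, cur, 14, 8)
```

Summing the magnitudes of these per-block vectors over a frame gives exactly the kind of motion activity descriptor the abstract uses to rank frames.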
Key frame extraction for video summarization using motion activity descriptorseSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
SENSITIVITY OF A VIDEO SURVEILLANCE SYSTEM BASED ON MOTION DETECTIONsipij
The implementation of a stand-alone system developed in JAVA language for motion detection has been discussed. The open-source OpenCV library has been adopted for video surveillance image processing thus implementing Background Subtraction algorithm also known as foreground detection algorithm. Generally the region of interest of a body or object to detect is related to a precise objects (people, cars, etc.) emphasized on a background. This technique is widely used for tracking a moving objects. In particular, the BackgroundSubtractorMOG2 algorithm of OpenCV has been applied. This algorithm is based on Gaussian distributions and offers better adaptability to different scenes due to changes in lighting and the detection of shadows as well. The implemented webcam system relies on saving frames and creating GIF and JPGs files with previously saved frames. In particular the Background Subtraction function, find Contours, has been adopted to detect the contours. The numerical quantity of these contours has been compared with the tracking points of sensitivity obtained by setting an user-modifiable slider able to save the frames as GIFs composed by different merged JPEGs. After a full design of the image processing prototype different motion test have been performed. The results showed the importance to consider few sensitivity points in order to obtain more frequent image storages also concerning minor movements.Sensitivity points can be modified through a slider function and are inversely proportional to the number of saved images. For small object in motion will be detected a low percentage of sensitivity points.Experimental results proves that the setting condition are mainly function of the typology of moving object rather than the light conditions. The proposed prototype system is suitable for video surveillance smart
camera in industrial systems.
Blur Detection Methods for Digital Images-A SurveyEditor IJCATR
This paper described various blur detection methods along with proposed method. Digital photos are massively produced
while digital cameras are becoming popular; however, not every photo has good quality. Blur is one of the conventional image quality
degradation which is caused by various factors like limited contrast; inappropriate exposure time and improper device handling indeed,
blurry images make up a significant percentage of anyone's picture collections. Consequently, an efficient tool to detect blurry images
and label or separate them for automatic deletion in order to preserve storage capacity and the quality of image collections is needed.
There are various methods to detect the blur from the blurry images some of which requires transforms like DCT or Wavelet and some
doesn‟t require transform.
Motion detection in compressed video using macroblock classificationacijjournal
n this paper, to detect the moving objects between frames in compressed video and to obtain the bes
t
compression video
and the noiseless video. We describe a video in which frames by classifying
macroblocks (MB), and describe motion estimation (ME), motion vector field (MV) and motion
compensation (MC). we propose to classify Macroblocks of each video frame into different
classes and use
this class information to describe the frame content based on the motion vector. MB class informatio
n
video applications such as shot change detection, motion discontinuity detection, Outlier rejection
for
global motion estimation. To reduc
e the noise and to improve the clarity of the compressed video by using
contrast limited adaptive histogram equalization (CLAHE) Algorithm.
1. Edge-Based Feature Extraction for Artifacts Detection and Error
Pattern Analysis from Broadcasted Videos
Supervised by
Prof. Oksam Chae
Md. Mehedi Hasan, 2010315443
Image Processing Lab,
Department of Computer Engineering
Kyung Hee University, Korea
2012.05.08
2. Presentation Outline
Introduction
•Objectives
•Challenges
Contributions
Related Works
Proposed Artifact Detection and Error Pattern Analysis
•The Proposed Video Artifacts Measure and Error Frame Detection
•The Proposed Spatial Error Block Analysis System
Experimental Results
Conclusion and Future Work
3. Introduction
• To build a system that detects video artifacts introduced not only by compression (block-based) but also during transmission or broadcasting.
• To reduce the time complexity of conventional pixel-based detection methods, which require large memory and long computation times.
• To select a lightweight human-vision measurement and a detection mechanism that finds distorted frames in real time.
• To introduce an error-block classification and analysis method that can be used in video restoration, error concealment, video retrieval, and many other commercial applications.
Objective
Noise and error model for broadcasting and surveillance systems
4. Introduction
Video artifact detection and distorted-pattern analysis is difficult:
• Videos are distorted by compression, wireless-transmission, and broadcasting artifacts.
• In image and video communication, the original image or video is not accessible (the no-reference setting), which is a challenging research issue.
• Compression artifacts are confined to blocks (typically 8 by 8), but wireless-transmission and broadcasting artifacts are not always block-based.
• A real-time application should not only report a quality measure but also detect the distorted frames in a video.
• Error patterns in defective frames must be classified and analyzed so they can be used in video restoration, error concealment, and retrieval.
Challenges
5. Introduction
Sample Videos and Images
Courtesy: Samples provided by KBS
6. Proposed Method
Video Artifacts Measure and Error Frame Detection
• Uses edge magnitude and direction features.
• Less sensitive to illumination variation and noise.
• Extracts frames from videos and analyzes them.
• Incorporates the Kirsch mask to detect edge pixels.
• Detects candidate frames with high disruption in a sequence of frames (temporal information).

Spatial Error Block Analysis (SEBA)
• More gradient directions are analyzed for complex environments.
• Block classification is done in three steps.
• Edge blocks and texture-content blocks are analyzed.
• Error-block analysis is incorporated for better accuracy and can be used in error concealment and restoration.
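The Kirsch mask mentioned above is a compass operator; as a minimal sketch, the standard eight Kirsch kernels give a per-pixel edge magnitude and direction (the direction labels below are our own convention, and the deck's exact post-processing may differ):

```python
# Kirsch compass masks: eight 3x3 kernels, one per 45-degree direction.
# Edge magnitude at a pixel = maximum absolute response over the eight masks;
# edge direction = the angle of the winning mask (labeling is our convention).

KIRSCH = {
    0:   [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    45:  [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    90:  [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    135: [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    180: [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    225: [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    270: [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    315: [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
}

def kirsch_edge(patch):
    """Return (magnitude, direction) for the centre pixel of a 3x3 patch."""
    best_mag, best_dir = 0, 0
    for direction, mask in KIRSCH.items():
        resp = sum(mask[i][j] * patch[i][j] for i in range(3) for j in range(3))
        if abs(resp) > best_mag:
            best_mag, best_dir = abs(resp), direction
    return best_mag, best_dir

# A vertical step edge: bright left column against a dark background.
patch = [[200, 10, 10],
         [200, 10, 10],
         [200, 10, 10]]
print(kirsch_edge(patch))   # -> (2850, 90): strongest response on the vertical mask
```

In a full frame the same computation runs over every 3x3 neighborhood, yielding the edge magnitude and direction maps that the later slides accumulate into histograms.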
8. Artifact Measure and Error Frame Detection
9. Generate Distortion Metric
Further, the locations of the compression block boundaries may be detected by observing where the maximum correlation value occurs. The resulting correlation values are used to generate a picture-quality rating for the image, representing the amount of human-perceivable block degradation introduced into the video signal.

Combining the results in a simple way yields a metric with promising performance in practical reliability, prediction accuracy, and computational efficiency.
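As an illustration of the boundary-location idea, assuming an 8-pixel block grid: horizontal-gradient energy, accumulated per column phase (mod 8), peaks at the phase of the block boundaries. This is a simplified sketch of the correlation approach, not the paper's actual metric:

```python
# Locate the compression block-grid phase: accumulate horizontal gradient
# energy per column phase (mod block size) and pick the phase where it peaks.
# Simplified illustration of the boundary-correlation idea, not the paper's metric.

def boundary_phase(frame, block=8):
    w = len(frame[0])
    energy = [0.0] * block
    for row in frame:
        for x in range(1, w):
            energy[x % block] += abs(row[x] - row[x - 1])
    return max(range(block), key=lambda p: energy[p]), energy

# Synthetic frame: flat 8-pixel-wide blocks with a brightness jump at each edge.
frame = [[(x // 8) * 20 for x in range(32)] for _ in range(4)]
phase, energy = boundary_phase(frame)
print(phase)   # -> 0: all gradient energy falls on columns with x % 8 == 0
```

The height of the peak relative to the other phases can then feed a blockiness rating, in the spirit of the picture-quality rating described above.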
10. Distortion Metric for Error Frames
11. Error Frame Detection
To compute the distortion measure of every frame, we compare its deviation with the previous frame. If the value is within a certain threshold, the frame is considered undistorted; otherwise it is considered distorted and forwarded to the report-results module.
B_msr is accumulated over the frame features F_r(n), n = 0 … N.
[Flow: calculate mean of frames → deviation of frames → criteria function]
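The decision rule can be sketched as follows; the distortion measure itself is abstracted into a single per-frame number, and the threshold value is an illustrative assumption:

```python
# Flag a frame as distorted when its distortion measure deviates from the last
# accepted frame's by more than a threshold. The measure (B_msr in the slides)
# is abstracted away here; the values and the threshold are illustrative.

def detect_error_frames(measures, threshold=10.0):
    """Return indices of frames whose measure jumps w.r.t. the last good frame."""
    errors = []
    ref = measures[0]                 # last accepted (undistorted) frame measure
    for n in range(1, len(measures)):
        if abs(measures[n] - ref) > threshold:
            errors.append(n)          # distorted: forward to report-results module
        else:
            ref = measures[n]         # accepted: becomes the new reference
    return errors

# Frame 3 shows a sharp disruption in an otherwise smooth sequence.
print(detect_error_frames([4.1, 4.3, 4.0, 31.7, 4.2]))   # -> [3]
```

Comparing against the last accepted frame (rather than the immediately preceding one) avoids flagging the first clean frame after a disruption, a small design choice not spelled out in the slides.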
12. Spatial Error Block Analysis
Proposed System
13. Flowchart of Block Analysis
[Flowchart: Detected error frame → edge-direction classification (Sobel mask in 60 gradient directions; magnitude and histogram accumulation) → error-block classification → block shape and rotation formulation (convolution mask and shift matching) → forward parameters for error concealment, restoration, and retrieval]
14. Edge Direction & Error Block Classification
Edge Direction Classification
Uniform block: the gray level of the block is constant or nearly so, i.e. there is no obvious edge in the block.
Edge block: a few edges pass through the block, and the direction of each edge, in general, changes little if at all.
Texture block: both the gray level and the edge direction vary significantly within the block, so the edge magnitudes of many directions are very strong.
Error Block Classification- 1
Histogram Accumulation
Error Block Classification- 2
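The three classes can be sketched from simple gradient statistics: uniform blocks have few strong gradients, edge blocks concentrate their strong gradients in one or two directions, and texture blocks spread them across many. All thresholds below are illustrative assumptions, not the paper's values:

```python
# Classify a block as uniform / edge / texture from its gradient field:
# uniform  -> hardly any strong gradients;
# edge     -> strong gradients concentrated in few direction bins;
# texture  -> strong gradients spread across many direction bins.
# Thresholds are illustrative, not the values used in the paper.

def classify_block(magnitudes, directions, mag_thr=30, strong_min=4, dir_spread=3):
    strong = [d for m, d in zip(magnitudes, directions) if m >= mag_thr]
    if len(strong) < strong_min:
        return "uniform"
    if len(set(strong)) <= dir_spread:
        return "edge"
    return "texture"

# Strong gradients all in one direction bin -> a clean edge block.
print(classify_block([80] * 8 + [5] * 56, [90] * 8 + [0] * 56))        # -> edge
# Strong gradients scattered over many direction bins -> texture.
print(classify_block([80] * 16 + [5] * 48, list(range(16)) + [0] * 48))  # -> texture
```

In the proposed pipeline the direction bins would come from the 60-direction Sobel histogram of the previous slide; here they are plain integers for brevity.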
15. Error Block Classification(2)
Bin Reduction:
Bin reduction of the histogram of gradients is used to classify edge blocks and texture blocks. It also improves the speed and performance of our algorithm.
Bins 59, 0, 1, 14, 15, 16, 29, 30, 31, 44, 45, and 46 contribute most to texture blocks.
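As a sketch, bin reduction keeps only the listed bins of the 60-bin direction histogram before the edge-vs-texture decision; the bin indices come from the slide, while the helper itself is an assumed illustration:

```python
# Reduce a 60-bin histogram of gradient directions to the bins the slide
# identifies as most discriminative for texture blocks. Working on 12 bins
# instead of 60 is what speeds up the classification step.

TEXTURE_BINS = [59, 0, 1, 14, 15, 16, 29, 30, 31, 44, 45, 46]

def reduced_histogram(hist60):
    """Keep only the texture-significant bins of a 60-bin direction histogram."""
    return {b: hist60[b] for b in TEXTURE_BINS}

hist = [0] * 60
hist[15] = 12          # responses land in texture-significant direction bins
hist[30] = 7
print(sum(reduced_histogram(hist).values()))   # -> 19
```

A block whose reduced histogram carries most of the total gradient mass is a texture candidate; one whose mass falls outside these bins is more likely an edge block.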
16. Block Rotation and Shape Formulation
[Flow: histogram characteristics → convolution mask → phase-offset calculation → block matching and shifting]
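One standard way to obtain a dominant pattern orientation from a direction histogram is the circular mean; this is a sketch of the phase-offset idea, not necessarily the deck's convolution-mask formulation:

```python
# Dominant pattern orientation from a 60-bin direction histogram via the
# circular mean: each bin votes with its count at angle 2*pi*bin/BINS.
# (For 180-degree-periodic edge directions one would double the angles
# first; omitted here for simplicity.)
import math

BINS = 60

def dominant_orientation(hist):
    x = sum(c * math.cos(2 * math.pi * b / BINS) for b, c in enumerate(hist))
    y = sum(c * math.sin(2 * math.pi * b / BINS) for b, c in enumerate(hist))
    return math.degrees(math.atan2(y, x)) % 360

hist = [0] * BINS
hist[15] = 10          # all mass in bin 15 -> 15/60 of a turn = 90 degrees
print(round(dominant_orientation(hist)))   # -> 90
```

The resulting angle plays the role of the phase offset: shifting one block's histogram by the offset between two blocks aligns them for the matching-and-shifting step.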
17. Experimental Results(1)
Pearson and Spearman correlation on the FUB database:

Algorithm    Pearson Corr.   Spearman Corr.
Block_msr    -.721           .685
MGBIM [9]    -.597           .584
S [63]       .614            .570

Pearson and Spearman correlation on the LIVE database:

Algorithm    Pearson Corr.   Spearman Corr.
Block_msr    -.843           .838
MGBIM [9]    -.727           .925
S [63]       .944            .937

Test results of different approaches on the MPEG-2 video dataset:

Approach                 Pearson Corr.   Spearman Corr.   RMSE
Wu and Yuen's [9]        .6344           .7365            7.1869
Vlachos' [65]            .5378           .7930            7.0183
Pan et al.'s [66]        .6231           .6684            8.4497
Perra et al.'s [67]      .6916           .6531            8.4357
Pan et al.'s [68]        .5008           .6718            8.1979
Muijs & Kirenko's [69]   .7875           .6939            7.9394
Proposed Method          .8627           .7104            7.0236
18. Experimental Results(2)
DATASET Wu et al.'s Pan et al.'s Mujis et al.'s Proposed
Sequence Recall Precision Recall Precision Recall Precision Recall Precision
LIVE: BlueSky    87.01  87.02   88.31  98.27   88.31  98.27   86.35  95.40
LIVE: Pedestrian 88.88  88.03   83.34  93.31   67.29  77.24   76.74  96.52
LIVE: RiverBed   76.58  86.50   87.57  97.57   64.28  74.89   75.54  92.26
LIVE: RushHour   77.64  87.54   86.83  96.83   68.80  78.02   77.63  90.60
LIVE: ParkRun    78.08  82.05   77.35  97.32   66.20  76.23   85.47  95.49
OCN: One         69.44  89.28   77.77  93.33   63.89  79.31   83.33  96.77
OCN: Mr. Big     70.23  88.67   79.41  90.94   68.56  75.42   85.58  98.11
OCN: Swim        66.87  85.72   84.56  95.24   65.55  78.56   88.23  95.46
Comparison of different algorithms showing the detection rate of distorted frames
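Recall and precision in the table follow the standard definitions for detection tasks, sketched here over frame indices:

```python
# Recall    = TP / (TP + FN): fraction of truly distorted frames detected.
# Precision = TP / (TP + FP): fraction of detections that are truly distorted.
# Both reported as percentages, as in the comparison table.

def recall_precision(detected, ground_truth):
    detected, ground_truth = set(detected), set(ground_truth)
    tp = len(detected & ground_truth)
    recall = 100.0 * tp / len(ground_truth)
    precision = 100.0 * tp / len(detected)
    return recall, precision

# 8 of 10 distorted frames found, plus 1 false alarm (frame 99).
r, p = recall_precision(detected=[1, 2, 3, 4, 5, 6, 7, 8, 99],
                        ground_truth=list(range(1, 11)))
print(round(r, 1), round(p, 1))   # -> 80.0 88.9
```

The table's pattern of higher precision than the competing methods at comparable recall is what supports the deck's accuracy claim.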
19. Experimental Results(3)
Pattern orientation calculation considering histogram bins. Bin reduction: selection of significant histogram bins.
Rotation Formulation and Bin Reduction
Histogram Accumulation of Match and Shifting
20. Experimental Results(4)
High-priority bins for the decision: 32, (88, 90, 92), and 128 [matched case: high accumulation].
Second-priority bins for the decision: (14, 15, 16) and (44, 45, 46) [accumulation ranges from high (fully unmatched) down to zero (partially matched)].
Discussion and Decision
21. Conclusion and Future Work
Major Contribution 1: We have proposed an efficient video artifact measurement and error-frame detection method that is not restricted to compression-based artifacts.

Major Contribution 2: Our error-block analysis algorithm is less sensitive to illumination variation and noise. Moreover, it handles not only traditional artifacts but also wireless-transmission and broadcasting-related artifacts.

Major Contribution 3: Our analysis method formulates the rotation and shape of the distortion pattern, which can later be used in video restoration, concealment, and retrieval.

Future Work: We will use the analytical parameters for video error concealment. How to incorporate this information in the next step is a challenging research issue.
Conclusion and Future Work
22. Publication List
SCI/SCIE Indexed Journals
1. Md. Mehedi Hasan, Kiok Ahn, Mahbub Murshed, Oksam Chae; "Hawkeye: A Cloud Architecture for Automated Video Error Detection in Real-time", INFORMATION Journal (Accepted: 12th April, 2012) (SCIE) [ISSN: 1343-4500, E-ISSN: 1344-8994].
2. Md. Mehedi Hasan, Kiok Ahn, JeongHeon Lee, SM Zahid Ishraque, Oksam Chae; "Fast and Reliable Structure-Oriented Distortion Measure for Video Processing", Advanced Science Letters (Accepted: 6th December, 2011) (SCIE, IF: 1.253) [ISSN: 1936-6612, E-ISSN: 1936-7317].
International Journals
1. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Faster Detection of Independent Lossy Compressed Block Errors in Images and Videos", International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 5, no. 1, pp. 151-164, March 2012 [ISSN: 2005-4254].
2. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Measuring Blockiness of Videos using Edge Enhancement Filtering", SIP, Communications in Computer and Information Science (CCIS), vol. 260, pp. 10-19, January 2012 (Springer-Verlag, Berlin-Heidelberg) [ISSN: 1865-0929, ISBN: 978-3-642-27182-3].
International Conference Papers
1. Md. Mehedi Hasan, Kiok Ahn, Md. Shariful Haque, Oksam Chae; "Blocking Artifact Detection by Analyzing the Distortions of Local Properties in Images", ICCIT 2011, 14th International Conference on Computer and Information Technology, IEEE Xplore, pp. 475-480, Dec. 22-24, 2011 [ISBN: 978-1-61284-907-2].
2. Md. Mehedi Hasan, Kiok Ahn, SM Zahid Ishraque, Oksam Chae; "Hawkeye: Real-time Video Error Detection Using Cloud Computing Platform", AIM 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, pp. 121-122, Seoul, Korea, Feb. 6-8, 2012.
3. Md. Mehedi Hasan, Kiok Ahn, Oksam Chae; "Measuring Artifacts of Broadcasted Videos by Accumulating Edge Gradient Magnitude", YSEC 2012, Proceedings of the 37th KIPS Spring Conference, Korea, April 26-28, 2012.
4. Md. Mehedi Hasan, Kiok Ahn, Mohammad Shoyaib, Oksam Chae; "Content-Based Error Detection and Concealment for Video Transmission over WLANs", AIM Summer 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, Jeju, Korea, July 10-12, 2012 [Accepted].
5. Mahbub Murshed, SM Zahid Ishraque, Md. Mehedi Hasan, Oksam Chae; "Cloud Architecture for Lossless Image Compression by Efficient Bit-Plane Similarity Coding", AIM 2012, Proceedings of the FTRA International Conference on Advanced IT, Engineering and Management, pp. 123-124, Seoul, Korea, Feb. 6-8, 2012.
6. Minsun Park, Md. Mehedi Hasan, Jaemyun Kim, Oksam Chae; "Hand Detection and Tracking Using Depth and Color Information", IPCV 2012, The 2012 International Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas, USA, July 16-19, 2012.