This document describes a scene change detection method based on block processing. It begins with an introduction to scene change detection and its applications, then discusses existing approaches such as color histograms, edge detection, and macroblocks. The proposed method uses block processing to extract image features from video frames, compare edges between consecutive frames, and detect scene changes against a threshold. The method is implemented as Simulink blocks that read the video, perform feature extraction and edge comparison, and display the number of detected change blocks. Experimental results show that the method can accurately detect scene changes.
Segmentation is done with the help of scene change detection. In multimedia communication, digital video consists of frames, scenes and shots. In any shot one continuous action, captured by a camera, is going on as a sequence of several frames. For scene change detection, two consecutive frames are matched against each other. Because of the large amount of data contained in video, it is often compressed for efficient storage and transmission. One of the most efficient and popular techniques for reducing the temporal redundancy between adjacent frames of a video sequence is motion estimation (ME) (P. Chauhan et al., 2013). Typically, scene changes can be detected by inspecting the differences between successive frames. Due to the nature of MPEG compression, the encoded DCT values and motion vectors of a frame already contain information about these differences. Thus, when working in the compressed domain, this existing information can be reused to decide about the existence of a scene change on a frame-by-frame basis. By analyzing the DCT values as well as the motion information, special movements such as rotations or zooms within the video stream can also be detected. For the adaptation of compressed video, detection of scene changes and special movements can provide helpful information to determine suitable adaptation parameters as well as to decide whether a certain frame should be skipped (Brandt, Trotzky and Wolf, 2008).

The H.264 advanced video coding (H.264/AVC) standard provides several advanced features such as improved coding efficiency and error robustness for video storage and transmission. In order to improve the coding performance of H.264/AVC, coding control parameters such as group-of-pictures (GOP) sizes should be adaptively adjusted according to video content variations (VCVs), which can be extracted from the temporal deviation between two consecutive frames (Ding and Yang, 2008). H.264 is the newest digital video compression codec standard issued by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG). Because of its many new features and excellent performance, H.264 achieves higher compression efficiency than previous video standards, which makes it well suited for transmission over bandwidth-limited systems such as mobile wireless systems (Ding and Yang, 2008).

Images may be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a hologram. They may be captured by optical devices such as cameras, mirrors, lenses and telescopes. The pixel (a word coined from "picture element") is the basic unit of programmable color on a computer display or in a computer image. A typical video sequence is organized into a sequence of groups of pictures (GOPs) (Fig. 1). A video can be broken down into scenes, shots and frames. Each GOP consists of one I (intra-coded) frame and a few P (predicted) and B (bi-directionally interpolated) frames. A scene is a logical grouping of shots into a semantic unit. A shot is a sequence of frames captured by a single camera in a single continuous action. To create indexed video databases, video sequences are first segmented into scenes, a process that is referred to as scene change detection (M. Ford et al., 1997).

Automatic annotation of the input video needs to be developed. Content-based temporal video segmentation is mostly achieved by detection and classification of scene changes (transitions). Basically, transitions can be divided into two categories: abrupt transitions and gradual transitions. Gradual transitions include camera movements such as panning, tilting and zooming, and video editing special effects such as fade-in, fade-out, dissolving and wiping. Abrupt transitions are very easy to detect, as the two frames are completely uncorrelated (Fernando, Canagarajah and Bull, 1999). A shot boundary is the transition between two shots. A digital video consists of frames, and a single frame consists of pixels (Adhikari et al., 2008). A video shot is a sequence of successive frames with similar visual content in the same physical location. Therefore, an abrupt scene change can be identified when there is a large change in visual content across two successive frames, while a gradual transition is much more difficult to detect since the successive frame difference during the transition is substantially reduced (Chau et al., 2005).

The segmentation of a video sequence into individual shots by scene change detection is fundamental for automatic video indexing, scene browsing and retrieval. Once individual video shots are identified, video contents can be indexed and queried using content-based indexing. It can be expected that a large amount of video material will soon be available only in compressed form. Thus it is necessary to develop techniques for scene change detection that manipulate compressed video sequences directly in the compressed domain (Shin et al., 1998).
The segmentation process is generally referred to as shot boundary detection or scene change detection. A shot is a sequence of frames generated during a continuous camera operation and represents a continuous action in time and space. Video editing procedures produce abrupt and gradual scene changes. A cut is an abrupt scene change that occurs in a single frame. A gradual change occurs over multiple frames and is the product of fade-ins, fade-outs, or dissolves, where two shots are superposed (Lee et al., 2006).

A variety of data in the form of images and video is being generated, stored, transmitted, analyzed and accessed with advances in computer technology and communication networks. To make use of these data, efficient and effective techniques need to be developed for the retrieval of multimedia information. A video stream consists of a sequence of images accompanied by an optional audio stream, and it can have arbitrary length. The illusion of a moving picture is generated by rapidly showing a sequential series of still images. The human eye is too slow to see the individual still images and interprets the series of images as continuous movement. Typically the frame rate of video is about 24 or 30 images per second, which is enough to generate the illusion of continuous movement. Edited video typically consists of multiple shots. A shot is a series of sequential frames that have been filmed in a single camera run. Shots are bound together using several kinds of cuts or transition effects. In movie terminology one or more shots form a scene, which is usually defined as a set of consecutive shots that form an entity that seems to be continuous in time, or is filmed in a single location (Chhasatia, Trivedi and Shah, 2013).

A fade is a gradual transition between a scene and a constant image (fade-out) or between a constant image and a scene (fade-in). A dissolve is a gradual transition from one scene to another, in which the first scene fades out and the second scene fades in (Huang and Liao, 2001). For the copy protection of video using watermarking, scene change detection is an important step (Shukla and Sharma, 2012). To detect a scene change, a dissimilarity measure between consecutive frames is often used. The measure must distinguish true scene changes from those that are not. A simple measure is the mean absolute frame difference (MAFD) for the nth frame (Yi and Ling, 2005).
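As an illustration, a minimal MATLAB sketch of MAFD over consecutive frames is given below; the file name 'video.mp4' and the threshold of 30 are placeholders rather than values from the paper, and availability of the Image Processing Toolbox is assumed.

```matlab
% Mean absolute frame difference (MAFD) between consecutive frames (sketch).
v    = VideoReader('video.mp4');        % placeholder file name
prev = rgb2gray(readFrame(v));          % frame n-1
n    = 1;
while hasFrame(v)
    curr = rgb2gray(readFrame(v));      % frame n
    mafd = mean2(abs(double(curr) - double(prev)));
    if mafd > 30                        % illustrative threshold only
        fprintf('Possible scene change at frame %d, MAFD = %.2f\n', n+1, mafd);
    end
    prev = curr;
    n = n + 1;
end
```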
Figure 1: Group of Pictures (GOP), shown in display order
Scene Detection

a) Color Histograms: Frame-to-frame similarities are computed from the colors that appear within the frames and the positions of those colors in the frame. After computing the inter-frame similarities, a threshold can be used to indicate shot boundaries (a minimal sketch of this approach follows the list below).

b) Edge Detection: Edge detection is based on detecting edges in two neighboring images and comparing these images (Adhikari et al., 2008).

c) Macroblocks: Shot boundary detection has also been investigated using macroblocks. Depending on the type of macroblock, the MPEG pictures have different attributes corresponding to the macroblock. Macroblock types can be divided into forward prediction, backward prediction, or no prediction at all. The classification of the different blocks happens while encoding the video file, based on the motion estimation and the efficiency of the encoding. If a frame contains backward-predicted blocks and suddenly does not have any, it could mean that the following frame has changed drastically, which would point to a cut. This approach, however, becomes difficult to implement when there is a shot change and the frame in the next shot contains blocks similar to those of the frame before.
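As a rough illustration of the color-histogram approach in (a), the sketch below compares normalized intensity histograms of two consecutive frames f1 and f2; the bin count and threshold are illustrative assumptions, and the original approach additionally considers the positions of colors within the frame.

```matlab
% Histogram-difference test for a shot boundary between frames f1 and f2 (sketch).
nbins = 64;                             % illustrative bin count
g1 = rgb2gray(f1);  g2 = rgb2gray(f2);
h1 = imhist(g1, nbins) / numel(g1);     % normalized intensity histograms
h2 = imhist(g2, nbins) / numel(g2);
histDiff   = sum(abs(h1 - h2));         % L1 distance between the histograms
isBoundary = histDiff > 0.5;            % illustrative threshold
```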
PROPOSED METHOD

In the proposed method, a video is used as the input signal. The video is first converted into image frames. These input image frames are passed through the Feature Extraction block set, which converts each input image frame into an edge image signal. This image signal is the input of the Edge Comparison block set; if the compared edge difference is greater than a specific threshold value, a scene change occurs, and the number of change blocks is displayed against the time offset parameter.
Figure 2: General video structure (video sequences, scenes and frames)
Figure 3: Scene boundary approaches (color histograms, edge detection, macroblocks)
A. Input unit

The input block is a From Multimedia File block that reads audio frames, video frames, or both from a multimedia file and imports the data into the Simulink model. The block supports the following video files: .qt, .mov, .avi, .asf, .asx, .wmv, .mpg, .mpeg, .mp2, .mp4.

Sample Rate: the rate of the video in frames per second (FPS); the corresponding sample time is Ts = 1/FPS.

RGB Component: an M × N matrix that represents one plane of the RGB video stream. Outputs from the R, G, or B ports must have the same dimensions (M × N).
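For readers working outside Simulink, a rough MATLAB equivalent of this unit is sketched below; the file name is a placeholder and availability of VideoReader is assumed.

```matlab
% Offline analogue of the From Multimedia File block (sketch).
v     = VideoReader('input.avi');       % placeholder file name
Ts    = 1 / v.FrameRate;                % sample time = 1/FPS (e.g. 1/30 = 0.0333 s)
frame = readFrame(v);                   % H-by-W-by-3 RGB frame
R     = frame(:,:,1);                   % one plane (here R) of the RGB stream
```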
B. Feature Extraction unit

One of the RGB components is chosen as the input of the feature extraction block. We can select different edge detection methods such as Sobel, Prewitt, or Roberts.

The edge detection method finds the edges in an input image by approximating the gradient magnitude of the image. The block convolves the input matrix with the Sobel, Prewitt, or Roberts kernel and outputs the two gradient components of the image that result from this convolution operation. Alternatively, the block can perform a thresholding operation on the gradient magnitudes and output a binary image, which is a matrix of Boolean values: if a pixel value is 1, it is an edge. The Edge Detection block computes the automatic threshold values using an approximation of the number of weak-edge and non-edge image pixels.

The binary output image is converted into an image signal with the help of a Data Type Conversion block. This output is an input of the Edge Comparison unit and of the edge video output block.
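A minimal MATLAB sketch of this unit, assuming R is the color plane selected in the input unit and that the Image Processing Toolbox edge function is available:

```matlab
% Feature extraction: Sobel edge map with an automatically chosen threshold (sketch).
[bw, autoThresh] = edge(R, 'sobel');    % binary edge image and the threshold used
edgeSignal = double(bw);                % data type conversion for the next units
```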
C. Edge Comparison Unit

a) Block Processing: This block extracts submatrices of a user-specified size from the input matrix. It sends each submatrix to a subsystem for processing and then reassembles each subsystem output into the output matrix. We use a block size of 32 × 32 (a MATLAB sketch of this block-wise comparison is given after this list).

b) Edge Difference: This subsystem block computes the edge difference, that is, the absolute value of the difference between the current and previous frames.

c) Edge Matching: This subsystem block performs edge matching. In this unit the mean value is taken as input and compared with a constant value; the Reshape block changes the dimensions of the vector or matrix input signal into a two-dimensional column vector, and the Sum block performs addition on its vector or matrix inputs.

d) Thresholding: The result is compared with a specific threshold value; if a change occurs, a scene change is detected and the change block is flagged. One output is connected to the number-of-change-blocks unit and the video display unit.
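Assuming edgeCurr and edgePrev hold the edge signals (binary edge maps converted to double) of the current and previous frames, a minimal sketch of the block-wise comparison with a 32 × 32 block size could look like this; the 0.2 per-block threshold is illustrative, not the paper's value.

```matlab
% Block-wise edge comparison and thresholding (sketch).
edgeDiff  = abs(edgeCurr - edgePrev);              % absolute edge difference
blockMean = blockproc(edgeDiff, [32 32], @(b) mean2(b.data));
changed   = blockMean > 0.2;                       % per-block threshold test
numChangedBlocks = nnz(changed);                   % number of change blocks
sceneChange = numChangedBlocks > 0;                % scene change flag
```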
D. Number of Change Block

This unit is simply a Scope block that displays its input with respect to simulation time. The Scope block can have multiple axes (one per port), and all axes share a common time range with independent y-axes. The Scope block allows adjusting the amount of time and the range of input values displayed. We can move and resize the Scope window and modify the Scope's parameter values during the simulation.
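As an offline stand-in for the Scope display, assuming a vector counts holding the per-frame number of change blocks and the sample time Ts from the input unit, the count can be plotted against simulation time:

```matlab
% Plot the number of change blocks versus simulation time (sketch).
t = (0:numel(counts)-1) * Ts;           % simulation time axis
plot(t, counts);
xlabel('Simulation time (s)');
ylabel('Number of change blocks');
```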
E. Output unit

The output block is a To Multimedia File block that writes video frames, audio samples, or both to a multimedia file, so that the processed data from the Simulink model can be displayed and stored. The block supports the following video files: .qt, .mov, .avi, .asf, .asx, .wmv, .mpg, .mpeg, .mp2, .mp4. The input of this unit is connected to the Feature Extraction unit output and the Edge Comparison unit output.
Figure 4: Simulink block diagram of the scene change detection model
EXPERIMENTAL RESULTS
In this experiment, we evaluated the video scene change detection method in the MATLAB 7.10.0 (R2010a) environment. The resolution of the video files used (.qt, .mov, .avi, .asf, .asx, .wmv, .mpg, .mpeg, .mp2, .mp4) is 720 × 1280, and block processing is applied to each frame with a 32 × 32 block size. The scene change detection results are shown in the figures below.
For the evaluation of the proposed method, video databases of the kind generally used within ITU-T Study Group 12 for the standardization of quality models are employed. The resolution of the video sequences varies from SD (720×576) to HD-720p (720×1280) and HD-1080p (1920×1080), in interlaced and progressive formats, with frame rates ranging from 25 FPS to 60 FPS. We used HD-720p (720×1280) video sequences. Table 1 lists the parameters of these video sequences: file type, size, length, frames per second (FPS), and sample time (Ts).
We took three different video sequences and passed them through the frame-rate display unit over a 0 to 10 s offset time interval, analyzing the frame rate in each 2 s interval, as shown in Figure 5 and Table 2; a script-level sketch of this measurement is given after the figure.
Figure 5: Frame Rate graph of Three Different Video
Sequences
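The measurement behind Figure 5 and Table 2 can be approximated at script level as follows; this is only a sketch, on the assumption that the Frame Rate Display block reports the number of frames processed per second of wall-clock time:

    % Count frames processed in each 2-second window of a 0-10 s run.
    counts = zeros(1, 5);                  % windows 0-2, 2-4, 4-6, 6-8, 8-10 s
    t0 = tic;
    while hasFrame(v) && toc(t0) < 10
        frame = readFrame(v);
        % ... edge detection and comparison for this frame run here ...
        win = min(floor(toc(t0) / 2) + 1, 5);
        counts(win) = counts(win) + 1;
    end
    frameRatePerWindow = counts / 2;       % frames per second in each window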
The results are reported in terms of precision and recall, which are widely used for evaluating the classification performance of a system. Precision indicates how reliable the detections produced by the algorithm are, while recall reflects how many of the true scene changes the algorithm finds. Results are also reported using the F-measure, which combines precision and recall into a single value. These measures are computed as

$\mathrm{Precision} = \frac{TP}{TP + FP}$

$\mathrm{Recall} = \frac{TP}{TP + FN}$

$F\text{-measure} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$

where TP is the number of correctly detected scene changes, FP the number of false detections, and FN the number of missed scene changes.
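These measures follow directly from the detection counts; the sketch below uses the standard definitions above with hypothetical example counts (the mapping of Table 3's True, False and Missed columns onto TP, FP and FN is our assumption):

    % Precision, recall and F-measure from detection counts.
    tp = 8;  fp = 2;  fn = 1;                  % hypothetical example counts
    precision = tp / (tp + fp);                % reliability of the detections
    recall    = tp / (tp + fn);                % fraction of true changes found
    fmeasure  = 2 * precision * recall / (precision + recall);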
The performance of the scene change detection algorithm described in Section III is reported in Table 3 for two different video sequences.
Figure 6: Precision percentage values of two different video sequences
Figure 7: Recall percentage values of two different video sequences
Table 1: HD-720p (720×1280) Video Sequence Parameters

S.N.  Video Sequence                       File Type  Size (MB)  Resolution (pix)  Length    Frames Per Second (FPS)  Sample Time Ts (sec)
1     The Ride                             .MKV       74         720 × 1280        00:01:00  30                       0.0333
2     Wild Ocean Row                       .MP4       48.6       720 × 1280        00:01:36  24                       0.025
3     Team No Limit Motorcycle Stunt Show  .MP4       91.4       720 × 1280        00:13:32  30                       0.0333
Table 2: Average Frame Rate of Video Sequences

S.N.  Time Offset (sec)     The Ride  Wild Ocean Row  Team No Limit Motorcycle Stunt Show
1     0-2                   2.2       3.0             2.9
2     2-4                   2.2       2.5             2.9
3     4-6                   2.1       1.9             2.8
4     6-8                   2.1       1.8             2.1
5     8-10                  2.4       1.8             2.1
      Average frame rate    2.2       2.2             2.56
Table 3: Performance accuracy of the Scene Change Detection method

S.N.  Video Sequence                    True  False  Missed  Recall (%)  Precision (%)  F-measure (%)
1     Shiv Khera Motivational Video     8     5      3       61.53       72.72          66.04
2     Arunima Sinha Motivational Video  15    0      4       100         78.94          88.20
MATLAB simulation results for the two video sequences listed in Table 3 are shown in Figure 8, Figure 9 and Figure 10.
Figure 8: Original video sequences and edge-detected images
Figure 9: Image matrix viewer
Figure 10: Sequences of video frames
CONCLUSION
The SCD model delivers the expected results and performs better than other techniques in its category.
The threshold is assigned dynamically to each video sequence, so the proposed method is stable in its scene change detection ability across various video sequences. This paper concentrates on developing a fast and well-performing video SCD method. The experimental results on several databases showed that the proposed method can achieve a precision rate greater than 70% and a recall rate of up to 100%, although this does not hold for all video sequences. Some additional input is needed to improve the performance of the model.
Several issues remain as future research directions. Different video features could be used in order to improve the results.