Analysis of Error in Image Logging between
Subsequent Frames of a Streaming Video
Mr. THANMAY J.S
Assistant Professor, Mechanical Department, G.S.S.S.I.E.T.W Mysore.
Email: thanmay26@rediffmail.com
Abstract— In image analysis of a streaming video, for certain applications it may not be necessary to log every frame provided by an image acquisition device. In fact, it may be more practical and resource-efficient to log frames at certain intervals. To log frames at a given interval, the acquisition device must be configured with the Frame Grab Interval, the Trigger property, the Frame Rate property, the number of frames to be logged, and so on. Configuring the interval property to an integer value specifies which frames are logged. The error introduced by delay in logging data is a critical issue, and it is examined in this report. A set of experimental results with changes in frame rate and number of frames to be logged, together with the corresponding percentage errors, is analyzed here.
Consider a captured image used for visual analysis of samples traveling on a fast-moving conveyor belt. The image capture system must know when the samples are in the right place, capture the image, and transmit the result for processing before the next sample arrives. Selecting trigger sensors is not the problem; the next consideration is timing, which can be sporadic in nature and depends on the logging of data and the error in acquisition. This report explains the importance of adjusting device parameters such as the frame rate and the grab interval at which frames are logged, and how the number of frames logged per second relates to the error in logging frames.
Keywords: Image Acquisition, Video Device setting, Accessing
Device Properties, Frame Rate, Grab Interval, Logging Data,
Time difference between frames etc.

I. INTRODUCTION TO IMAGE ACQUISITION

This document provides information on acquiring image data after you create the video input object and configure its properties. The Image Acquisition system enables you to connect to an image acquisition device from within a software interface session; being based on object technology, it provides functions for creating objects that represent the connection to the device. This kind of image acquisition system is well suited to artificial intelligence and machine vision systems in industry. An image acquisition application involves these major steps:
Step 1) Starting the video input object -- You start an object by calling the start function. Starting an object prepares it for data acquisition; for example, it locks the values of certain object properties (they become read only). Starting an object does not, however, initiate the acquisition of image frames. The initiation of data logging depends on the execution of a trigger. The example calls the start function to start the video input object. Objects stop when they have acquired the requested number of frames.
Step 2) Triggering the acquisition -- To acquire data, a video input object must execute a trigger. Triggers can occur in several ways, depending on how the Trigger Type property is configured. For example, if you specify an immediate trigger, the object executes a trigger automatically, immediately after it starts. If you specify a manual trigger, the object waits for a call to the trigger function before it initiates data acquisition.
Step 3) Bringing data into the workspace -- The image analysis software stores acquired data in a memory buffer, a disk file, or both, depending on the value of the video input object's Logging Mode property. To work with this data, you must bring it into the workspace. Once the data is in the workspace, you can manipulate it as you would any other data.
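As a rough illustration of the start, trigger, and bring-into-workspace steps above, the following Python sketch uses OpenCV in place of the video input object described in the text; the device index, the frames_per_trigger count, and the immediate-trigger behaviour are illustrative assumptions, not the author's setup.

```python
import cv2


def acquire_frames(device_index=0, frames_per_trigger=30):
    """Rough analogue of the start -> trigger -> bring-into-workspace
    workflow described in the text, using OpenCV (illustrative only)."""
    # Step 1: "start" the video input object (open the device).
    cap = cv2.VideoCapture(device_index)
    if not cap.isOpened():
        raise RuntimeError("Could not open the capture device")

    # Step 2: "trigger" the acquisition. Here an immediate trigger is
    # assumed: logging begins as soon as the loop is entered.
    frames = []
    while len(frames) < frames_per_trigger:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)

    # Step 3: bring the data "into the workspace" -- the frames list now
    # holds the logged images and can be processed like any other data.
    cap.release()
    return frames


if __name__ == "__main__":
    logged = acquire_frames(frames_per_trigger=30)
    print(f"Logged {len(logged)} frames")
```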

Mr. THANMAY J.S, Assistant Professor in Mechanical Department, G.S.S.S
Institute of Engineering and Technology for Women, K.R.S. Road,
Metagalli, Mysore-570016, Karnataka, India.
(E-mail of corresponding author: thanmay26@rediffmail.com).

II. BASIC IMAGE ACQUISITION PROCEDURE
This section illustrates the basic steps required to create an
image acquisition application by implementing a simple
motion detection application. The application detects
movement in a scene by performing a pixel-to-pixel
comparison in pairs of incoming image frames. If nothing
moves in the scene, pixel values remain the same in each
frame. When something moves in the image, the application
displays the pixels that have changed values.
To use the Image Acquisition system to acquire image data,
you must perform the following basic steps:
Step 1: Install and configure your image acquisition device
Step 2: Configure image acquisition properties
Step 3: Create a video input object
Step 4: Preview the video stream
Step 5: Acquire image data
Step 6: Perform image analysis and display the results
Step 7: Clean up the memory
Certain steps are optional, and this kind of system for acquiring streaming images is well suited to machine vision systems.
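A minimal sketch of the pixel-to-pixel motion detection application described in this section, assuming Python with OpenCV rather than the acquisition software used in the report; the threshold value and the window name are arbitrary choices.

```python
import cv2


def run_motion_detection(device_index=0, threshold=25):
    """Display the pixels that changed value between consecutive frames."""
    cap = cv2.VideoCapture(device_index)
    ok, previous = cap.read()
    if not ok:
        raise RuntimeError("No frames available from the device")
    previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pixel-to-pixel comparison of the incoming pair of frames.
        diff = cv2.absdiff(gray, previous)
        # Keep only the pixels whose value changed noticeably.
        _, changed = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        cv2.imshow("changed pixels", changed)
        previous = gray
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_motion_detection()
```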
III. IMAGE ACQUISITION PROPERTIES
1) The frame rate describes how fast an image acquisition device provides data, typically measured in frames per second (Figure No. 1). Devices that support industry-standard video formats must provide frames at the rate specified by the standard. For RS170 and NTSC, the standard dictates a frame rate of 30 frames per second (30 Hz). The CCIR and PAL standards define a frame rate of 25 Hz. Non-standard devices can be configured to operate at higher rates. Generic Windows image acquisition devices, such as webcams, might support many different frame rates. Depending on the device being used, the frame rate might be configurable using a device-specific property of the image acquisition object. The rate at which the Image Acquisition software can process images depends on the processor speed, the complexity of the processing algorithm, and the frame rate. Given a fast processor, a simple algorithm, and a frame rate tuned to the acquisition setup, the Image Acquisition software can process data as it comes in.
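Because the achievable rate depends on the device, the algorithm, and the processor, it helps to measure the frame rate actually delivered. The sketch below is a hedged Python/OpenCV illustration (CAP_PROP_FPS may report 0 on some drivers); it times a short burst of reads to estimate the effective frames per second.

```python
import time

import cv2


def measure_effective_fps(device_index=0, sample_frames=120):
    """Estimate the frame rate actually delivered by a capture device."""
    cap = cv2.VideoCapture(device_index)
    reported = cap.get(cv2.CAP_PROP_FPS)  # may be 0 if the driver does not report it

    start = time.perf_counter()
    grabbed = 0
    while grabbed < sample_frames:
        ok, _ = cap.read()
        if not ok:
            break
        grabbed += 1
    elapsed = time.perf_counter() - start
    cap.release()

    measured = grabbed / elapsed if elapsed > 0 else 0.0
    return reported, measured


if __name__ == "__main__":
    reported, measured = measure_effective_fps()
    print(f"Driver-reported fps: {reported:.1f}, measured fps: {measured:.1f}")
```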
2) The Frame Grab Interval property specifies how often the video input object acquires a frame from the video stream (Figure No. 1). By default, objects acquire every frame in the video stream, but you can use this property to specify other acquisition intervals. For example, when you specify a Frame Grab Interval value of 3, the object acquires every third frame from the video stream, as illustrated in the figure. The object acquires the first frame in the video stream before applying the Frame Grab Interval.
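A minimal sketch of the Frame Grab Interval behaviour, assuming Python with OpenCV instead of the video input object described here: the first frame is always kept, after which only every grab_interval-th frame is logged.

```python
import cv2


def log_with_grab_interval(device_index=0, grab_interval=3, max_logged=100):
    """Log the first frame, then every grab_interval-th frame thereafter."""
    cap = cv2.VideoCapture(device_index)
    logged = []
    index = 0
    while len(logged) < max_logged:
        ok, frame = cap.read()
        if not ok:
            break
        # Frame 0 is always acquired; afterwards keep every Nth frame only.
        if index % grab_interval == 0:
            logged.append(frame)
        index += 1
    cap.release()
    return logged
```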
3) There are two ways to use the Trigger properties: one is automatic (immediate) and the other is manual (Figure No. 1). To use an automatic immediate trigger, simply create a video input object; immediate triggering is the default trigger type for all video input objects. With an immediate trigger, the object executes the trigger immediately after you start the object running with the start command. To use a manual trigger, create a video input object and set the value of the Trigger Type property to 'manual'. A video input object executes a manual trigger after you issue the trigger function.
4) Frames per Trigger is used to move multiple frames of data from the memory buffer into the workspace (Figure No. 1). By default, getting data retrieves the number of frames specified in the Frames per Trigger property, but you can specify any number. In the Image Acquisition system, you specify the amount of data you intend to acquire, and hence the desired size of your acquisition, as the value of the video input object's Frames per Trigger property.
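The sketch below is an illustrative Python/OpenCV analogue, not the toolbox API described in the text, combining a manual trigger with a frames-per-trigger count: no frames are logged until trigger() is called, after which exactly frames_per_trigger frames are moved into a buffer.

```python
import cv2


class ManualTriggerAcquisition:
    """Illustrative analogue of a manual trigger with a frames-per-trigger count."""

    def __init__(self, device_index=0, frames_per_trigger=10):
        self.cap = cv2.VideoCapture(device_index)
        self.frames_per_trigger = frames_per_trigger

    def trigger(self):
        """Acquire frames_per_trigger frames, as a manual trigger would."""
        buffer = []
        while len(buffer) < self.frames_per_trigger:
            ok, frame = self.cap.read()
            if not ok:
                break
            buffer.append(frame)
        return buffer

    def release(self):
        self.cap.release()


if __name__ == "__main__":
    acq = ManualTriggerAcquisition(frames_per_trigger=10)
    frames = acq.trigger()  # data logging starts only when trigger() is called
    print(f"Acquired {len(frames)} frames for this trigger")
    acq.release()
```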

Figure No. 1. Image acquisition properties in a video stream.

Figure No. 2. Machine vision application using a video stream.

The question arises: why understand all these things? The reason is timing. Observe Figure No. 2, where moving items are constantly monitored by machine vision. If the logging time and the process do not match each other, the caps of the bottles will be placed on the labels and the labels will be stuck on the caps. Is this possible? Yes, and the following popular experiment illustrates it.
IV. UNDERSTANDING PROPERTIES OF IMAGE ACQUISITION
Experiment Number: 01
The following experiment was done by lighting a match stick in front of a webcam running at 30 frames per second, with 400 frames logged (Figure No. 3). This gives an idea of how well the image is acquired. 30 frames per second means one frame every 0.033 seconds, so counting the frames should give the exact burning time of the match stick. Let us see how.

Figure No. 3. Video stream of a match stick burnt at 30 fps, with 400 frames logged.

Observe Figure No. 3: at 30 fps (0.033 seconds per frame), the burning of the match stick spans 146 frames. This suggests that 146/30 = 4.866 seconds is the time required for the match stick to burn. Is that right?
Experiment Number: 02
The following experiment was done by lighting a match stick in front of a webcam running at 30 frames per second, with 300 frames logged (Figure No. 4). Here 155 frames show burning, so 155/30 = 5.166 seconds should be the actual burning time of the match stick. Should it?

Figure No. 4. Video stream of a match stick burnt at 30 fps, with 300 frames logged.

Note: A typical match stick actually burns for up to 25 seconds.
V. REASONS FOR PROPERTIES OF IMAGE ACQUISITION TO DISOBEY

Whenever there is a queue of data trying to enter the processor of a system, frames are held up because the processing speed is too low. This gives rise to an error known as the "logging error between two frames". One frame waiting to be logged delays the next frame until it has been processed, and once it has been processed the waiting frame no longer counts, since the video stream is a dynamic system in which data is logged on the basis of the number of frames to be logged. For a perfect flow of data to log and to process, the system follows a few rules:

Rule Number: 01
Logging Rate = 1 / (frames logged per second)
e.g. at 30 fps, Logging Rate = 1/30 = 0.033 sec per frame

Rule Number: 02
Average difference between frames = mean of |t(n-1) - t(n)|, the absolute time difference between each pair of consecutive logged frames.

Rule Number: 03
Percentage Error = |Logging Rate - Average Difference| x 100

If these errors are not accounted for, the results are those you have just seen. Now let us re-solve the two experiments.

Experiment Number: 01
Number of frames per second = 30 fps
Number of frames to be logged = 400
Time-based acquired frames = 146
Average difference between two frames = 0.1456 sec
Error between two frames (average) = 11.2231
Time for match stick to burn (approximate) = 20 sec
Then the corrected time should be 146 x 0.112231 + 4.86 = 21.245726 sec

Experiment Number: 02
Number of frames per second = 30 fps
Number of frames to be logged = 300
Time-based acquired frames = 155
Average difference between two frames = 0.1464 sec
Error between two frames (average) = 11.3078
Time for match stick to burn (approximate) = 23 sec
Then the corrected time should be 155 x 0.113078 + 5.16 = 22.68709 sec
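To make Rules 01-03 and the worked examples concrete, here is a small Python sketch (my own illustration; the per-frame timestamps would come from whatever acquisition software is in use, and the corrected-time formula is my reading of the pattern in the two worked examples above).

```python
def logging_error_analysis(fps, frame_timestamps, event_frames):
    """Apply Rules 01-03 from the text to a list of frame timestamps (seconds).

    fps              -- configured frames per second
    frame_timestamps -- acquisition time of each logged frame
    event_frames     -- number of frames spanned by the observed event
    """
    # Rule 01: logging rate = 1 / frames logged per second.
    logging_rate = 1.0 / fps

    # Rule 02: average difference between consecutive logged frames.
    diffs = [abs(t2 - t1) for t1, t2 in zip(frame_timestamps, frame_timestamps[1:])]
    average_diff = sum(diffs) / len(diffs)

    # Rule 03: percentage error between logging rate and average difference.
    percent_error = abs(logging_rate - average_diff) * 100

    # Corrected event time, following the pattern of the worked examples:
    # frames x (percent error / 100) + frames / fps.
    corrected_time = event_frames * (percent_error / 100) + event_frames / fps
    return logging_rate, average_diff, percent_error, corrected_time


if __name__ == "__main__":
    # Synthetic timestamps spaced 0.1456 s apart roughly reproduce Experiment 01:
    # logging rate 0.033 s, average diff 0.1456 s, error ~11.2, corrected time ~21.2 s.
    stamps = [i * 0.1456 for i in range(400)]
    print(logging_error_analysis(30, stamps, 146))
```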
Results reported to so many decimal places may seem astounding. They can bring the machine vision system back to its intended timing, but a general conclusion cannot be drawn so easily. What if the number of frames to be logged ranges from 5 to 5000, or the frame rate from 5 to 30 fps? These kinds of questions need specific answers. For this, several experiments were done: the frame rate and the number of frames to be logged were varied systematically, and the errors in logging were studied. This revealed several important factors that cannot be explained in singular terms.

The following set of experiments is presented with the corresponding results in tabular form. The variations applied in each case are specifically noted and the results tabulated.

VI. EXPERIMENT CASE STUDY FOR ERROR ANALYSIS BETWEEN LOGGING OF TWO FRAMES

The following three tables show the sets of results obtained by varying the frame rate per second, varying the number of frames to be logged, or holding one of them constant.

TABLE I
EXPERIMENT NO: 01
To study variations in frame rate per second and corresponding variations in number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (sec/frame) | Average diff (sec) | Percent error
1 | 30 | 300 | 0.03 | 0.1474 | 11.4098
2 | 25 | 250 | 0.04 | 0.1473 | 10.7297
3 | 20 | 200 | 0.05 | 0.1458 | 9.5844
4 | 15 | 150 | 0.06 | 0.1463 | 7.9682
5 | 10 | 100 | 0.1 | 0.1422 | 4.2212
6 | 5 | 50 | 0.2 | 0.1341 | 6.5878
TABLE II
EXPERIMENT NO: 02
To study constant frame rate per second and variation in number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (sec/frame) | Average diff (sec) | Percent error
1 | 30 | 300 | 0.033 | 0.1474 | 11.4098
2 | 30 | 250 | 0.033 | 0.1472 | 11.3888
3 | 30 | 200 | 0.033 | 0.1469 | 11.3541
4 | 30 | 150 | 0.033 | 0.1454 | 11.2056
5 | 30 | 100 | 0.033 | 0.1438 | 11.0495
6 | 30 | 50 | 0.033 | 0.1348 | 10.1422
TABLE III
EXPERIMENT NO: 03
To study variations in frame rate per second and constant number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (sec/frame) | Average diff (sec) | Percent error
1 | 30 | 100 | 0.03 | 0.1428 | 10.9444
2 | 25 | 100 | 0.04 | 0.1416 | 10.1646
3 | 20 | 100 | 0.05 | 0.1412 | 9.1232
4 | 15 | 100 | 0.06 | 0.1414 | 7.4768
5 | 10 | 100 | 0.1 | 0.1357 | 3.5737
6 | 5 | 100 | 0.2 | 0.1413 | 5.8747

Observe the error percentage shown in each table. What variation could anyone observe unless the values are presented numerically? Remember, timing is the question to be answered. In each of the three tables above, the logging rate itself is not the problem, because an intense image carries a larger logging cost, since everything to be logged is at the level of pixels. The registered error is therefore the main picture. Averaging the logging rate resolves these problems, and to obtain perfect timing the error percentage is calculated. Machine vision, computer vision, or image analysis of any video stream requires the logging of data, and if that data carries error, the whole analysis goes wrong.
VII. CONCLUSION

How can anyone believe that a match stick burns for only 3 to 5 seconds? This shows that even the registration of a video stream needs processing speed, and that this must be judged against standards. The tabulated results above are now shown in graphical form; the three bar charts below, one per experiment, plot frames per second, percentage error, and number of frames logged for trials 1 to 6, and clarify the doubts regarding image acquisition and the error incurred.

[Bar chart -- Experiment No: 01. To study variations in frame rate per second and corresponding variations in number of frames to be logged.]

EXPERIMENT NO: 01 CONCLUSION
A higher frame rate per second with a larger number of frames to be logged always shows higher error. Decreasing both simultaneously reduces the error progressively.

[Bar chart -- Experiment No: 02. To study constant frame rate per second and variation in number of frames to be logged.]

EXPERIMENT NO: 02 CONCLUSION
A higher frame rate per second with a higher number of frames to be logged always shows higher error. Decreasing only the number of frames to be logged while keeping the frame rate constant does not reduce the error much.

[Bar chart -- Experiment No: 03. To study variations in frame rate per second and constant number of frames to be logged.]

EXPERIMENT NO: 03 CONCLUSION
Varying the frame rate per second while keeping the number of frames to be logged constant reduces the error dramatically.

The final conclusion to remember is that acquiring image data from a video stream does not give accurate timing unless the error between the logging of two subsequent frames is described; the effect of increasing or decreasing the frame rate per second or the number of frames to be logged cannot be ignored.

Machine vision attached to robotics should be able to detect and handle the multitude of tasks that arise in "hard engineering" industries, where low-tolerance features give rise to highly variable images that must be processed alongside the timing of the work handling. If the timing goes wrong, the learning ability of the artificial intelligence will fall back on the trial-and-error method that humans have mastered. The development of machine vision or artificial intelligence should not be blinded by error in image acquisition and error in processing logged data. If logging data depends on processor speed, then the analysis of an image from a stream of data can only be perfected by accounting for the average error in processing it; this will lead to that perfection.
VIII. REFERENCES

References were gathered through a primary literature survey using internet sources such as http://citeseerx.ist.psu.edu and by interacting with experts. Published work was consulted for background knowledge only; the experiments and the evidence in this report are the author's own.

More Related Content

What's hot

IRJET- A Review Analysis to Detect an Object in Video Surveillance System
IRJET- A Review Analysis to Detect an Object in Video Surveillance SystemIRJET- A Review Analysis to Detect an Object in Video Surveillance System
IRJET- A Review Analysis to Detect an Object in Video Surveillance SystemIRJET Journal
 
Visual pattern recognition in robotics
Visual pattern recognition in roboticsVisual pattern recognition in robotics
Visual pattern recognition in roboticsIAEME Publication
 
Secure Image Transfer in The Domain Transform DFT
Secure Image Transfer in The Domain Transform DFTSecure Image Transfer in The Domain Transform DFT
Secure Image Transfer in The Domain Transform DFTijcisjournal
 
IRJET- Real Time Video Object Tracking using Motion Estimation
IRJET- Real Time Video Object Tracking using Motion EstimationIRJET- Real Time Video Object Tracking using Motion Estimation
IRJET- Real Time Video Object Tracking using Motion EstimationIRJET Journal
 
Background differencing algorithm for moving object detection using system ge...
Background differencing algorithm for moving object detection using system ge...Background differencing algorithm for moving object detection using system ge...
Background differencing algorithm for moving object detection using system ge...eSAT Publishing House
 
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET Journal
 

What's hot (7)

IRJET- A Review Analysis to Detect an Object in Video Surveillance System
IRJET- A Review Analysis to Detect an Object in Video Surveillance SystemIRJET- A Review Analysis to Detect an Object in Video Surveillance System
IRJET- A Review Analysis to Detect an Object in Video Surveillance System
 
Moving object detection
Moving object detectionMoving object detection
Moving object detection
 
Visual pattern recognition in robotics
Visual pattern recognition in roboticsVisual pattern recognition in robotics
Visual pattern recognition in robotics
 
Secure Image Transfer in The Domain Transform DFT
Secure Image Transfer in The Domain Transform DFTSecure Image Transfer in The Domain Transform DFT
Secure Image Transfer in The Domain Transform DFT
 
IRJET- Real Time Video Object Tracking using Motion Estimation
IRJET- Real Time Video Object Tracking using Motion EstimationIRJET- Real Time Video Object Tracking using Motion Estimation
IRJET- Real Time Video Object Tracking using Motion Estimation
 
Background differencing algorithm for moving object detection using system ge...
Background differencing algorithm for moving object detection using system ge...Background differencing algorithm for moving object detection using system ge...
Background differencing algorithm for moving object detection using system ge...
 
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
 

Viewers also liked

Global Sustainability Jam Hong Kong 2013
Global Sustainability Jam Hong Kong 2013Global Sustainability Jam Hong Kong 2013
Global Sustainability Jam Hong Kong 2013Beth Liang
 
บทที่ 7
บทที่ 7บทที่ 7
บทที่ 7teerachote
 
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...MM Naina Exports
 
Dinner with Peers
Dinner with PeersDinner with Peers
Dinner with PeersBeth Liang
 
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...MM Naina Exports
 
Applications of cleanrooms in various industry
Applications of cleanrooms in various industryApplications of cleanrooms in various industry
Applications of cleanrooms in various industryMM Naina Exports
 

Viewers also liked (10)

Presentation1
Presentation1Presentation1
Presentation1
 
Global Sustainability Jam Hong Kong 2013
Global Sustainability Jam Hong Kong 2013Global Sustainability Jam Hong Kong 2013
Global Sustainability Jam Hong Kong 2013
 
บทที่ 7
บทที่ 7บทที่ 7
บทที่ 7
 
Kejriwal
KejriwalKejriwal
Kejriwal
 
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...
Elcometer 107 Cross Hatch Cutter for adhesion tests provides an instant asses...
 
Thanco_Brochure
Thanco_BrochureThanco_Brochure
Thanco_Brochure
 
Dinner with Peers
Dinner with PeersDinner with Peers
Dinner with Peers
 
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...
Elcometer 456 Dry Film Coating Thickness Gauge sets new standards making meas...
 
Onetcom
OnetcomOnetcom
Onetcom
 
Applications of cleanrooms in various industry
Applications of cleanrooms in various industryApplications of cleanrooms in various industry
Applications of cleanrooms in various industry
 

Similar to Analysis of Error in Image Logging between Subsequent Frames

Automated Security Surveillance System in Real Time World
Automated Security Surveillance System in Real Time WorldAutomated Security Surveillance System in Real Time World
Automated Security Surveillance System in Real Time WorldIRJET Journal
 
Recognition and tracking moving objects using moving camera in complex scenes
Recognition and tracking moving objects using moving camera in complex scenesRecognition and tracking moving objects using moving camera in complex scenes
Recognition and tracking moving objects using moving camera in complex scenesIJCSEA Journal
 
Real-Time Video Copy Detection in Big Data
Real-Time Video Copy Detection in Big DataReal-Time Video Copy Detection in Big Data
Real-Time Video Copy Detection in Big DataIRJET Journal
 
Multimodel Operation for Visually1.docx
Multimodel Operation for Visually1.docxMultimodel Operation for Visually1.docx
Multimodel Operation for Visually1.docxAROCKIAJAYAIECW
 
IRJET- Storage Optimization of Video Surveillance from CCTV Camera
IRJET- Storage Optimization of Video Surveillance from CCTV CameraIRJET- Storage Optimization of Video Surveillance from CCTV Camera
IRJET- Storage Optimization of Video Surveillance from CCTV CameraIRJET Journal
 
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET Journal
 
Recent advances in content based video copy detection (IEEE)
Recent advances in content based video copy detection (IEEE)Recent advances in content based video copy detection (IEEE)
Recent advances in content based video copy detection (IEEE)PACE 2.0
 
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...Journal For Research
 
Graphical Password by Image Segmentation
Graphical Password by Image SegmentationGraphical Password by Image Segmentation
Graphical Password by Image SegmentationIRJET Journal
 
Video Stabilization using Python and open CV
Video Stabilization using Python and open CVVideo Stabilization using Python and open CV
Video Stabilization using Python and open CVIRJET Journal
 
Effective Compression of Digital Video
Effective Compression of Digital VideoEffective Compression of Digital Video
Effective Compression of Digital VideoIRJET Journal
 
IRJET-Feature Extraction from Video Data for Indexing and Retrieval
IRJET-Feature Extraction from Video Data for Indexing and Retrieval IRJET-Feature Extraction from Video Data for Indexing and Retrieval
IRJET-Feature Extraction from Video Data for Indexing and Retrieval IRJET Journal
 
IRJET- Object Detection using Machine Learning Technique
IRJET- Object Detection using Machine Learning TechniqueIRJET- Object Detection using Machine Learning Technique
IRJET- Object Detection using Machine Learning TechniqueIRJET Journal
 
IRJET- A Shoulder-Surfing Resistant Graphical Password System
IRJET- A Shoulder-Surfing Resistant Graphical Password System             IRJET- A Shoulder-Surfing Resistant Graphical Password System
IRJET- A Shoulder-Surfing Resistant Graphical Password System IRJET Journal
 
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...IRJET Journal
 
IRJET- Intrusion Detection through Image Processing and Getting Notified ...
IRJET-  	  Intrusion Detection through Image Processing and Getting Notified ...IRJET-  	  Intrusion Detection through Image Processing and Getting Notified ...
IRJET- Intrusion Detection through Image Processing and Getting Notified ...IRJET Journal
 

Similar to Analysis of Error in Image Logging between Subsequent Frames (20)

Automated Security Surveillance System in Real Time World
Automated Security Surveillance System in Real Time WorldAutomated Security Surveillance System in Real Time World
Automated Security Surveillance System in Real Time World
 
C1 mala1 akila
C1 mala1 akilaC1 mala1 akila
C1 mala1 akila
 
2007-_01-3912
2007-_01-39122007-_01-3912
2007-_01-3912
 
Recognition and tracking moving objects using moving camera in complex scenes
Recognition and tracking moving objects using moving camera in complex scenesRecognition and tracking moving objects using moving camera in complex scenes
Recognition and tracking moving objects using moving camera in complex scenes
 
Real-Time Video Copy Detection in Big Data
Real-Time Video Copy Detection in Big DataReal-Time Video Copy Detection in Big Data
Real-Time Video Copy Detection in Big Data
 
Multimodel Operation for Visually1.docx
Multimodel Operation for Visually1.docxMultimodel Operation for Visually1.docx
Multimodel Operation for Visually1.docx
 
IRJET- Storage Optimization of Video Surveillance from CCTV Camera
IRJET- Storage Optimization of Video Surveillance from CCTV CameraIRJET- Storage Optimization of Video Surveillance from CCTV Camera
IRJET- Storage Optimization of Video Surveillance from CCTV Camera
 
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
 
Gi3511181122
Gi3511181122Gi3511181122
Gi3511181122
 
Recent advances in content based video copy detection (IEEE)
Recent advances in content based video copy detection (IEEE)Recent advances in content based video copy detection (IEEE)
Recent advances in content based video copy detection (IEEE)
 
Ijetr011814
Ijetr011814Ijetr011814
Ijetr011814
 
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...
VIDEO SUMMARIZATION: CORRELATION FOR SUMMARIZATION AND SUBTRACTION FOR RARE E...
 
Graphical Password by Image Segmentation
Graphical Password by Image SegmentationGraphical Password by Image Segmentation
Graphical Password by Image Segmentation
 
Video Stabilization using Python and open CV
Video Stabilization using Python and open CVVideo Stabilization using Python and open CV
Video Stabilization using Python and open CV
 
Effective Compression of Digital Video
Effective Compression of Digital VideoEffective Compression of Digital Video
Effective Compression of Digital Video
 
IRJET-Feature Extraction from Video Data for Indexing and Retrieval
IRJET-Feature Extraction from Video Data for Indexing and Retrieval IRJET-Feature Extraction from Video Data for Indexing and Retrieval
IRJET-Feature Extraction from Video Data for Indexing and Retrieval
 
IRJET- Object Detection using Machine Learning Technique
IRJET- Object Detection using Machine Learning TechniqueIRJET- Object Detection using Machine Learning Technique
IRJET- Object Detection using Machine Learning Technique
 
IRJET- A Shoulder-Surfing Resistant Graphical Password System
IRJET- A Shoulder-Surfing Resistant Graphical Password System             IRJET- A Shoulder-Surfing Resistant Graphical Password System
IRJET- A Shoulder-Surfing Resistant Graphical Password System
 
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...
IRJET- Implementation of Privacy Preserving Content based Image Retrieval in ...
 
IRJET- Intrusion Detection through Image Processing and Getting Notified ...
IRJET-  	  Intrusion Detection through Image Processing and Getting Notified ...IRJET-  	  Intrusion Detection through Image Processing and Getting Notified ...
IRJET- Intrusion Detection through Image Processing and Getting Notified ...
 

More from THANMAY JS

Multimedia and Animation 20CS21P Portfolio.pdf
Multimedia and Animation 20CS21P Portfolio.pdfMultimedia and Animation 20CS21P Portfolio.pdf
Multimedia and Animation 20CS21P Portfolio.pdfTHANMAY JS
 
Fundamentals of Automation Technology 20EE43P Portfolio.pdf
Fundamentals of Automation Technology 20EE43P Portfolio.pdfFundamentals of Automation Technology 20EE43P Portfolio.pdf
Fundamentals of Automation Technology 20EE43P Portfolio.pdfTHANMAY JS
 
Elements of Industrial Automation Portfolio.pdf
Elements of Industrial Automation Portfolio.pdfElements of Industrial Automation Portfolio.pdf
Elements of Industrial Automation Portfolio.pdfTHANMAY JS
 
Fundamentals of Computer 20CS11T Chapter 5.pdf
Fundamentals of Computer 20CS11T Chapter 5.pdfFundamentals of Computer 20CS11T Chapter 5.pdf
Fundamentals of Computer 20CS11T Chapter 5.pdfTHANMAY JS
 
Fundamentals of Computer 20CS11T Chapter 4.pdf
Fundamentals of Computer 20CS11T Chapter 4.pdfFundamentals of Computer 20CS11T Chapter 4.pdf
Fundamentals of Computer 20CS11T Chapter 4.pdfTHANMAY JS
 
Fundamentals of Computer 20CS11T Chapter 3.pdf
Fundamentals of Computer 20CS11T Chapter 3.pdfFundamentals of Computer 20CS11T Chapter 3.pdf
Fundamentals of Computer 20CS11T Chapter 3.pdfTHANMAY JS
 
Fundamentals of Computer 20CS11T Chapter 2.pdf
Fundamentals of Computer 20CS11T Chapter 2.pdfFundamentals of Computer 20CS11T Chapter 2.pdf
Fundamentals of Computer 20CS11T Chapter 2.pdfTHANMAY JS
 
Fundamentals of Computer 20CS11T.pdf
Fundamentals of Computer 20CS11T.pdfFundamentals of Computer 20CS11T.pdf
Fundamentals of Computer 20CS11T.pdfTHANMAY JS
 
Elements of Industrial Automation Week 09 Notes.pdf
Elements of Industrial Automation Week 09 Notes.pdfElements of Industrial Automation Week 09 Notes.pdf
Elements of Industrial Automation Week 09 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 08 Notes.pdf
Elements of Industrial Automation Week 08 Notes.pdfElements of Industrial Automation Week 08 Notes.pdf
Elements of Industrial Automation Week 08 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 07 Notes.pdf
Elements of Industrial Automation Week 07 Notes.pdfElements of Industrial Automation Week 07 Notes.pdf
Elements of Industrial Automation Week 07 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 06 Notes.pdf
Elements of Industrial Automation Week 06 Notes.pdfElements of Industrial Automation Week 06 Notes.pdf
Elements of Industrial Automation Week 06 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 05 Notes.pdf
Elements of Industrial Automation Week 05 Notes.pdfElements of Industrial Automation Week 05 Notes.pdf
Elements of Industrial Automation Week 05 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 04 Notes.pdf
Elements of Industrial Automation Week 04 Notes.pdfElements of Industrial Automation Week 04 Notes.pdf
Elements of Industrial Automation Week 04 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 03 Notes.pdf
Elements of Industrial Automation Week 03 Notes.pdfElements of Industrial Automation Week 03 Notes.pdf
Elements of Industrial Automation Week 03 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 02 Notes.pdf
Elements of Industrial Automation Week 02 Notes.pdfElements of Industrial Automation Week 02 Notes.pdf
Elements of Industrial Automation Week 02 Notes.pdfTHANMAY JS
 
Elements of Industrial Automation Week 01 Notes.pdf
Elements of Industrial Automation Week 01 Notes.pdfElements of Industrial Automation Week 01 Notes.pdf
Elements of Industrial Automation Week 01 Notes.pdfTHANMAY JS
 
Automation and Robotics Week 08 Theory Notes 20ME51I.pdf
Automation and Robotics Week 08 Theory Notes 20ME51I.pdfAutomation and Robotics Week 08 Theory Notes 20ME51I.pdf
Automation and Robotics Week 08 Theory Notes 20ME51I.pdfTHANMAY JS
 
Automation and Robotics Week 07 Theory Notes 20ME51I.pdf
Automation and Robotics Week 07 Theory Notes 20ME51I.pdfAutomation and Robotics Week 07 Theory Notes 20ME51I.pdf
Automation and Robotics Week 07 Theory Notes 20ME51I.pdfTHANMAY JS
 
Automation and Robotics Week 06 Theory Notes 20ME51I.pdf
Automation and Robotics Week 06 Theory Notes 20ME51I.pdfAutomation and Robotics Week 06 Theory Notes 20ME51I.pdf
Automation and Robotics Week 06 Theory Notes 20ME51I.pdfTHANMAY JS
 

More from THANMAY JS (20)

Multimedia and Animation 20CS21P Portfolio.pdf
Multimedia and Animation 20CS21P Portfolio.pdfMultimedia and Animation 20CS21P Portfolio.pdf
Multimedia and Animation 20CS21P Portfolio.pdf
 
Fundamentals of Automation Technology 20EE43P Portfolio.pdf
Fundamentals of Automation Technology 20EE43P Portfolio.pdfFundamentals of Automation Technology 20EE43P Portfolio.pdf
Fundamentals of Automation Technology 20EE43P Portfolio.pdf
 
Elements of Industrial Automation Portfolio.pdf
Elements of Industrial Automation Portfolio.pdfElements of Industrial Automation Portfolio.pdf
Elements of Industrial Automation Portfolio.pdf
 
Fundamentals of Computer 20CS11T Chapter 5.pdf
Fundamentals of Computer 20CS11T Chapter 5.pdfFundamentals of Computer 20CS11T Chapter 5.pdf
Fundamentals of Computer 20CS11T Chapter 5.pdf
 
Fundamentals of Computer 20CS11T Chapter 4.pdf
Fundamentals of Computer 20CS11T Chapter 4.pdfFundamentals of Computer 20CS11T Chapter 4.pdf
Fundamentals of Computer 20CS11T Chapter 4.pdf
 
Fundamentals of Computer 20CS11T Chapter 3.pdf
Fundamentals of Computer 20CS11T Chapter 3.pdfFundamentals of Computer 20CS11T Chapter 3.pdf
Fundamentals of Computer 20CS11T Chapter 3.pdf
 
Fundamentals of Computer 20CS11T Chapter 2.pdf
Fundamentals of Computer 20CS11T Chapter 2.pdfFundamentals of Computer 20CS11T Chapter 2.pdf
Fundamentals of Computer 20CS11T Chapter 2.pdf
 
Fundamentals of Computer 20CS11T.pdf
Fundamentals of Computer 20CS11T.pdfFundamentals of Computer 20CS11T.pdf
Fundamentals of Computer 20CS11T.pdf
 
Elements of Industrial Automation Week 09 Notes.pdf
Elements of Industrial Automation Week 09 Notes.pdfElements of Industrial Automation Week 09 Notes.pdf
Elements of Industrial Automation Week 09 Notes.pdf
 
Elements of Industrial Automation Week 08 Notes.pdf
Elements of Industrial Automation Week 08 Notes.pdfElements of Industrial Automation Week 08 Notes.pdf
Elements of Industrial Automation Week 08 Notes.pdf
 
Elements of Industrial Automation Week 07 Notes.pdf
Elements of Industrial Automation Week 07 Notes.pdfElements of Industrial Automation Week 07 Notes.pdf
Elements of Industrial Automation Week 07 Notes.pdf
 
Elements of Industrial Automation Week 06 Notes.pdf
Elements of Industrial Automation Week 06 Notes.pdfElements of Industrial Automation Week 06 Notes.pdf
Elements of Industrial Automation Week 06 Notes.pdf
 
Elements of Industrial Automation Week 05 Notes.pdf
Elements of Industrial Automation Week 05 Notes.pdfElements of Industrial Automation Week 05 Notes.pdf
Elements of Industrial Automation Week 05 Notes.pdf
 
Elements of Industrial Automation Week 04 Notes.pdf
Elements of Industrial Automation Week 04 Notes.pdfElements of Industrial Automation Week 04 Notes.pdf
Elements of Industrial Automation Week 04 Notes.pdf
 
Elements of Industrial Automation Week 03 Notes.pdf
Elements of Industrial Automation Week 03 Notes.pdfElements of Industrial Automation Week 03 Notes.pdf
Elements of Industrial Automation Week 03 Notes.pdf
 
Elements of Industrial Automation Week 02 Notes.pdf
Elements of Industrial Automation Week 02 Notes.pdfElements of Industrial Automation Week 02 Notes.pdf
Elements of Industrial Automation Week 02 Notes.pdf
 
Elements of Industrial Automation Week 01 Notes.pdf
Elements of Industrial Automation Week 01 Notes.pdfElements of Industrial Automation Week 01 Notes.pdf
Elements of Industrial Automation Week 01 Notes.pdf
 
Automation and Robotics Week 08 Theory Notes 20ME51I.pdf
Automation and Robotics Week 08 Theory Notes 20ME51I.pdfAutomation and Robotics Week 08 Theory Notes 20ME51I.pdf
Automation and Robotics Week 08 Theory Notes 20ME51I.pdf
 
Automation and Robotics Week 07 Theory Notes 20ME51I.pdf
Automation and Robotics Week 07 Theory Notes 20ME51I.pdfAutomation and Robotics Week 07 Theory Notes 20ME51I.pdf
Automation and Robotics Week 07 Theory Notes 20ME51I.pdf
 
Automation and Robotics Week 06 Theory Notes 20ME51I.pdf
Automation and Robotics Week 06 Theory Notes 20ME51I.pdfAutomation and Robotics Week 06 Theory Notes 20ME51I.pdf
Automation and Robotics Week 06 Theory Notes 20ME51I.pdf
 

Recently uploaded

Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsPrecisely
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 

Recently uploaded (20)

Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 

Analysis of Error in Image Logging between Subsequent Frames

  • 1. Analysis of Error in Image Logging between Subsequent Frames of a Streaming Video Mr. THANMAY J.S Assistant Professor, Mechanical Department, G.S.S.S.I.E.T.W Mysore. Email: thanmay26@rediffmail.com Abstract— In Image Analysis of a streaming video for certain applications, it may not be necessary to log every frame provided by an image acquisition device. In fact, it may be more practical and resourceful to log frames at certain intervals. To log frames at a certain interval acquisition device must be configured for Frame Grab Interval, Trigger property, Frame Rate property, umber of frames to be logged etc. Configuring the property to an integer value specifies which frame should be logged. Generally error inclusions due to delay in logging data is a critical issue which is well understood in this report. Set of experimental results with changes in Frame rate and umber of Frames to be logged along with percentage errors are analyzed in this report Whenever captured image is consider for visual analysis traveling on a fast-moving conveyor belt. The image capture system must know when they are in the right place and capturing the image and transmit the result for processing before the next sample arrives. Selection of trigger sensors is no problem; the next consideration is timing which can be a sporadic in nature and depends upon logging data and error in acquisition. This report explains the importance of adjusting the device parameters like frame rate and grab interval at which frames were logged and how number of frames logged per second with error in logging frames. Keywords: Image Acquisition, Video Device setting, Accessing Device Properties, Frame Rate, Grab Interval, Logging Data, Time difference between frames etc. I. I objects that represent the connection to the device. This kind of Image Acquisition system is well suitable for Artificial intelligence and Machine vision system in industries. image acquisition application involves these major steps: Step 1) Starting the video input object -- You start an object by calling the start function. Starting an object prepares the object for data acquisition. For example, starting an object locks the values of certain object properties (they become read only). Starting an object does not initiate the acquiring of image frames, however. The initiation of data logging depends on the execution of a trigger. The example calls the start function to start the video input object. Objects stop when they have acquired the requested number of frames. Step 2) Triggering the acquisition -- To acquire data, a video input object must execute a trigger. Triggers can occur in several ways, depending on how the Trigger Type property is configured. For example, if you specify an immediate trigger, the object executes a trigger automatically, immediately after it starts. If you specify a manual trigger, the object waits for a call to the trigger function before it initiates data acquisition. Step 3) Bringing data into the workspace -- The Image analysis software stores acquired data in a memory buffer, a disk file, or both, depending on the value of the video input object Logging Mode property. To work with this data, you must bring it into the workspace. Once the data is in the workspace, you can manipulate it as you would any other data. TRODUCTIO TO IMAGE ACQUISITIO This document provides an information for the Acquiring of Image Data after you create the video input object and configure its properties. 
The Image Acquisition system enables you to connect to an image acquisition device from within a software interface session and based on object technology, the Image Acquisition system provides functions for creating Mr. THA MAY J.S, Assistant Professor in Mechanical Department, G.S.S.S Institute of Engineering and Technology for Women, K.R.S. Road, Metagalli, Mysore-570016, Karnataka, India. (E-mail of corresponding author: thanmay26@rediffmail.com). II. BASIC IMAGE ACQUISITIO PROCEDURE This section illustrates the basic steps required to create an image acquisition application by implementing a simple motion detection application. The application detects movement in a scene by performing a pixel-to-pixel comparison in pairs of incoming image frames. If nothing moves in the scene, pixel values remain the same in each frame. When something moves in the image, the application displays the pixels that have changed values. To use the Image Acquisition system to acquire image data, you must perform the following basic steps: Step 1: Install and configure your image acquisition device Step 2: Configure image acquisition properties Step 3: Create a video input object Step 4: Preview the video stream Step 5: Acquire image data Step 6: doing Image analysis and displaying results Step 7: Cleaning up the memory Certain steps are optional and this kind of system to acquire streaming images is well suitable for Machine vision system.
  • 2. III. IMAGE ACQUISITIO PROPERTIES 1) The frame rate describes how fast an image acquisition device provides data, typically measured as frames per second (Figure o 1). Devices that support industry-standard video formats must provide frames at the rate specified by the standard. For RS170 and TSC, the standard dictates a frame rate of 30 frames per second (30 Hz). The CCIR and PAL standards define a frame rate of 25 Hz. on-standard devices can be configured to operate at higher rates. Generic Windows image acquisition devices, such as Webcams, might support many different frame rates. Depending on the device being used, the frame rate might be configurable using a devicespecific property of the image acquisition object. The rate at which the Image Acquisition software can process images depends on the processor speed, the complexity of the processing algorithm, and the frame rate. Given a fast processor, a simple algorithm, and a frame rate tuned to the acquisition setup, the Image Acquisition software can process data as it comes in. 2) The Frame Grab Interval property specifies how often the video input object acquires a frame from the video stream (Figure o 1). By default, objects acquire every frame in the video stream, but you can use this property to specify other acquisition intervals. For example, when you specify a Frame Grab Interval value of 3, the object acquires every third frame from the video stream, as illustrated in this figure. The object acquires the first frame in the video stream before applying the Frame Grab Interval. 3) There are two methods to use Trigger Properties one is Automatic and other is Manual (Figure o 1). To use an Automatic immediate trigger, simply create a video input object. Immediate triggering is the default trigger type for all video input objects. With an immediate trigger, the object executes the trigger immediately after you start the object running with the start command. To use a manual trigger, create a video input object and set the value of the Trigger Type property to 'manual'. A video input object executes a manual trigger after you issue the trigger function. 4) Frames per Trigger are used to move multiple frames of data from the memory buffer into the workspace (Figure o 1). By default, getting data retrieves the number of frames specified in the Frames per Trigger property but you can specify any number. In the Image Acquisition system, one can specify the amount of data intended to be acquired as the number of frames per trigger. You specify the desired size of your acquisition as the value of the video input object Frames per Trigger property. Figure o 2. Machine Vision application using video stream. The question arises is “why to understand all these things?” the reason is “Timing”. Observe the above figure (Figure o 2) where moving items are constantly monitored by Machine vision. If logging time and process does not match each other then caps of a bottle will be places on label and label will be stuck on caps. Is this possible? “YES”; for this I give you a following popular experiment. IV. U DERSTA DI G PROPERTIES OF IMAGE ACQUISITIO Experiment umber: 01 The following experiment is done by lighting a match stick in front of a web cam with 30 frames per second (Figure o 3). This gives an idea how well the image is acquired. 30 frames per second means 1 frame is 0.033 second. If I calculate number of frames then I should get exact timing of a match stick burning rate. Let us see how? Figure o 3. 
Observe Figure No. 3: at 30 fps each frame represents 0.033 second, and the burning of the match stick spans 146 frames. This suggests that 146/30 is the time required for the match stick to burn, apparently only 4.866 seconds.

Experiment Number: 02
The following experiment is performed in the same way, lighting a match stick in front of a webcam at 30 frames per second (Figure No. 4), but with 300 frames logged. Here 155 frames show the burning, so 155/30 should be the burning time of the match stick, apparently only 5.166 seconds.

Figure No. 4. Video stream of a match stick burnt at 30 fps with 300 frames logged.

Figure No. 1. Image acquisition properties in a video stream.

Note: a match stick actually burns for up to about 25 seconds.
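Using the figures reported above (146 and 155 burning frames at a nominal 30 fps), the naive frames-divided-by-frame-rate estimate can be reproduced directly, which makes the contradiction with the roughly 25-second real burn time obvious:

```python
# Naive duration estimate for the two match-stick experiments: number of
# burning frames divided by the nominal frame rate.  The frame counts and
# the 30 fps figure are the ones reported above.
FPS = 30
for experiment, burning_frames in ((1, 146), (2, 155)):
    naive_seconds = burning_frames / FPS
    print(f"Experiment {experiment}: {burning_frames} frames "
          f"-> {naive_seconds:.3f} s (a match stick actually burns ~20-25 s)")
```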
V. REASONS FOR PROPERTIES OF IMAGE ACQUISITION TO DISOBEY

Whenever a queue of data is waiting to enter the processor of a system, frames are held up because the processing speed is too low. This gives rise to an error known as the logging error between two frames: a frame waiting to be logged delays the next frame until it has been processed, and once it has been processed the waiting frame is no longer counted, because the video stream is a dynamic system in which data is logged on the basis of the number of frames to be logged. For a proper flow of data to be logged and processed, the system follows a few rules (a short sketch implementing these rules is given after the worked examples below):

Rule Number 01: Logging rate = 1 / (frames logged per second); e.g., at 30 fps the logging rate is 1/30 log per second, i.e., 0.033 second per frame.
Rule Number 02: Average difference between frames = the average of the time difference between each (n-1)th frame and the nth frame.
Rule Number 03: Percentage error = |Logging rate - Average difference| x 100.

If these errors are not accounted for, the results are the ones just seen. Now let us solve the two experiments.

Experiment Number: 01
Number of frames per second = 30 fps
Number of frames to be logged = 400
Frames acquired during burning = 146
Average difference between two frames = 0.1456 sec
Error between two frames (average) = 11.2231
Approximate time for the match stick to burn = 20 sec
Then the corrected time should be 146 x 0.112231 + 4.86 = 21.245726 sec [1], [2].

Experiment Number: 02
Number of frames per second = 30 fps
Number of frames to be logged = 300
Frames acquired during burning = 155
Average difference between two frames = 0.1464 sec
Error between two frames (average) = 11.3078
Approximate time for the match stick to burn = 23 sec
Then the corrected time should be 155 x 0.113078 + 5.16 = 22.68709 sec.

Seeing results resolved to so many decimal places is striking. These results can bring the machine vision system back to its intended timing, but a conclusion cannot be drawn so easily. What if the number of frames to be logged ranges from 5 to 5000, or the frame rate from 5 to 30 fps? Such questions need specific answers, so several experiments were carried out: the frame rate and the number of frames to be logged were varied systematically and the resulting logging errors studied. This revealed several important factors that cannot be explained in singular terms.
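The following is a minimal sketch of the three rules, assuming a list of per-frame timestamps such as the one collected in the earlier grab-interval example; the worked figures at the end are the Experiment Number 01 values quoted above (30 fps, 146 burning frames, 0.1456 s average difference), so the printed error and corrected time land close to the 11.22 and 21.25 s reported in the text.

```python
# Sketch of Rules 01-03 and the corrected-duration calculation, assuming a
# list of per-frame timestamps (e.g. from the grab-interval example above).
def logging_rate(fps):
    # Rule 01: logging rate = 1 / frames logged per second
    return 1.0 / fps

def average_difference(timestamps):
    # Rule 02: average time difference between consecutive frames;
    # would supply avg_diff below when live timestamps are available.
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

def percent_error(fps, avg_diff):
    # Rule 03: percentage error = |logging rate - average difference| * 100
    return abs(logging_rate(fps) - avg_diff) * 100

# Experiment Number 01 figures quoted in the text
fps, frames, avg_diff = 30, 146, 0.1456
err = percent_error(fps, avg_diff)            # ~11.22
naive = frames / fps                          # ~4.87 s
corrected = frames * (err / 100) + naive      # ~21.25 s
print(f"percent error {err:.4f}, naive {naive:.3f} s, corrected {corrected:.3f} s")
```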
The following set of experiments is presented with the corresponding results in tabular form; the variations applied in each case are noted and the results tabulated.

VI. EXPERIMENT CASE STUDY FOR ERROR ANALYSIS BETWEEN LOGGING OF TWO FRAMES

The following three tables show the results obtained by varying the frame rate per second, varying the number of frames to be logged, or holding one of them constant.

TABLE I
EXPERIMENT NO: 01 — To study variations in frame rate per second together with corresponding variations in the number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (s/frame) | Average diff (s) | Percent error (%)
1 | 30 | 300 | 0.03 | 0.1474 | 11.4098
2 | 25 | 250 | 0.04 | 0.1473 | 10.7297
3 | 20 | 200 | 0.05 | 0.1458 | 9.5844
4 | 15 | 150 | 0.06 | 0.1463 | 7.9682
5 | 10 | 100 | 0.1 | 0.1422 | 4.2212
6 | 5 | 50 | 0.2 | 0.1341 | 6.5878

TABLE II
EXPERIMENT NO: 02 — To study a constant frame rate per second with variation in the number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (s/frame) | Average diff (s) | Percent error (%)
1 | 30 | 300 | 0.033 | 0.1474 | 11.4098
2 | 30 | 250 | 0.033 | 0.1472 | 11.3888
3 | 30 | 200 | 0.033 | 0.1469 | 11.3541
4 | 30 | 150 | 0.033 | 0.1454 | 11.2056
5 | 30 | 100 | 0.033 | 0.1438 | 11.0495
6 | 30 | 50 | 0.033 | 0.1348 | 10.1422

TABLE III
EXPERIMENT NO: 03 — To study variations in frame rate per second with a constant number of frames to be logged.

Experiment No | Frame rate (fps) | Number of frames logged | Logging rate (s/frame) | Average diff (s) | Percent error (%)
1 | 30 | 100 | 0.03 | 0.1428 | 10.9444
2 | 25 | 100 | 0.04 | 0.1416 | 10.1646
3 | 20 | 100 | 0.05 | 0.1412 | 9.1232
4 | 15 | 100 | 0.06 | 0.1414 | 7.4768
5 | 10 | 100 | 0.1 | 0.1357 | 3.5737
6 | 5 | 100 | 0.2 | 0.1413 | 5.8747

Observe the error percentage in each table: the variation is hard to appreciate until it is expressed in numerical values, and timing is the question to be answered. In each of the three tables the logging rate itself is not the problem, because a dense image has a larger logging cost, everything to be logged being handled at the level of pixels. The error that is registered is the real picture. Averaging the logging rate resolves this, and the error percentage is calculated in order to obtain correct timing.
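For completeness, the sketch below shows how the tabulated results can be plotted as grouped bar charts in the style used in the conclusion (Section VII), here for the Table I data; matplotlib is an assumed plotting dependency and the numbers are copied from the table.

```python
# Sketch reproducing the style of the grouped bar charts used in Section VII,
# shown for the Table I data.  matplotlib is an assumed dependency.
import numpy as np
import matplotlib.pyplot as plt

experiments   = np.arange(1, 7)
frame_rate    = [30, 25, 20, 15, 10, 5]
frames_logged = [300, 250, 200, 150, 100, 50]
percent_error = [11.4098, 10.7297, 9.5844, 7.9682, 4.2212, 6.5878]

width = 0.25
plt.bar(experiments - width, frame_rate,    width, label="Frames per second")
plt.bar(experiments,         percent_error, width, label="Percentage error")
plt.bar(experiments + width, frames_logged, width, label="Number of frames logged")
plt.xlabel("Experiment number")
plt.title("Experiment No. 01: varying frame rate and frames logged")
plt.legend()
plt.show()
```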
Machine vision, computer vision, or image analysis of any video stream requires the logging of data, and if that data contains errors the whole analysis goes wrong.

VII. CONCLUSION

How can anyone believe that a match stick burns for only 3 to 5 seconds? This shows that even the registration of a video stream depends on processing speed, and this has to be accounted for in any standard. The tabulated results above are now shown in graphical form to conclude; the three bar charts, one per table, plot the frames per second, the percentage error and the number of frames logged for experiments 1 to 6, and they clarify the doubts regarding image acquisition and the error incurred.

Bar chart — Experiment No: 01: variations in frame rate per second and corresponding variations in number of frames to be logged (bars: frames per second, percentage error, number of frames logged).

Experiment No: 01 conclusion — A higher frame rate together with a larger number of frames to be logged always shows a higher error; decreasing both simultaneously reduces the error progressively.

Bar chart — Experiment No: 02: constant frame rate per second with variation in number of frames to be logged (bars: frames per second, percentage error, number of frames logged).

Experiment No: 02 conclusion — A higher frame rate and a higher number of frames to be logged show a higher error; decreasing only the number of frames to be logged while keeping the frame rate constant does not reduce the error much.

Bar chart — Experiment No: 03: variations in frame rate per second with a constant number of frames to be logged (bars: frames per second, percentage error, number of frames logged).

Experiment No: 03 conclusion — Varying the frame rate while keeping the number of frames to be logged constant reduces the error dramatically.

The final conclusion to remember is that acquiring image data from a video stream does not give accurate timing unless the error between the logging of two subsequent frames is described; the effect of increasing or decreasing the frame rate or the number of frames to be logged cannot be ignored. Machine vision attached to robotics should be able to detect and handle the multitude of tasks that arise in "hard engineering" industries, where low-tolerance features give rise to highly variable images that must be processed together with the timing of work handling. If the timing goes wrong, the learning ability of the artificial intelligence degenerates into the trial-and-error method that humans have already mastered. The development of machine vision and artificial intelligence should not be blinded by error in image acquisition and error in processing the logged data. If the logging of data depends on processor speed, then any analysis of an image from a stream of data must account for the average error in processing it; this leads towards perfection.
VIII. REFERENCES

References were gathered through a literature survey of primary sources, through internet sites such as http://citeseerx.ist.psu.edu, and by interacting with experts. The references and readily available publications were used for background knowledge only; the experiments and the evidence presented in this report are the author's own.

[1] A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees", IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243-250, June 1996.
[2] P. Corriveau and A. Webster, "VQEG Evaluation of Objective Methods of Video Quality Assessment", SMPTE Journal, vol. 108, pp. 645-648, 1999.
[3] A. M. Rohaly, P. Corriveau, J. Libert, A. Webster, V. Baroncini, J. Beerends, J. L. Blin, L. Contin, T. Hamada, D. Harrison, A. Hekstra, J. Lubin, Y. Nishida, R. Nishihara, J. Pearson, A. F. Pessoa, N. Pickford, A. Schertz, M. Visca, A. B. Watson, and S. Winkler, "Video Quality Experts Group: Current Results and Future Directions", Proc. SPIE Visual Communications and Image Processing, vol. 4067, Perth, Australia, June 21-23, 2000.
[4] C. B. Lambrecht, Ed., "Special Issue on Image and Video Quality Metrics", Signal Processing, vol. 70, 1998.
[5] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243-250, June 1996.
[6] "JPEG software codec", Portable Video Research Group, Stanford University, 1997. Available via anonymous ftp from havefun.stanford.edu:pub/jpeg/JPEGv1.1.tar.Z.
[7] J. Ashley, R. Barber, M. Flickner, J. Hafner, D. Lee, W. Niblack, and D. Petkovic, "Automatic and semi-automatic methods for image annotation and retrieval in QBIC", in W. Niblack and R. C. Jain, Eds., Storage and Retrieval for Image and Video Databases III, SPIE - The International Society for Optical Engineering, 1995.
[8] A. Rav-Acha and S. Peleg, "Restoration of multiple images with motion blur in different directions", IEEE Workshop on Applications of Computer Vision, 2000.
[9] K. Cinkler and A. Mertins, "Coding of digital video with the edge-sensitive discrete wavelet transform", in IEEE International Conference on Image Processing, vol. 1, pp. 961-964, 1996.
[10] J. Bach, C. Fuller, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey, R. Jain, and C. Shu, "The Virage Image Search Engine: An Open Framework for Image Management", Proc. SPIE Conf. Storage and Retrieval for Image and Video Databases IV, vol. 2670, pp. 76-87, 1996.
[11] R. J. Vidmar, "On the use of atmospheric plasmas as electromagnetic reflectors", IEEE Trans. Plasma Sci. [Online], vol. 21, no. 3, pp. 876-880, Aug. 1992. Available: http://www.halcyon.com/pub/journals/21ps03-vidmar