The document proposes a real-time object localization and tracking method for image sequences using a dynamic model, integrating salient object detection and visual tracking techniques. At the detection stage, an object is detected from a saliency map computed using boundary connectivity cues. A Kalman filter then tracks the object across frames, providing a coarse prediction that is refined by local detection within a search region. Experiments show the method runs faster than state-of-the-art trackers while achieving comparable accuracy, without manual initialization.
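A constant-velocity Kalman filter of the kind the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual model: the state layout, matrices, and noise levels are assumptions, and all function names are hypothetical.

```python
import numpy as np

def make_cv_model(dt=1.0, q=1e-2, r=1.0):
    # State: (x, y, vx, vy); measurement: (x, y).
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # only position is observed
    Q = q * np.eye(4)                            # process noise covariance
    R = r * np.eye(2)                            # measurement noise covariance
    return F, H, Q, R

def predict(x, P, F, Q):
    # Coarse prediction of the next state and its uncertainty.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Correct the prediction with a position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In a tracker of this shape, the predicted position from `predict` would define the search region for local detection, and the detection result would feed back in through `update`.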
A NOVEL GRAPH REPRESENTATION FOR SKELETON-BASED ACTION RECOGNITION (sipij)
Graph convolutional networks (GCNs) have proven effective for processing structured data, as they can capture the features of related nodes and improve model performance. Increasing attention is therefore being paid to applying GCNs to skeleton-based action recognition, but existing GCN-based methods face several challenges. First, the consistency of temporal and spatial features is ignored because features are extracted node by node and frame by frame. We design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN), which obtains spatiotemporal features simultaneously. Second, the adjacency matrix describing the relations between joints mostly depends on the physical connections between them. We propose a multi-scale graph strategy to appropriately describe the relations between joints in the skeleton graph, adopting a full-scale graph, a part-scale graph, and a core-scale graph to capture the local features of each joint and the contour features of important joints. Extensive experiments on two large datasets, NTU RGB+D and Kinetics Skeleton, show that TGN with our graph strategy outperforms other state-of-the-art methods.
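The graph convolution this line of work builds on can be sketched as one layer of H' = ReLU(Â H W), where Â is the symmetrically normalized adjacency with self-loops. The 5-joint toy skeleton and random weights below are illustrative assumptions, not the paper's graphs.

```python
import numpy as np

def normalize_adjacency(A):
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt      # D^{-1/2} (A + I) D^{-1/2}

def gcn_layer(A_hat, H, W):
    # One graph convolution: aggregate neighbor features, then project and ReLU.
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy physical-connection adjacency for a 5-joint chain (e.g. one arm).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
A_hat = normalize_adjacency(A)
```

A multi-scale strategy of the kind described would build several such adjacency matrices (full-scale, part-scale, core-scale) and run the same layer over each.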
COMPARATIVE PERFORMANCE ANALYSIS OF RNSC AND MCL ALGORITHMS ON POWER-LAW DIST... (acijjournal)
Cluster analysis of graph-related problems is an important issue nowadays. Many graph clustering techniques have appeared in the field, but most are vulnerable in terms of effectiveness and fragmentation of output in real-world applications across diverse systems. In this paper, we provide a comparative behavioural analysis of the RNSC (Restricted Neighbourhood Search Clustering) and MCL (Markov Clustering) algorithms on power-law distribution graphs. RNSC is a graph clustering technique using stochastic local search; it seeks an optimal-cost clustering by assigning cost functions to the set of clusterings of a graph, and was implemented by A. D. King only for undirected, unweighted random graphs. MCL, another popular graph clustering algorithm, is based on a stochastic flow simulation model for weighted graphs. Power-law, or scale-free, graphs have plentiful applications in nature and society. Scale-free topology is stochastic, i.e., nodes are connected in a random manner. Complex network topologies such as the World Wide Web, the web of human sexual contacts, and the chemical network of a cell basically follow a power-law distribution when representing real-life systems. This paper uses real large-scale power-law distribution graphs to analyse the performance of RNSC compared with the Markov clustering (MCL) algorithm. Extensive experimental results on several synthetic and real power-law distribution datasets reveal the effectiveness of our comparative performance measure of these algorithms in terms of clustering cost, cluster size, modularity index of the clustering results, and normalized mutual information (NMI).
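The core MCL iteration referenced above alternates expansion (a matrix power, spreading flow along edges) and inflation (an elementwise power followed by column renormalization, boosting strong flows) on the column-stochastic transition matrix of the graph. A minimal sketch, assuming the common default parameters (expansion e = 2, inflation r = 2):

```python
import numpy as np

def mcl(A, e=2, r=2.0, iters=50):
    M = A + np.eye(A.shape[0])             # add self-loops
    M = M / M.sum(axis=0, keepdims=True)   # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, e)   # expansion: flow spreads along edges
        M = M ** r                         # inflation: strong flows are boosted
        M = M / M.sum(axis=0, keepdims=True)
    return M

def cluster_labels(M):
    # At convergence, each column concentrates on an attractor row;
    # nodes sharing an attractor belong to the same cluster.
    return M.argmax(axis=0)
```

On a graph of two triangles joined by a single bridge edge, this iteration separates the two triangles into distinct clusters.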
Satellite image compression reduces redundancy in data representation in order to save storage and transmission costs. Image compression compensates for limited on-board resources, in terms of mass memory and downlink bandwidth, and thus provides a solution to the bandwidth-versus-data-volume dilemma of modern spacecraft. Compression is therefore a very important feature in the payload image processing units of many satellites. In this paper, an improvement of the quantization step of the input vectors is proposed. The k-nearest neighbour (KNN) algorithm is applied on each axis, and the three resulting classifications, considered as three independent sources of information, are combined in the framework of evidence theory to select the best code vector. A Huffman scheme is then applied for encoding and decoding.
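The Huffman stage mentioned above can be sketched generically: build a prefix code from symbol frequencies (here, the quantized code-vector indices) and use it to encode and decode a sequence. This is textbook Huffman coding, not the paper's exact pipeline; all names are illustrative.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    # Each heap item: [weight, tiebreak, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                         # degenerate single-symbol input
        return {heap[0][2][0]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]            # left branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]            # right branch
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
        tiebreak += 1
    return dict((s, c) for s, c in heapq.heappop(heap)[2:])

def encode(symbols, code):
    return "".join(code[s] for s in symbols)

def decode(bits, code):
    # Prefix-free property lets us decode greedily, bit by bit.
    inv = {c: s for s, c in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out
```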
Exploration of Normalized Cross Correlation to Track the Object through Vario... (iosrjce)
Object tracking is a process devoted to locating the pathway of a moving object in a succession of frames. Object tracking has emerged as a challenging facet of robot navigation, military applications, traffic monitoring, video surveillance, and other fields. In the first phase of contributions, tracking is performed by matching a template against the exhaustive image through Normalized Cross Correlation (NCCR). To update the template, moving objects are detected using a frame-difference technique at regular frame intervals; the NCCR, Principal Component Analysis (PCA), or Histogram Regression Line (HRL) between the template and the moving objects is then estimated to find the best match for updating the template. The second phase discusses tracking between the template and a partitioned image through NCCR with reduced computational cost, while the updating schemes remain the same. An exploration over varied benchmark datasets has been carried out, followed by a comparative analysis of the proposed systems under the different updating schemes (NCCR, PCA, and HRL). The proposed systems demonstrate the capability to track an object reliably under diverse illumination conditions.
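Template matching by normalized cross-correlation, the operation at the heart of the first phase, slides the template over the image and scores each position by the zero-mean normalized correlation. A direct (unoptimized) sketch; real systems would use FFT-based correlation, and the function name is illustrative.

```python
import numpy as np

def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()             # zero-mean template
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()                  # zero-mean window
            denom = np.sqrt((wz ** 2).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best                      # location and NCC score in [-1, 1]
```

An exact copy of the template embedded in the image scores 1.0 at its true location, which is why the measure is robust to uniform illumination changes.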
Spectroscopy, or hyperspectral imaging, consists of the acquisition, analysis, and extraction of spectral information measured over a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering; however, due to the high dimensionality of the data and the small distances between different material signatures, clustering it is a challenging task. In this paper, we empirically compare five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Center, Pavia, and Pavia University. Besides accuracy, we adopt four more similarity measures: the Rand statistic, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. According to accuracy, fuzzy C-means clustering does better on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Center data set, and hierarchical clustering is better for Pavia University.
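Two of the external similarity measures named above, the Rand statistic and the Jaccard coefficient, are both computed from pair counts between a clustering and the reference labels: pairs grouped together in both (a11), in only one (a10, a01), or in neither (a00). A small sketch:

```python
from itertools import combinations

def pair_counts(labels_a, labels_b):
    a11 = a10 = a01 = a00 = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            a11 += 1            # together in both partitions
        elif same_a:
            a10 += 1            # together only in A
        elif same_b:
            a01 += 1            # together only in B
        else:
            a00 += 1            # separated in both
    return a11, a10, a01, a00

def rand_index(labels_a, labels_b):
    a11, a10, a01, a00 = pair_counts(labels_a, labels_b)
    return (a11 + a00) / (a11 + a10 + a01 + a00)

def jaccard_index(labels_a, labels_b):
    a11, a10, a01, a00 = pair_counts(labels_a, labels_b)
    return a11 / (a11 + a10 + a01)
```

Identical partitions score 1.0 on both measures; Jaccard ignores the a00 pairs and so penalizes disagreement more heavily.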
Robust foreground modelling to segment and detect multiple moving objects in ... (IJECEIAES)
The last decade has witnessed an ever-increasing number of video surveillance installations due to rising security concerns worldwide. With this comes the need for video analysis for fraud detection, crime investigation, and traffic monitoring, to name a few. For any video analysis application, detection of moving objects is a fundamental step. In this paper, an efficient foreground modelling method to segment multiple moving objects is implemented. The proposed method significantly reduces noise, accurately segmenting the region of interest under dynamic conditions while handling occlusion to a large extent. Extensive performance analysis shows that the proposed method gives far better results than the de facto standard as well as relatively new approaches for moving object detection.
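A common baseline for the foreground modelling task described above maintains a running-average background and flags pixels that deviate beyond a threshold. This generic sketch is not the paper's specific method (which the abstract does not detail); `alpha` and `thresh` are assumed parameters.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Exponential moving average: slowly absorb scene changes into the background.
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    # Per-pixel deviation test against the background model.
    return np.abs(frame - bg) > thresh
```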
Fusion of Multispectral And Full Polarimetric SAR Images In NSST Domain (CSCJournals)
Polarimetric SAR (POLSAR) and multispectral images provide different characteristics of the imaged objects: multispectral imagery provides information about surface material, while POLSAR provides information about the geometrical and physical properties of the objects. Merging both should resolve many object recognition problems that exist when they are used separately. In this paper, we propose a new scheme for fusing a full-polarization radar image (POLSAR) with a multispectral optical satellite image (Egyptsat). The proposed scheme is based on the Non-Subsampled Shearlet Transform (NSST) and a multi-channel Pulse Coupled Neural Network (m-PCNN). NSST is used to decompose the images into low-frequency and band-pass sub-band coefficients. For the low-frequency coefficients, a fusion rule based on local energy and dispersion index is proposed; for the sub-band coefficients, the m-PCNN uses image textural information to guide how the fused coefficients are calculated.
The proposed method is applied to three batches of Egyptsat (red-green-infrared) and Radarsat-2 (C-band full-polarimetric HH, HV, and VV polarization) images, selected to react differently to different polarizations. Visual assessment of the fused images shows excellent clarity and delineation of different objects, and quantitative evaluations show the proposed method outperforms other data fusion methods.
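A local-energy fusion rule for low-frequency coefficients, of the general kind the abstract mentions, can be sketched as: at each pixel, keep the coefficient from whichever source has the higher energy in a small local window. The pure winner-take-all rule and the 3x3 window here are illustrative assumptions; the paper's rule also involves a dispersion index.

```python
import numpy as np

def local_energy(C, win=3):
    # Sum of squared coefficients over a win x win neighborhood (edge-padded).
    pad = win // 2
    P = np.pad(C ** 2, pad, mode="edge")
    E = np.zeros_like(C, dtype=float)
    for dy in range(win):
        for dx in range(win):
            E += P[dy:dy + C.shape[0], dx:dx + C.shape[1]]
    return E

def fuse_low_freq(A, B, win=3):
    # Pick, per pixel, the coefficient from the source with higher local energy.
    return np.where(local_energy(A, win) >= local_energy(B, win), A, B)
```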
System for Prediction of Non Stationary Time Series based on the Wavelet Radi... (IJECEIAES)
This paper proposes and examines the performance of a hybrid model called the wavelet radial basis function neural network (WRBFNN). Its performance is compared with that of the wavelet feed-forward neural network (WFFNN) by developing a prediction (forecasting) system that considers two input formats, input9 and input17, and four types of non-stationary time series data. The MODWT transform is used to generate wavelet and smooth coefficients, of which several elements are chosen in a particular way to serve as inputs to both the RBFNN and FFNN models. The performance of the WRBFNN and WFFNN models is evaluated using MAPE and MSE indicators, while their computational cost is compared using two indicators: number of epochs and length of training. On stationary benchmark data, all models achieve very high accuracy. The WRBFNN9 model is the most superior on non-stationary data containing linear trend elements, while the WFFNN17 model performs best on non-stationary data with non-linear trend and seasonal elements. In terms of computational speed, the WRBFNN model is superior, with far fewer epochs and much shorter training time.
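The wavelet (detail) and smooth (scaling) coefficients that feed the networks above come from a shift-invariant transform. As a simplified stand-in for the level-1 MODWT with the Haar filter pair (g = {1/2, 1/2}, h = {1/2, -1/2}), the decomposition keeps the full series length and uses circular boundary handling; this sketch is illustrative, not the paper's exact transform.

```python
import numpy as np

def haar_modwt_level1(x):
    x = np.asarray(x, dtype=float)
    prev = np.roll(x, 1)           # x_{t-1} with circular wrap
    smooth = 0.5 * (x + prev)      # scaling (smooth) coefficients V_t
    detail = 0.5 * (x - prev)      # wavelet (detail) coefficients W_t
    return detail, smooth
```

By construction, detail + smooth reconstructs the original series exactly, and a constant series produces zero detail coefficients.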
RegNet: Multimodal Sensor Registration Using Deep Neural Networks
CalibNet: Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks
RGGNet: Tolerance Aware LiDAR-Camera Online Calibration with Geometric Deep Learning and Generative Model
CalibRCNN: Calibrating Camera and LiDAR by Recurrent Convolutional Neural Network and Geometric Constraints
LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network
CFNet: LiDAR-Camera Registration Using Calibration Flow Network
Motion Detection and Clustering Using PCA and NN in Color Image Sequence (TELKOMNIKA JOURNAL)
This paper presents a motion detection method based on Principal Component Analysis (PCA). The method detects and tracks moving objects in a sequence of images; the tested sequence is segmented in terms of movement. The concept of extracting significant information from a large amount of data is adopted to provide an effective method for tracking moving objects in video images. The principal components differ in the significant information they carry, and the nature of the motion (the nature of the information) is responsible for this difference; the proposed algorithm distinguishes the nature of the motion and chooses the appropriate components to give the best segmentation.
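The PCA step underlying such a method can be sketched as: stack flattened frames as rows, center them, and take the leading right singular vectors as principal components; projections onto these carry the "significant information" the abstract refers to. Function names are illustrative.

```python
import numpy as np

def principal_components(frames, k):
    X = np.asarray(frames, dtype=float)
    X = X - X.mean(axis=0)                       # center each feature over time
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    variances = (S ** 2) / max(len(X) - 1, 1)    # variance explained per component
    return Vt[:k], variances

def project(frames, components):
    X = np.asarray(frames, dtype=float)
    return (X - X.mean(axis=0)) @ components.T
```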
An Adaptive GMM Approach to Background Subtraction for Application in Real Ti... (eSAT Publishing House)
AN EFFICIENT IMPLEMENTATION OF TRACKING USING KALMAN FILTER FOR UNDERWATER RO... (IJCSEIT Journal)
The exploration of oceans and sea beds is increasingly made possible through the development of Autonomous Underwater Vehicles (AUVs). This activity concerns the marine community and must confront notable challenges. An automatic detection and tracking system is the first and foremost element of an AUV or an aqueous surveillance network. In this paper, a Kalman filter method is presented to solve the problem of tracking objects in sonar images. The region of the object is extracted by threshold segmentation and morphological processing, and invariant moment and area features are analysed. Results show that the presented method offers good robustness, high accuracy, and real-time performance; it is efficient for underwater target tracking based on sonar images and is also suited to obstacle avoidance for an AUV operating in a constrained underwater environment.
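The detection stage described above can be sketched minimally: threshold the sonar frame, then take the area and centroid of the segmented region as simple features to hand to the tracker. Morphological cleanup and the invariant moments are omitted here for brevity; names and the threshold are illustrative.

```python
import numpy as np

def segment(frame, thresh):
    # Binary mask of pixels brighter than the threshold.
    return frame > thresh

def region_features(mask):
    # Area (pixel count) and centroid (row, col) of the segmented region.
    ys, xs = np.nonzero(mask)
    area = len(ys)
    if area == 0:
        return 0, (np.nan, np.nan)
    return area, (ys.mean(), xs.mean())
```

The centroid would serve as the position measurement fed to the Kalman filter at each frame.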
Spectroscopy or hyperspectral imaging consists in the acquisition, analysis, and extraction of the spectral information measured on a specific region or object using an airborne or satellite device. Hyperspectral imaging has become an active field of research recently. One way of analysing such data is through clustering. However, due to the high dimensionality of the data and the small distance between the different material signatures, clustering such a data is a challenging task.In this paper, we empirically compared five clustering techniques in different hyperspectral data sets. The considered clustering techniques are K-means, K-medoids, fuzzy Cmeans, hierarchical, and density-based spatial clustering of applications with noise. Four data sets are used to achieve this purpose which is Botswana, Kennedy space centre, Pavia, and Pavia University. Beside the accuracy, we adopted four more similarity measures: Rand statistics, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. According to accuracy, we found that fuzzy C-means clustering is doing better on Botswana and Pavia data sets, K-means and K-medoids are giving better results on Kennedy space centre data set, and for Pavia University the hierarchical clustering is better
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Robust foreground modelling to segment and detect multiple moving objects in ...IJECEIAES
Last decade has witnessed an ever increasing number of video surveillance installa- tions due to the rise of security concerns worldwide. With this comes the need for video analysis for fraud detection, crime investigation, traffic monitoring to name a few. For any kind of video analysis application, detection of moving objects in videos is a fundamental step. In this paper, an efficient foreground modelling method to segment multiple moving objects is implemented. Proposed method significantly reduces noise thereby accurately segmenting region of interest under dynamic conditions while handling occlusion to a large extent. Extensive performance analysis shows that the proposed method was found to give far better results when compared to the de facto standard as well as relatively new approaches used for moving object detection.
Fusion of Multispectral And Full Polarimetric SAR Images In NSST DomainCSCJournals
Polarimetric SAR (POLSAR) and multispectral images provide different characteristics of the imaged objects. Multispectral provides information about surface material while POLSAR provides information about geometrical and physical properties of the objects. Merging both should resolve many of object recognition problems that exist when they are used separately. Through this paper, we propose a new scheme for image fusion of full polarization radar image (POLSAR) with multispectral optical satellite image (Egyptsat). The proposed scheme is based on Non-Subsampled Shearlet Transform (NSST) and multi-channel Pulse Coupled Neural Network (m-PCNN). We use NSST to decompose images into low frequency and band-pass sub- band coefficients. With respect to low frequency coefficients, a fusion rule is proposed based on local energy and dispersion index. In respect of sub-band coefficients, m-PCNN is used to guide how the fused sub-band coefficients are calculated using image textural information.
The proposed method is applied on three batches of Egyptsat (Red-Green-infra-red) and radarsat2 (C-band full-polarimetric HH-HV and VV-polarization) images. The batches are selected to react differently with different polarization. Visual assessment of the obtained fused image gives excellent information on clarity and delineation of different objects. Quantitative evaluations show the proposed method can superior the other data fusion methods.
System for Prediction of Non Stationary Time Series based on the Wavelet Radi...IJECEIAES
This paper proposes and examines the performance of a hybrid model called the wavelet radial bases function neural networks (WRBFNN). The model will be compared its performance with the wavelet feed forward neural networks (WFFN model by developing a prediction or forecasting system that considers two types of input formats: input9 and input17, and also considers 4 types of non-stationary time series data. The MODWT transform is used to generate wavelet and smooth coefficients, in which several elements of both coefficients are chosen in a particular way to serve as inputs to the NN model in both RBFNN and FFNN models. The performance of both WRBFNN and WFFNN models is evaluated by using MAPE and MSE value indicators, while the computation process of the two models is compared using two indicators, many epoch, and length of training. In stationary benchmark data, all models have a performance with very high accuracy. The WRBFNN9 model is the most superior model in nonstationary data containing linear trend elements, while the WFFNN17 model performs best on non-stationary data with the non-linear trend and seasonal elements. In terms of speed in computing, the WRBFNN model is superior with a much smaller number of epochs and much shorter training time.
RegNet: Multimodal Sensor Registration Using Deep Neural Networks
CalibNet: Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks
RGGNet: Tolerance Aware LiDAR-Camera Online Calibration with Geometric Deep Learning and Generative Model
CalibRCNN: Calibrating Camera and LiDAR by Recurrent Convolutional Neural Network and Geometric Constraints
LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network
CFNet: LiDAR-Camera Registration Using Calibration Flow Network
Motion Detection and Clustering Using PCA and NN in Color Image SequenceTELKOMNIKA JOURNAL
This paper presents a motion detection method with the use of the Principal Component
Analysis. This method is able to detect and track moving objects in a sequence of images. The tested
sequence is segmented within the meaning of movement. In this paper, the concept of extracting
significant information from a large number of data is adopted to provide an effective method for tracking
moving objects on the video image. The principal components are different in term of getting significant
information, the nature of motion (the nature of information) is responsible of this difference, the algorithm
in this paper distinguish the motion nature and choose the appropriate components to give a best
segmentation.
Real-Time Object Localization and Tracking from
Image Sequences
Yuanwei Wu, Yao Sui, Arjan Gupta and Guanghui Wang
Abstract—To address the problem of autonomous sense and avoidance for unmanned aerial vehicle (UAV) navigation via vision-based methods, in this letter, we propose a real-time object localization and tracking strategy for monocular image sequences. The proposed approach effectively integrates the techniques of object detection, localization, and tracking into a dynamic model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the boundary connectivity cue of the frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object position and size, which is further refined by a local detector using the image boundary connectivity cue and context information between consecutive frames. Compared to existing methods, the proposed technique does not require any manual initialization, runs much faster than state-of-the-art trackers of its kind, and achieves comparable tracking performance. Extensive comparative experiments demonstrate the effectiveness and superior performance of the proposed approach.
Index Terms—Salient object detection; visual tracking; Kalman filter; object localization; real-time tracking.
I. INTRODUCTION
Visual object tracking plays an important role in many computer vision applications, such as human-computer interaction, surveillance, and video understanding [1]. Due to emerging real-world applications, like delivering packages using small unmanned aerial vehicles (UAVs) [2], there is a huge demand for vision-based autonomous navigation for UAVs. First of all, vision-based methods are robust to electromagnetic interference compared to conventional sensor-based methods, e.g. the global positioning system (GPS) [3]. Second, vision-based methods suit the strict size and power constraints of small UAVs. Based on this background, in this letter, we address autonomous sense and avoidance of obstacles for UAVs during flight via the integration of object detection and tracking.
Tracking-by-detection methods have become increasingly popular for real-time applications [4] in visual tracking. Correlation filter-based trackers, in particular, have attracted much attention in recent years due to their high-speed performance [5]. However, these conventional tracking methods [4, 6–13] require manual initialization with the ground truth at the first frame. Moreover, they are sensitive to initialization variations caused by scale and position errors, and return useless information once they fail during tracking [14].
This work is partly supported by the National Aeronautics and Space Administration (NASA) LEARN II program under grant number NNX15AN94N.
The authors are with the Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045 USA.
The source code and dataset will be available on the authors' homepage http://www.ittc.ku.edu/∼ghwang/. (email: wuyuanwei2010@gmail.com, suiyao@gmail.com, arjangupta@ku.edu, ghwang@ku.edu)
Combining a detector with a tracker is a feasible solution for automatic initialization [15]. The detector, however, needs to be trained with a large amount of training samples, while prior information about the object of interest is usually not available in advance. In [16], Mahadevan et al. proposed a saliency-based discriminative tracker with automatic initialization, which builds a motion saliency map using optical flow. This technique, however, is computationally intensive and not suitable for real-time applications.
Some recent techniques for salient object detection and visual tracking [17, 18] have achieved superior performance by using deep learning. However, these methods need a large amount of samples for training. The methods of object co-localization in videos [19, 20] are originally designed to handle objects of the same class across a set of distinct images or videos, whereas for target tracking we typically focus on a single salient object in a video sequence.
Several recent approaches exploit boundary connectivity
[21, 22] for natural images, which have been shown to be
effective for salient object detection. Since the saliency map effectively discovers the spatial information of the target, it enables us to improve the target localization accuracy. Inspired by the
salient object detection approach [21], which achieves high
detection speed on individual images, we develop an efficient
method by integrating two complementary processes: salient
object detection and tracking. A Kalman filter is employed to
predict a coarse location of the target object, and the detector
is used to refine the solution.
In summary, our contributions are threefold: 1) the proposed algorithm integrates the saliency map into a dynamic model and adopts a target-specific saliency map as the observation for tracking; 2) we develop a tracker with automatic initialization for real-world applications; and 3) the proposed technique achieves better performance than state-of-the-art competing trackers in extensive real experiments.
II. THE PROPOSED APPROACH
The proposed fast object localization and tracking (FOLT)
algorithm can automatically and quickly localize the salient
object in the scene and track it across the sequence. In this
letter, the object of interest is the salient object in the view, so
the tracking problem is formulated as an unsupervised salient
object detection, which can be automatically obtained from the
saliency map computed from the frame [21]. In the following,
we will present a detailed elaboration of the approach.
Fig. 1: A flow-chart of the proposed approach.
A. Overview of the proposed approach
In most tracking scenarios, the linear Gaussian motion model has been demonstrated to be an effective representation of the motion behavior of a salient object in natural image sequences [23, 24]. Therefore, an optimal estimator, the Kalman filter [25], is used to estimate the motion attributes, e.g. the velocity, position, and scale of the object. A flow chart of the proposed approach is shown in Fig. 1. The bounding box of the object is initialized from the saliency map of the entire image [21]. A dynamic model is established to predict the object position and size in the next frame. Under the constraint of natural motion, this predicted bounding box provides the tracking algorithm with a coarse solution, which is not far from the ground truth [23]. Thus, a reasonable search region can be automatically attained by expanding the predicted object window by a fixed percentage. Then, the location and size of the object are refined by computing the saliency within the search region. Next, the refined bounding box, as a new observation, is fed to the Kalman filter to update the dynamic model in the correction phase. Through this process, the object in the image sequence is automatically detected and tracked by recursive prediction, observation, and correction.
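The expansion of the predicted window into a search region can be sketched as follows. This is a minimal illustration only: the box representation (center plus width/height) and the expansion factor of 0.5 are assumptions, since the letter does not specify the fixed percentage.

```python
def expand_search_region(box, factor, img_w, img_h):
    """Expand a predicted bounding box (cx, cy, w, h) by a fixed
    percentage on each side, clipped to the image borders.
    Returns the search region as (x0, y0, x1, y1)."""
    cx, cy, w, h = box
    new_w, new_h = w * (1 + factor), h * (1 + factor)
    x0 = max(0, cx - new_w / 2)
    y0 = max(0, cy - new_h / 2)
    x1 = min(img_w, cx + new_w / 2)
    y1 = min(img_h, cy + new_h / 2)
    return x0, y0, x1, y1
```

For example, a 20×20 predicted box centered at (50, 50) in a 100×100 image, expanded by 50%, yields the search region (35, 35, 65, 65).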
B. Motion model
In the dynamic model, the salient object in a frame is described by a motion state variable with six components, $S = \{x, y, u, v, w, h\}$, where $(x, y)$ denotes the center coordinates, $(u, v)$ the velocities, and $(w, h)$ the width and height of the minimum bounding box. In the $t$-th frame, the predicted state $\hat{S}_t^-$ evolves from the prior state $\hat{S}_{t-1}$ in frame $t-1$, given knowledge of the process prior to time $t-1$, according to the linear stochastic equation

$\hat{S}_t^- = F \hat{S}_{t-1} + w_{t-1}$,   (1)

where $w_{t-1}$ represents additive white Gaussian noise with zero mean and known covariance, and $F$ denotes the state transition matrix. We use the notation $S_t \sim N(\mu, \Sigma)$ to denote that the state $S_t$ in frame $t$ is a random variable with a normal probability distribution of mean $\mu$ and covariance $\Sigma$. The covariance is a diagonal matrix composed of the variances of $x, y, u, v, w$, and $h$, respectively. Let $z_t$ encode the position and dimensions of the minimum bounding box of the observation in frame $t$. The observation $z_t = \{x, y, w, h\}$ is the output of the fast salient object detector. The posterior state of the object in frame $t$, given the observation $z_t$, is finally updated by incorporating the observation into the dynamic model via

$S_t = \hat{S}_t^- + K_t (z_t - H \hat{S}_t^-)$,   (2)

where $H$ is the observation matrix and $K_t$ denotes the Kalman gain in frame $t$, which weighs the difference between the prediction and the observation so as to obtain the posterior state estimate $S_t$ with minimum mean-square error.

Fig. 2: Illustration of updating the search region ROI using (a) raster scanning and (b) inverse-raster scanning.
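Under the constant-velocity model above, Eqs. (1) and (2) can be sketched with NumPy as follows. This is a minimal illustration, not the authors' implementation: the transition matrix $F$ (unit time step), observation matrix $H$, and the noise covariances `Q` and `R` shown here are assumed values, since the letter does not specify them.

```python
import numpy as np

# State s = [x, y, u, v, w, h]: box center, velocity, box size.
F = np.eye(6)
F[0, 2] = F[1, 3] = 1.0          # x += u, y += v (unit time step)
H = np.zeros((4, 6))
H[0, 0] = H[1, 1] = H[2, 4] = H[3, 5] = 1.0   # observe z = [x, y, w, h]
Q = np.eye(6) * 1e-2             # process noise covariance (assumed, hand-tuned)
R = np.eye(4) * 1e-1             # observation noise covariance (assumed, hand-tuned)

def predict(s, P):
    """Eq. (1): coarse prediction of the object position and size."""
    s_pred = F @ s
    P_pred = F @ P @ F.T + Q
    return s_pred, P_pred

def correct(s_pred, P_pred, z):
    """Eq. (2): refine the prediction with the detector output z."""
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    s = s_pred + K @ (z - H @ s_pred)
    P = (np.eye(6) - K @ H) @ P_pred
    return s, P
```

Each frame then runs `predict`, forms the search region around the predicted box, runs the local salient object detector to obtain `z`, and calls `correct`.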
C. Salient object detection
It has been shown that the cue of image boundary connectivity is effective for salient object detection [21, 22]. In natural images, it is safe to assume that object regions are much less connected to the image boundaries.

In this letter, salient object detection is formulated as finding, among all possible paths in the image, the shortest path from a pixel $w_{ij}$ to the seed set $B$ on the image boundary. Each pixel of the 2D digital image $I$ is a vertex, and neighboring pixels are connected by edges. In this work, we consider 4-adjacent neighbors, i.e. the neighbors of $w_{ij}$ are $w_{i-1,j}$, $w_{i+1,j}$, $w_{i,j-1}$, and $w_{i,j+1}$, as shown in Fig. 2. A path $p = \langle v(0), v(1), \ldots, v(k) \rangle$ on image $I$ is a sequence of consecutive neighboring pixels. Given a loss function $\mathcal{L}(p)$, the problem of finding the salient object in frame $t$ is defined as

$I_t(w_{ij}) = \min_{p \in \mathcal{P}_{B, w_{ij}}} \mathcal{L}(p)$,

where $\mathcal{P}_{B, w_{ij}}$ denotes the set of all possible paths connecting the seed set $B$ and the pixel $w_{ij}$ in image $I_t$. Similar to the work in [21], we formulate the loss function at frame $t$ as

$\mathcal{L}_{I_t}(p) = \max_{i=0}^{k} I_t(p(i)) - \min_{i=0}^{k} I_t(p(i))$,

which calculates the difference between the maximum and minimum pixel intensities along the path. Let $E(w_{ij}, v)$ denote the edge connecting the vertices $w_{ij}$ and $v$, and $Q(w_{ij})$ the current path connecting the pixel $w_{ij}$ with the image boundary set $B$. We define $C_{I_t}(Q(w_{ij}), E(w_{ij}, v))$ as the cost of a new path connecting the vertex $v$ to the image boundary set $B$ by appending the edge $E$ to $Q(w_{ij})$, which can be calculated from

$C_{I_t}(Q(w_{ij}), E(w_{ij}, v)) = \max\{U(w_{ij}), I_t(v)\} - \min\{L(w_{ij}), I_t(v)\}$,

where $U(w_{ij})$ and $L(w_{ij})$ denote the maximum and minimum pixel intensity values on the path $Q(w_{ij})$. A raster scanning method [21] can be used to calculate this cost. The details will be discussed in Sect. II-D.
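The barrier loss and the incremental path cost can be illustrated with a small sketch (the helper names are hypothetical, and a path is reduced to the list of pixel intensities along it):

```python
def barrier_loss(intensities):
    """Loss L(p): difference between the maximum and minimum pixel
    intensities along a path p (the 'barrier' of the path)."""
    return max(intensities) - min(intensities)

def extend_cost(U, L, v_intensity):
    """Cost C of the path extended by one more pixel of intensity
    v_intensity, given the running maximum U and minimum L of the
    old path; this avoids re-scanning the whole path."""
    return max(U, v_intensity) - min(L, v_intensity)
```

Keeping the running extrema `U` and `L` is what makes the raster-scan updates of Sect. II-D constant time per pixel.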
Algorithm 1: Fast Object Localization Tracking (FOLT)
Input: image It+1, saliency map Dt, search region ROIt, number of passes N
Output: saliency map Dt+1
Auxiliaries: Ut+1, Lt+1
Initialize: inside the search region ROIt, set Dt to ∞; outside the search region, keep the values of Dt; set Lt+1 ← It+1 and Ut+1 ← It+1
for each frame do
    Prediction using Eq. (1)
    Observation:
    for i = 1 : N do
        if mod(i, 2) = 1 then
            Raster scanning using Eqs. (3), (4), (5)
        else
            Inverse-raster scanning using Eqs. (3), (4), (5)
        end
    end
    Correction using Eq. (2)
    Update the complete saliency map Dt every ten frames
end
D. Fast object localization tracking
In [21], Zhang et al. provided a solution for individual im-
ages using the minimum barrier distance detection method. In
order to improve the accuracy and speed in image sequences,
we explore the integration of the image boundary connec-
tivity cue with the temporal context information between
consecutive frames. Therefore, we propose a fast salient object
detection and tracking framework as shown in Fig. 1. During
the observation stage, two fast scanning procedures, raster
scanning and inverse-raster scanning, are implemented to find
the location of the salient object between two consecutive
frames. As shown in Fig. 2, the inner window of the target
object is coarsely predicted using the dynamic model. The
search region is obtained by expanding the inner window
with a fixed percentage. The raster scanning and inverse-raster
scanning are used to update the pixel values in the search
region of image It. In the proposed approach, the search region
is dynamically determined based on the predicted position of
the salient object. As shown in Fig. 2 (a), the raster scanning is
used to update all the intensities from the top-left pixel to the
bottom-right pixel, which simultaneously updates two adjacent
neighbors wi,j−1 and wi−1,j. Similarly, in the inverse raster
scanning, the intensities of the two adjacent neighbors wi+1,j
and wi,j+1 in the search region are reversely updated, as shown
in Fig. 2(b). The values outside the search region are not updated since they contribute less to the detection. As a trade-off between accuracy and efficiency, a complete saliency map of the entire image is updated every ten frames.
The updating strategy in the search region is given by

$I_t(w_{ij}) \leftarrow \min\big(I_t(w_{ij}),\, C_{I_t}(Q(v), E(v, w_{ij}))\big)$,   (3)
$U(w_{ij}) \leftarrow \max\big(U(v),\, I_t(w_{ij})\big)$,   (4)
$L(w_{ij}) \leftarrow \min\big(L(v),\, I_t(w_{ij})\big)$,   (5)

where $v$ is the neighbor through which the path of $w_{ij}$ is extended.
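The alternating scans can be sketched as follows. This is a minimal illustration run over the whole image rather than the search region; the seed set, pass count, and the `relax` helper are expository assumptions, not the authors' implementation.

```python
import numpy as np

def mbd_saliency(img, n_passes=3):
    """Sketch of minimum-barrier-distance saliency via alternating
    raster / inverse-raster scans in the spirit of Eqs. (3)-(5).
    The image boundary is the seed set B (distance 0)."""
    I = np.asarray(img, dtype=float)
    h, w = I.shape
    D = np.full((h, w), np.inf)   # barrier distance map
    U = I.copy()                  # running max along the best path so far
    L = I.copy()                  # running min along the best path so far
    D[0, :] = D[-1, :] = D[:, 0] = D[:, -1] = 0.0   # seed set B

    def relax(i, j, ni, nj):
        # Extend the best path of neighbor (ni, nj) to (i, j): Eqs. (3)-(5).
        u = max(U[ni, nj], I[i, j])
        l = min(L[ni, nj], I[i, j])
        if u - l < D[i, j]:
            D[i, j], U[i, j], L[i, j] = u - l, u, l

    for p in range(n_passes):
        if p % 2 == 0:            # raster scan: top-left to bottom-right
            for i in range(h):
                for j in range(w):
                    if i > 0: relax(i, j, i - 1, j)
                    if j > 0: relax(i, j, i, j - 1)
        else:                     # inverse-raster scan: bottom-right to top-left
            for i in range(h - 1, -1, -1):
                for j in range(w - 1, -1, -1):
                    if i < h - 1: relax(i, j, i + 1, j)
                    if j < w - 1: relax(i, j, i, j + 1)
    return D
```

A bright isolated pixel far from the boundary receives a high barrier distance (salient), while background pixels connected to the boundary through similar intensities receive a distance near zero.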
Fig. 3: Tracking results in representative frames of the pro-
posed and the 7 competing trackers on three challenging
sequences. First row: illumination variation (Skyjumping ce);
Second row: in-plane and out-of-plane rotations (big 2); Third
row: scale variation (motorcycle 006). (best viewed in color)
The implementation details of the above detection and tracking algorithm are described in Algorithm 1, where the algorithm is initialized based on the detection result of the first frame, and the saliency map of the previous frame t is fed to the algorithm.
III. EXPERIMENTAL EVALUATIONS
The proposed approach is implemented in C++ with
OpenCV 3.0.0 on a PC with an Intel Xeon W3250 2.67
GHz CPU and 8 GB RAM. The datasets and source code
of the proposed approach will be available on the authors
homepage. The proposed tracker is evaluated on 15 popular video sequences selected from [14, 26–28] that contain a salient object in the field of view. In each frame of these sequences, the target is manually labeled with a bounding box, which is used as the ground truth in the quantitative evaluations.
In our implementation, input images are first resized so that
the maximum dimension is 300 pixels. Three experiments are
designed to evaluate trackers as discussed in [14]: one pass
evaluation (OPE), temporal robustness evaluation (TRE), and
spatial robustness evaluation. For TRE, we randomly select the
starting frame and run a tracker to the end of the sequence.
Spatial robustness evaluation initializes the bounding box in
the first frame by shifting or scaling. As discussed in Section
II, the proposed method manages to automatically initialize
the tracker and is not sensitive to spatial fluctuation. Therefore,
we use the same temporal randomization as in [14], and refer
readers to [14] for more details.
A. Speed performance
In the detection stage, for individual images, the most up-to-date fast detector MB+ [21] attains a speed of 49 frames per second (fps); in contrast, the proposed method achieves a speed of 149 fps with accurate performance on image sequences, about three times faster than MB+. The average speed comparison of the proposed and the seven state-of-the-art competing trackers is provided in Table I. The average speed of our tracker is 141 fps, at the same level as the fastest tracker, KCF [11]; however, KCF adopts a fixed tracking box, which cannot reflect the scale changes of the object. On average, our method is more than ten times faster than CT [4] and SAMF [8], five times faster than DSST [9] and CCT [10], and about two times faster than STC [6] and CN [7].

TABLE I: Quantitative evaluations of the proposed and the 7 competing trackers on the 15 sequences. The best and second best results are highlighted in bold-face and underlined fonts, respectively.

                        Ours    CT[4]   STC[6]  CN[7]   SAMF[8] DSST[9] CCT[10] KCF[11]
Precision of TRE        0.79    0.51    0.59    0.64    0.65    0.65    0.66    0.60
Success rate of TRE     0.61    0.45    0.46    0.54    0.58    0.56    0.57    0.52
Precision of OPE        0.83    0.44    0.48    0.44    0.59    0.48    0.66    0.48
Success rate of OPE     0.66    0.34    0.41    0.42    0.52    0.44    0.53    0.38
CLE (in pixels)         14.5    74.4    38.0    55.0    40.8    55.7    23.2    45.6
Average speed (fps)     141.3   12.0    73.6    87.1    12.9    20.8    21.3    144.8
B. Comparison with the state-of-the-art trackers
The performance of our approach is quantitatively validated
following the metrics used in [14]. We present the results using
precision, centre location error (CLE) and success rate (SR).
The CLE is defined as the Euclidean distance between the
centers of the tracking and the ground-truth bounding boxes.
The precision is computed from the percentage of frames
where the CLEs are smaller than a threshold. Following [14],
a threshold value of 20 pixels is used for the precision in
our evaluations. A tracking result in a frame is considered successful if the overlap rate $\frac{|a_t \cap a_g|}{|a_t \cup a_g|} > \theta$ for a threshold $\theta \in [0, 1]$, where $a_t$ and $a_g$ denote the areas of the bounding boxes of the tracking result and the ground truth, respectively. Thus, SR is defined as the percentage of frames whose overlap rates are greater than the threshold $\theta$, which is normally set to 0.5. We evaluate the proposed method by comparing it to seven state-of-the-art trackers: CT, STC, CN, SAMF, DSST, CCT, and KCF.
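The evaluation metrics above can be sketched as follows (a minimal illustration; boxes are assumed to be axis-aligned `(x0, y0, w, h)` tuples):

```python
def cle(box_t, box_g):
    """Centre location error: Euclidean distance between box centres."""
    cx_t, cy_t = box_t[0] + box_t[2] / 2, box_t[1] + box_t[3] / 2
    cx_g, cy_g = box_g[0] + box_g[2] / 2, box_g[1] + box_g[3] / 2
    return ((cx_t - cx_g) ** 2 + (cy_t - cy_g) ** 2) ** 0.5

def overlap(box_t, box_g):
    """Overlap rate |a_t ∩ a_g| / |a_t ∪ a_g| of two boxes."""
    ix = max(0.0, min(box_t[0] + box_t[2], box_g[0] + box_g[2]) - max(box_t[0], box_g[0]))
    iy = max(0.0, min(box_t[1] + box_t[3], box_g[1] + box_g[3]) - max(box_t[1], box_g[1]))
    inter = ix * iy
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union

def precision(cles, thresh=20.0):
    """Fraction of frames whose CLE is below the threshold (20 px in [14])."""
    return sum(c < thresh for c in cles) / len(cles)

def success_rate(overlaps, theta=0.5):
    """Fraction of frames whose overlap rate exceeds theta (0.5 by default)."""
    return sum(o > theta for o in overlaps) / len(overlaps)
```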
The comparison results on the 15 sequences are shown in
Table I. We present the results under one-pass evaluation and
temporal robustness evaluation using the average precision,
success rate, and CLE over all sequences. As shown in the
table, the proposed method outperforms all seven competing
trackers. It is evident that, in the one-pass evaluation, the proposed tracker obtains the best CLE (14.5 pixels) and precision (0.83), which are 8.7 pixels and 17% better than the second best tracker, CCT (23.2 pixels in CLE and 0.66 in precision). Meanwhile, in the success rate, the proposed tracker achieves the best result, a 13% improvement over the second best, the SAMF tracker. Please note that, for the 7 competing trackers,
the average performance in TRE is higher than that in OPE;
while for the proposed tracker, the precision and success scores
in TRE are lower than those in OPE. This is because the
proposed tracker tends to perform well in longer sequences,
while the 7 competing trackers work well in shorter sequences
[14]. In addition, Fig. 4 plots the precision and success rates in the one-pass evaluation and the temporal robustness evaluation over all 15 sequences. In both evaluations, according to both the precision and the success rate, our approach significantly outperforms the seven competing trackers. In summary, the precision plot demonstrates that our approach is more robust than its counterparts in the experiments; the success rate shows that our method estimates the scale changes of the target more accurately.

Fig. 4: Precision and success rate plots over the 15 sequences in (top) one pass evaluation (OPE) and (bottom) temporal robustness evaluation (TRE). (best viewed in color)
C. Qualitative evaluation
In this section, we present some qualitative comparisons of
our approach with respect to the 7 competing trackers. Fig. 3
(first row) illustrates a sequence with significant illumination
variations as well as gradual out-of-plane rotations. Both CT
and STC can deal with illumination changes very well, but fail
in the presence of pose variations and out-of-plane rotations,
as shown in frames #365, and #666. In contrast, our tracker
accurately estimates both the scale and position of the target.
Fig. 3 (second row) shows the results on a sequence with
significant in-plane and out-of-plane rotations. Our approach
obtains the best performance in these cases. On the sequence,
our approach tracks part of the target due to out-of-plane
rotation, but it accurately reacquires the target in the following
frames, as shown in frames #319 and #369.
Fig. 3 (third row) illustrates the results on a sequence
with large scale variations. STC, SAMF, DSST, and CCT are capable of handling scale changes, but they failed in this sequence, as shown in frames #110, #145, and #170. The
competing trackers fail to handle the significant appearance
changes of rotating motions and fast scale variations. In
contrast, our tracker is robust to large and fast scale variations.
IV. CONCLUSIONS
In this paper, we have proposed an effective and efficient approach for real-time visual object localization and tracking, which can be applied to UAV navigation tasks such as obstacle sense and avoidance. Our method integrates a fast salient object detector within a Kalman filtering framework. Compared to the state-of-the-art trackers, our approach not only initializes automatically, but also achieves the fastest speed and better overall performance than the competing trackers.
REFERENCES
[1] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A
survey,” ACM Comput. Surv., vol. 38, no. 4, pp. 1–45,
2006.
[2] Amazon, “Amazon prime air,” https://www.youtube.com/
watch?v=98BIu9dpwHU, 2013.
[3] M. Fraiwan, A. Alsaleem, H. Abandeh, and O. Aljarrah,
“Obstacle avoidance and navigation in robotic systems:
A land and aerial robots study,” in 5th Int. Conf. Inf.
Commu. Systems (ICICS). IEEE, 2014, pp. 1–5.
[4] K. Zhang, L. Zhang, and M. Yang, “Real-time compres-
sive tracking,” in Eur. Conf. Computer Vision (ECCV),
pp. 864–877. Springer, 2012.
[5] Z. Chen, Z. Hong, and D. Tao, “An experimental survey
on correlation filter-based tracking,” arXiv preprint, pp.
1–13, 2015.
[6] K. Zhang, L. Zhang, Q. Liu, D. Zhang, and M. Yang,
“Fast visual tracking via dense spatio-temporal context
learning,” in Eur. Conf. Computer Vision (ECCV), pp.
127–141. Springer, 2014.
[7] M. Danelljan, F. Khan, M. Felsberg, and J. Weijer,
“Adaptive color attributes for real-time visual tracking,”
in IEEE Computer Soc. Conf. Computer Vision and Pattern
Recognition (CVPR), 2014, pp. 1090–1097.
[8] Y. Li and J. Zhu, “A scale adaptive kernel correlation
filter tracker with feature integration,” in Eur. Conf.
Computer Vision (ECCV) Workshops. Springer, 2014, pp.
254–265.
[9] M. Danelljan, G. Häger, F. Khan, and M. Felsberg,
“Accurate scale estimation for robust visual tracking,”
in Proc. Br. Mach. Vis. Conf. (BMVC), 2014, pp. 1–11.
[10] G. Zhu, J. Wang, Y. Wu, and H. Lu, “Collaborative
correlation tracking,” in Proc. Br. Mach. Vis. Conf. (BMVC),
2015, pp. 1–12.
[11] J. Henriques, R. Caseiro, P. Martins, and J. Batista,
“High-speed tracking with kernelized correlation filters,”
IEEE Trans. Patt. Anal. Mach. Intell., vol. 37, no. 3, pp.
583–596, 2015.
[12] Y. Sui, Z. Zhang, G. Wang, Y. Tang, and L. Zhang,
“Real-time visual tracking: Promoting the robustness of
correlation filter learning,” in Eur. Conf. Computer Vision
(ECCV). Springer, 2016.
[13] Y. Sui and L. Zhang, “Visual tracking via locally
structured gaussian process regression,” IEEE Signal
Process. Lett., vol. 22, no. 9, pp. 1331–1335, 2015.
[14] Y. Wu, J. Lim, and M. Yang, “Online object tracking:
A benchmark,” in IEEE Computer Soc. Conf. Computer
Vision and Pattern Recognition (CVPR), 2013, pp. 2411–
2418.
[15] M. Andriluka, S. Roth, and B. Schiele, “People-
tracking-by-detection and people-detection-by-tracking,”
in IEEE Computer Soc. Conf. Computer Vision and
Pattern Recognition (CVPR). IEEE, 2008, pp. 1–8.
[16] V. Mahadevan and N. Vasconcelos, “Saliency-based
discriminant tracking,” in IEEE Computer Soc. Conf.
Computer Vision and Pattern Recognition (CVPR). IEEE,
2009, pp. 1007–1013.
[17] C. Ma, J. Huang, X. Yang, and M. Yang, “Hierarchical
convolutional features for visual tracking,” in IEEE Int.
Conf. Computer Vision (ICCV), 2015, pp. 3074–3082.
[18] S. Hong, T. You, S. Kwak, and B. Han, “Online
tracking by learning discriminative saliency map with
convolutional neural network,” arXiv preprint, 2015.
[19] S. Gidaris and N. Komodakis, “Locnet: Improving lo-
calization accuracy for object detection,” arXiv preprint,
2015.
[20] K. Tang, A. Joulin, L. Li, and F. Li, “Co-localization
in real-world images,” in IEEE Computer Soc. Conf.
Computer Vision and Pattern Recognition (CVPR). IEEE,
2014, pp. 1464–1471.
[21] J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price, and
R. Mech, “Minimum barrier salient object detection at
80 fps,” in IEEE Int. Conf. Computer Vision (ICCV),
2015, pp. 1404–1412.
[22] W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency
optimization from robust background detection,” in IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR), 2014, pp. 2814–2821.
[23] S. Yin, J. Na, J. Choi, and S. Oh, “Hierarchical kalman-
particle filter with adaptation to motion changes for
object tracking,” Comput. Vis. Image Underst., vol. 115,
no. 6, pp. 885–900, 2011.
[24] S. Weng, C. Kuo, and S. Tu, “Video object tracking
using adaptive kalman filter,” J. Vis. Commun. Image R.,
vol. 17, no. 6, pp. 1190–1208, 2006.
[25] G. Welch and G. Bishop, “An introduction to the kalman
filter,” University of North Carolina at Chapel Hill,
NC, USA, Tech. Rep., 2006, pp. 1–16.
[26] A. Li, M. Lin, Y. Wu, M. Yang, and S. Yan, “Nus-pro: A
new visual tracking challenge,” IEEE Trans. Patt. Anal.
Mach. Intell., vol. 38, no. 2, pp. 335–349, 2016.
[27] P. Liang, E. Blasch, and H. Ling, “Encoding color infor-
mation for visual tracking: Algorithms and benchmark,”
IEEE Trans. Image Process., vol. 24, no. 12, pp. 5630–
5644, 2015.
[28] M. Kristan et al., “The visual object tracking VOT2014
challenge results,” in Eur. Conf. Computer Vision (ECCV)
Workshop, 2014, pp. 191–217.