This document summarizes and compares three feature detection methods: SIFT, PCA-SIFT, and SURF. It finds that SIFT is the slowest but most stable, detecting features accurately across various transformations. SURF is the fastest but less stable than SIFT. PCA-SIFT shows advantages under rotation and illumination changes, and in some cases detects features more accurately than SURF when viewpoint changes are large. The document evaluates the methods using repeatability, the number of matches, and processing time under different image scales, rotations, blurs, illumination changes, and affine transformations.
Parameter Optimisation for Automated Feature Point Detection (Dario Panada)
Parameter optimization for an automated feature point detection model was explored. Increasing the number of random displacements up to 20 improved performance but additional increases did not. Larger patch sizes consistently improved performance. Increasing the number of decision trees did not affect performance for this single-stage model, unlike previous findings for a two-stage model. Overall, some parameter tuning was found to enhance the model's accuracy but not all parameters significantly impacted results.
Usage of Shape From Focus Method For 3D Shape Recovery And Identification of ... (CSCJournals)
Shape from focus is a method of 3D shape and depth estimation of an object from a sequence of pictures taken with changing focus settings. In this paper we propose a novel method of shape recovery, originally created for shape and position identification of a glass pipette in a medical hybrid robot. In the proposed algorithm, the Sum of Modified Laplacian is used as a focus operator. Each step of the algorithm is tested in order to pick the operators with the best results. The reconstruction allows us not only to determine the shape but also to precisely define the position of the object. The results of the proposed method, obtained on real objects, show the efficiency of this scheme.
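As a concrete illustration of the focus operator mentioned above, here is a minimal NumPy sketch of the Sum of Modified Laplacian as it is commonly defined in the shape-from-focus literature; the step and window sizes are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def sum_modified_laplacian(img, step=1, window=9):
    """Per-pixel Sum of Modified Laplacian (SML) focus measure."""
    img = img.astype(np.float64)
    # Modified Laplacian: absolute second differences in x and y, so the
    # two directions cannot cancel each other (np.roll wraps at borders,
    # which is acceptable for a sketch).
    ml = (np.abs(2 * img - np.roll(img, step, axis=1) - np.roll(img, -step, axis=1)) +
          np.abs(2 * img - np.roll(img, step, axis=0) - np.roll(img, -step, axis=0)))
    # Accumulate the response over a local window around each pixel.
    return convolve(ml, np.ones((window, window)), mode='nearest')

# Depth from focus: per pixel, take the index of the sharpest frame.
# stack: images of the same scene acquired with varying focus settings
# depth = np.argmax([sum_modified_laplacian(f) for f in stack], axis=0)
```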
This document summarizes a research paper that estimates the scale parameter of the Nakagami distribution using Bayesian methods. The paper derives the posterior distributions of the scale parameter under different prior distributions, including uniform, inverse exponential, and Levy priors. It then finds the Bayesian estimators of the scale parameter under three loss functions: squared error loss function, quadratic loss function, and precautionary loss function. The paper uses Monte Carlo simulations to compare the performance of the different estimators.
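For intuition, the sketch below numerically computes the Bayes estimate of the Nakagami scale under a uniform prior and squared-error loss (the posterior mean), with the shape parameter m assumed known; the paper's closed-form estimators and its other priors and loss functions are not reproduced here:

```python
import numpy as np

# Nakagami density: f(x; m, w) = 2 m^m / (Gamma(m) w^m) * x^(2m-1) * exp(-m x^2 / w).
# With the shape m known, the likelihood of the scale w depends on the
# data only through S = sum(x_i^2).

def bayes_scale_estimate(x, m=1.0, grid=np.linspace(0.01, 10.0, 4000)):
    """Posterior mean of the Nakagami scale under a flat prior.

    Under squared-error loss the Bayes estimator is the posterior mean;
    a dense grid replaces the paper's closed-form derivation.
    """
    n, S = len(x), np.sum(np.asarray(x) ** 2)
    log_post = -n * m * np.log(grid) - m * S / grid   # log-likelihood + flat prior
    post = np.exp(log_post - log_post.max())          # stabilize before normalizing
    post /= np.trapz(post, grid)
    return np.trapz(grid * post, grid)                # posterior mean

# Monte Carlo check: a Nakagami(m, w) draw is the square root of a
# Gamma(shape=m, scale=w/m) draw.
rng = np.random.default_rng(0)
m_true, w_true = 2.0, 3.0
x = np.sqrt(rng.gamma(shape=m_true, scale=w_true / m_true, size=500))
print(bayes_scale_estimate(x, m=m_true))   # close to 3.0 for large samples
```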
Evaluation of the Sensitivity of Seismic Inversion Algorithms to Different St... (IJERA Editor)
This document evaluates the sensitivity of seismic inversion algorithms to wavelets estimated using different statistical methods. It summarizes two wavelet estimation techniques - the Hilbert transform method and smoothing spectra method. It also describes two inversion methods - Narrow-band inversion and a Bayesian approach. Numerical experiments were conducted to analyze the performance of the wavelet estimation methods and sensitivity of the inversion algorithms to estimated wavelets. The smoothing spectra method produced better wavelet estimates. The Bayesian approach yielded superior inversion results and more robust impedance estimates compared to Narrow-band inversion in all tests.
Advanced cosine measures for collaborative filtering (Loc Nguyen)
Cosine similarity is an important measure for comparing two vectors in much research in data mining and information retrieval. In this research, the cosine measure and its advanced variants for collaborative filtering (CF) are evaluated. The cosine measure is effective, but it has a drawback: the end points of two vectors may be far from each other according to Euclidean distance while their cosine is still high. This negative effect of Euclidean distance decreases the accuracy of cosine similarity. Therefore, a so-called triangle area (TA) measure is proposed as an improved version of the cosine measure. The TA measure uses the ratio of a basic triangle area to the whole triangle area as a reinforcing factor for Euclidean distance, so that it alleviates the negative effect of Euclidean distance while keeping the simplicity and effectiveness of both the cosine measure and Euclidean distance in measuring the similarity of two vectors. TA is considered an advanced cosine measure. TA and other advanced cosine measures are tested against other similarity measures. The experimental results show that TA is not a preeminent measure, but it is better than traditional cosine measures in most cases, its formula is simple, and it is adequate for real-time applications.
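The drawback described above is easy to demonstrate: two collinear vectors have cosine similarity exactly 1 even when they are far apart in Euclidean distance. A small NumPy example follows; since the abstract does not give the TA formula, only the baseline cosine behaviour is shown:

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two collinear rating vectors: cosine is exactly 1 although the rating
# magnitudes, and hence the Euclidean distance between them, differ a lot.
u = np.array([1.0, 2.0, 1.0])
v = np.array([5.0, 10.0, 5.0])
print(cosine(u, v))            # 1.0 -> "identical" users by cosine alone
print(np.linalg.norm(u - v))   # large Euclidean gap the TA measure penalizes
```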
Method of Fracture Surface Matching Based on Mathematical Statistics (IJRESJOURNAL)
ABSTRACT: Fracture surface matching is an important part of point cloud registration. In this paper, a method of fracture surface matching based on mathematical statistics is proposed. We reconstruct a coordinate system from the fractured surface points and analyze the characteristics of the point cloud in the new coordinate system using the theory of mathematical statistics. The general distribution of the points is determined. The method can establish the matching relation among point clouds.
An Experiment with Sparse Field and Localized Region Based Active Contour Int... (CSCJournals)
This paper discusses various experiments conducted on different types of Level Sets interactive segmentation techniques using Matlab software, on selected images. The objective is to assess their effectiveness on specific natural images that have complex composition in terms of intensity, colour mix, indistinct object boundaries, low contrast, etc. Besides visual assessment, measures such as the Jaccard Index, Dice Coefficient and Hausdorff Distance have been computed between segmented and ground-truth images to assess the accuracy of these techniques. This paper particularly discusses Sparse Field Matrix and Localized Region Based Active Contours, both based on Level Sets. These techniques were not found to be effective where the object boundary is not very distinct and/or has low contrast with the background. The techniques were also ineffective on images where the foreground object stretches up to the image boundary.
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATION (orajjournal)
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimizing a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the results of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are heavily skewed.
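As a rough sketch of the idea, the snippet below kriges log-transformed responses with ordinary kriging in 1D and back-transforms the prediction; the exponential variogram and the exp(mean + variance/2) correction are common textbook choices, not necessarily the paper's exact algorithm:

```python
import numpy as np

def ordinary_kriging(xs, ys, x0, variogram):
    """Ordinary kriging predictor and kriging variance at x0 (1D)."""
    n = len(xs)
    # Kriging system: [[Gamma, 1], [1^T, 0]] [w, mu] = [gamma0, 1]
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = variogram(np.abs(xs[:, None] - xs[None, :]))
    G[:n, n] = 1.0
    G[n, :n] = 1.0
    rhs = np.append(variogram(np.abs(xs - x0)), 1.0)
    sol = np.linalg.solve(G, rhs)
    pred = sol[:n] @ ys
    var = sol @ rhs          # sum(w_i * gamma_i0) + Lagrange multiplier
    return pred, var

# Lognormal OK: krige the log-data, then apply a common lognormal
# back-transformation exp(pred + var / 2).
gamma = lambda h: 1.0 - np.exp(-h / 0.5)   # assumed exponential variogram
xs = np.array([0.1, 0.4, 0.6, 0.9])
ys = np.array([2.0, 7.0, 5.0, 30.0])       # skewed responses
log_pred, log_var = ordinary_kriging(xs, np.log(ys), 0.5, gamma)
print(np.exp(log_pred + 0.5 * log_var))    # lognormal kriging estimate
```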
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
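The noiseless baseline of this formulation, min ||x||_1 subject to Ax = y, can be posed directly as a linear program; below is a hedged SciPy sketch (the paper's second-order cone variant with quadratic constraints would need a dedicated SOCP solver and is not shown):

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(A, y):
    """Basis pursuit: min ||x||_1 s.t. Ax = y, as a linear program.

    Variables are stacked as [x, t] with the standard |x_i| <= t_i split.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Recover a sparse toy signal from a few random projections.
rng = np.random.default_rng(1)
n, m, k = 64, 24, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = l1_reconstruct(A, A @ x_true)
print(np.abs(x_hat - x_true).max())   # typically tiny when m is large enough
```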
The document discusses a method for 3D object recognition from 2D images using centroidal representation. It involves several steps: filtering and binarizing the image, detecting edges, calculating the object center point, extracting features around the centroid, and creating mathematical models using wavelet transforms and autoregression. Centroidal samples represent distances from the center to the boundary every 45 degrees. Wavelet transforms and autoregression are used to create scale and position invariant representations of the object for recognition.
Robust Clustering of Eye Movement Recordings for Quanti... (Giuseppe Fineschi)
Characterizing the location and extent of a viewer’s interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
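A minimal sketch of the clustering step with scikit-learn's MeanShift, on synthetic POR coordinates standing in for real eye-tracking data; the bandwidth selection here is illustrative, not the paper's procedure:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Synthetic point-of-regard (POR) samples in image coordinates: two fixation
# regions plus scattered noise, standing in for real recordings.
rng = np.random.default_rng(0)
por = np.vstack([rng.normal((120, 200), 8, (60, 2)),
                 rng.normal((400, 310), 10, (80, 2)),
                 rng.uniform(0, 512, (15, 2))])

bw = estimate_bandwidth(por, quantile=0.2)   # kernel bandwidth from the data
ms = MeanShift(bandwidth=bw).fit(por)
print(ms.cluster_centers_)                   # candidate regions-of-interest
```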
Detection of Neural Activities in FMRI Using Jensen-Shannon Divergence (CSCJournals)
In this paper, we present a statistical technique based on Jensen-Shannon divergence for detecting the regions of activity in fMRI images. The method is model-free, and we exploit the metric property of the square root of the Jensen-Shannon divergence to accumulate the variations between successive time frames of fMRI images. We show the effectiveness of our algorithm both theoretically and experimentally.
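For illustration, the snippet below accumulates the Jensen-Shannon metric between intensity histograms of successive frames; SciPy's jensenshannon already returns the square root of the divergence, i.e. the metric the method relies on. Comparing whole-frame histograms is a simplification of the paper's voxel-level analysis:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def accumulated_js(frames, bins=64):
    """Sum the JS metric between histograms of successive frames.

    jensenshannon returns the *square root* of the JS divergence, whose
    triangle inequality makes the per-step variations safe to accumulate.
    """
    lo, hi = frames.min(), frames.max()
    hists = [np.histogram(f, bins=bins, range=(lo, hi))[0] for f in frames]
    return sum(jensenshannon(p, q) for p, q in zip(hists, hists[1:]))

# Toy stand-in for an fMRI time series: 20 frames of 16x16 "voxels".
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 16, 16))
frames[10:] += 0.5          # an "activation" shifts the later frames
print(accumulated_js(frames))
```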
This document compares the accuracy of determining volumes using close-range photogrammetry versus traditional methods. It presents a case study in which the volume of a test field was calculated using both approaches. Using traditional methods with 425 control points, the volume was calculated as 221475.14 m3 with the trapezoidal rule, 221424.52 m3 with Simpson's rule, and 221484.05 m3 with Simpson's 3/8 rule. Using close-range photogrammetry with 42 control points and 574 generated points, the volume was calculated as 215310.60 m3 with the trapezoidal rule, 215300.43 m3 with Simpson's rule, and 215304.12 m3 with Simpson's 3/8 rule.
Performance improvement of a Rainfall Prediction Model using Particle Swarm O... (ijceronline)
The performance of statistical time series forecasting methods can be improved by precise selection of their parameters, and various techniques are being applied to improve the modeling accuracy of these models. Particle swarm optimization is one such technique, which can conveniently be used to determine model parameters accurately. This robust optimization technique has already been applied to improve the performance of artificial neural networks for time series prediction. This study uses particle swarm optimization to determine the parameters of an exponential autoregressive model for time series prediction. The model is applied to annual rainfall prediction and shows fairly good performance in comparison to the statistical ARIMA model.
Fault diagnosis using genetic algorithms and principal curves (eSAT Journals)
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of the linear principal component (PCA) introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal curve algorithms employ the first component of the data as an initial estimate of the principal curve. However, this dependence on the initial line leads to a lack of flexibility, and the final curve is only satisfactory for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve; here, lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
This document proposes a new method for image segmentation using histogram thresholding and hierarchical cluster analysis. The method develops a dendrogram (hierarchical tree) of gray levels in an image histogram based on a similarity measure involving the inter-class variance of clusters to be merged and the intra-class variance of the new merged cluster. By iteratively merging the most similar clusters in a bottom-up approach, the dendrogram yields a clear separation of object and background pixels, providing robust threshold estimates. The method can be extended to multi-level thresholding by terminating the clustering at different levels in the dendrogram. Experiments show the method outperforms Otsu's and Kwon's thresholding methods.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, as well as new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Receding Horizon Stochastic Control Algorithms for Sensor Management, ACC 2010 (Darin Hitchings, Ph.D.)
This document summarizes a research paper about developing receding horizon control algorithms for sensor management problems. The algorithms aim to efficiently control information acquisition from smart sensors that can dynamically change their observations. The proposed approaches use a stochastic control approximation and receding horizon optimization to obtain feasible sensor schedules from mixed strategies. Simulations showed the receding horizon algorithms achieved near-optimal performance comparable to a theoretical lower bound, and that a modest horizon was sufficient.
SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN... (ijfcstjournal)
The paper proposes the modified-SIFT algorithm, a modified form of the scale invariant feature transform. The modification consists of considering successive groups of 8 rows of pixels along the height of the image; these are used to construct 8-bin histograms for magnitude and for orientation individually. As a result the number of feature descriptors is significantly smaller (by 95%) than in the standard SIFT approach. Fewer feature descriptors lead to reduced accuracy. This reduction in accuracy is quite drastic when searching for a single (RANK1) image match; however, accuracy improves if a band of likely images (say, with a tolerance of 10%) is to be returned. The paper therefore proposes a two-stage approach: first, Modified-SIFT is used to obtain a shortlisted band of likely images; subsequently, SIFT is applied within this band to find a perfect match. This process may appear tedious, but it provides a significant reduction in search time compared to applying SIFT on the entire database, and the minor reduction in accuracy can be offset by the considerable time gained while searching a large database. The modified-SIFT algorithm, when used in conjunction with a face cropping algorithm, can also be used to find a match against disguised images.
Model calibration or data inversion involves using experimental or field data to estimate the unknown parameters in a mathematical model. In the first part of the talk, I will review a few approaches for model calibration or data inversion, focusing on model discrepancy and measurement bias. A few state-of-the-art methods will be introduced, such as modeling the discrepancy by a Gaussian stochastic process (GaSP) or scaled Gaussian stochastic process (S-GaSP), L2 calibration, least squares (LS) calibration and orthogonal Gaussian process calibration, and the connections and differences between these methods will be discussed. In the second part of the talk, I will discuss our ongoing work on calibrating a geophysical model by integrating different types of field data, such as interferometric synthetic aperture radar (InSAR) interferograms, GPS data, and tilt and lava lake velocities from the Kilauea Volcano during the eruption in 2018. This task is complicated by the discrepancy between the model and reality, different sample sizes, and possible bias in the field data. We introduce the scaled Gaussian stochastic process (S-GaSP), a new stochastic process for modeling the discrepancy function in calibration, to address the identifiability issue between the calibrated mathematical model and the discrepancy function. We also compare a few approaches to modeling the measurement bias in the data. A feasible way to fuse the field data from multiple sources will then be discussed. The calibration models are implemented in the "RobustCalibration" R package on CRAN. The scientific goal of this work is to use data from May 2018, during the earthquake and the eruption of the Kilauea Volcano, to resolve the location, volume, and pressure change in the Halema'uma'u Reservoir, and to relate the results to inferences from past caldera collapses.
Time of arrival based localization in wireless sensor networks a non linear ... (sipij)
In this paper, we aim to obtain the location information of a sensor node deployed in a Wireless Sensor Network (WSN). Here, a Time of Arrival based localization technique is considered. We calculate the position of an unknown sensor node using non-linear techniques and compare their performance with the Cramer-Rao Lower Bound (CRLB). Non-linear Least Squares and Maximum Likelihood are the non-linear techniques used to estimate the position of the unknown sensor node. Each is solved with iterative approaches, namely the Newton-Raphson, Gauss-Newton and Steepest Descent estimates, which are compared. From the simulation study, localization based on the Maximum Likelihood approach yields higher localization accuracy.
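A minimal Gauss-Newton sketch of the non-linear least squares step for ToA ranges; the anchor layout, noise level and starting point are illustrative assumptions:

```python
import numpy as np

def gauss_newton_toa(anchors, d, x0, iters=20):
    """Gauss-Newton refinement of a 2-D position from ToA range estimates.

    anchors: (k, 2) known sensor positions; d: (k,) measured ranges
    (propagation speed times arrival times); x0: initial guess.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (k, 2)
        r = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff / r[:, None]                 # Jacobian of r w.r.t. x
        residual = r - d
        # Gauss-Newton step: least-squares solution of J dx = -residual.
        dx = np.linalg.lstsq(J, -residual, rcond=None)[0]
        x += dx
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)
print(gauss_newton_toa(anchors, d, x0=[5.0, 5.0]))   # close to (3, 7)
```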
This document introduces an R package called PSF that implements a Pattern Sequence based Forecasting (PSF) algorithm for univariate time series forecasting. The PSF algorithm clusters time series data and then predicts future values based on identifying repeating patterns of clusters. The PSF package contains functions that perform the main steps of the PSF algorithm, including selecting the optimal number of clusters, selecting the optimal window size, and making predictions for a given window size and number of clusters. The package aims to promote and simplify the use of the PSF algorithm for time series forecasting.
Sensitivity analysis in a lidar camera calibration (csandit)
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at once. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated on the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
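A small sketch of the parameter-estimation step: fitting a beta distribution to recent normalized object positions by maximum likelihood with SciPy, then using its mean as a crude next-frame prediction (the full particle filter loop is omitted):

```python
import numpy as np
from scipy import stats

# Normalized object positions observed over recent frames (values in (0, 1),
# e.g. the x-coordinate divided by frame width), a stand-in for tracker output.
rng = np.random.default_rng(0)
positions = rng.beta(a=2.0, b=5.0, size=200)

# Maximum likelihood estimate of the beta parameters with the support fixed
# to [0, 1]; these parameters drive the next-frame position prediction.
a_hat, b_hat, _, _ = stats.beta.fit(positions, floc=0, fscale=1)
predicted_mean = a_hat / (a_hat + b_hat)   # expected position next frame
print(a_hat, b_hat, predicted_mean)
```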
Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection and the problem of finding signal discontinuities over time is known as change detection.
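A minimal gradient-based example of the idea: Sobel derivatives estimate the brightness gradient, and thresholding its magnitude marks the discontinuities as edges (the threshold here is an arbitrary illustrative choice):

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img, thresh=0.2):
    """Mark pixels where the brightness gradient magnitude is large."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()   # binary edge map

# A synthetic step image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(sobel_edges(img).sum())         # nonzero only along the square's border
```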
New approach to the identification of the easy expression recognition system ... (TELKOMNIKA JOURNAL)
In recent years, facial recognition has been a major problem in the field of computer vision, attracting much interest because of its use in different applications and domains of image analysis. The extraction of facial descriptors is a very important step in facial recognition. In this article, we compared robust methods (SIFT, PCA-SIFT, ASIFT and SURF) for extracting relevant facial information under different facial posture variations (open and closed mouth, glasses and no glasses, open and closed eyes). The simulation results show that the SURF detector is better than the others at finding similar descriptors and in calculation time. Our method is based on the normalization of vector descriptors combined with the RANSAC algorithm to reject outliers when calculating the Hessian matrix, with the objective of reducing calculation time. To validate our experiments, we tested four facial image databases containing several modifications. The simulation results show that our method is more efficient than other detectors in terms of recognition speed and determination of similar points between two images of the same face, one belonging to the test database and the other to a database altered by different modifications. The method can be applied on a mobile platform to analyze the content of simple images, for example for driver fatigue detection, human-machine interaction, or human-robot interaction, using descriptors whose properties are important for good accuracy and real-time response.
SENSITIVITY ANALYSIS IN A LIDAR-CAMERA CALIBRATION (cscpconf)
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at once. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated on the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which obtains the correspondence between right and left images, can be cast as a search problem. The matching of all candidates in the same line forms a 2D optimization task, and two-dimensional (2D) optimization is an NP-hard problem. There are two characteristics in stereo matching: firstly, the local optimization process along each scan-line can be done concurrently; secondly, there are relationships among adjacent scan-lines that can be explored to promote matching correctness. Although many methods, such as GCPs and GGCPs, have been proposed, these so-called ground control points (GCPs) may not actually be ground. The relationship among adjacent scan-lines is a posteriori, that is to say, it can only be discovered after every optimization is finished. Multiple Ant Colony Optimization (MACO) is efficient for solving large-scale problems. The stereo matching task can properly be settled with the constructed MACO, in which the master layer evaluates the sub-solutions and propagates the reliability after every local optimization is finished. Besides, whether the ordering and uniqueness constraints should be considered during the optimization is discussed, and the proposed algorithm is proved to converge to the optimal matched pairs.
This document compares four recent boundary detection methods: Boundary Detection with Sketch Tokens, Crisp Boundary Detection Using Pointwise Mutual Information, Oriented Edge Forests for Boundary Detection, and Fast Edge Detection Using Structured Forests. It evaluates these methods on the Berkeley Segmentation Dataset using precision-recall metrics to determine which method achieves the best balance of high precision and high recall. It also considers the time complexity and simplicity of each method to help researchers select the most appropriate approach for their needs.
This project retrieves similar geographic images from a dataset based on extracted features. Retrieval is the process of collecting the relevant images from a dataset that contains a large number of images. Initially, a preprocessing step removes noise from the input image with the help of a Gaussian filter. As the second step, the Gray Level Co-occurrence Matrix (GLCM), Scale Invariant Feature Transform (SIFT), and Moment Invariant Feature algorithms are implemented to extract features from the images. After this process, the relevant geographic images are retrieved from the dataset using Euclidean distance. The dataset consists of 40 images in total, from which the images related to the input image are retrieved using Euclidean distance. For SIFT to perform reliable recognition, it is important that the features extracted from the training image be detectable even under changes in image scale, noise and illumination. The GLCM records how often pairs of gray-level values co-occur in the image. While the focus is on image retrieval, this project can effectively be used in applications such as detection and classification.
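A hedged sketch of the GLCM branch of such a pipeline with scikit-image, ranking dataset images by Euclidean distance between texture feature vectors; the SIFT and moment-invariant features, and the exact GLCM settings, are omitted or assumed:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(img):
    """Texture feature vector from a gray-level co-occurrence matrix.

    img: uint8 grayscale array; distances/angles below are assumptions.
    """
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.array([graycoprops(glcm, p).mean() for p in props])

def retrieve(query, dataset, k=5):
    """Rank dataset images by Euclidean distance in GLCM feature space."""
    q = glcm_features(query)
    dists = [np.linalg.norm(q - glcm_features(img)) for img in dataset]
    return np.argsort(dists)[:k]   # indices of the k most similar images

# usage sketch (images are uint8 grayscale arrays):
# best = retrieve(query_img, dataset_imgs)
```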
An automatic algorithm for object recognition and detection based on asift ke... (Kunal Kishor Nirala)
This document presents an automatic algorithm for object recognition and detection based on ASIFT keypoints. The algorithm combines affine scale invariant feature transform (ASIFT) and a region merging algorithm. ASIFT is used to extract keypoints from a training image of the object. These keypoints are then used instead of user markers in a region merging algorithm to recognize and detect the object with full boundary in other images. Experimental results show the method is efficient and accurate at recognizing and detecting objects.
Four experiments were conducted using a paint can hanging from a spring. In the first experiment, the paint can oscillated purely vertically, and PCA isolated this behavior in a single principal component, capturing 95% of the variance. When noise was added by shaking the cameras in the second experiment, PCA was still able to isolate the oscillatory behavior but with less accuracy. In experiments three and four where the paint can moved in both vertical and horizontal directions, PCA extracted the multidimensional behavior with the expected rank and reasonable accuracy.
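The PCA step itself is compact; below is a sketch via SVD on synthetic camera measurements of a purely vertical oscillation, confirming that one component captures nearly all the variance. The data here are simulated, not the experiments' recordings:

```python
import numpy as np

def pca(X):
    """PCA of tracked coordinates via SVD.

    X: (n_frames, n_measurements), e.g. the pixel positions of the paint
    can recorded by each camera, one row per frame.
    """
    Xc = X - X.mean(axis=0)                 # center each measurement
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)         # variance fraction per component
    return explained, Xc @ Vt.T             # spectrum, data in principal axes

# Ideal vertical oscillation: every camera sees scaled copies of one
# sinusoid, so a single component should capture almost all the variance.
t = np.linspace(0, 10, 500)
X = np.column_stack([np.sin(2 * np.pi * t) * a for a in (1.0, 0.7, 0.3)])
X += np.random.default_rng(0).normal(0, 0.02, X.shape)   # camera noise
explained, _ = pca(X)
print(explained)   # first entry near 1.0
```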
EDGE DETECTION IN SEGMENTED IMAGES THROUGH MEAN SHIFT ITERATIVE GRADIENT USIN... (ijscmcj)
In this paper, we propose a new method for edge detection in images obtained from the mean shift iterative algorithm. Comparable, proportional and symmetrical images are defined, and the importance of Ring Theory is explained. A relation of equivalence among proportional images is defined to group images into equivalence classes. The length of the mean shift vector is used to quantify the homogeneity of the neighborhoods; this gives a measure of how uniform the regions that compose the image are. Edge detection is carried out by using the mean shift gradient based on symmetrical images. The differences among gray-level values are accentuated or decreased to enhance the contours of the regions of interest. The images chosen for the experiments were standard images and real images (cerebral hemorrhage images). The obtained results were compared with the Canny detector, and our results showed good performance with respect to edge continuity.
Machine learning for high-speed corner detectionbutest
This document describes research into developing a machine learning approach for high-speed corner detection in images and video. The researchers:
1) Train a decision tree classifier on sample image corners to learn rules for fast corner detection, achieving detection speeds over 7x faster than existing methods like Harris.
2) Evaluate the learned detector against existing detectors using a criterion that corresponding corners should be detected across different views of the same 3D scene.
3) Show that despite being designed for speed, the learned detector outperforms other detectors according to this evaluation criterion.
Template matching is a basic method in image analysis for extracting useful information from images. In this paper, we suggest a new method for pattern matching. Our method transforms the two-dimensional template image into a one-dimensional vector; likewise, all sub-windows (of the same size as the template) in the reference image are transformed into one-dimensional vectors. Three similarity measures, SAD, SSD, and Euclidean distance, are used to compute the likeness between the template and all sub-windows in the reference image in order to find the best match. The experimental results show the superior performance of the proposed method over conventional methods on various templates of different sizes.
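The vector formulation is easy to sketch. The following is a rough illustration assuming grayscale NumPy images (all names are hypothetical, not from the paper); it flattens the template and every same-size sub-window to 1-D vectors and scores them with SAD, with SSD and Euclidean distance shown as commented variants:

```python
import numpy as np

def match_template(ref, tpl):
    """Slide over the reference image; flatten the template and each
    same-size sub-window into 1-D vectors and compare them."""
    th, tw = tpl.shape
    t = tpl.astype(np.float64).ravel()
    best = None
    for y in range(ref.shape[0] - th + 1):
        for x in range(ref.shape[1] - tw + 1):
            w = ref[y:y + th, x:x + tw].astype(np.float64).ravel()
            sad = np.abs(w - t).sum()        # Sum of Absolute Differences
            # ssd = ((w - t) ** 2).sum()     # Sum of Squared Differences
            # euc = np.sqrt(ssd)             # Euclidean distance
            if best is None or sad < best[0]:
                best = (sad, x, y)
    return best  # (score, x, y) of the best-matching sub-window
```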
Probabilistic Collaborative Filtering with Negative Cross EntropyAlejandro Bellogin
This document proposes using relevance models from information retrieval to improve probabilistic collaborative filtering recommendation algorithms. It introduces a relevance-based language model (RMUB) and a complete probabilistic model (RMCE) that incorporates negative cross entropy. Experiments on MovieLens datasets show both methods outperform baseline user-based and neighbourhood-based collaborative filtering, with RMCE achieving the best performance.
A new hybrid method for the segmentation of the brain mrissipij
Magnetic resonance imaging is a method with undeniable qualities of contrast and tissue characterization, making it valuable in the follow-up of various pathologies such as multiple sclerosis. In this work, a new hybrid segmentation method is presented and applied to brain MRIs. The extracted brain image is pretreated with the Non Local Means filter. A theoretical approach is proposed, and the last section presents an experimental study of the behavior of our model on textured images. To validate the model, different segmentations were performed on pathological brain MRIs, and the results were compared with those obtained by other models. These results show the effectiveness and robustness of the suggested approach.
QUANTUM CLUSTERING-BASED FEATURE SUBSET SELECTION FOR MAMMOGRAPHIC I...ijcsit
In this paper, we present an algorithm for feature selection, labeled QC-FS (Quantum Clustering for Feature Selection), which performs the selection in two steps. First, the original feature space is partitioned into groups of similar features using the Quantum Clustering algorithm. Then a representative of each cluster is selected using similarity measures such as the correlation coefficient (CC) and mutual information (MI); the feature that maximizes this information is chosen by the algorithm.
Local Phase Oriented Structure Tensor To Segment Texture Images With Intensit...CSCJournals
This paper proposes an active contour based texture image segmentation scheme using the linear structure tensor and a tensor-oriented steerable quadrature filter. The linear structure tensor (LST) is a popular tool for unsupervised texture image segmentation, but it contains only horizontal and vertical orientation information and lacks both other orientation information and the image intensity information on which active contours depend. Therefore, the LST is modified by adding intensity information from the tensor-oriented structure tensor to enhance the orientation information. In the proposed model, these phase-oriented features are used as an external force in a region-based active contour model (ACM) to segment texture images with intensity inhomogeneity and noise. To validate the results of the proposed model, a quantitative analysis of accuracy is also presented using the Berkeley image database.
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
One of the most popular image denoising methods based on self-similarity is nonlocal means (NLM). Though it can achieve remarkable performance, the method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM both quantitatively and visually, especially when the noise level is high.
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
This document summarizes an article that proposes improvements to the nonlocal means (NLM) image denoising algorithm. The proposed method first applies Gaussian blurring to pre-process the noisy image. It then extracts features from image patches using Hu's moment invariants. Next, it performs k-means clustering on the feature vectors to group similar patches. Finally, it applies row image weighted averaging to reconstruct the image. The experimental results showed this method can perform better denoising than the original NLM, especially at higher noise levels, by providing more reliable candidate patches for the weighted averaging.
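A hedged sketch of the pre-classification step described above, using OpenCV's Hu moment invariants and k-means; the patch size, cluster count, and non-overlapping patch grid are illustrative choices, not the article's exact settings:

```python
import cv2
import numpy as np

def cluster_patches(noisy, patch=7, k=16):
    """Pre-classification: blur the image, describe each patch by Hu's
    moment invariants, then group similar patches with k-means."""
    blurred = cv2.GaussianBlur(noisy, (5, 5), 1.0)  # suppress noise first
    h, w = blurred.shape
    feats, coords = [], []
    for y in range(0, h - patch, patch):
        for x in range(0, w - patch, patch):
            p = blurred[y:y + patch, x:x + patch].astype(np.float32)
            hu = cv2.HuMoments(cv2.moments(p)).ravel()  # 7 invariants
            feats.append(hu)
            coords.append((y, x))
    feats = np.float32(feats)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, labels, _ = cv2.kmeans(feats, k, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    # NLM would then average each patch only against its cluster mates
    return labels.ravel(), coords
```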
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...ijma
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of compression schemes. In this paper, we extend commonly used algorithms to image compression and compare their performance. For the compression technique, we combine different wavelet approaches, using traditional mother wavelets and lifting-based Cohen-Daubechies-Feauveau wavelets with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess compression quality. The index is used in place of the traditional Universal Image Quality Index (UIQI) "in one go" and offers extra information about the distortion between the original and compressed images compared with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. The index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance according to the proposed image quality indexes. Experimental results show that the proposed index plays a significant role in the quality evaluation of image compression on the open-source "BrainWeb: Simulated Brain Database (SBD)".
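For reference, the classic UIQI that the proposed index builds on (the product of correlation loss, luminance distortion and contrast distortion) can be computed as in the minimal NumPy sketch below; the article's additional histogram-shape factor is not specified here, so it is omitted:

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik):
    4 * cov * mean_x * mean_y / ((var_x + var_y) * (mean_x^2 + mean_y^2))."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```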
International Journal of Image Processing (IJIP), Volume (3), Issue (4)
A Comparison of SIFT, PCA-SIFT and SURF
Luo Juan qiuhehappy@hotmail.com
Computer Graphics Lab,
Chonbuk National University,
Jeonju 561-756, South Korea
Oubong Gwun obgwun@chonbuk.ac.kr
Computer Graphics Lab,
Chonbuk National University,
Jeonju 561-756, South Korea
Abstract
This paper summarizes three robust feature detection methods: Scale
Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)-SIFT
and Speeded Up Robust Features (SURF). It applies KNN (K-Nearest
Neighbor) and Random Sample Consensus (RANSAC) to the three methods in
order to analyze their performance in recognition. KNN is used to find the
matches, and RANSAC rejects inconsistent matches, so that the inliers can be
taken as correct matches. The performance of the robust feature detection
methods is compared for scale changes, rotation, blur, illumination changes
and affine transformations. All the experiments use the repeatability
measurement and the number of correct matches as evaluation criteria. SIFT
proves stable in most situations, although it is slow. SURF is the fastest, with
performance comparable to SIFT. PCA-SIFT shows advantages in rotation and
illumination changes.
Keywords: SIFT, PCA-SIFT, SURF, KNN, RANSAC, robust detectors.
1. INTRODUCTION
Lowe (2004) presented SIFT for extracting distinctive features from images that are invariant to image
scale and rotation. It has since been widely used in image mosaicking, recognition, and retrieval. After
Lowe, Ke and Sukthankar used PCA to normalize the gradient patch instead of using histograms [2]. They
showed that PCA-based local descriptors were also distinctive and robust to image deformations, but
extracting robust features was still very slow. Bay and Tuytelaars (2006) sped up robust feature extraction
by using integral images for image convolutions and a Fast-Hessian detector [3]. Their experiments
showed that it was faster and works well.
There are also many other feature detection methods, such as edge detection and corner detection; each
method has its own advantages. This paper focuses on three robust feature detection methods that are
invariant to image transformation and distortion. Furthermore, it applies the three methods to recognition
and compares the recognition results using KNN and RANSAC. To give a fair comparison, the same KNN
and RANSAC procedures are applied to all three methods. In the experiments, we use the repeatability
measurement to evaluate the detection performance of each method [4]; a higher repeatability score is
better. When a method gives stable detection and matching numbers, we can say it is a stable method; to
know how correct a method is, we use the number of correct matches, which can be obtained from the
RANSAC step.
Related work is presented in Section 2, while Section 3 gives an overview of the three methods. Section 4
presents the experiments and results, and Section 5 states the conclusions and future work.
2. RELATED WORK
In [1], Lowe not only presented SIFT but also discussed keypoint matching, which requires finding the
nearest neighbor. He gave an effective measure for choosing the neighbor, obtained by comparing the
distance of the closest neighbor to that of the second-closest neighbor. In our experiments, as a
compromise between cost and matching performance, a neighbor is accepted only when this distance
ratio is smaller than 0.5 [1]. All three methods use the same RANSAC model and parameters, which are
explained further below.
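A minimal sketch of this ratio test with OpenCV's brute-force matcher; the 0.5 ratio follows the paper, while the function name and the matcher choice are assumptions:

```python
import cv2

def ratio_matches(desc1, desc2, ratio=0.5):
    """Keep a match only when the closest neighbor is clearly nearer
    than the second-closest one (Lowe's distance-ratio test)."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in bf.knnMatch(desc1, desc2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```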
K. Mikolajczyk and C. Schmid [6] compared the performance of many local descriptors using recall and
precision as the evaluation criteria. They presented comparisons for affine transformations, scale
changes, rotation, blur, compression, and illumination changes. In [7], they showed how to compute the
repeatability measurement of affine region detectors, and in [4] the image was characterized by a set of
scale-invariant points for indexing.
Some research has focused on applications of these algorithms, such as automatic image mosaicking
based on SIFT [9][11], stitching applications of SIFT [10][15][12], and traffic sign recognition based on
SIFT [13]. Y. Ke [2] gave some comparisons of SIFT and PCA-SIFT: PCA is well suited to representing
keypoint patches but was observed to be sensitive to registration error. In [3], the authors used the Fast-
Hessian detector, which is faster and better than the Hessian detector. Section 3 gives more details of the
three methods and their differences.
3. OVERVIEW OF THE THREE METHODS
3.1 SIFT detector
SIFT consists of four major stages: scale-space extrema detection, keypoint localization, orientation
assignment and keypoint descriptor computation. The first stage uses a difference-of-Gaussian (DoG)
function to identify potential interest points [1], which are invariant to scale and orientation. DoG was used
instead of the more expensive Laplacian of Gaussian to improve the computation speed [1]:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ)    (1)
In the keypoint localization step, low-contrast points are rejected and edge responses are eliminated. The
Hessian matrix is used to compute the principal curvatures and to eliminate keypoints whose ratio
between the principal curvatures is greater than a threshold. An orientation histogram is formed from the
gradient orientations of sample points within a region around the keypoint in order to obtain an orientation
assignment [1]. According to the paper's experiments, the best results were achieved with a 4 x 4 array of
histograms with 8 orientation bins in each, so the SIFT descriptor that was used has 4 x 4 x 8 = 128
dimensions.
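For illustration, detecting SIFT keypoints and their 128-D descriptors takes only a few lines in OpenCV, on which the experiments in this paper are based; the file name is hypothetical, and cv2.SIFT_create is available in the main module from OpenCV 4.4 onward:

```python
import cv2

img = cv2.imread("graffiti.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
sift = cv2.SIFT_create()                       # DoG detector + 128-D descriptor
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)       # e.g. N keypoints, (N, 128)
```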
3.2 PCA-SIFT detector
PCA is a standard technique for dimensionality reduction [2]; it is well suited to representing keypoint
patches and enables us to linearly project high-dimensional samples into a low-dimensional feature space.
In other words, PCA-SIFT uses PCA instead of a histogram to normalize the gradient patch [2]. The
resulting feature vector is significantly smaller than the standard SIFT feature vector, and it can be used
with the same matching algorithms. PCA-SIFT, like SIFT, also uses Euclidean distance to determine
whether two vectors correspond to the same keypoint in different images. In PCA-SIFT, the input vector is
created by concatenating the horizontal and vertical gradient maps of the 41 x 41 patch centered at the
keypoint, giving 2 x 39 x 39 = 3042 elements [2]. Since fewer components require less storage and result
in faster matching, the authors chose a feature-space dimensionality of n = 20, which yields significant
space benefits [2].
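A sketch of this projection step, under the simplifying assumption that the eigenspace is learned from the keypoints at hand; PCA-SIFT itself uses an eigenspace precomputed offline from a large training set [2]:

```python
import numpy as np

def pca_sift_descriptors(grad_vectors, n_components=20):
    """Project 3042-D concatenated gradient patches (2 x 39 x 39) onto a
    low-dimensional eigenspace, as PCA-SIFT does."""
    mean = grad_vectors.mean(axis=0)
    centered = grad_vectors - mean
    # Principal directions via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # top n_components eigenvectors
    return centered @ basis.T          # (num_keypoints, 20) descriptors
```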
3.3 SURF detector
SIFT and SURF detect features in slightly different ways [9]. SIFT builds an image pyramid, filtering each
layer with Gaussians of increasing sigma values and taking the differences. SURF, on the other hand,
creates a "stack" without 2:1 downsampling for the higher levels of the pyramid, so all images have the
same resolution [9]. SURF filters the stack using a box filter approximation of the second-order Gaussian
partial derivatives, since integral images allow the computation of rectangular box filters in near-constant
time [3].
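The constant-time box sum that makes this possible is easy to sketch with a summed-area table: four lookups give the sum over any rectangle, regardless of its size (a minimal NumPy illustration, not SURF's implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum over img[y0:y1, x0:x1] from four lookups, independent of box size."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```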
In the keypoint matching step, the nearest neighbor is defined as the keypoint with the minimum
Euclidean distance between the invariant descriptor vectors. Lowe used a more effective measure,
obtained by comparing the distance of the closest neighbor to that of the second-closest neighbor [1]; in
this paper we choose 0.5 as the distance ratio, as Lowe did for SIFT.
4. EXPERIMENTS & RESULTS
4.1 Evaluation measurement
The repeatability measurement is computed as a ratio between the number of point-to-point
correspondences that can be established for detected points and the mean number of points detected in
two images [4]:
r1,2 = C(I1, I2) / mean(m1, m2)    (2)
where C(I1, I2) denotes the number of corresponding couples and m1 and m2 are the numbers of points
detected in the two images. This measurement represents the performance of finding matches.
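Equation (2) translates directly into code; a trivial sketch for illustration:

```python
def repeatability(num_correspondences, m1, m2):
    """Equation (2): r = C(I1, I2) / mean(m1, m2)."""
    return num_correspondences / ((m1 + m2) / 2.0)

# e.g. 43 correspondences with 150 and 170 detected points -> about 0.27
print(repeatability(43, 150, 170))
```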
Another evaluation tool is RANSAC, which is used to reject inconsistent matches. An inlier is a point that
has a correct match in the input image. The goal is to keep the inliers and reject the outliers at the same
time [4]. The probability that the algorithm never selects a set of m points which are all inliers is 1 − p:
1 − p = (1 − w^m)^k    (3)
where m is the minimum number of points needed to estimate the model, k is the number of samples
drawn, and w is the probability that a point selected by RANSAC from the input data is an inlier. RANSAC
repeatedly guesses a model from a set of correspondences drawn randomly from the input set. The inliers
can be regarded as the correct matches; in the following experiments, "matches" means inliers.
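As a sketch of how RANSAC can filter the KNN matches in practice, using OpenCV's homography estimator as the model (a plausible choice for these planar test images; the threshold and parameter values are assumptions), together with equation (3) solved for the number of samples k:

```python
import cv2
import numpy as np

def ransac_inliers(kp1, kp2, matches, thresh=3.0):
    """Fit a homography with RANSAC (needs at least 4 matches) and
    keep only the consistent matches, i.e. the inliers."""
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh)
    if mask is None:
        return []
    return [m for m, ok in zip(matches, mask.ravel()) if ok]

def samples_needed(w, m=4, p=0.99):
    """Solve equation (3) for k: 1 - p = (1 - w**m)**k."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - w ** m)))
```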
The three methods used in this paper are all based on OpenCV. We use the same image dataset, which
includes the usual deformations such as scale changes, view changes, illumination changes and rotation,
as shown in figure 1. All the experiments were run on a PC with an AMD 3200+ 2.0 GHz processor and
1.0 GB RAM, with Windows XP as the operating system.
4.2 Processing Time
The time evaluation is a relative result that only shows the tendency of the three methods' time cost.
Several factors influence the results, such as the size and quality of the image, the image type (e.g.,
scenery or texture), and the parameters of the algorithm (e.g., the distance ratio) [1]. The first part of the
experiment uses the Graffiti dataset shown in group A of figure 1, whose images are all 300 x 240 pixels.
The parameters of the three algorithms are set according to the original papers [1][2][3]. Time is counted
for the complete processing, including feature detection and matching. Table 1 shows that SURF is the
fastest, while SIFT is the slowest but finds the most matches.
Items                  SIFT          PCA-SIFT      SURF
total matches          271           18            186
total time (ms)        2.15378e+007  2.13969e+007  3362.86
10 matches' time (ms)  2.14806e+007  2.09696e+007  3304.97

TABLE 1: Processing time comparison using group A of figure 1. Total time is the time to find
all the matches; 10 matches' time is counted up to the first 10 matches.
FIGURE 1: Part of the test images. Groups A and H are the affine-transformed images, B and C are the
scale-changed images, D contains the rotated images, E and F are the blurred images, and G contains the
illumination-changed images.
4.3 Scale Changes
The second experiment evaluates scale invariance using groups B and C of figure 1. Figure 2 shows
some of the matching images, and table 2 shows the matching numbers of the three methods. In order to
see the matches clearly, only the first 10 matches are shown [2]. When the scale change becomes larger,
SIFT and SURF are much better than PCA-SIFT; the results show that PCA-SIFT is not as stable as SIFT
and SURF under scale changes and detects few matches.
4.4 Image Rotation
The third experiment shows the influence of rotation on the three methods. As shown in group D of figure
1, the image is rotated in steps of 5 or 10 degrees. SIFT, represented by the blue line, detects the most
matches and is stable to rotation. SURF does not work well: it finds the fewest matches and has the
lowest repeatability, as shown in figures 3 and 4. In addition, PCA-SIFT found only one correct match
among the first ten matches; even so, PCA-SIFT can still be improved, and this is better than SURF.
4.5 Image Blur
The fourth experiment uses Gaussian blur, as in the images of groups E and F of figure 1. The blur radius
changes from 0.5 to 9.0. As shown in figure 5, SURF and PCA-SIFT show good performance; when the
radius gets larger (see the matching results in figure 6), SURF detects few matches and PCA-SIFT finds
more matches but a smaller number of correct ones. SIFT shows the best performance in this case.
Data  SIFT  PCA-SIFT  SURF
1-2   41    1         10
3-4   35    0         36
5-6   495   19        298
7-8   303   85        418
9-10  66    1         26

TABLE 2: Scale changes comparison using groups B and C of figure 1; the data
represent the total number of matches for each method.

FIGURE 2: Scale changes comparison using groups B and C of figure 1. The first result is SIFT
(10/66 matches), the second is SURF (10/26 matches). The last two images show the results of
PCA-SIFT under scale changes.
4.6 Illumination Changes
The fifth experiment shows the illumination effects on the methods. As shown in group G of figure 1, from
data 1 to data 6 the brightness of the image gets lower and lower. Table 3 shows the repeatability under
illumination changes. SURF has the largest average repeatability, 31%, and PCA-SIFT performs nearly as
well, which coincides with [2][3][6]. One of the experimental results, showing the first 10 matches, is given
in figure 7.
FIGURE 3: Rotation comparison. Using group D of figure 1; the data represent the repeatability under
rotation.

FIGURE 5: Blur comparison. Using group E of figure 1; the data represent the repeatability under blur.

FIGURE 6: Blur comparison. Using groups E and F of figure 1. From left to right, the results are SIFT
(10 correct / 10), SURF (3 correct / 3), PCA-SIFT (1 correct / 10); the blur radius is 9.

FIGURE 4: Rotation comparison. Using group D of figure 1; the rotation is 45 degrees. The first two
images show the first ten matches of SIFT (10 correct / 10) and PCA-SIFT (1 correct / 10); the result
for SURF (2 correct / 2) is the total matches.
4.7 Affine Transformations
The sixth experiment evaluates the methods' stability under affine transformations (viewpoint changes),
which are important in panorama stitching. This experiment uses the images in groups A and H of figure
1; the viewpoint change grows to approximately 50 degrees from data 1 to 6. The repeatability under
affine transformation is shown in table 4 and the matching results in figure 8. From the table and figure,
SURF and SIFT have good repeatability when the viewpoint change is small, but when the viewpoint
change is larger than data 1-4, SURF detects 0 matches and PCA-SIFT shows better performance.
Data     SIFT  PCA-SIFT  SURF
1-2      43%   39%       70%
1-3      32%   34%       49%
1-4      18%   27%       25%
1-5      8%    18%       6%
1-6      2%    9%        5%
average  21%   25%       31%

TABLE 3: Illumination changes comparison using group G of figure 1; the data
represent the repeatability and its average.

Data  SIFT  PCA-SIFT  SURF
1-2   47%   15%       54%
1-3   37%   12%       33%
1-4   22%   12%       12%
1-5   7%    10%       2%
1-6   0%    9%        0%

TABLE 4: Affine transformation comparison using groups A and H of figure 1;
the data represent the repeatability.

FIGURE 7: Illumination changes comparison using group G of figure 1. From left to right:
SIFT (9 correct / 10), PCA-SIFT (6 correct / 10), SURF (10 correct / 10).
4.8 Discussion
Table 5 shows the results of all the experiments. It also shows that there is no single best method for all
deformations; hence, when choosing a feature detection method, one should first decide which aspect of
performance matters most. The results of these experiments are not constant for all cases: changing the
algorithm can give different results, for example using a different nearest-neighbor search instead of KNN
or an improved RANSAC.
SIFT's matching success is attributed to its feature representation, which has been carefully designed to
be robust to localization error. As discussed by Y. Ke, PCA is known to be sensitive to registration error,
but using a small number of dimensions provides significant benefits in storage space and matching
speed [2]. SURF shows its stability and fast speed in the experiments; it is known that the Fast-Hessian
detector used in SURF is more than 3 times faster than DoG (used in SIFT) and 5 times faster than
Hessian-Laplace [3]. From the following table, we can see that PCA-SIFT needs to improve its blur and
scale performance. SURF is fast and good in most situations, but when the rotation is large its
performance also needs improvement. SIFT shows its stability in all the experiments except for time;
because it detects so many keypoints and finds so many matches, one could consider matching only
selected keypoints of interest.
5. CONCLUSIONS & FUTURE WORK
This paper has evaluated three feature detection methods with respect to image deformations. SIFT is
slow and not good at illumination changes, but it is invariant to rotation, scale changes and affine
transformations.
Method    Time    Scale   Rotation  Blur    Illumination  Affine
SIFT      common  best    best      best    common        good
PCA-SIFT  good    common  good      common  good          good
SURF      best    good    common    good    best          good

TABLE 5: Summary of all the experiments.
FIGURE 8: Affine transformation comparison using groups A and H of figure 1. When the viewpoint
change is large, the results, from left to right, are: SIFT (5 correct / 10), SURF (0), PCA-SIFT (1),
PCA-SIFT (2 / 10).
SURF is fast and performs as well as SIFT, but it is not stable to rotation and illumination changes. Which
method to choose mainly depends on the application. If the application is concerned with more than one
aspect of performance, a compromise can be found from Table 5: choose a suitable algorithm and
improve it according to the application. For example, if PCA-SIFT is otherwise the best choice but the
application also cares about blur, then PCA-SIFT needs some improvement in its blur performance.
Future work is to improve the algorithms and apply these methods to specific areas such as image
retrieval or stitching.
6. REFERENCES
1. D. Lowe. "Distinctive Image Features from Scale-Invariant Keypoints", IJCV, 60(2):91-110, 2004.
2. Y. Ke and R. Sukthankar. "PCA-SIFT: A More Distinctive Representation for Local Image
Descriptors", Proc. Conf. Computer Vision and Pattern Recognition, pp. 511-517, 2004.
3. H. Bay, T. Tuytelaars, and L. Van Gool. "SURF: Speeded Up Robust Features", 9th European
Conference on Computer Vision, 2006.
4. K. Mikolajczyk and C. Schmid. "Indexing Based on Scale Invariant Interest Points", Proc. Eighth
Int'l Conf. Computer Vision, pp. 525-531, 2001.
5. K. Kanatani. "Geometric information criterion for model selection", IJCV, 26(3):171-189, 1998.
6. K. Mikolajczyk and C. Schmid. "A Performance Evaluation of Local Descriptors", IEEE Trans.
Pattern Analysis and Machine Intelligence, 27(10):1615-1630, October 2005.
7. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and
L. Van Gool. "A Comparison of Affine Region Detectors", IJCV, 65(1/2):43-72, 2005.
8. E. Chu, E. Hsu, and S. Yu. "Image-Guided Tours: Fast-Approximated SIFT with U-SURF
Features", Stanford University.
9. Yang Zhan-long and Guo Bao-long. "Image Mosaic Based On SIFT", International Conference
on Intelligent Information Hiding and Multimedia Signal Processing, pp. 1422-1425, 2008.
10. M. Brown and D. Lowe. "Recognizing Panoramas", Proc. Ninth Int'l Conf. Computer Vision,
pp. 1218-1227, 2003.
11. A. S. Salgian. "Using Multiple Patches for 3D Object Recognition", CVPR '07, pp. 1-6, June 2007.
12. Y. Heo, K. Lee, and S. Lee. "Illumination and camera invariant stereo matching", CVPR,
pp. 1-8, 2008.
13. M. C. Kus, M. Gokmen, and S. Etaner-Uyar. "Traffic sign recognition using Scale Invariant
Feature Transform and color classification", ISCIS '08, pp. 1-6, Oct. 2008.
14. H. Stokman and T. Gevers. "Selection and Fusion of Color Models for Image Feature Detection",
IEEE Trans. Pattern Analysis and Machine Intelligence, 29(3):371-381, March 2007.
15. Cheng-Yuan Tang, Yi-Leh Wu, Maw-Kae Hor, and Wen-Hung Wang. "Modified SIFT descriptor
for image matching under interference", International Conference on Machine Learning and
Cybernetics, vol. 6, pp. 3294-3300, July 2008.