The document proposes a new strategy called Maximum Distance Point Strategy (MDPS) to evaluate spherical form error from points measured by a Coordinate Measuring Machine (CMM). MDPS selects points that are maximally distant from each other to define a candidate sphere, unlike the commonly used Least Squares Method (LSM) which minimizes the sum of squared deviations of all points from the sphere. The results of MDPS are compared to LSM. MDPS is found to provide comparable or better results than LSM, especially when points are not uniformly distributed. The strategy aims to provide a more robust evaluation of spherical form error compared to existing methods.
This paper presents a geometric approach to the coordinatization of a measured space called the Map
Maker’s algorithm. The measured space is defined by a distance matrix for sites which are reordered and
mapped to points in a two-dimensional Euclidean space. The algorithm is tested on distance matrices
created from 2D random point sets and the resulting coordinatizations compared with the original point
sets for confirmation. Tolerance levels are set to deal with the cumulative numerical errors in the
processing of the algorithm. The final point sets are found to be the same apart from translations,
reflections and rotations as expected. The algorithm also serves as a method for projecting higher
dimensional data to 2D.
Determining the optimum route is a problem often encountered in daily life. The goal is to find the best path between a pair of vertices in a map or graph. The search algorithm applied here is A*, which uses an evaluation function, called a heuristic, to guide the search. Two methods commonly used to obtain the heuristic value are the Euclidean and Manhattan distances. Both can produce the optimum distance in the shortest path problem, but they yield different results. This research develops the heuristic function using the Euclidean, Manhattan, and Euclidean Squared distances, plus a new method, and compares the results.
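The heuristics mentioned above, and a minimal A* over a 4-connected grid, can be sketched as follows (the paper's "new method" is not specified, so only the three standard heuristics appear; grid size and wall handling are illustrative assumptions):

```python
import heapq
import math

# Three common grid-search heuristics. The paper's new method is not
# reproduced here; only the standard three are sketched.
def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean_squared(a, b):
    # Inadmissible: overestimates the true cost, so A* may lose optimality.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def astar(start, goal, size, walls, h):
    """A* on a 4-connected size x size grid with unit move cost."""
    frontier = [(h(start, goal), 0, start)]
    g = {start: 0}
    came_from = {}
    while frontier:
        _, gc, cur = heapq.heappop(frontier)
        if cur == goal:  # reconstruct the path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, math.inf):
                g[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(frontier, (ng + h(nxt, goal), ng, nxt))
    return None
```

With an admissible heuristic (Euclidean or Manhattan on a 4-connected grid) the returned path is optimal; Euclidean Squared trades optimality for fewer node expansions.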
Method of Fracture Surface Matching Based on Mathematical Statistics
ABSTRACT: Fracture surface matching is an important part of point cloud registration. In this paper, a method of fracture surface matching based on mathematical statistics is proposed. We reconstruct a coordinate system for the fractured surface points and analyze the characteristics of the point cloud in the new coordinate system using the theory of mathematical statistics. The general distribution of the points is determined. The method can establish the matching relation among the point clouds.
Photogrammetry - Space Resection by Collinearity Equations
Space resection is commonly used to determine the exterior orientation parameters (the position and orientation relative to an exterior coordinate system) associated with one or more photos, based on measurements of ground control points (GCPs). Since space resection is a nonlinear problem, existing methods linearize the collinearity condition and use an iterative least-squares process to determine the final solution. The process also requires initial approximate values of the unknown parameters, some of which must be estimated by another least-squares solution.
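The collinearity condition referred to above relates an image point to its corresponding ground point; in the standard photogrammetric form it reads:

```latex
\begin{aligned}
x_a &= x_0 - f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}
                     {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)},\\[4pt]
y_a &= y_0 - f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}
                     {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)},
\end{aligned}
```

where $(x_a, y_a)$ are image coordinates of point $a$, $(x_0, y_0, f)$ the interior orientation (principal point and focal length), $(X_A, Y_A, Z_A)$ the ground coordinates, $(X_L, Y_L, Z_L)$ the exposure station, and $m_{ij}$ the elements of the rotation matrix built from the angles $\omega, \varphi, \kappa$. For space resection these equations are expanded in a first-order Taylor series about the approximate parameter values and solved iteratively by least squares.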
Evaluation of the Sensitivity of Seismic Inversion Algorithms to Different St...
Seismic wavelet estimation is an important step in the processing and analysis of seismic data. Inversion methods such as Narrow-Band and Constrained Sparse-Spike require information about the wavelet so that the inversion solution, since the problem does not have a unique solution, can be constrained by comparing the real seismic trace with a synthetic one generated by convolving the estimated reflectivity and wavelet. Besides helping in seismic inversion, a good estimate of the wavelet enables an inverse filter with less uncertainty to be computed in the deconvolution step, and, while tying well logs, a better correlation between the seismic trace and the well log can be achieved. Depending on whether well log information is used, wavelet estimation methods can be divided into two classes: statistical and deterministic. This work aimed to test the sensitivity of acoustic post-stack seismic inversion algorithms to wavelets statistically estimated by two distinct methods.
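One classic statistical estimate (not necessarily either of the two methods tested in the paper) assumes the wavelet is zero-phase and takes its amplitude spectrum from a smoothed spectrum of the trace itself. A minimal numpy sketch, with window length and smoothing width as illustrative assumptions:

```python
import numpy as np

def zero_phase_wavelet(trace, nw=64, smooth=5):
    """Statistical wavelet estimate: smoothed amplitude spectrum of the
    trace plus a zero-phase assumption (a classic statistical approach)."""
    spec = np.abs(np.fft.rfft(trace))
    spec = np.convolve(spec, np.ones(smooth) / smooth, mode="same")
    w = np.fft.irfft(spec)             # zero-phase: symmetric about t = 0
    w = np.roll(w, nw // 2)[:nw]       # center it in an nw-sample window
    return w * np.hanning(nw)          # taper the window edges

# Synthetic trace: random reflectivity convolved with a known pulse
rng = np.random.default_rng(0)
refl = rng.standard_normal(1024)
pulse = np.hanning(16)
trace = np.convolve(refl, pulse, mode="same")
wavelet = zero_phase_wavelet(trace)
```

The zero-phase assumption holds only approximately for real data; deterministic methods instead use well-log reflectivity to recover the phase as well.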
Rotation Invariant Matching of Partial Shoeprints
In this paper, we propose a solution to the problem of rotated partial shoeprint retrieval based on the combined use of local points of interest and the SIFT descriptor. Once the generated features are encoded using the SIFT descriptor, matching is carried out using RANSAC to estimate a transformation model and establish the number of its inliers, which is then multiplied by the sum of point-to-point Euclidean distances below a hard threshold. We demonstrate that such a combination can overcome the issue of retrieving partial prints in the presence of rotation and noise distortions. The conducted experiments show that the proposed solution achieves very good matching results and outperforms similar work in the literature in terms of both performance and complexity.
Collinearity Equations
Kinds of product that can be derived by the collinearity equation
- Space Resection By Collinearity
- Space Intersection By Collinearity
- Interior Orientation
- Relative Orientation
- Absolute Orientation
- Self-Calibration
International Journal of Engineering Inventions (IJEI) provides a multidisciplinary passage for researchers, managers, professionals, practitioners and students around the globe to publish high quality, peer-reviewed articles on all theoretical and empirical aspects of Engineering and Science.
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...
This paper applies a compressed sensing algorithm to improve the spectrum sensing performance of cognitive radio technology.
At the fusion center, the recovery error in the analog to information converter (AIC) when reconstructing the
transmit signal from the received time-discrete signal causes degradation of the detection performance. Therefore, we
propose a subspace pursuit (SP) algorithm to reduce the recovery error and thereby enhance the detection performance.
In this study, we employ a wide-band, low SNR, distributed compressed sensing regime to analyze and evaluate the
proposed approach. Simulations are provided to demonstrate the performance of the proposed algorithm.
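The subspace pursuit recovery step described above can be sketched as follows. This is a minimal numpy implementation of the standard SP iteration (Dai & Milenkovic), not the paper's full distributed sensing pipeline; the measurement matrix and sparsity level in the demo are illustrative:

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Subspace Pursuit for K-sparse recovery of x from y = A x."""
    m, n = A.shape
    # Initialization: K columns most correlated with the measurements
    support = np.argsort(np.abs(A.T @ y))[-K:]
    coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A[:, support] @ coef
    prev_norm = np.inf
    for _ in range(max_iter):
        # Expand: union of current support and K best matches to the residual
        cand = np.union1d(support, np.argsort(np.abs(A.T @ residual))[-K:])
        coef_c = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        # Prune back to the K largest coefficients and re-fit
        support = cand[np.argsort(np.abs(coef_c))[-K:]]
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) >= prev_norm:  # stop when no progress
            break
        prev_norm = np.linalg.norm(residual)
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Noiseless demo: recover a 3-sparse signal from 25 Gaussian measurements
rng = np.random.default_rng(0)
m, n, K = 25, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]
x_hat = subspace_pursuit(A, A @ x_true, K)
```

The backtracking (expand-then-prune) step is what distinguishes SP from simple greedy matching pursuit and is the source of its lower recovery error.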
Numerical Solution for the Design of a Ducted Axisymmetric Nozzle using Metho...
Supersonic nozzles find application in the field of rocket propulsion. A method for the design of a ducted axisymmetric nozzle for high-speed, low-density flows is described. This work was carried out within the rules of aerodynamics, with the aim of developing a computational method for the calculation of a ducted axisymmetric nozzle of minimum length, using the method of characteristics, by expanding a flow of air up to the required Mach number. A thorough understanding of the method of characteristics and its application to the design of a ducted axisymmetric nozzle is required. We make use of the compatibility equations involved in the axisymmetric method of characteristics and the Prandtl-Meyer function, and develop a MATLAB program with which the contour of the ducted axisymmetric nozzle has been generated for an exit Mach number of 2.5, with an output of 4 expansions. A similar code was also developed for the two-dimensional case with 30 expansions, as it was simpler. Finally, the desired exit Mach number has been achieved, giving a minimum nozzle length with a shock-free, isentropic flow. We also discuss the various aspects of the draft code and its implementation, as well as the results obtained.
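The Prandtl-Meyer function at the heart of the method of characteristics, and its numerical inverse (needed when marching the expansion fan), can be sketched as follows. The document is MATLAB-based; this Python sketch of the same two functions is for illustration only:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer angle nu(M), in degrees, for M >= 1."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt(M * M - 1.0)
    return math.degrees(a * math.atan(b / a) - math.atan(b))

def mach_from_nu(nu_deg, gamma=1.4, lo=1.0, hi=50.0):
    """Invert nu(M) by bisection (nu is monotonically increasing in M)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < nu_deg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the design exit Mach number of 2.5 quoted above, nu is about 39.12 degrees (gamma = 1.4); for a minimum-length nozzle the wall angle at the throat is half the exit Prandtl-Meyer angle.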
3D Graph Drawings: Good Viewing for Occluded Vertices
A growing body of studies shows that the human brain can comprehend increasingly complex structures if they are displayed as objects in three-dimensional space. In addition, recent technological advances have led to the production of large amounts of data, and consequently to many large and complex models of 3D graph drawings in many domains. Good drawing (visualization) resolves the problems of occluded structures in graph drawings and amplifies human understanding, leading to new insights, findings and predictions.
We present a method for drawing 3D graphs which uses a force-directed algorithm as a framework.
The main result of this work is that 3D graph drawing and presentation techniques are combined and available at interactive speed. Even large graphs with hundreds of vertices can be meaningfully displayed by enhancing the presentation with additional attributes of graph drawings and the possibility of interactive user navigation.
In the implementation, we interactively visualize many 3D graphs of different size and complexity to support our method. We show that the Gephi software is capable of producing good viewpoints for 3D graph drawing through its built-in force-directed layout algorithms.
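A basic 3D force-directed layout of the kind used as the framework above can be sketched in a few lines. This is a generic Fruchterman-Reingold-style sketch, not the paper's (or Gephi's) specific algorithm; the cooling schedule and constants are illustrative assumptions:

```python
import numpy as np

def force_directed_3d(n, edges, iters=200, k=1.0, seed=1):
    """3D force-directed layout: all vertex pairs repel, edge endpoints
    attract, and a cooling schedule caps per-step displacement."""
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n, 3))
    for it in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.maximum(np.linalg.norm(delta, axis=2), 1e-9)
        np.fill_diagonal(dist, np.inf)
        # Repulsive force ~ k^2 / d between every pair of vertices
        disp = (delta / dist[..., None] * (k**2 / dist**2)[..., None]).sum(axis=1)
        # Attractive force ~ d^2 / k along every edge
        for u, v in edges:
            d = pos[u] - pos[v]
            dn = np.linalg.norm(d) + 1e-9
            f = d / dn * (dn**2 / k)
            disp[u] -= f
            disp[v] += f
        step = 0.1 * (1.0 - it / iters)        # cooling schedule
        lengths = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / lengths[:, None] * np.minimum(lengths, step)[:, None]
    return pos

# Lay out a 4-vertex path graph
pos = force_directed_3d(4, [(0, 1), (1, 2), (2, 3)])
pair_d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
dmin = pair_d[np.triu_indices(4, 1)].min()
```

At equilibrium, adjacent vertices settle near the ideal edge length k while non-adjacent vertices are pushed further apart, which is what reduces occlusion in the 3D view.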
The Geometric Characteristics of the Linear Features in Close Range Photogram...
The accuracy of photogrammetry can be increased with better instruments, careful geometric characterization of the system, more observations and rigorous adjustment. The main objective of this research is to develop a new mathematical model of two types of linear features (straight line, spline curve), in addition to relating linear features in object space to the image space using the Direct Linear Transformation (DLT). The second main objective of the present paper is to study some geometric characteristics of the system when the linear features are used in close range photogrammetric reduction processes. In this research, the accuracy improvement has been evaluated by adopting certain assessment criteria: computing the positional discrepancies between the photogrammetrically calculated object space coordinates of some check points and the original check points of the test field, in terms of their respective RMS error values, and examining the resulting least-squares covariance matrices of the check points' space coordinates. For these purposes, some experiments were performed with synthetic images. The obtained results showed significant improvements in the positional accuracy of close range photogrammetry when the starting node, end node, and interior nodes on the straight line and spline curve are increased with certain specifications regarding the location and magnitude of each.
Object Elimination and Reconstruction Using an Effective Inpainting Method
Abstract: Three major problems have been found in existing image inpainting algorithms: reconstruction of large regions, the order of filling-in, and the choice of the best exemplars to synthesize the missing region. The proposed algorithm introduces two ideas that deal with these problems while preserving edge continuity and reducing error propagation. It introduces a modified priority computation in order to generate better edges in the omitted region, and, to reduce the propagation of errors into the resultant image, a novel way to find the optimal exemplar. This proposal optimizes the reconstruction process and increases the accuracy. The proposed algorithm removes blurring and builds edges efficiently while reconstructing large target regions.
Keywords: Image inpainting, texture synthesis, Image Completion, exemplar-based method
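The priority computation that such exemplar-based methods modify is, in its classic (Criminisi-style) form, the product of a confidence term and a data term on the fill front. A minimal sketch, assuming gradients and front normals have already been computed (the paper's modified priority is not reproduced here):

```python
import numpy as np

def priorities(confidence, grad, normal, front, alpha=255.0):
    """Classic exemplar-based priority P(p) = C(p) * D(p) on the fill front.

    confidence : per-pixel confidence term C (known region starts at 1)
    grad       : (gy, gx) image gradient arrays
    normal     : (ny, nx) unit normals of the fill front
    front      : boolean mask of fill-front pixels
    """
    gy, gx = grad
    ny, nx = normal
    # Isophote direction = image gradient rotated by 90 degrees
    iso_y, iso_x = gx, -gy
    data = np.abs(iso_y * ny + iso_x * nx) / alpha  # data term D
    p = confidence * data
    p[~front] = -np.inf            # only front pixels compete
    return p

# Toy example: a strong vertical edge crossing a horizontal fill front
h, w = 5, 5
conf = np.ones((h, w))
gx = np.zeros((h, w)); gx[:, 2] = 200.0   # edge along column 2
gy = np.zeros((h, w))
ny = np.zeros((h, w)); ny[2, 2] = 1.0     # front normal (hypothetical)
nx = np.zeros((h, w))
front = np.zeros((h, w), bool); front[2, :] = True
p = priorities(conf, (gy, gx), (ny, nx), front)
best = np.unravel_index(np.argmax(p), p.shape)
```

Filling the highest-priority patch first is what propagates linear structures (edges) into the hole before flat texture, which is the behavior the modified priority aims to strengthen.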
Ill-posedness formulation of the emission source localization in the radio- d...
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown the strong dependence of the solution of the radio-transient source localization problem (from the radio-shower arrival times on the antennas) on the formulation, such that some solutions are purely numerical artifacts. Based on a detailed analysis of some already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneracy of the set of solutions and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Several properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
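The non-linear least-squares formulation in question fits a spherical wavefront to measured arrival times. A minimal sketch of the model and residual vector (the array geometry and source position below are illustrative, not taken from the experiments cited):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def arrival_times(antennas, source, t0):
    """Spherical-wavefront model: t_i = t0 + |x_i - x_s| / c."""
    return t0 + np.linalg.norm(antennas - source, axis=1) / C

def residuals(params, antennas, t_meas):
    """Residual vector of the non-linear least-squares problem;
    the fit minimizes the sum of these residuals squared."""
    source, t0 = params[:3], params[3]
    return t_meas - arrival_times(antennas, source, t0)

# Compact ground array, distant source: the near-degenerate geometry
# (almost-parallel wavefront directions) that makes the problem ill-posed.
rng = np.random.default_rng(1)
antennas = np.c_[rng.uniform(-500, 500, (8, 2)), np.zeros(8)]
true_source = np.array([0.0, 0.0, 100e3])     # 100 km overhead
t_meas = arrival_times(antennas, true_source, t0=1e-3)
r = residuals(np.r_[true_source, 1e-3], antennas, t_meas)
```

When the source is far compared with the array aperture, the columns of the Jacobian of this residual become nearly linearly dependent, which is the bad conditioning the abstract refers to.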
PROPOSED ALGORITHM OF ELIMINATING REDUNDANT DATA OF LASER SCANNING FOR CREATI...
Aerial laser scanning for creating digital relief models has been widely applied in Russia for over 20 years. To process the results of aerial laser scanning, it is necessary to classify the cloud of laser points. Classification of the point cloud is performed with commercial software products (produced abroad), which frequently use refinement algorithms for “Earth”-class data that take the particularities of the relief into account. The paper puts forward a proprietary algorithm for laser scanning data interpolation that enables the removal of redundant “Earth”-class laser points when data granularity is reduced, primarily for flat terrain. The paper provides a detailed stepwise description of the algorithm's operation, along with the results of constructing a digital relief model on the basis of the “Earth”-class point cloud processed with the proposed algorithm.
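The general idea of removing redundant ground points on flat terrain can be illustrated by a simple grid decimation that keeps one representative point per cell. This is a minimal sketch of the concept only, not the paper's proprietary algorithm; the choice of keeping the lowest point per cell is an illustrative assumption:

```python
import numpy as np

def thin_ground_points(points, cell=1.0):
    """Keep one representative (the lowest) laser point per XY grid cell --
    a minimal sketch of redundancy removal for "Earth"-class points."""
    cells = np.floor(points[:, :2] / cell).astype(np.int64)
    best = {}
    for i, key in enumerate(map(tuple, cells)):
        if key not in best or points[i, 2] < points[best[key], 2]:
            best[key] = i
    return points[sorted(best.values())]

# Dense flat patch: 100 points over a 2 x 2 m area, 1 m cells
rngp = np.random.default_rng(2)
pts = np.c_[rngp.uniform(0, 2, (100, 2)), rngp.uniform(0, 0.1, 100)]
thinned = thin_ground_points(pts, cell=1.0)
```

On flat terrain such decimation barely changes the interpolated relief model while cutting the point count dramatically; on broken relief a density-adaptive criterion is needed instead.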
Presentation used in one of the business consulting sessions with a client. It was used to show that the crisis begins long before everyone imagines, that it has now arrived, and that most companies took too long to prepare and now have to react.
A call to the management level about the importance of taking advantage of the period of lighter workload to do strategic planning and turn the crisis into an opportunity.
Feel free to get in touch to discuss the subject.
The presentations are created for a specific audience and a specific company, but with a little creativity they can be applied in many cases. Please give feedback on what you thought.
Adaptive response surface by kriging using pilot points for structural reliab...
Structural reliability analysis aims to compute the probability of failure by considering system uncertainties. However, this approach may require very time-consuming computation and becomes impracticable for complex structures, especially when complex computer analysis and simulation codes are involved, such as the finite element method. Approximation methods are widely used to build simplified approximations, or metamodels, providing a surrogate model of the original codes. The most popular surrogate model is the response surface methodology, which typically employs second-order polynomial approximation using least-squares regression techniques. Several authors have used response surface methods in reliability analysis. However, another approximation method, based on the kriging approach, has been successfully applied in the field of deterministic optimization. Few studies have treated the use of kriging approximation in reliability analysis and reliability-based design optimization. In this paper, the kriging approximation is used as an alternative to the traditional response surface method to approximate the performance function of the reliability analysis. The main objective of this work is to develop an efficient global approximation while controlling the computational cost and ensuring accurate prediction. A pilot point method is proposed for the kriging approximation in order to increase its prior predictivity, the pilot points being good candidates for numerical simulation. In other words, the predictive quality of the initial kriging approximation is improved by adding adaptive information, called “pilot points”, in areas where the kriging variance is maximum. This methodology allows for efficient modeling of highly non-linear responses, while the number of simulations is reduced compared to the Latin Hypercube approach. Numerical examples show the efficiency and the interest of the proposed method.
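The kriging prediction and the max-variance pilot-point selection described above can be sketched in one dimension. This is a minimal simple-kriging sketch (zero mean, Gaussian covariance) to illustrate the idea; the covariance model, its parameters, and the candidate grid are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gauss_cov(d, sill=1.0, corr_len=2.0):
    """Gaussian covariance model (an assumed choice for the sketch)."""
    return sill * np.exp(-((d / corr_len) ** 2))

def simple_krige(x_data, y_data, x_new, sill=1.0, corr_len=2.0, nugget=1e-10):
    """1-D simple kriging: prediction and kriging variance at x_new."""
    K = gauss_cov(np.abs(x_data[:, None] - x_data[None, :]), sill, corr_len)
    K += nugget * np.eye(len(x_data))
    k = gauss_cov(np.abs(x_new[:, None] - x_data[None, :]), sill, corr_len)
    w = np.linalg.solve(K, k.T)                 # kriging weights
    pred = w.T @ y_data
    var = sill - np.einsum("ij,ji->i", k, w)    # kriging variance
    return pred, var

x = np.array([0.0, 1.0, 2.0])                   # existing simulation points
y = np.array([0.5, -0.2, 0.1])
candidates = np.linspace(0.0, 10.0, 101)
pred, var = simple_krige(x, y, candidates)
pilot = candidates[np.argmax(var)]   # next "pilot point": max kriging variance
```

Running the expensive simulation at the max-variance location and refitting is exactly the adaptive enrichment loop the abstract describes: each pilot point is placed where the metamodel is least trustworthy.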
A comparison of SIFT, PCA-SIFT and SURF
This paper compares three robust feature detection methods: Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)-SIFT, and Speeded Up Robust Features (SURF). Lowe presented SIFT [1], which has been used successfully in recognition, stitching and many other applications because of its robustness. Yan Ke [2] proposed a variant of SIFT that uses PCA to normalize the gradient patch instead of a histogram. H. Bay [3] presented SURF, a faster method that uses a Fast-Hessian detector. The performance of the three methods is compared for scale changes, rotation, blur, illumination changes and affine transformations, using repeatability as the evaluation measure. Additionally, RANSAC is used to reject inconsistent matches [4]. SIFT is stable in most situations except rotation and illumination changes. SURF is the fastest, with performance comparable to SIFT, while PCA-SIFT shows its advantages under rotation, blur and illumination changes.
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVES
Polygonal approximation plays a vital role in ubiquitous applications such as multimedia, geographic information, and object recognition. An extensive number of polygonal approximation techniques for digital planar curves have been proposed over the last decade, but there are no survey papers on the recently proposed techniques. A polygon is a collection of edges and vertices. Objects are represented using edges and vertices, or contour points (i.e., a polygon). Polygonal approximation represents the object with a smaller number of dominant points (fewer edges and vertices), thereby reducing computational cost and memory usage. This paper provides a comparative study of polygonal approximation techniques for digital planar curves with respect to their computation and efficiency.
Nose Tip Detection Using Shape index and Energy Effective for 3d Face Recogni...
Assessing Error Bound For Dominant Point Detection
This paper compares three error bounds that can be used to make dominant point detection methods non-parametric. The three error bounds are based on the error in slope estimation due to digitization. However, each of the three methods takes a different approach for calculating the error bounds. This results into slightly different natures of the three methods and slightly different values. The impact of these error bounds is studied in the context of the non-parametric version of the widely used RDP method [1, 2] of dominant point detection. It is seen that the recently derived error bound (the third error bound in this paper), which depends on both the length and the slope of the line segment, provides the most balanced dominant point detection results for a variety of curves.
EM algorithm is a common algorithm in data mining techniques. With the idea of using two iterations of E and M, the algorithm creates a model that can assign class labels to data points. In addition, EM not only optimizes the parameters of the model but also can predict device data during the iteration. Therefore, the paper focuses on researching and improving the EM algorithm to suit the LiDAR point cloud classification. Based on the idea of breaking point cloud and using the scheduling parameter for step E to help the algorithm converge faster with a shorter run time. The proposed algorithm is tested with measurement data set in Nghe An province, Vietnam for more than 92% accuracy and has faster runtime than the original EM algorithm.
LIDAR POINT CLOUD CLASSIFICATION USING EXPECTATION MAXIMIZATION ALGORITHMijnlc
EM algorithm is a common algorithm in data mining techniques. With the idea of using two iterations of E and M, the algorithm creates a model that can assign class labels to data points. In addition, EM not only optimizes the parameters of the model but also can predict device data during the iteration. Therefore, the paper focuses on researching and improving the EM algorithm to suit the LiDAR point cloud classification. Based on the idea of breaking point cloud and using the scheduling parameter for step E to help the algorithm converge faster with a shorter run time. The proposed algorithm is tested with measurement data set in Nghe An province, Vietnam for more than 92% accuracy and has faster runtime than the original EM algorithm.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...Waqas Tariq
Generalized method of moment estimating function enables one to estimate regression parameters consistently and efficiently. However, it involves one major computational problem: in complex data settings, solving generalized method of moments estimating function via Newton-Raphson technique gives rise often to non-invertible Jacobian matrices. Thus, parameter estimation becomes unreliable and computationally inefficient. To overcome this problem, we propose to use secant method based on vector divisions instead of the usual Newton-Raphson technique to estimate the regression parameters. This new method of estimation demonstrates a decrease in the number of non-convergence iterations as compared to the Newton-Raphson technique and provides reliable estimates.
Symbol Based Modulation Classification using Combination of Fuzzy Clustering ...CSCJournals
Most of approaches for recognition and classification of modulation have been founded on modulated signal’s components. In this paper, we develop an algorithm using fuzzy clustering and consequently hierarchical clustering algorithms considering the constellation of the received signal to identify the modulation types of the communication signals automatically. The simulation that has been conducted shows high capability of this method for recognition of modulation levels in the presence of noise and also, this method is applicable to digital modulations of arbitrary size and dimensionality. In addition this classification finds the decision boundary of the signal which is critical information for bit detection.
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in the depth estimation. Camera calibration is the most important step in stereo vision. The calibration step is used to generate an intrinsic parameter of each camera to get a better disparity map. In general, the calibration process is done manually by using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability required to overcome this problem. Self-calibration required a robust and good matching algorithm to find the key feature between images as reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process. The matching algorithms used in this research are SIFT, SURF, and ORB. The result shows that SIFT performs better than other methods.
Extended hybrid region growing segmentation of point clouds with different re...csandit
In the recent years, 3D city reconstruction is one of the active researches in the field of
photogrammetry. The goal of this work is to improve and extend region growing based
segmentation in the X-Y-Z image in the form of 3D structured data with combination of spectral
information of RGB and grayscale image to extract building roofs, streets and vegetation. In
order to process 3D point clouds, hybrid segmentation is carried out in both object space and
image space. Our experiments on two case studies verify that updating plane parameters and
robust least squares plane fitting improves the results of building extraction especially in case
of low accurate point clouds. In addition, region growing in image space has been derived to
the fact that grayscale image is more flexible than RGB image and results in more realistic
building roofs.
Extended hybrid region growing segmentation of point clouds with different re...
Ih2616101615
V J Patel, R B Gandhi / International Journal of Engineering Research and Applications (IJERA)
ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 6, November-December 2012, pp. 1610-1615
Evaluation of Spherical Form Error Using Maximum Distance Point Strategy (MDPS)

V J Patel, R B Gandhi
(Department of Mechanical Engineering, BVM Engineering College, Vallabh Vidyanagar - 388 120, Gujarat, India)
(Department of Mathematics, BVM Engineering College, Vallabh Vidyanagar - 388 120, Gujarat, India)

ABSTRACT
Calibration of the probe prior to any measurement by a Coordinate Measuring Machine (CMM) is a very important activity. A reference sphere is used for the calibration; hence, measurement of a spherical feature using a CMM is one of the important operations in precision engineering. The operation requires an efficient computational algorithm, as it has to determine the diameter and center of the spherical feature. This paper proposes a strategy, named the Maximum Distance Point Strategy (MDPS), for minimizing the sphericity of a spherical feature evaluated from CMM-measured points. The results of MDPS are compared with those of the very well known Least Squares Method (LSM). It has been found that the results are comparable if the measured points are uniformly distributed; if the points are not uniformly distributed, the MDPS results are better than the LSM results.

Keywords - Sphericity, Best-fit Sphere, Coordinate Measuring Machine, Least Squares Method, Maximum Distance Point Strategy

1. Introduction
Measurement of a geometric feature of a manufactured part by a Coordinate Measuring Machine (CMM) involves a collection of points. These points are fitted to an appropriate geometric feature such as a plane, line, circle (hole), sphere or cylinder by a suitable fitting algorithm. Calibration of the probe is essential before any measurement process is carried out, in order to compensate for the radius of the probe; this calibration is carried out using a reference sphere. Apart from this, the sphere is an important feature of manufactured components, and when such components are manufactured, closeness to the required dimension is expressed in terms of sphericity.

The ANSI Dimensioning and Tolerancing Standard Y14.5 [1] defines that the form tolerances on a component must be evaluated with reference to an ideal geometric feature. CMM software evaluates the sphericity of spherical features by establishing a sphere as the reference geometric feature from the measured points. A common way of measuring how well a function fits is the least-squares criterion: the error is calculated by adding up the squares of the errors at each of the observation points. This is a very natural measure; the squaring is done to stop cancellations among errors with different signs. Obviously, it is most desirable to find the choice of parameters that minimizes this error, and it turns out that this choice can be computed efficiently. Hence, this method is called the Least Squares Method (LSM) [2]. Least-squares fitting requires an optimization algorithm to minimize a non-linear objective function; the Gauss-Newton Algorithm (GNA) is a very prevalent algorithm for minimizing such an objective function.

P. Bourdet, C. Lartigue and F. Leveaux have applied LSM to find the location of the center and the radius of a sphere in order to calibrate a probe [3]. They have analyzed the various errors involved during the steps leading to the identification of the sphere and the location of its center. The errors include those associated with surface accessibility for sampling points, sampling strategies, the optimization algorithm, and the probe diameter versus the reference sphere diameter. A calibration strategy is proposed as a function of the sampled points, the optimization algorithm and the geometric surface involved.

T. Kanada [4] has suggested the calculation of the value of spherical form errors (sphericity). The iterative least-squares method and the minimum zone method are applied to simulated data; to calculate the minimum zone sphericity, the downhill simplex method is applied. It is concluded that the sphericity calculated by the iterative least-squares method and by the minimum zone method are comparable. 3D measurement of a whole spherical surface, as per the definition of sphericity, can be simplified in terms of 2D measurement of cross-sectional profiles. This simplification may give rise to evaluation errors in the sphericity, because sphericity should be evaluated three-dimensionally. Considering this, T. Kanada [5] has determined the minimum number of cross-sectional profiles that need to be measured in order to estimate the sphericity from roundness values accurately. A recommended number of cross-sectional profile measurements is proposed by means of a statistical process. The accuracy of the sphericity values estimated using this procedure is not investigated in that work.
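The least-squares sphere fit that the cited works rely on can be sketched in a few lines of code. The following is a minimal illustration, not the authors' MATLAB implementation (the function name and starting values are my own): it minimizes the sum of squared radial deviations with a Gauss-Newton iteration, using the centroid as the initial center.

```python
import numpy as np

def gauss_newton_sphere(pts, iters=50):
    """Least-squares sphere fit: minimize sum of (|P_i - c| - r)^2
    with a Gauss-Newton iteration (sketch; centroid as starting center)."""
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)                         # initial center: centroid
    r = np.linalg.norm(pts - c, axis=1).mean()   # initial radius: mean distance
    for _ in range(iters):
        d = np.linalg.norm(pts - c, axis=1)      # distances |P_i - c|
        e = d - r                                # radial deviations e_i
        # Jacobian of e_i with respect to (cx, cy, cz, r)
        J = np.hstack([-(pts - c) / d[:, None], -np.ones((len(pts), 1))])
        step, *_ = np.linalg.lstsq(J, -e, rcond=None)
        c, r = c + step[:3], r + step[3]
    return c, r
```

For points lying exactly on a sphere, the residual is zero at the optimum and the iteration converges rapidly; for noisy CMM data it returns the least-squares center and radius.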
C. M. Shakarji [6] suggested the use of the Levenberg-Marquardt Algorithm (LMA) to minimize the squared error distances for various features, including the sphere. LMA is a trust-region strategy which provides a numerical solution to the problem of minimizing a non-linear function. LMA is more robust than GNA; however, even for well-behaved functions and reasonable starting parameters, LMA tends to be a bit slower than GNA. LMA can also be viewed as an improved GNA with a trust-region approach [7, 8]. Also, convergence of the solution is highly dependent on the choice of the Levenberg-Marquardt parameter, and its selection is challenging.

A theoretical derivation of the minimum zone criterion for sphericity error, based on the principle of minimum potential energy, is proposed by K. C. Fan and J. C. Lee [9]. They have converted the problem of finding the minimum zone sphericity error into the problem of finding the minimum elastic potential energy of the corresponding mechanical system.

G. L. Samuel and M. S. Shunmugam [10] have developed methods based on computational geometry to establish the assessment spheres. The suggested methods start with the construction of a 3-D convex hull, established using computational geometric concepts. For establishing a 3-D inner hull, a new heuristic method is suggested. A new concept of a 3-D equidistant (ED) line is introduced in the method; based on this concept, the authors have constructed 3-D farthest and nearest equidistance diagrams for establishing the assessment spheres. The proposed algorithms are implemented and validated with simulated data.

Techniques for evaluating circularity and sphericity error from CMM data are presented by G. L. Samuel and M. S. Shunmugam [11]. It is summarized that the form error can be evaluated directly from CMM data by employing the sphere as the assessment feature and using normal deviations. The CMM data can also be transformed by applying appropriate methods that not only suppress the size but also introduce distortion; in that work, the form error is evaluated from the transformed data by employing the limacon/limacoid as the assessment feature and using linear deviations. Methods for handling both CMM and transformed data are presented.

The authors of this paper have developed a novel strategy to evaluate circularity, named the Maximum Distance Point Strategy (MDPS) [12]. In the present work, the same strategy is extended to evaluate the spherical form error. Circularity is a 2-dimensional feature while a sphere is a 3-dimensional feature; hence, the procedure for selecting the points at maximum distance is different for a sphere than for a circle, and the point selection procedure should be capable of eliminating coplanar points. The proposed strategy is compared with the least-squares fitting method using the Gauss-Newton algorithm and with other published methods/algorithms in the literature, for both the sphericity form error and the sum of squared deviations.

This is a customized approach to finding the best-fit sphere for evaluating sphericity, rather than a general unconstrained nonlinear optimization. It is based on the postulate that "a unique sphere passes through any four non-coplanar points in space, where no three points are in a line." Hence, a selection procedure for four points (a quadruplet) is suggested in the next section.

2. Point Selection Procedure
The selection procedure for a quadruplet (A, B, C, D) is as follows.
1. Let P_i(x_i, y_i, z_i), i = 1, 2, ..., n and n > 3, be the set of n points.
2. Select any point from P_i and name it A; this is the first point of the quadruplet.
3. Calculate the distance from point A to each point P_i using equation (2.1):

AP_i = sqrt[(x_a - x_i)^2 + (y_a - y_i)^2 + (z_a - z_i)^2]    (2.1)

where AP_i is the distance from point A to point P_i, i = 1, 2, ..., n; (x_a, y_a, z_a) are the coordinates of point A and (x_i, y_i, z_i) are the coordinates of point P_i.
4. The second point of the quadruplet (point B) is the one with the maximum value of AP_i.
5. Select the third point of the quadruplet (point C) from P_i, i = 1, 2, ..., n, such that its normal distance from the line AB is maximum. To determine the normal distance, equation (2.2) is used:

d(P_i)_AB = [(x_a - x_i)^2 + (y_a - y_i)^2 + (z_a - z_i)^2][(x_b - x_a)^2 + (y_b - y_a)^2 + (z_b - z_a)^2]
            - [(x_a - x_b)(x_a - x_i) + (y_a - y_b)(y_a - y_i) + (z_a - z_b)(z_a - z_i)]^2    (2.2)

where d(P_i)_AB is the normal distance parameter from point P_i to the line AB, i = 1, 2, ..., n, and (x_b, y_b, z_b) are the coordinates of point B.
Note: points with d(P_i)_AB = 0 should be discarded [11].
6. The normal distance parameter from point P_i to the plane passing through the selected points A, B and C is determined by equation (2.3):

d(P_i)_ABC = abs det | x_i - x_a   y_i - y_a   z_i - z_a |
                     | x_b - x_a   y_b - y_a   z_b - z_a |    (2.3)
                     | x_c - x_a   y_c - y_a   z_c - z_a |
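The selection steps above can be sketched in code. This is a minimal illustration under the stated postulate (the function and variable names are my own, not from the paper); it uses the cross product and the scalar triple product, which are monotone equivalents of the distance parameters of equations (2.2) and (2.3), so the argmax choices are the same.

```python
import numpy as np

def select_quadruplet(pts, start=0):
    """Pick four non-coplanar points (A, B, C, D) per the MDPS selection idea:
    B farthest from A, C farthest from line AB, D farthest from plane ABC."""
    pts = np.asarray(pts, dtype=float)
    A = pts[start]
    # Step 4: B maximizes the distance |AP_i|  (equation 2.1)
    B = pts[np.argmax(np.linalg.norm(pts - A, axis=1))]
    # Step 5: C maximizes |AP_i x AB|, equivalent to the parameter of eq. (2.2)
    ab = B - A
    C = pts[np.argmax(np.linalg.norm(np.cross(pts - A, ab), axis=1))]
    # Steps 6-7: D maximizes |det[AP_i, AB, AC]|, the plane-distance
    # parameter of eq. (2.3); it is zero for points coplanar with A, B, C
    ac = C - A
    D = pts[np.argmax(np.abs(np.cross(ab, ac) @ (pts - A).T))]
    return A, B, C, D
```

Because D maximizes the plane-distance parameter, a quadruplet with zero volume can only occur if every measured point is coplanar, which the procedure is designed to avoid.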
where d(P_i)_ABC is the normal distance parameter from point P_i to the plane passing through points A, B and C, i = 1, 2, ..., n, and (x_c, y_c, z_c) are the coordinates of point C.
7. Point D is the available point with the maximum value of d(P_i)_ABC.

The selection procedure is followed for each point P_i; hence, there are n quadruplets and n candidate spheres. Amongst all candidate spheres, those which are far from the solution are eliminated heuristically, as discussed in section 3.3. The average of the center coordinates of the selected spheres and the average of their radii represent the center and radius of the best-fit sphere for a given set of points.

3. Formulations
For P_i(x_i, y_i, z_i), i = 1, 2, ..., n and n >= 4:

3.1 Least Squares Method
A sphere with center (x_0, y_0, z_0) and radius r_0 is found such that it minimizes the sum of squared deviations. The sphere equation in implicit form can be written as

f(x, y, z) = (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - r_0^2 = 0

The deviation for a point P_i may be explicitly written as

e_i = sqrt[(x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2] - r_0,  i = 1, 2, ..., n    (3.1)

The sum of squared deviations is then described as

e_s = sum_{i=1}^{n} e_i^2 = sum_{i=1}^{n} { sqrt[(x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2] - r_0 }^2    (3.2)

3.2 Sphericity error
Denote the maximum value among the deviations e_i, i = 1, 2, ..., n, as e_max and the minimum value as e_min. Then the sphericity error h can be computed as

h = e_max - e_min    (3.3)

According to the minimum zone criterion given by ANSI Standard Y14.5 [1], the center (x_0, y_0, z_0) and radius r_0 of the ideal sphere should be determined such that the sphericity h is a minimum.

3.3 Maximum Distance Point Strategy
Fix a point, say P_k, and select three other points as explained in section 2. Let the coordinates of the four points be (x_a, y_a, z_a), (x_b, y_b, z_b), (x_c, y_c, z_c) and (x_d, y_d, z_d). Solve the following system of linear algebraic equations for the center (x_0, y_0, z_0) of the sphere:

2(x_b - x_a)x + 2(y_b - y_a)y + 2(z_b - z_a)z = x_b^2 - x_a^2 + y_b^2 - y_a^2 + z_b^2 - z_a^2
2(x_c - x_a)x + 2(y_c - y_a)y + 2(z_c - z_a)z = x_c^2 - x_a^2 + y_c^2 - y_a^2 + z_c^2 - z_a^2
2(x_d - x_a)x + 2(y_d - y_a)y + 2(z_d - z_a)z = x_d^2 - x_a^2 + y_d^2 - y_a^2 + z_d^2 - z_a^2

and calculate r_0 using r_0 = sqrt[(x_a - x_0)^2 + (y_a - y_0)^2 + (z_a - z_0)^2]. This is the sphere passing through (x_a, y_a, z_a), (x_b, y_b, z_b), (x_c, y_c, z_c) and (x_d, y_d, z_d). Repeating the procedure by fixing each point P_i, i = 1, 2, ..., n, n centers and n radii are found. To select the best-fit sphere, the following heuristic method is used.
1. Let

e_k = sum_{i=1}^{n} { sqrt[(x_i - a_k)^2 + (y_i - b_k)^2 + (z_i - c_k)^2] - r_k }^2

where (a_k, b_k, c_k) is the center and r_k the radius of the kth sphere, k = 1, 2, ..., n.
2. The mean and standard deviation of e_k, k = 1, 2, ..., n, are found.
3. The spheres with e_k less than or equal to the mean of e_k are selected.
4. Calculate the mean of the center coordinates and of the radii of the selected spheres. This gives the center and radius of the best-fit sphere.
5. Calculate the sphericity using equation (3.3) for the sphere found in step 4.
6. Steps 1 to 5 are repeated for the (n - m) remaining spheres, where m is the number of spheres not selected in step 3.
7. If the sphericity calculated in step 5 is less than the sphericity calculated in the previous iteration, go to step 1; otherwise go to step 8.
8. Stop the iterations. The sphere found is the claimed best-fit sphere.

4. Results and Discussion
MATLAB programs for evaluating sphericity by MDPS and LSM were executed on a computer with an Intel Atom processor (800 MHz clock speed) and 1 GB RAM. The programs were run on a CMM-measured data set: a reference sphere was measured using the SCAN facility available on the CMM, which ensures that the points in the dataset are uniformly spaced. These measured points are tabulated in Table 1.

Table 2 shows the results of sphericity (h) evaluation for the dataset presented in Table 1. The results are expressed up to six decimal places. It can be observed that the sphericity error obtained by MDPS
is not less than that of LSM and the CMM result; however, the order of the sphericity error is 10^-2 µm. Table 2 also shows the comparison of the sum of squared deviations (e_s). The sum of squared deviations of MDPS is less than that of the CMM result and more than that of LSM; its order is 10^-8 mm^2.

Since the points are uniformly distributed, the results obtained from each method are very precise and close to each other. Three more datasets of points were manually measured for the same reference sphere; these datasets contain 44, 37 and 23 points respectively. Table 3 shows the sphericity obtained by MDPS and LSM. It can be seen that the sphericity achieved by MDPS is less than that of LSM. Table 3 also indicates that the sum of squared deviations of MDPS is not less than that of LSM, which is not the objective of the method.

5. Conclusions
The present paper shows that the Maximum Distance Point Strategy (MDPS) suggested for the circle [12] extends to the sphere. It is concluded that when the points are uniformly distributed, MDPS gives results comparable with the Least Squares Method (LSM); when the points are not uniformly distributed, MDPS gives better results than LSM. The method can also be used as a starting solution for a simplex search. The developed methodology has great potential for implementation in CMM software for the evaluation of spherical features.

References
[1] ANSI Standard Y14.5, "Dimensioning and Tolerancing", The American Society of Mechanical Engineers, New York, 2009.
[2] R. Sedgewick, Algorithms (Addison-Wesley Publishing Co., 1983).
[3] P. Bourdet, C. Lartigue, F. Leveaux, Effects of data point distribution and mathematical model on finding the best-fit sphere to data, Precision Engineering, 15(3), 1993, 150-157. http://dx.doi.org/10.1016/0141-6359(93)90002-R
[4] T. Kanada, Evaluation of spherical form errors - Computation of sphericity by means of minimum zone method and some examinations with using simulated data, Precision Engineering, 17(4), 1995, 281-289. http://dx.doi.org/10.1016/0141-6359(95)00017-8
[5] T. Kanada, Estimation of sphericity by means of statistical processing for roundness of spherical parts, Precision Engineering, 20(2), 1997, 117-122. http://dx.doi.org/10.1016/S0141-6359(97)00013-5
[6] C. M. Shakarji, Least-Squares Fitting Algorithms of the NIST Algorithm Testing System, Journal of Research of the National Institute of Standards and Technology, 103(6), 1998, 633-641.
[7] J. Nocedal, S. Wright, Numerical Optimization (Springer-Verlag, New York, 1999).
[8] A. Antoniou, W.-S. Lu, Practical Optimization: Algorithms and Engineering Applications (Springer Science+Business Media, 2007).
[9] K. C. Fan, J. C. Lee, Analysis of minimum zone sphericity error using minimum potential energy theory, Precision Engineering, 23(2), 1999, 65-72. http://dx.doi.org/10.1016/S0141-6359(98)00024-5
[10] G. L. Samuel, M. S. Shunmugam, Evaluation of sphericity error from coordinate measurement data using computational geometric techniques, Computer Methods in Applied Mechanics and Engineering, 190, 2001, 6765-6781. http://dx.doi.org/10.1016/S0045-7825(01)00220-1
[11] G. L. Samuel, M. S. Shunmugam, Evaluation of circularity and sphericity from coordinate measurement data, Journal of Materials Processing Technology, 139(1-3), 2003, 90-95. http://dx.doi.org/10.1016/S0924-0136(03)00187-0
[12] V. J. Patel, R. B. Gandhi, Evaluation of Circularity from Coordinate data using Maximum Distance Point Strategy (MDPS), International Journal of Engineering and Technology, 1(4), 2012. ISSN 2278-0181.
Table 1 (fragment): CMM-measured points (mm)
Point    x           y           z
85       182.7367    453.6017    -477.5433
86       180.2401    453.809     -477.5433
91       181.3477    445.3688    -477.5433
92       183.7082    446.2146    -477.5433
Table 2: Results of sphericity evaluation for the measured points tabulated in Table 1.

Method       x0 (mm)      y0 (mm)      z0 (mm)       r0 (mm)     h (µm)     Sum of squared deviation, es
MDPS         181.150541   449.935688   -489.279092   12.488976   0.082138   5.7158×10^-8
LSM          181.150541   449.635692   -489.279098   12.488985   0.073402   5.2312×10^-8
CMM result   181.150541   449.635693   -489.279077   12.489      0.080474   5.8423×10^-8
Table 3: Results of sphericity evaluation for the manually measured points.

                    MDPS                                LSM
Number of points    h (µm)    Sum of squared dev., es   h (µm)    Sum of squared dev., es
44                  2.0791    7.9354                    2.1182    6.9776
37                  2.0810    7.4879                    2.0852    6.0300
23                  2.0101    3.4039                    2.0854    3.4771
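The MDPS evaluation of sections 2 and 3.3 can be sketched end-to-end. The following is a minimal illustration, not the authors' MATLAB code (all names are my own): it fixes each point in turn as A, builds the quadruplet by the maximum-distance rules, solves the linear system of section 3.3 for each candidate sphere, keeps the spheres with e_k at or below the mean, and averages them. For brevity it performs a single pass of the heuristic, without the re-iteration of steps 6-8.

```python
import numpy as np

def sphere_through(A, B, C, D):
    """Center and radius of the sphere through four non-coplanar points,
    via the linear system of section 3.3."""
    M = 2.0 * np.stack([B - A, C - A, D - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A, D @ D - A @ A])
    c = np.linalg.solve(M, rhs)
    return c, np.linalg.norm(A - c)

def mdps_sphere(pts):
    """One pass of MDPS: n candidate spheres, keep those with e_k <= mean(e_k),
    average their centers and radii, return (center, radius, sphericity h)."""
    pts = np.asarray(pts, dtype=float)
    centers, radii = [], []
    for k in range(len(pts)):
        A = pts[k]
        B = pts[np.argmax(np.linalg.norm(pts - A, axis=1))]
        C = pts[np.argmax(np.linalg.norm(np.cross(pts - A, B - A), axis=1))]
        D = pts[np.argmax(np.abs(np.cross(B - A, C - A) @ (pts - A).T))]
        c, r = sphere_through(A, B, C, D)
        centers.append(c)
        radii.append(r)
    centers, radii = np.array(centers), np.array(radii)
    # e_k: sum of squared radial deviations of all points from the kth sphere
    d = np.linalg.norm(pts[None, :, :] - centers[:, None, :], axis=2)
    e = np.sum((d - radii[:, None]) ** 2, axis=1)
    keep = e <= e.mean()
    c0, r0 = centers[keep].mean(axis=0), radii[keep].mean()
    dev = np.linalg.norm(pts - c0, axis=1) - r0   # deviations e_i, eq. (3.1)
    return c0, r0, dev.max() - dev.min()          # h = e_max - e_min, eq. (3.3)
```

For points lying exactly on a sphere, every candidate sphere coincides with it, so the averaged center and radius are exact and the sphericity h vanishes; for measured data the heuristic discards the candidate spheres far from the solution before averaging.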