Ill-posedness formulation of the emission source localization in the radio- d... (Ahmed Ammar Rebai PhD)
To contact the authors: tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown that the solution of the radio-transient source localization problem depends strongly on the radio-shower times of arrival on the antennas, and that such solutions can be purely numerical artifacts. Based on a detailed analysis of already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneracy of the set of solutions and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical analysis. Several properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, in which each point in a given dataset is replaced by a Gaussian. The width of the Gaussian is a σ value, a hyper-parameter which can be manually defined and tuned to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an outstanding task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is not only useful for limiting the number of solutions to look for by numerical means; it also allows us to propose a new "per block" numerical approach. This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
This document proposes a method for obtaining a sparse polynomial model from time series data. It uses an optimal minimal nonuniform time embedding to construct a time delay kernel from which a polynomial basis is built. A sparse model is then obtained by solving a regularized least squares problem that minimizes error while penalizing model complexity. The method is applied to generate a model of the Mackey-Glass chaotic system from time series data.
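A rough sketch of that pipeline under stated assumptions (Python; the lags mimic a nonuniform delay embedding, a quadratic basis stands in for the document's polynomial basis, and an l2 penalty stands in for its complexity penalty, whose exact form is not given here):

import numpy as np

def delay_embed(x, lags):
    """Build a nonuniform time-delay matrix from a scalar series x:
    column j holds x shifted by lags[j]; the target is the next value."""
    T = max(lags)
    X = np.column_stack([x[T - l:len(x) - l] for l in lags])
    return X, x[T:]

def fit_sparse_poly(x, lags=(1, 2, 17), lam=1e-3):
    """Quadratic polynomial basis over the delay vector, then a
    regularized least-squares fit (ridge here; an l1 penalty would
    promote sparsity more directly)."""
    X, y = delay_embed(np.asarray(x, float), lags)
    d = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    Phi = np.column_stack(cols)
    # solve (Phi^T Phi + lam I) w = Phi^T y
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)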
This document discusses extrapolation, which is constructing new data points outside a range of known data points based on trends. It summarizes extrapolation techniques, assumptions, advantages, and disadvantages. Common extrapolation methods include least squares curve fitting, smooth curve fitting, and nonlinear curve fitting using the Levenberg-Marquardt algorithm. Examples of extrapolation applications given are weather and hurricane forecasting, geophysical modeling, and estimating properties at temperature and depth extremes.
Firefly Algorithm, Stochastic Test Functions and Design Optimisation (Xin-She Yang)
This document describes the Firefly Algorithm, a metaheuristic optimization algorithm inspired by the flashing behavior of fireflies. It summarizes the main concepts of the algorithm, including how firefly attractiveness varies with distance, and provides pseudocode for the algorithm. It also introduces some new test functions with singularities or stochastic components that can be used to validate optimization algorithms. As an example application, the Firefly Algorithm is used to find the optimal solution to a pressure vessel design problem.
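A minimal sketch of one iteration of the attractiveness mechanism just described (Python; the parameter values beta0, gamma and alpha are illustrative assumptions, not taken from the paper):

import numpy as np

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.1):
    """One iteration of the basic Firefly Algorithm: each firefly moves
    toward every brighter (lower-cost) firefly, with attractiveness
    decaying as beta0 * exp(-gamma * r^2); alpha scales a random walk."""
    n, d = X.shape
    cost = np.array([f(x) for x in X])
    Xnew = X.copy()
    for i in range(n):
        for j in range(n):
            if cost[j] < cost[i]:  # firefly j is brighter, so i moves toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                Xnew[i] += beta * (X[j] - X[i]) + alpha * (np.random.rand(d) - 0.5)
    return Xnew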
This document discusses strategies for parallelizing spectral methods. Spectral methods are global in nature due to their use of global basis functions, making them challenging to parallelize on fine-grained architectures. However, the document finds that spectral methods can be effectively parallelized. The main computational steps in spectral methods are the calculation of differential operators on functions and solving linear systems, both of which can exploit parallelism. Domain decomposition techniques may also help parallelize computations over non-Cartesian domains.
1) The document describes a vehicle routing project that uses a multi-commodity network flow formulation to explore sub-optimal solutions for object classification with noisy sensors on a 2D grid.
2) It formulates the problem as assigning tasks to vehicles (commodities) that must flow through the graph in 4 directions while being constrained by boundaries and returning to base.
3) The algorithm uses a look-ahead window to consider future moves and a rollout step using linear programming to approximate costs farther in time and decide optimal vehicle movements.
Research on Chaotic Firefly Algorithm and the Application in Optimal Reactive... (TELKOMNIKA JOURNAL)
The document proposes a chaotic firefly algorithm (CFA) to overcome the original firefly algorithm's tendency to get stuck in local optima. CFA introduces chaos initialization, chaos-based population regeneration, and a linearly decreasing inertia weight to increase global search ability. CFA is tested on six benchmark functions and applied to optimize reactive power dispatch in an IEEE 30-bus system. Results show CFA performs better than the original firefly algorithm and particle swarm optimization in finding optimal solutions faster.
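A minimal sketch of the chaos-initialization idea (a logistic map rescaled to the search box; the map constant 4.0 and the rescaling are common choices, assumed here rather than taken from the paper):

import numpy as np

def chaos_init(n, d, lo, hi, x0=0.7):
    """Initialize an n-by-d population with logistic-map chaotic
    sequences rescaled to the search interval [lo, hi]."""
    pop = np.empty((n, d))
    x = x0
    for i in range(n):
        for j in range(d):
            x = 4.0 * x * (1.0 - x)          # logistic map in its chaotic regime
            pop[i, j] = lo + (hi - lo) * x   # rescale from [0, 1] to [lo, hi]
    return pop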
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Gradient descent is an optimization algorithm used to minimize a cost function by iteratively adjusting parameter values in the direction of the steepest descent. It works by calculating the derivative of the cost function to determine which direction leads to lower cost, then taking a step in that direction. This process repeats until reaching a minimum. Gradient descent is simple but requires knowing the gradient of the cost function. Backpropagation extends gradient descent to neural networks by propagating error backwards from the output to calculate gradients to update weights.
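A minimal sketch of this loop in Python (the cost function, learning rate and tolerance below are illustrative assumptions, not taken from the document):

def gradient_descent(grad, x0, lr=0.01, tol=1e-8, max_iter=10000):
    """Iteratively step against the gradient of the cost function."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        x = x - lr * g
        if abs(g) < tol:  # (near-)zero slope: a minimum has been reached
            break
    return x

# Example: minimize f(x) = (x - 3)^2, whose derivative is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges to ~3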
The document describes two algorithms, gradient descent and Gauss-Newton, for determining location from GPS satellite signals. It outlines the process of linearizing the pseudorange equations and deriving the iterative algorithms that minimize the error between measured and calculated ranges. Simulation results on synthetic noiseless data show Gauss-Newton converging much faster than gradient descent: 4 iterations versus over 50,000. Gauss-Newton is thus found to be the more suitable algorithm for GPS positioning.
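A generic Gauss-Newton iteration of the kind summarized here (a sketch, not the document's GPS-specific code; residual is assumed to return measured-minus-computed ranges and jac its Jacobian):

import numpy as np

def gauss_newton(residual, jac, x0, iters=10):
    """Gauss-Newton for nonlinear least squares: at each step solve the
    normal equations J^T J dx = -J^T r and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jac(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x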
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Estimation of global solar radiation by using machine learning methods (mehmet şahin)
In this study, global solar radiation (GSR) was estimated at 53 locations using ELM, SVR, k-NN, LR and NU-SVR methods. The methods were trained with a two-year data set, and their accuracy was tested with a one-year data set; each year of data covered 12 months. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as inputs for developing the models, and GSR was obtained as the output. Values of vapour pressure deficit and land surface temperature were taken from the radiometry of the NOAA-AVHRR satellite. Estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR: its RMSE and MBE values were 1.4972 MJ/m² and 0.2652 MJ/m², respectively, with an R value of 0.9728. The worst prediction method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m² and 2.4546 MJ/m². The statistical results show that the ELM, SVR, k-NN and NU-SVR methods can be used for estimation of GSR.
The document presents a procedure for quantifying the roughness of diamond samples at the nanoscale. It involves calculating the ratio of the total surface area of the sample to its base area using 3D calculus. The procedure approximates the surface area formula and provides 11 steps to determine roughness factor from the data. It was tested on 3 samples and produced roughness factors of 26.17, 29.98, and 5.71 respectively. The goal was to create an easy-to-use method for the Materials Research Team to evaluate nano-scale coatings.
This poster was created in LaTeX on a Dell Inspiron laptop with a Linux Fedora Core 4 operating system. The background image and the animation snapshots are dxf meshes of elastic waveform solutions, rendered on a Windows machine using 3D Studio Max.
The document describes a seminar report on using a divide and conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute force algorithm that compares all pairs, taking O(n^2) time, and a divide and conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(nlogn) time. It provides pseudocode for both algorithms and discusses the history and improvements made to the closest pair problem over time, reducing the number of distance computations needed.
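A compact sketch of the divide-and-conquer recursion described above (Python; this variant re-sorts the strip at each level, giving O(n log² n) rather than the fully pre-sorted O(n log n) version, and assumes at least two input points):

import math

def closest_pair(points):
    """Divide-and-conquer closest pair in the plane; returns the
    smallest pairwise distance."""
    pts = sorted(points)  # sort once by x-coordinate
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def rec(p):
        n = len(p)
        if n <= 3:  # brute force on small cases
            return min(dist(a, b) for i, a in enumerate(p) for b in p[i + 1:])
        mid = n // 2
        x_mid = p[mid][0]
        d = min(rec(p[:mid]), rec(p[mid:]))
        # candidates within d of the dividing line, scanned in y-order
        strip = sorted((q for q in p if abs(q[0] - x_mid) < d), key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # at most 7 neighbours can matter
                d = min(d, dist(a, b))
        return d
    return rec(pts)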
This document describes numerical simulations of subsonic and supersonic flow through a choked nozzle using various schemes to solve the quasi-1D Euler equations. The author compares the steady state solutions from the Jameson-Schmidt-Turkel (JST) scheme (3, 4, and 5 stages), first-order Steger flux-vector splitting, and an analytic solution. The JST scheme converges quickly but is overly diffusive, while Steger converges slower but is less diffuse and more accurate near shocks. Higher-order JST stages and adjusting diffusion coefficients improve accuracy versus the analytic solution.
Trust Region Algorithm - Bachelor Dissertation (Christian Adom)
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
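As a concrete illustration of one subproblem solver mentioned above, a sketch of the Cauchy point for the standard trust-region model min gᵀp + ½ pᵀBp subject to ||p|| ≤ Δ (the textbook formula, not code from the dissertation):

import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point: minimizer of the quadratic model along the steepest
    descent direction, clipped to the trust-region radius delta.
    Assumes a nonzero gradient g."""
    gn = np.linalg.norm(g)
    gBg = g @ B @ g
    # step length along -g: full radius if curvature is non-positive
    tau = 1.0 if gBg <= 0 else min(gn**3 / (delta * gBg), 1.0)
    return -tau * (delta / gn) * g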
Development, Optimization, and Analysis of Cellular Automaton Algorithms to S... (IRJET Journal)
This document summarizes research on using cellular automaton algorithms to solve stochastic partial differential equations (SPDEs). It proposes a finite-difference method to approximate an SPDE modeling a random walk with angular diffusion. A Monte Carlo algorithm is also developed for comparison. Analysis finds a moderate correlation between the two methods, suggesting the finite-difference approach is reasonably accurate. It also identifies an inverse-square relationship between variables, linking to a foundational stochastic analysis concept. The research concludes the finite-difference method shows promise for approximating SPDEs while considering boundary conditions.
Theories and Applications of Spatial-Temporal Data Mining and Knowledge Disco... (Beniamino Murgante)
This document summarizes spatial-temporal data mining and knowledge discovery techniques. It discusses (1) clustering spatial data using scale-space filtering and regression-classification decomposition, (2) classifying spatial data using neural networks and decision trees, (3) discovering temporal processes using multifractal analysis, and (4) uncovering knowledge structures from relational spatial data using concept lattices. Various applications are described, including clustering typhoon tracks, earthquake data, and daily rainfall patterns to identify spatial and temporal patterns.
This document summarizes a study that used sigmoidal parameterization and Metropolis-Hastings (MH) inversion to estimate seismic velocity models from traveltime data. The key points are:
1) Sigmoidal functions were used to parameterize discontinuous velocity fields, allowing for sharp variations while maintaining continuity.
2) Ray tracing and the MH algorithm were used to invert traveltime data and estimate model parameters.
3) Tests on synthetic models showed the MH method produced higher resolution velocity models that better fit the observed traveltime data, compared to other global optimization methods like very fast simulated annealing.
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza... (Xin-She Yang)
This document proposes a new two-stage hybrid search method called the Eagle Strategy for solving stochastic optimization problems. The Eagle Strategy combines random search using Lévy walk with intensive local search using the Firefly Algorithm. It first uses Lévy walk to randomly explore the search space, then switches to the Firefly Algorithm to intensively search locally around good solutions. Numerical results suggest the Eagle Strategy is efficient for stochastic optimization problems.
Evaluation of the Sensitivity of Seismic Inversion Algorithms to Different St... (IJERA Editor)
This document evaluates the sensitivity of seismic inversion algorithms to wavelets estimated using different statistical methods. It summarizes two wavelet estimation techniques - the Hilbert transform method and smoothing spectra method. It also describes two inversion methods - Narrow-band inversion and a Bayesian approach. Numerical experiments were conducted to analyze the performance of the wavelet estimation methods and sensitivity of the inversion algorithms to estimated wavelets. The smoothing spectra method produced better wavelet estimates. The Bayesian approach yielded superior inversion results and more robust impedance estimates compared to Narrow-band inversion in all tests.
This document summarizes a novel algorithm for fast sparse image reconstruction from compressed sensing measurements. The algorithm uses adaptive nonlinear filtering strategies in an iterative framework. It formulates the image reconstruction problem using total variation minimization and solves it using a two-step iterative scheme. Numerical experiments show that the algorithm is efficient, stable, and fast compared to state-of-the-art methods, as it can reconstruct images from highly incomplete samples in just a few seconds with competitive performance.
The document describes the derivation of a simple Kalman filter. It begins by introducing a process state x and measurement z, related by equations with additive noise terms w and v. An a priori state estimate x⁻ is updated using a measurement z to give an a posteriori estimate x⁺. The blending factor K is derived by minimizing the a posteriori error variance, yielding K = s⁻ / (s⁻ + s_v), where s⁻ is the a priori variance and s_v is the measurement noise variance. This optimal K balances the a priori estimate and the measurement based on their relative uncertainties. The Kalman filter thus combines estimates and measurements in a way that minimizes the estimated error.
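A sketch of the scalar update just described (Python; the variable names are mine):

def kalman_update(x_prior, s_prior, z, s_v):
    """Scalar Kalman measurement update.
    x_prior, s_prior : a priori estimate and its variance
    z, s_v           : measurement and measurement-noise variance"""
    K = s_prior / (s_prior + s_v)         # blending factor from the derivation
    x_post = x_prior + K * (z - x_prior)  # a posteriori estimate
    s_post = (1.0 - K) * s_prior          # a posteriori error variance
    return x_post, s_post

# A precise measurement (small s_v) pulls x_post toward z; a noisy one
# (large s_v) leaves it near x_prior.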
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION (ijscai)
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm implements a margin in the classical perceptron algorithm, to reduce generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desirable fraction of the margin. This solution is then optimized further to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
APPLICATION OF PARTICLE SWARM OPTIMIZATION TO MICROWAVE TAPERED MICROSTRIP LINES (cseij)
This document discusses using Particle Swarm Optimization (PSO) to design a tapered microstrip transmission line to match an arbitrary load to a 50Ω line. PSO was used to optimize the impedances of a three section tapered line to minimize reflections. Simulations found impedances that gave good matching at 5GHz. PSO converged to solutions in under 1000 iterations. This demonstrates PSO's effectiveness in solving multi-objective microwave engineering optimization problems.
Application of particle swarm optimization to microwave tapered microstrip lines (cseij)
The application of metaheuristic algorithms has been of continued interest in the field of electrical engineering because of their powerful features. In this work a special design is made for a tapered transmission line used for matching an arbitrary real load to a 50Ω line. The problem at hand is to match this arbitrary load to the 50Ω line using a three-section tapered transmission line with impedances in decreasing order from the load, so the problem becomes optimizing an equation with three unknowns under various conditions. The optimized values are obtained using Particle Swarm Optimization. It can easily be shown that PSO is very strong in solving this kind of multi-objective optimization problem.
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the essential micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during food processing. The ESR spin-trapping technique is useful for the detection of highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin-trapping technique.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
What is greenhouse gasses and how many gasses are there to affect the Earth. (moosaasad1975)
What greenhouse gasses are, how they affect the Earth and its environment, what the future of the environment and the Earth is, and how the weather and climate are affected.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Phenomics assisted breeding in crop improvement (IshaGoswami9)
As the population is increasing and will reach about 9 billion by 2050, and due to climate change, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progress of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomics information at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste... (Sérgio Sacani)
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low- and high-mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emission from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Global optimization algorithm: Search for the Lost Valley (SLV)
Manuel Abarca
Urubamba, Perú
Email: manuel.z.abarca@outlook.es
A new global search algorithm is proposed here. This technique for finding a minimum (or maximum) of an objective function begins with a randomly generated population of models. The global minimum of this function must lie at some point of the variables space. The search for that minimum uses an arithmetic mean (centroid) of neighbouring models. There is also a tactic to escape from a local minimum. The new algorithm is tested on a seismological inversion problem, modelling the Earth through receiver functions, first with synthetic data and finally with real receiver-function data.

Keywords: computation of global search; optimization; geophysical inversion; seismology; receiver function
1. INTRODUCTION
Global search algorithms are computational tools used in many fields of science and industry; the goal is to solve a problem through the minimization (or maximization) of an objective function (or error function). Proposing an algorithm to solve such a problem, also known as an optimization problem, means specifying two things: first, the way the variables space is sampled, and second, a search mechanism. The focus of our research is on the development of a new search mechanism.
Some search mechanisms are based on gradient analysis (steepest descent [1]; conjugate gradient [2]). Other search mechanisms are purely random or controlled random (Monte Carlo [5], CRS [7]). The process by which molten metal crystallizes and reaches equilibrium during annealing gives rise, for instance, to an optimization method called Simulated Annealing [4]. New criteria in optimization were introduced by evolutionary algorithms (Genetic Algorithm [3]), which use operations such as cross-over and mutation to find the best-fitted individual.
Our algorithm implements a search strategy similar to the Simplex [6], but combined with a tracking of the neighbourhood. The main difference with respect to Simplex is that Simplex uses the centroid of the whole population in variables space, while our reasoning is to take the centroid of a (transitory) minimum point plus two points in its neighbourhood. The idea behind this search mechanism is that a minimum point on the surface representing an error function can mean one of two things: it is a local minimum, or it is a point near the global minimum. In the first case we have a tactic to escape from the local minimum, which will be described in the next section. In the second case we have a mechanism to track the vicinity of that point, as if it were located in a valley, until reaching the bottom of the valley (in the hope that it is the global minimum). The name of our algorithm reflects this search for an unknown valley and the tracking along its surface until the global minimum is found: Search for the Lost Valley (SLV).
2. METHOD
First of all, some definitions. We call a "model" the group of variables able to describe a physical situation. For example, the variables $x_1, x_2, \ldots, x_p$ can represent seismic velocities and thicknesses of layers in a sedimentary basin. One model explaining the geophysical relationship between strata in the basin could be $\vec{m}_1(x_1^1, x_2^1, \ldots, x_p^1)$; another model would be $\vec{m}_2(x_1^2, x_2^2, \ldots, x_p^2)$; where $p$ is the number of variables in the model.
We call the "forward problem" the physical-mathematical equations which relate a "model" to a "measurable" response at some point of space; in general form it is $f(\vec{m})$. This function of the model has a unique solution, so for each model $\vec{m}_i$ there is one and only one set of observed responses in space, $f(\vec{m}_i)$.
The "real" or "observed" data $\vec{d}_o$ are the measurements of some physical magnitude at points of space (generally on the Earth's surface).
We call the "inverse problem" the mathematical procedure of finding the model which best fits the observed data. This is done by minimizing an objective (or error) function,

$\tau(\vec{m}) = \| f(\vec{m}) - \vec{d}_o \|_2^2 \qquad (1)$

where we use here, for instance, the least-squares norm.
The inverse procedure generally deals with ill-posed problems, because of the noise (natural, instrumental, anthropogenic) included in the real data. To stabilize the solution and obtain uniqueness in the inverse problem, we have to apply some regularization criterion. We choose the Tikhonov criterion [9], which implies adding a-priori information to the objective function,

$\tau(\vec{m}) = \| f(\vec{m}) - \vec{d}_o \|_2^2 + \lambda^2 \| W \cdot \vec{m} \|_2^2 \qquad (2)$

where $\vec{d}_o$ is the real data, $W$ the regularization function, and $\lambda$ an equalization factor.
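For concreteness, a direct transcription of (2) (a sketch; it assumes W is given as a matrix and lam is the scalar λ):

import numpy as np

def objective(m, forward, d_obs, W, lam):
    """Tikhonov-regularized least-squares misfit, eq. (2):
    ||f(m) - d_obs||_2^2 + lam^2 * ||W m||_2^2"""
    r = forward(m) - d_obs
    return np.dot(r, r) + lam**2 * np.dot(W @ m, W @ m)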
Another measure necessary for our search strategy is the "distance" between models,

$\bar{s} = \sqrt{(x_1^1 - x_1)^2 + (x_2^1 - x_2)^2 + \cdots + (x_p^1 - x_p)^2} \qquad (3)$
Now we want to minimize the objective function τ by applying the SLV algorithm:

1. Create a random population of models, $\vec{m}_1, \vec{m}_2, \ldots, \vec{m}_n$, with $n$ elements; set a threshold value $e$.
2. Evaluate (2) for each model.
3. Sort the population of models by the magnitude of the error function. The lowest error is the best model, up to this step.
4. Measure the distance (3) between the best model and all elements of the population.
5. Sort the models by distance to the best model.
6. Choose the 3 models with shortest distance; this group includes the best (distance zero).
7. Take the centroid of the 3 elements of the group.
8. Evaluate (2) for the centroid.
8.1. Compare the objective function of the centroid with the threshold value; if τ is less than $e$, the process ends. Another stopping criterion could be the number of iterations, defined by $n$.
9. If the objective function of the centroid is better than the τ of the best model, then the centroid takes the place of the best model; go to step 4.
10. Otherwise, if the error of the centroid is not better than the best model's, take the next 2 models by distance together with the best one.
11. Go to step 7.
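The following Python sketch implements steps 1-11 as read above (the names, the bounds handling and the stop when the neighbourhood is exhausted are my assumptions; the paper also allows an iteration-count stop):

import numpy as np

def slv(objective, lo, hi, n=50, e=1e-6, max_iter=10000):
    """Sketch of the SLV loop. lo, hi: per-variable bounds;
    n: population size; e: threshold on the objective."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = lo + (hi - lo) * np.random.rand(n, lo.size)           # step 1
    err = np.array([objective(m) for m in pop])                 # step 2
    i0 = int(np.argmin(err))                                    # step 3
    best, best_err = pop[i0].copy(), err[i0]
    for _ in range(max_iter):
        order = np.argsort(np.linalg.norm(pop - best, axis=1))  # steps 4-5
        improved = False
        k = 1                                                   # step 6: best + 2 nearest
        while k + 1 < n:
            group = pop[order[[0, k, k + 1]]]
            c = group.mean(axis=0)                              # step 7, eq. (4)
            c_err = objective(c)                                # step 8
            if c_err < e:                                       # step 8.1
                return c, c_err
            if c_err < best_err:                                # step 9
                pop[order[0]] = c                               # centroid replaces best
                best, best_err = c.copy(), c_err
                improved = True
                break
            k += 2                                              # step 10: next 2 by distance
        if not improved:
            break                                               # neighbourhood exhausted
    return best, best_err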
The tactic for escaping a local minimum is given in step 10. This search mechanism is not totally random, because we use the best model as a pivot. If there is a lost valley in any region of the variables space, it will be possible to find it with this mechanism. We present the same algorithm as a flow diagram in the next figure.
FIGURE 1. SLV algorithm for global search in optimization problems. Begin (inicio), end (fin).
2.1. An improvement to the search mechanism
When we evaluate the centroid of 3 models, in mathematical terms we are taking the arithmetic mean,

$\vec{m}_c = (\vec{m}_1 + \vec{m}_2 + \vec{m}_3)/3 \qquad (4)$

so we can refine our tracking along the surface of the valley by putting different weights on each of these 3 models,

$\vec{m}_c = (k_1 \cdot \vec{m}_1 + k_2 \cdot \vec{m}_2 + k_3 \cdot \vec{m}_3)/6 \qquad (5)$
where
k1 = 1; k2 = 2; k3 = 3, for the first weighted mean;
k1 = 3; k2 = 1; k3 = 2, for the second weighted mean;
k1 = 2; k2 = 3; k3 = 1, for the third weighted mean.
Then, inside step 8 of SLV, we choose the centroid with the lowest error function.
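A sketch of this refinement: evaluate the plain centroid (4) and the three weighted means (5) and keep the best of the four (consistent with the four centroids mentioned in the conclusions):

import numpy as np

def best_weighted_centroid(group, objective):
    """Return the lowest-error candidate among eq. (4) and the three
    weighted means of eq. (5) for a group of 3 model vectors."""
    m1, m2, m3 = (np.asarray(m, float) for m in group)
    candidates = [(m1 + m2 + m3) / 3.0]                   # eq. (4)
    for k1, k2, k3 in [(1, 2, 3), (3, 1, 2), (2, 3, 1)]:  # eq. (5) weights
        candidates.append((k1 * m1 + k2 * m2 + k3 * m3) / 6.0)
    return min(candidates, key=objective)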
3. INVERSE PROBLEM IN GEOPHYSICS
We can take a geophysical example to show how to apply the SLV algorithm to the solution of inverse problems. In a sedimentary basin the layers are characterized by parameters such as seismic velocity, rock density and thicknesses of strata. Knowing some physical-mathematical formulae, we are able to calculate the travel time of a seismic wave from the bottom of the sediments to the top (air-earth interface). This is the forward problem in geophysics,

$T(\vec{m}(x_j, x_{j+1}, \ldots, x_{p-1}, x_p)) \qquad (6)$

where
$x_j$ : seismic velocity in the first layer;
$x_{j+1}$ : depth of the first layer;
$x_{p-1}$ : seismic velocity of the semi-infinite medium;
$x_p$ : depth tending to infinity (90000 m in Table 1 means semi-infinite);
$T$ : travel time.
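For concreteness, a toy vertical-incidence travel-time forward model over such a parameter list (an illustration only; the paper's actual forward problem is the receiver-function response):

def travel_time(m):
    """Toy forward problem: one-way vertical travel time through layers.
    m alternates velocity (m/s) and bottom depth (m): [v1, z1, v2, z2, ...]."""
    t, top = 0.0, 0.0
    for i in range(0, len(m), 2):
        v, bottom = m[i], m[i + 1]
        t += (bottom - top) / v  # time spent crossing this layer
        top = bottom
    return t

# e.g. for the sediments of Table 1: travel_time([1800, 1500, 2500, 4000])
# gives 1500/1800 + 2500/2500 ~= 1.83 s from basement top to surface.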
But in a real seismological investigation we get time series (seismograms) registered by instruments (called seismometers), on which we can read arrival times of seismic waves. In the Receiver Function (RF) method we have not just the travel time of one kind of seismic wave, but also the times of converted waves (P to S) and reverberated waves; furthermore, we have the complete waveform of the RF. In any case the relevant parameter is the time $t_0$ observed in a seismogram or in a pseudo-seismogram.
We do not describe the RF method in its seismological intricacies, for one reason: this is not a seismological study. This is research focused on the area of optimization, or possibly on geophysical inversion techniques, so our purpose is to make known a new method of global search (or, perhaps, of non-linear optimization).
The final goal of the inversion is to obtain a realistic model of the sedimentary basin from the RF pseudo-seismogram. This implies minimizing an objective function,

$\tau(\vec{m}) = \| T(\vec{m}) - t_0 \|_2^2 + \lambda^2 \| W \cdot \vec{m} \|_2^2 \qquad (7)$
4. TESTING SLV WITH A SYNTHETIC RF
It is a theoretical model, with variables $x_j$ representing the seismic velocity of the P-wave and the depths of layers in a sedimentary basin. The last depth has a very large value because it represents a semi-infinite medium.

TABLE 1. Seismic model of a sedimentary basin.
Depth (m)   Vp (m/s)
1500        1800
4000        2500
90000       5000

The theoretical response of a seismic P-wave crossing this pack of layers is given by an RF (Fig. 2).

FIGURE 2. Synthetic Receiver Function (RF), obtained from the model of Table 1.
The test consists in applying our SLV algorithm to the inverse problem; in this way we have to recover the same model (or approximately the same) given in Table 1 from the inversion of the "observed" RF (Fig. 2). Our RF is given a more realistic shape by including 5% noise. Minimization of (7) produces a best-fitted RF associated with a seismic model which must be similar to the one proposed in the premises of our test.
FIGURE 3. "Real" waveform of the RF and best-fitted RF (at the left); seismic model of a sedimentary basin, result of the inversion (right side of figure).
Considering the difficulties of RF inversion, the results of the test are satisfactory.
5. EVALUATION OF GOODNESS OF THE SLV ALGORITHM
The SLV algorithm begins by generating a random population of models; the ideal number of models in this population is in the range 50-60. Sensitivity tests show that a very large initial population does not correlate with an increase in the fitness of the final result.
FIGURE 4. Surface of the objective function from 512 randomly created models.
The creation of random models opens the possibility of setting an interval for each variable. This is a kind of soft regularization.
The first best model (from the initial random population) is marked with a green square. In Fig. 5 the path of the search through new local minima is indicated with orange straight lines. When SLV finds a general minimum, it marks it with a light blue square. Note that the area around the blue square is not revealed as a valley in the initial population (Fig. 4).
FIGURE 5. Best model from the initial population in a green square (inicio); final solution of the inversion in a blue square (final); path of the search in orange lines. The nodes of the path are centroid points, which explains why there are no nodes at the borders of the objective-function surface.
6. TESTING SLV WITH A REAL GEOPHYSICAL PROBLEM
The waveform of an RF is the subject of the present evaluation of SLV's capacity to solve inverse problems. This RF (Fig. 6) was obtained in a study carried out over the sedimentary basin of the Parana river ([11]).
FIGURE 6. Best-fitted RF in a black line; observed RF in a green dotted line (axes: time in seconds versus RF amplitude).
After running our program implementing the SLV algorithm we obtained the inversion of the RF, with the following results (Fig. 7).

FIGURE 7. Seismic model of the sedimentary basin in Parana (Brazil), obtained from inversion of the RF (axes: P-wave velocity in m/s versus depth in m).

FIGURE 8. Result of RF inversion with GA in the Parana basin ([10]), using the same seismic station and the same RF waveform used in this section (axes: Vp in km/s versus depth in km).
"When in doubt, smooth," says some geophysicist whose name I don't remember. So we apply smoothness in the regularization function of our τ. This type of regularization tries to force the model to minimize the differences between the velocities of layers; but abrupt changes in velocity can appear if the data indicate the necessity of such jumps in velocity. This is a good test for our algorithm, and the model in Fig. 7 signals that the strong differences in velocities are real.
In this case we do not have a previously well-known model to compare with the result. However, we made a study some years ago with the same RF. In that old study we used a Genetic Algorithm (GA) for the inversion of the RF, obtaining the model exhibited in Fig. 8. Comparing the resulting seismic models for the Parana basin obtained with the two different algorithms, SLV (Fig. 7) and GA (Fig. 8), we can see many similarities. One of the main goals of that old study was to determine the depth of the sedimentary basin (down to the basement). In both models the basement depth is located near 3500 m. A second distinctive feature of the Parana basin is the existence of a surficial cover of volcanic rocks; this basalt layer has a higher seismic velocity than the sediments below. Both algorithms were capable of defining the high-velocity upper layer in the range 4250-4500 m/s for the P-wave. GA uses more layers to fit the RF, which is why we can see a thin low-velocity layer at the top of the basin; but SLV (using fewer layers) can fit the waveform with an average-velocity surface layer.
7. CONCLUSIONS
The SLV algorithm meets two important requirements of any global optimization strategy. First, it is capable of a uniform and complete sampling of the variables space; this quality is provided by a good random-number generator. Second, it has a search mechanism able to track the variables space until it finds any local minimum, and probably the global minimum. Looking in the neighbourhood of a transitory best model with 4 centroids increases the probability of obtaining a new best model. A notable feature of the algorithm is the pivoting on each new best model, restarting the search for a new minimum, or jumping out of a local minimum while still tracking the whole space.
The new global search algorithm SLV passes two tests satisfactorily: the first test with a synthetic RF, and the second test inverting a real RF.
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
ACKNOWLEDGEMENTS
I thank my sister Martha for her financial support during the research period.
REFERENCES
[1] R. Fletcher, M. J. D. Powell (1963) A Rapidly Convergent Descent Method for Minimization. The Computer Journal, 6-2, 163-168.
[2] R. Fletcher, C. M. Reeves (1964) Function minimization by conjugate gradients. The Computer Journal, 7-2, 149-154.
[3] J. H. Holland (1975) Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, Michigan; re-issued by MIT Press.
[4] S. Kirkpatrick, C. D. Gelatt, Jr., M. P. Vecchi (1983) Optimization by simulated annealing. Science, 220(4598), 671-680.
[5] N. Metropolis, S. Ulam (1949) The Monte Carlo Method. Journal of the American Statistical Association, 44(247), 335-341.
[6] J. A. Nelder, R. Mead (1965) A simplex method for function minimization. The Computer Journal, 7, 308-313.
[7] W. L. Price (1983) Global Optimization by Controlled Random Search. Journal of Optimization Theory and Applications, 40, 333-348.
[8] A. Tarantola (2005) Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, Philadelphia.
[9] A. N. Tikhonov (1963) Solution of incorrectly formulated problems and the regularization method. Soviet Mathematics, 4, 1035-1038.
[10] I. Zevallos Abarca (2004) Modelamento da Bacia do Parana - reservatorio Capivara - atraves da inversao conjunta de Funcao do Receptor e de sondagem Magnetotelurica. Universidade de Sao Paulo, Sao Paulo.
[11] I. Zevallos, M. Assumpção, A. L. Padilha (2009) Inversion of teleseismic receiver function and magnetotelluric sounding to determine basement depth in the Paraná basin, SE Brazil. Journal of Applied Geophysics, 68, 231-242.