
# Levenberg-Marquardt (LM) Algorithm


Introducing nonlinear inverse problems and some local optimization algorithms.

Published in: Science


1. Levenberg-Marquardt Algorithm. Kamal Aghazade (Aghazade.kamal@ut.ac.ir). In The Name Of God.
2. Introduction. Linear inverse problems: the relation between the unknown model parameters and the observed data is linear. Nonlinear inverse problems: the relationship between the model parameters and the data can be nonlinear, and the nonlinearity itself can be a source of ill-posedness. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods iteratively improve the parameter values in order to reduce the sum of the squared errors between the function and the measured data points.
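The distinction can be made concrete with a small sketch. The model below, y = a·exp(b·x), is linear in a but nonlinear in b, so fitting (a, b) is a nonlinear least squares problem; the data and parameter values are illustrative, not from the slides:

```python
import numpy as np

# Hypothetical model y = a * exp(b * x): linear in a, nonlinear in b,
# so fitting (a, b) to data is a nonlinear least squares problem.
def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(b * x)

def sum_of_squares(params, x, y):
    r = residuals(params, x, y)
    return float(r @ r)

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                # noise-free synthetic data

print(sum_of_squares([2.0, 1.5], x, y))  # 0.0 at the true parameters
print(sum_of_squares([1.0, 1.0], x, y))  # positive away from them
```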
3. One of the main difficulties with nonlinear problems is the local-minima trap: the choice of initial model matters for reliable convergence. (Figure: local minima trap.)
4. The Jacobian matrix represents the local sensitivity of the calculated data to variations in the parameters. Solving a nonlinear problem leads to an iterative procedure based on a Taylor series approximation. (Figure: Taylor series approximation.) In local optimization we look for a solution that is optimal (either maximal or minimal) within a neighboring set of candidate solutions.
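Since the slide's equations did not survive extraction, here is a hedged sketch of what the Jacobian means operationally: a finite-difference approximation in which entry (i, j) is the sensitivity of the i-th predicted datum to a small perturbation of the j-th parameter (the forward model g below is a made-up example):

```python
import numpy as np

def numerical_jacobian(g, m, eps=1e-6):
    """Forward-difference approximation of the Jacobian dg/dm.

    J[i, j] = d g_i / d m_j : sensitivity of the i-th predicted
    datum to a perturbation of the j-th model parameter.
    """
    m = np.asarray(m, dtype=float)
    g0 = np.asarray(g(m), dtype=float)
    J = np.zeros((g0.size, m.size))
    for j in range(m.size):
        dm = np.zeros_like(m)
        dm[j] = eps
        J[:, j] = (np.asarray(g(m + dm)) - g0) / eps
    return J

# Check against a toy model with a known Jacobian: g(m) = [m0^2, m0*m1],
# whose analytic Jacobian is [[2*m0, 0], [m1, m0]].
g = lambda m: np.array([m[0] ** 2, m[0] * m[1]])
J = numerical_jacobian(g, [2.0, 3.0])
print(np.round(J, 4))   # close to [[4, 0], [3, 2]]
```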
5. Newton's root-finding algorithm: an iterative process is needed to resolve the unknowns.
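As a reminder of the underlying iteration, a minimal sketch of Newton's root-finding update x_{k+1} = x_k - f(x_k)/f'(x_k); the example function is illustrative:

```python
def newton_root(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of x^2 - 2 (i.e. sqrt(2)), starting from x0 = 1.
root = newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)   # approximately 1.41421356
```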
6. Gauss-Newton Algorithm.
7. Gauss-Newton Algorithm (continued).
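The slides' Gauss-Newton equations are not in the transcript, but the standard iteration solves the normal equations (JᵀJ)Δm = -Jᵀr at each step. A hedged NumPy sketch on a hypothetical curve-fitting problem (model, data, and starting point below are illustrative, not from the slides):

```python
import numpy as np

def gauss_newton(residual, jacobian, m0, n_iter=20):
    """Gauss-Newton: solve (J^T J) dm = -J^T r at each iterate.

    The Hessian of the least squares misfit is approximated by
    J^T J, dropping second-derivative terms of the residual.
    """
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = residual(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J, -J.T @ r)
        m = m + dm
    return m

# Hypothetical test problem: fit y = a * exp(b * x), true (a, b) = (2, 1.5).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
residual = lambda m: m[0] * np.exp(m[1] * x) - y
jacobian = lambda m: np.column_stack(
    [np.exp(m[1] * x), m[0] * x * np.exp(m[1] * x)]
)
m_hat = gauss_newton(residual, jacobian, [1.8, 1.3])
print(np.round(m_hat, 6))   # close to [2.0, 1.5]
```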
8. Challenges with nonlinear least squares problems. Gradient descent: guaranteed convergence, but a slow convergence rate. Gauss-Newton: no convergence guarantee, but a good convergence rate. Another problem: matrix singularity may occur.
9. Levenberg-Marquardt Algorithm. By introducing a new (damping) parameter into the Gauss-Newton algorithm, we can address the convergence guarantee, the convergence rate, and matrix singularity at once.
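The LM modification replaces the Gauss-Newton normal equations (JᵀJ)Δm = -Jᵀr with the damped system (JᵀJ + λI)Δm = -Jᵀr. A hedged sketch; the matrices below are toy values chosen to show that damping also cures singularity:

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam*I) dm = -J^T r.

    lam = 0 recovers the Gauss-Newton step; a large lam shortens the
    step and rotates it toward the steepest-descent direction, and
    also regularizes J^T J when it is singular or ill-conditioned.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)

# With a rank-deficient Jacobian the Gauss-Newton system is singular,
# but the damped system remains solvable.
J = np.array([[1.0, 1.0], [2.0, 2.0]])   # rank 1
r = np.array([1.0, -1.0])
print(lm_step(J, r, lam=1e-3))
```

With λ = 0 on a full-rank Jacobian, the step coincides exactly with the Gauss-Newton step, which is the sense in which LM interpolates between the two methods.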
10. (Figure slide; no recoverable text.)
11. The hard part of the Levenberg-Marquardt method is determining the right value of λ. The general idea is to use small values of λ in situations where the Gauss-Newton method is working well, but to switch to larger values of λ when the Gauss-Newton method is not making progress. A very simple approach is to start with a small value of λ, and then adjust it in every iteration. If the Levenberg-Marquardt step leads to a reduction in f(m), then decrease λ by a constant factor (say 2). If the Levenberg-Marquardt step does not lead to a reduction in f(m), then do not take the step. Instead, increase λ by a constant factor (say 2), and try again. Repeat this process until a step is found which actually does decrease the value of f(m). (Aster, 2005)
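The λ schedule quoted above can be sketched directly. This is a simplified illustration, not a production solver; the test problem and starting point are invented, not taken from the slides:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, m0,
                        lam=1e-3, factor=2.0, n_iter=50, tol=1e-12):
    """LM with the simple lambda schedule described above: decrease
    lam after an accepted step, increase it and retry after a
    rejected one."""
    m = np.asarray(m0, dtype=float)
    r = residual(m)
    f = r @ r
    for _ in range(n_iter):
        J = jacobian(m)
        for _retry in range(50):          # guard against endless retries
            dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), -J.T @ r)
            r_new = residual(m + dm)
            f_new = r_new @ r_new
            if f_new < f:                 # step accepted
                m, r, f = m + dm, r_new, f_new
                lam /= factor
                break
            lam *= factor                 # step rejected: damp harder
        else:
            return m                      # no improving step found
        if f < tol:
            break
    return m

# Hypothetical test problem: fit y = a * exp(b * x), true (a, b) = (2, 1.5).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
residual = lambda m: m[0] * np.exp(m[1] * x) - y
jacobian = lambda m: np.column_stack(
    [np.exp(m[1] * x), m[0] * x * np.exp(m[1] * x)]
)
m_lm = levenberg_marquardt(residual, jacobian, [1.0, 1.0])
print(np.round(m_lm, 6))   # close to [2.0, 1.5]
```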
12. Gradient Descent Algorithm.
13. Gauss-Newton Algorithm.
14. Levenberg-Marquardt Algorithm.
15. Example: earthquake location (Modern Global Seismology, page 231).
16. Station locations.
17. Initial model: x0 = 21, y0 = 21, z0 = 12, t0 = 30.
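For context on the forward problem, a hedged sketch assuming a homogeneous medium with constant velocity v, in which the predicted arrival time at station i is t_i = t0 + ||r_i - r_src|| / v. The station coordinates and velocity below are illustrative and are not the values used in the reference:

```python
import numpy as np

# Hypothetical constant-velocity medium; station coordinates (x, y, z)
# and velocity v are invented for illustration.
v = 5.0
stations = np.array([[0.0, 0.0, 0.0],
                     [50.0, 0.0, 0.0],
                     [0.0, 50.0, 0.0],
                     [50.0, 50.0, 0.0]])

def predicted_times(m):
    """Arrival time at each station for source model m = (x0, y0, z0, t0)."""
    x0, y0, z0, t0 = m
    d = np.linalg.norm(stations - np.array([x0, y0, z0]), axis=1)
    return t0 + d / v

t_pred = predicted_times([21.0, 21.0, 12.0, 30.0])
print(np.round(t_pred, 4))
```

Inverting this forward model for (x0, y0, z0, t0) from observed arrival times is exactly the kind of four-parameter nonlinear least squares problem the slides solve with gradient descent, Newton, and LM.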
18. Gradient Descent Algorithm.
19. Results.
20. Iteration-by-iteration estimates (each cell lists x0, y0, z0, t0):

    | Iteration | Newton | Levenberg-Marquardt |
    |---|---|---|
    | 1 | 29.268, 30.7036, 21.0631, 34.4796 | 29.2612, 30.7085, 21.1182, 34.4767 |
    | 2 | 30.0082, 30.1598, 11.1419, 35.0240 | 30.0007, 30.1781, 11.2455, 35.0169 |
    | 3 | 29.9522, 30.1925, 9.1039, 34.9608 | 29.9405, 30.2195, 9.3350, 34.9501 |
    | 4 | 29.9532, 30.1922, 8.9268, 34.9596 | 29.9395, 30.2237, 9.2259, 34.9473 |
    | 5 | 29.9533, 30.1921, 8.9249, 34.9596 | 29.9394, 30.2240, 9.2280, 34.9472 |

    Initial model: x0 = 21, y0 = 21, z0 = 12, t0 = 30. True model: x0 = 30, y0 = 30, z0 = 8, t0 = 35.
21. Good luck!