# Numerical Methods

Dynamics Course


### Numerical Methods

1. NUMERICAL METHODS. Prof. A. Meher Prasad, Department of Civil Engineering, Indian Institute of Technology Madras. Email: prasadam@iitm.ac.in
2. Direct Integration of the Equations of Motion
   - Provides the response of the system at discrete intervals of time (which are usually equally spaced).
   - A process of marching along the time dimension, in which the response parameters (acceleration, velocity and displacement at a given time point) are evaluated from their known historic values.
   - For a SDOF system, this requires three equations to determine three unknowns:
     (a) Two of these equations are usually derived from assumptions regarding the manner in which the response parameters vary during a time step.
     (b) The third equation is the equation of motion written at a selected time point.
3. Direct Integration of the Equations of Motion…
   - When the selected point represents the current time ($n$), the method of integration is referred to as an explicit method (e.g. the central difference method).
   - When the equation of motion is written at the next time point in the future ($n+1$), the method is said to be an implicit method (e.g. Newmark's β method, the Wilson-θ method).
4. Direct Integration of the Equations of Motion… For a SDOF system, let $\Delta t$ = time interval, $t_n = n\,\Delta t$, and let $P_n$ be the applied force at time $t_n$. The equation of motion is
   $$m\ddot{x} + c\dot{x} + kx = P(t)$$
   with $x_n$, $\dot{x}_n$, $\ddot{x}_n$ the displacement, velocity and acceleration at time station $n$. (Figure: the forcing function $P(t)$ sampled at time stations $0, 1, \dots, t_n, t_{n+1}$, with ordinates $P_n$, $P_{n+1}$.)
5. General expression for the time integration methods:
   $$x_{n+1} = \sum_{l=0}^{k} A_l\, x_{n-l} + \Delta t \sum_{l=-1}^{k} B_l\, \dot{x}_{n-l} + \Delta t^2 \sum_{l=-1}^{k} C_l\, \ddot{x}_{n-l} + R \qquad (1)$$
   $R$ is a remainder term representing the error, proportional to $x^{(m)}(\xi)$, the value of the $m$th derivative of $x$ at $t = \xi$, where $(n-k)\,\Delta t \le \xi \le (n+1)\,\Delta t$. $A_l$, $B_l$ and $C_l$ are constants (some of which may be equal to zero).
6. Remarks on Eq. (1):
   - Eq. (1) relates $x_{n+1}$, $\dot{x}_{n+1}$, $\ddot{x}_{n+1}$ at $t_{n+1}$ to their values at the previous time stations $n-k, n-k+1, \dots, n$.
   - Eq. (1) has $m = 5 + 3k$ undetermined constants $A_l$, $B_l$ and $C_l$.
   - The equation is employed to represent exactly a polynomial of order $p-1$, $p$ being smaller than $m$. Then $(m-p)$ constants become available which can be assigned arbitrarily chosen values so as to improve the stability or convergence characteristics of the resulting formula.
   - Formulas of the type of Eq. (1) for time integration can also be obtained from physical considerations, such as an assumed variation of the acceleration, or from finite difference approximations of the derivatives.
7. Newmark's β Method
   - In 1959, Newmark devised a series of numerical integration formulas collectively known as Newmark's β methods.
   - The velocity expression is of the form
     $$\dot{x}_{n+1} = a_1 \dot{x}_n + a_2 \ddot{x}_n + a_3 \ddot{x}_{n+1} \qquad (1)$$
   - The displacement expression is of the form
     $$x_{n+1} = b_1 x_n + b_2 \dot{x}_n + b_3 \ddot{x}_n + b_4 \ddot{x}_{n+1} \qquad (2)$$
   - To determine the constants, require Eqs. (1) and (2) to be exact for $x = 1$, $x = t$ and $x = t^2$. This gives
     $$a_1 = 1,\quad 2\,\Delta t = 2a_2 + 2a_3; \qquad b_1 = 1,\quad b_2 = \Delta t,\quad 2b_3 + 2b_4 = (\Delta t)^2$$
   - Set $a_3 = \gamma\,\Delta t$ and $b_4 = \beta\,(\Delta t)^2$.
8. Newmark's β Method… Equations (1) and (2) then reduce to
   $$\dot{x}_{n+1} = \dot{x}_n + \Delta t\,(1-\gamma)\,\ddot{x}_n + \Delta t\,\gamma\,\ddot{x}_{n+1} + R \qquad (3)$$
   $$x_{n+1} = x_n + \Delta t\,\dot{x}_n + (\Delta t)^2\left(\tfrac{1}{2} - \beta\right)\ddot{x}_n + (\Delta t)^2\,\beta\,\ddot{x}_{n+1} + R \qquad (4)$$
   The third relationship is the equation of motion written at $t_{n+1}$:
   $$m\,\ddot{x}_{n+1} + c\,\dot{x}_{n+1} + k\,x_{n+1} = P_{n+1} \qquad (5)$$
   Substituting Eqs. (3) and (4) in Eq. (5) gives an expression for $\ddot{x}_{n+1}$. To begin the time integration, we need to know the values of $x_0$, $\dot{x}_0$ and $\ddot{x}_0$ at time $t = 0$.
9. (Figure: constant acceleration scheme, $\gamma = 0$, $\beta = 0$: the acceleration is held constant at $\ddot{x}_n$ over each interval $\Delta t$.)
10. (Figure: average acceleration scheme, $\gamma = 1/2$, $\beta = 1/4$: the acceleration over the interval $\Delta t$ is taken as $\ddot{x} = (\ddot{x}_n + \ddot{x}_{n+1})/2$.)
11. (Figure: linear acceleration scheme, $\gamma = 1/2$, $\beta = 1/6$: the acceleration varies linearly from $\ddot{x}_n$ to $\ddot{x}_{n+1}$ over the interval $\Delta t$.)
12. Algorithm (incremental form). Enter $k$, $m$, $c$, $\beta$, $\gamma$ and $P(t)$; compute the initial acceleration $\ddot{x}_0 = (P_0 - c\,\dot{x}_0 - k\,x_0)/m$; select $\Delta t$ and form the effective stiffness $\hat{k}$.
13. For $i = 0, 1, 2, \dots$: form the effective incremental load $\Delta\hat{p}_i = \Delta p_i + a\,\dot{x}_i + b\,\ddot{x}_i$ (where $a$ and $b$ are constants built from $m$, $c$, $\beta$, $\gamma$ and $\Delta t$); solve $\Delta x_i = \Delta\hat{p}_i / \hat{k}$; then update $x_{i+1} = x_i + \Delta x_i$, $\dot{x}_{i+1} = \dot{x}_i + \Delta\dot{x}_i$, $\ddot{x}_{i+1} = \ddot{x}_i + \Delta\ddot{x}_i$ and set $i = i+1$.
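The incremental algorithm above can be sketched in Python. The slides leave the effective stiffness $\hat{k}$ and the coefficients $a$, $b$ symbolic; the sketch below writes out the standard incremental-Newmark values for them (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, x0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """Newmark-beta stepping for a linear SDOF system m*x'' + c*x' + k*x = p(t).

    p holds the load at equally spaced time stations; gamma = 1/2, beta = 1/4
    is the (unconditionally stable) average-acceleration scheme.
    """
    n = len(p)
    x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    x[0], v[0] = x0, v0
    a[0] = (p[0] - c*v0 - k*x0) / m                  # equilibrium at t = 0
    # effective stiffness and incremental-load coefficients
    k_hat = k + gamma*c/(beta*dt) + m/(beta*dt**2)
    A = m/(beta*dt) + gamma*c/beta
    B = m/(2*beta) + dt*(gamma/(2*beta) - 1.0)*c
    for i in range(n - 1):
        dp_hat = (p[i+1] - p[i]) + A*v[i] + B*a[i]   # effective load increment
        dx = dp_hat / k_hat                          # displacement increment
        dv = gamma*dx/(beta*dt) - gamma*v[i]/beta + dt*(1 - gamma/(2*beta))*a[i]
        da = dx/(beta*dt**2) - v[i]/(beta*dt) - a[i]/(2*beta)
        x[i+1], v[i+1], a[i+1] = x[i] + dx, v[i] + dv, a[i] + da
    return x, v, a
```

For example, free vibration with $x_0 = 1$ and $\Delta t/T = 0.01$ returns very nearly to $x = 1$ after one period, consistent with the small period elongation of the average-acceleration scheme.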
14. Newmark's β Method: Elastoplastic System. Enter $k$, $m$, $c$, $R_t$, $R_c$ and $P(t)$; set $x_0 = 0$, $\dot{x}_0 = 0$; compute $\ddot{x}_0$ and initialise the tension and compression yield displacements $x_t$ and $x_c$. Define the state variable: key = 0 (elastic), key = $-1$ (plastic behaviour in compression), key = $1$ (plastic behaviour in tension). Select $\Delta t$.
15. (Flowchart.) At each step $i$, calculate $x_i$ and $\dot{x}_i$, then update the state: if $x_i > x_t$ with $\dot{x}_i > 0$, set key = 1 and $R = R_t$; if $x_i < x_c$ with $\dot{x}_i < 0$, set key = $-1$ and $R = R_c$; on unloading from a plastic state (the velocity changing sign), set key = 0 and reset the yield limits, $x_t = x_i$, $x_c = x_i - (R_t - R_c)/k$ (or $x_c = x_i$, $x_t = x_i + (R_t - R_c)/k$), with the elastic restoring force $R = R_t - (x_t - x_i)\,k$. The acceleration follows from $\ddot{x}_{i+1} = \left(P(t_{i+1}) - c\,\dot{x}_{i+1} - R\right)/m$; set $i = i+1$ and repeat.
16. Central Difference Method. The method is based on finite difference approximations of the time derivatives of displacement (velocity and acceleration) at selected time intervals. (Figure: displacement versus time $t$, with ordinates $x_{n-1}$, $x_n$, $x_{n+1}$ at stations $(n-1)\Delta t$, $n\Delta t$, $(n+1)\Delta t$.)
17. Algorithm. Enter $k$, $m$, $c$ and $P(t)$; compute $\ddot{x}_0 = (P_0 - c\,\dot{x}_0 - k\,x_0)/m$; select $\Delta t$ and start the recurrence with the fictitious station $x_{-1} = x_0 - \Delta t\,\dot{x}_0 + 0.5\,\Delta t^2\,\ddot{x}_0$; form the effective stiffness $\hat{k}$.
18. For $i = 0, 1, 2, \dots$: form $\hat{p}_i = p_i - a\,x_{i-1} - b\,x_i$ (with $a$ and $b$ constants built from $m$, $c$, $k$ and $\Delta t$), solve $x_{i+1} = \hat{p}_i / \hat{k}$, and set $i = i+1$.
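A minimal sketch of the explicit recurrence above, with the standard central-difference values for $\hat{k}$, $a$ and $b$ written out (the slides leave them symbolic; names are illustrative):

```python
import numpy as np

def central_difference_sdof(m, c, k, p, dt, x0=0.0, v0=0.0):
    """Explicit central-difference stepping for m*x'' + c*x' + k*x = p(t).

    Conditionally stable: requires omega*dt < 2, i.e. dt < T/pi.
    """
    n = len(p)
    x = np.zeros(n)
    x[0] = x0
    a0 = (p[0] - c*v0 - k*x0) / m
    x_prev = x0 - dt*v0 + 0.5*dt**2*a0        # fictitious station x_{-1}
    k_hat = m/dt**2 + c/(2*dt)                # effective stiffness
    a_coef = m/dt**2 - c/(2*dt)
    b_coef = k - 2*m/dt**2
    for i in range(n - 1):
        p_hat = p[i] - a_coef*x_prev - b_coef*x[i]
        x_prev, x[i+1] = x[i], p_hat / k_hat  # shift the history and advance
    return x
```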
19. Wilson-θ Method
    - This method is similar to the linear acceleration method and is based on the assumption that the acceleration varies linearly over an extended interval $\theta\,\Delta t$.
    - $\theta$, which is always greater than 1, is selected to give the desired characteristics of accuracy and stability.
    (Figure: acceleration $\ddot{x}$ versus time, varying linearly through $\ddot{x}_n$, $\ddot{x}_{n+1}$ and $\ddot{x}_{n+\theta}$ over the extended interval $\theta\,\Delta t$.)
20. Algorithm. Enter $k$, $m$, $c$, $\theta$, $\Delta t$ and $P(t)$; specify the initial conditions; set $n = 0$. At each step, extrapolate the load to the extended station, $\hat{p}_{n+\theta} = p_n(1-\theta) + p_{n+1}\,\theta$, and form $\hat{k} = a_1 m + a_3 c + k$, $a_5 = a_1 x_n + a_4 \dot{x}_n + 2\ddot{x}_n$, $a_6 = a_3 x_n + 2\dot{x}_n + a_2 \ddot{x}_n$ (the $a_i$ are integration constants built from $\theta$ and $\Delta t$).
21. Solve $\ddot{x}_{n+\theta} = \left(\hat{p}_{n+\theta} + m\,a_5 + c\,a_6\right)/\hat{k}$; interpolate back to the normal station, $\ddot{x}_{n+1} = \ddot{x}_n + \left(\ddot{x}_{n+\theta} - \ddot{x}_n\right)/\theta$; then integrate $\dot{x}_{n+1} = \dot{x}_n + \left(\ddot{x}_n + \ddot{x}_{n+1}\right)h/2$ (with $h = \Delta t$). Set $n = n+1$ and repeat.
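The slides keep the integration constants $a_1, \dots, a_6$ symbolic. The sketch below folds them into the standard Wilson-θ update, written in terms of the extended interval $\tau = \theta\,\Delta t$; it is a sketch of the method, not the slide's exact variable layout:

```python
import numpy as np

def wilson_theta_sdof(m, c, k, p, dt, x0=0.0, v0=0.0, theta=1.4):
    """Wilson-theta stepping for m*x'' + c*x' + k*x = p(t).

    The acceleration is assumed to vary linearly over the extended interval
    theta*dt; theta >= 1.37 gives unconditional stability (1.4 is customary).
    """
    n = len(p)
    x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    x[0], v[0] = x0, v0
    a[0] = (p[0] - c*v0 - k*x0) / m
    tau = theta*dt
    k_hat = k + 3*c/tau + 6*m/tau**2                  # effective stiffness
    for i in range(n - 1):
        # load extrapolated to the extended station t_n + theta*dt
        p_ext = p[i]*(1 - theta) + p[i+1]*theta
        p_hat = (p_ext + m*(6*x[i]/tau**2 + 6*v[i]/tau + 2*a[i])
                       + c*(3*x[i]/tau + 2*v[i] + 0.5*tau*a[i]))
        x_ext = p_hat / k_hat                         # displacement at n+theta
        a_ext = 6*(x_ext - x[i])/tau**2 - 6*v[i]/tau - 2*a[i]
        a[i+1] = a[i] + (a_ext - a[i])/theta          # interpolate back to n+1
        v[i+1] = v[i] + 0.5*dt*(a[i] + a[i+1])
        x[i+1] = x[i] + dt*v[i] + dt**2*(2*a[i] + a[i+1])/6
    return x, v, a
```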
22. Errors involved in the Numerical Integration
    - Round-off errors: introduced by repeated computation using a small step size.
      - Random in nature.
      - To reduce them, use higher precision.
    - Truncation errors: involved in representing $x_{n+1}$ and $\dot{x}_{n+1}$ by a finite number of terms in the Taylor series expansion.
      - Represented by $R$ in the previous slides.
      - Accumulated locally at each step.
      - If the integration method is stable, the truncation error indicates the accuracy.
    - Propagated errors: introduced by replacing the differential equation by a finite difference equivalent.
23. Stability of the Integration Method
    - The effect of the error introduced at one step on the computations at the next step determines the stability.
    - If the error grows, the solution becomes unbounded and meaningless.
    - Spectral radius of a matrix: $\rho(A) = \max_i |\lambda_i(A)|$, the maximum magnitude of the eigenvalues of $A$, where $A$ is the amplification matrix relating the state at one step to the state at the next.
    - $\rho(A) > 1$: unstable.
    - If $\theta \ge 1.37$, the Wilson-θ method is unconditionally stable.
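The spectral-radius criterion can be checked numerically. As an illustration (not taken from the slides), the undamped central-difference recurrence $x_{n+1} = (2 - \omega^2\Delta t^2)\,x_n - x_{n-1}$ has the amplification matrix below; its spectral radius stays at 1 for $\omega\,\Delta t \le 2$ and exceeds 1 beyond, recovering the stability limit $\Delta t < T/\pi$:

```python
import numpy as np

def spectral_radius(A):
    """rho(A): maximum magnitude of the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

def central_difference_amplification(w, dt):
    """Amplification matrix of the undamped central-difference recurrence
    x_{n+1} = (2 - (w*dt)**2) * x_n - x_{n-1}, acting on the state (x_n, x_{n-1})."""
    return np.array([[2.0 - (w*dt)**2, -1.0],
                     [1.0,              0.0]])
```

For $\omega\,\Delta t = 1$ the eigenvalues lie on the unit circle ($\rho = 1$, marginally stable oscillation); for $\omega\,\Delta t = 2.5$ one eigenvalue has magnitude 4, so the recurrence blows up.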
24. Attributes required of a good direct integration method
    - Unconditional stability when applied to linear problems.
    - Not more than one set of implicit equations to be solved at each step.
    - Second-order accuracy.
    - Controllable algorithmic dissipation in the higher modes.
    - Self-starting (the Wilson-θ method is reasonably good here).
    * For MDOF systems, the scalar equations of the SDOF system become matrix equations.
25. (Figure: spectral radii for α-methods, optimal collocation schemes, and the Houbolt, Newmark, Park and Wilson methods.)
26. Selection of a numerical integration method. (Figures: period elongation vs. $\Delta t/T$; amplitude decay vs. $\Delta t/T$.) * For the numerical integration of SDOF systems, the linear acceleration method, which gives no amplitude decay and the lowest period elongation, is the most suitable of the methods presented.
27. Selection of the time step ∆t
    - $\Delta t$ must be small enough to give good accuracy, yet large enough to be computationally efficient.
    - $p\,\Delta t \le 1$ (with $p$ the natural circular frequency), i.e. $\Delta t/T \le 0.16$; this limit is arrived at from the truncation errors for a free vibration case.
    - Typically $\Delta t/T \approx 0.1$ is acceptable.
    - $\Delta t$ should also allow adequate sampling of the exciting function; check by inspection of the forcing function.
28. Mass Condensation or Guyan Reduction
    - Extensively used to reduce the number of d.o.f. for eigenvalue extraction.
    - Unless properly used, it is detrimental to accuracy.
    - This method is never used when optimal damping is used for the mass matrix.
29. Assumption: slave d.o.f. do not have masses; only their elastic forces are important.
30. Choice of slave d.o.f.
    - All rotational d.o.f.
    - Find the ratio of diagonal stiffness to diagonal mass, $k_{ii}/m_{ii}$, for each d.o.f., and condense out those having large values of this ratio.
    - If $[M_{ss}] = 0$ and diagonal, the reduced $[K_r]$ is the same as in static condensation, and there is no loss of accuracy.
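A minimal NumPy sketch of Guyan reduction under the massless-slave assumption above; the master/slave partition is supplied by the caller, and function and variable names are illustrative:

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Guyan (static) condensation of K and M onto the master d.o.f.

    Uses the static transformation x = T x_m with T = [I; -Kss^{-1} Ksm],
    which is exact for the stiffness and approximate for the mass.
    """
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0               # identity on masters
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T, T
```

When the slave d.o.f. truly carry no mass, the reduced eigenproblem is exact: for a two-spring chain with a massless second d.o.f., the condensed $1\times 1$ problem reproduces the eigenvalue of the full system.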
31. Subspace Iteration Method
    - The most powerful method for obtaining the first few eigenvalues/eigenvectors.
    - Minimum storage is necessary, as the subroutine can be implemented as an out-of-core solver.
    - Basic steps:
      - Establish $p$ starting vectors, where $p$ is the number of eigenvalues/vectors required, $p \ll n$.
      - Use simultaneous inverse iteration on the $p$ vectors and Ritz analysis to extract the best eigenvalue/vector approximations.
      - After the iteration converges, use a Sturm sequence check to verify that no eigenvalues were missed.
32. - The method is called "subspace" iteration because it is equivalent to iterating on the whole $p$-dimensional subspace (rather than $n$), and not to simultaneous iteration of $p$ individual vectors.
    - Starting vectors: for better convergence of the lower eigenvalues, it is better if the subspace is increased to $q > p$ such that $q = \min(2p,\; p+8)$.
    - Sturm sequence property: the smallest eigenvalue is better approximated than the largest eigenvalue in the subspace of size $q$.
33. Starting Vectors
    (1) When some masses are zero: for the nonzero-mass d.o.f., place a one as the vector entry.
    (2) Take the ratio $k_{ii}/m_{ii}$: the element that has the minimum value receives a 1, and the rest zeros, in the starting vector.
34. - Starting vectors can also be generated by the Lanczos algorithm, which converges fast.
    - In dynamic optimisation, where the structure is modified, the previous vectors can be good starting values.
    - Eigenvalue problem:
      $$K\Phi = M\Phi\Lambda \qquad (1)$$
      $$\Phi^T K \Phi = \Lambda \qquad (2)$$
      $$\Phi^T M \Phi = I \qquad (3)$$
35. The values obtained from Eq. (2) are not the true eigenvalues unless $p = n$. If $[\Phi]$ satisfies (2) and (3), it cannot yet be said that its columns are true eigenvectors; if $[\Phi]$ satisfies (1), then they are true eigenvectors. Since we have reduced the space from $n$ to $p$, it is only necessary that the subspace of dimension $p$ converge as a whole, not the individual vectors.
36. Algorithm: pick starting vectors $X_1$ of size $n \times p$. For $k = 1, 2, \dots$:
    - Static solution (inverse iteration step): $K\,\bar{X}_{k+1} = M\,X_k$.
    - Projections: $K_{k+1} = \bar{X}_{k+1}^T K\,\bar{X}_{k+1}$ and $M_{k+1} = \bar{X}_{k+1}^T M\,\bar{X}_{k+1}$ (both $p \times p$).
    - Solve the smaller ($p \times p$) eigenvalue problem, e.g. by the Jacobi method, and form the improved vectors $X_{k+1}$.
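The loop above can be sketched compactly in NumPy. This is a bare-bones sketch under simplifying assumptions: dense matrices, simple unit starting vectors, a fixed iteration count, no shifting, no Sturm-sequence check, and the reduced problem solved by a Cholesky reduction rather than Jacobi rotations:

```python
import numpy as np

def subspace_iteration(K, M, p, n_iter=20):
    """Basic subspace iteration for the lowest p eigenpairs of K*phi = lam*M*phi."""
    n = K.shape[0]
    q = min(2*p, p + 8, n)                    # enlarged subspace for convergence
    X = np.eye(n, q)                          # simple starting vectors
    for _ in range(n_iter):
        Xbar = np.linalg.solve(K, M @ X)      # static solution: K*Xbar = M*X
        Kr = Xbar.T @ K @ Xbar                # q x q projected matrices
        Mr = Xbar.T @ M @ Xbar
        # reduced generalized problem Kr*Q = Mr*Q*Lam via Cholesky of Mr
        L = np.linalg.cholesky(Mr)
        Linv = np.linalg.inv(L)
        lam, Z = np.linalg.eigh(Linv @ Kr @ Linv.T)
        Q = Linv.T @ Z
        X = Xbar @ Q                          # improved, M-orthonormal Ritz vectors
    return lam[:p], X[:, :p]
```

For a diagonal test problem the two lowest eigenvalues are recovered, and the returned vectors are $M$-orthonormal, i.e. $X^T M X = I$ as in Eq. (3).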
37. Operation counts ($n$ = order, $m$ = half-bandwidth, $q$ = subspace size):

    | Step | Operations |
    |---|---|
    | Factorization | $\tfrac{1}{2}nm^2 + \tfrac{3}{2}nm$ |
    | Subspace iteration (per iteration) | $nq(2m+1)$; $\tfrac{nq}{2}(q+1)$; $\tfrac{nq}{2}(q+1)$; $n(m+1)$; $4nm + 5n$; $nq^2$ |
    | Sturm sequence check | $\tfrac{1}{2}nm^2 + \tfrac{3}{2}nm$ |
38. Total for the $p$ lowest vectors, at about 10 iterations with $q = \min(2p,\; p+8)$:
    $$nm^2 + nm(4 + 4p) + 5np + 20np\left(2m + q + \tfrac{3}{2}\right)$$
    This count increases as the number of iterations increases. Example: $n = 70{,}000$, $b = 1000$, $p = 100$, $q = 108$: time $\approx$ 17 hours.
39. - Aim: generate the ($n_{eq} \times m$) modal matrix (Ritz vectors).
    - Find $\lambda_k$ and $\{u\}_k$ for the $k$th component.
    - Let $[\Phi]_k$ be the substructure modal matrix, of size $n_k \times n_\phi$, where $n_k$ = number of interior d.o.f. and $n_\phi$ = number of normal modes to be determined for that substructure.
    - Assume $l$ substructures.
40. The transformation matrix is of size $n_{eq} \times m$; the coupling blocks $[I]_{k,k+1}$ have number of rows = number of attachment d.o.f. between substructures $k$ and $k+1$ (= number of columns). Ritz analysis: determine
    $$[K_r] = [R]^T [K] [R], \qquad [M_r] = [R]^T [M] [R]$$
    and solve the reduced eigenvalue problem
    $$[K_r]\{X\} = [M_r]\{X\}[\Lambda]$$
    The eigenvector matrix is $[\Phi] = [R][X]$.
41. Example: use subspace iteration to calculate the eigenpairs $(\lambda_1, \phi_1)$ and $(\lambda_2, \phi_2)$ of the problem $K\phi = \lambda M\phi$, where: