Numerical Methods
Presentation by Didier Besset from ESUG 1999

  • At that time, the radius of the earth was not known exactly. The radius of the circle corresponding to the error quoted by Jamshid Masud al-Kashi (0.7 mm) is 20 times larger than the semi-major axis of the orbit of Pluto.
  • DhbFloatingPointMachine is a class which determines automatically the parameters of the floating point representation on the current machine.
  • In this talk, only the classes in “pink” boxes will be discussed.
  • Tasks in green boxes are the responsibility of the subclasses.
  • Methods “evaluate” and “hasConverged” are methods of the super class. Methods “initializeIterations”, “evaluateIterations” and “finalizeIterations” are the responsibility of the subclasses.
  • This is the abstract class at the top of the hierarchy of any iterative process. The class diagram shows the class name at top (without prefix), the methods (other than “get” and “set” of instance variables) in the middle and the instance variables at bottom. Instance variables have a corresponding “get” method when tagged with a “g” and a corresponding “set” method when tagged with an “s”. The code is available in Listing 1.
  • The code is available in Listing 2.
  • The code is available in Listing 3.
  • The code is available in Listing 4.
  • The code is available in Listing 5.

Numerical Methods: Presentation Transcript

  • Numerical Methods. ESUG, Gent 1999. Didier Besset, Independent Consultant.
  • Road Map
    • Dealing with numerical data
    • Framework for iterative processes
    • Newton’s zero finding
    • Eigenvalues and eigenvectors
    • Cluster analysis
    • Conclusion
  • Dealing With Numerical Data
    • Besides integers, Smalltalk supports two numerical types
      • Fractions
      • Floating point numbers
  • Using Fractions
    • Results of products and sums are exact.
    • Cannot be used with transcendental functions.
    • Computation is slow.
    • Computation may exceed computer’s memory.
    • Convenient way to represent currency values when combined with rounding.
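    • A short workspace illustration of these points (plain Smalltalk, no extra classes assumed):
    (1/3) + (1/6).                    "answers the exact Fraction 1/2"
    (1/10) + (2/10) = (3/10).         "true: sums of fractions are exact"
    0.1 + 0.2 = 0.3.                  "false with floating point numbers"
    (37 * (19/100)) roundTo: 1/100.   "currency-style value 703/100, i.e. exactly 7.03"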
  • Using Floating Point Numbers
    • Computation is fast.
    • Results have limited precision (usually 53 bits, ~16 decimal digits).
    • Products preserve relative precision.
    • Sums may keep only wrong digits.
  • Using Floating Point Numbers
    • Sums may keep only wrong digits. Example:
      • Evaluate 2.718 281 828 459 05 − 2.718 281 828 459 04
      • … = 9.76996261670138 × 10⁻¹⁵
      • Should have been 10⁻¹⁴!
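      • A quick workspace check of this example (standard Smalltalk; the printed string may differ slightly between dialects):
      2.71828182845905 - 2.71828182845904.
      "evaluates to roughly 9.76996261670138e-15 instead of the exact 1.0e-14; most of the printed digits are noise"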
  • Using Floating Point Numbers
    • Beware of meaningless precision!
      • In 1424, Jamshid Masud al-Kashi published π = 3.141 592 653 589 793 25…
      • … but noted that the error in computing the perimeter of a circle with a radius 600’000 times that of the earth would be less than the thickness of a horse’s hair.
  • Using Floating Point Numbers
    • Donald E. Knuth :
      • Floating point arithmetic is by nature inexact, and it is not difficult to misuse it so that the computed answers consist almost entirely of “noise”. One of the principal problems of numerical analysis is to determine how accurate the results of certain numerical methods will be.
  • Using Floating Point Numbers
    • Never use equality between two floating point numbers.
    • Use a special method to compare them.
  • Using Floating Point Numbers
    • Proposed extensions to class Number
    relativelyEqualsTo: aNumber upTo: aSmallNumber
        | norm |
        norm := self abs max: aNumber abs.
        ^norm <= aSmallNumber
            or: [ (self - aNumber) abs < (aSmallNumber * norm) ]

    equalsTo: aNumber
        ^self relativelyEqualsTo: aNumber
            upTo: DhbFloatingPointMachine default defaultNumericalPrecision
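    • A small usage sketch, assuming the two extensions above are compiled into class Number:
    3.14159 relativelyEqualsTo: 3.1416 upTo: 0.001.    "true: equal to within a relative 0.1%"
    3.14159 relativelyEqualsTo: 3.15 upTo: 0.001.      "false: the relative difference is about 0.3%"
    (0.1 + 0.2) equalsTo: 0.3.    "compares using the machine's default numerical precision as the tolerance"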
  • Road Map
    • Dealing with numerical data
    • Framework for iterative processes
    • Newton’s zero finding
    • Eigenvalues and eigenvectors
    • Cluster analysis
    • Conclusion
  • Iterative Processes
    • Find a result using successive approximations
    • Easy to implement as a framework
    • Wide field of application
  • Successive Approximations (flowchart): begin; compute or choose an initial object; compute the next object; if the object is not yet sufficient, repeat from the “compute next object” step; otherwise end.
  • Iterative Processes (class hierarchy diagram): IterativeProcess, FunctionalIterator, NewtonZeroFinder, TrapezeIntegrator, SimpsonIntegrator, RombergIntegrator, EigenValues, MaximumLikelihoodFit, LeastSquareFit, GeneticAlgorithm, SimplexAlgorithm, HillClimbing, ClusterAnalyzer, InfiniteSeries, ContinuedFraction.
  • Implementation (flowchart): begin; set iterations := 0; compute or choose the initial object; compute the next object and set iterations := iterations + 1; repeat this step as long as iterations < maximumIterations and precision has not dropped below desiredPrecision; then perform clean-up and end.
  • Methods (the same flowchart annotated with methods): evaluate drives the whole loop; initializeIteration computes or chooses the initial object; evaluateIteration computes the next object and increments iterations; hasConverged tests precision < desiredPrecision; finalizeIteration performs the clean-up.
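    • A sketch of how the superclass method evaluate could drive this loop, reconstructed from the flowchart and the method names in the class diagram below (not necessarily the original code):
    evaluate
        "Template method: iterate until convergence or until maximumIterations is reached."
        iterations := 0.
        self initializeIteration.
        [ iterations := iterations + 1.
          precision := self evaluateIteration.
          self hasConverged or: [ iterations >= maximumIterations ] ] whileFalse.
        self finalizeIteration.
        ^result

    hasConverged
        ^precision < desiredPrecision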
  • Simple example of use
    | iterativeProcess result |
    iterativeProcess := <a subclass of IterativeProcess> new.
    result := iterativeProcess evaluate.
    iterativeProcess hasConverged
        ifFalse: [ <special case processing> ].
  • Advanced example of use
    | iterativeProcess result precision |
    iterativeProcess := <a subclass of DhbIterativeProcess> new.
    iterativeProcess
        desiredPrecision: 1.0e-6;
        maximumIterations: 25.
    result := iterativeProcess evaluate.
    iterativeProcess hasConverged
        ifTrue: [
            Transcript nextPutAll: 'Result obtained after '.
            iterativeProcess iterations printOn: Transcript.
            Transcript nextPutAll: ' iterations. Attained precision is '.
            iterativeProcess precision printOn: Transcript ]
        ifFalse: [ Transcript nextPutAll: 'Process did not converge' ].
    Transcript cr.
  • Class Diagram: IterativeProcess. Methods: evaluate, initializeIteration, hasConverged, evaluateIteration, finalizeIteration, precisionOf:relativeTo:. Instance variables: desiredPrecision (g,s), maximumIterations (g,s), precision (g), iterations (g), result (g).
  • Case of Numerical Result
    • Computation of precision should be made relative to the result.
    • General method:
    precisionOf: aNumber1 relativeTo: aNumber2
        "aNumber1 is the absolute precision, aNumber2 is the result"
        ^aNumber2 > DhbFloatingPointMachine default defaultNumericalPrecision
            ifTrue: [ aNumber1 / aNumber2 ]
            ifFalse: [ aNumber1 ]
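    • A numeric illustration of the method above, as it would be called from within an iterative process:
    self precisionOf: 1.0e-9 relativeTo: 1000.0.    "answers 1.0e-12, the relative precision"
    self precisionOf: 1.0e-9 relativeTo: 0.0.       "answers 1.0e-9 unchanged: the result is too small to normalize by"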
  • Example of use
    | iterativeProcess result |
    iterativeProcess := <a subclass of FunctionalIterator>
                            function: (DhbPolynomial new: #(1 2 3)).
    result := iterativeProcess evaluate.
    iterativeProcess hasConverged
        ifFalse: [ <special case processing> ].
  • Class Diagram: FunctionalIterator, a subclass of IterativeProcess. Methods: initializeIteration, computeInitialValues, relativePrecision, setFunction:. Instance variables: functionBlock (s). The function is any object understanding value: (AbstractFunction).
  • Road Map
    • Dealing with numerical data
    • Framework for iterative processes
    • Newton’s zero finding
    • Eigenvalues and eigenvectors
    • Cluster analysis
    • Conclusion
  • Newton’s Zero Finding
    • It finds a value x such that f(x) = 0.
    • More generally, it can be used to find a value x such that f(x) = c .
  • Newton’s Zero Finding
    • Use successive approximation formula xₙ₊₁ = xₙ − f(xₙ) / f′(xₙ); a code sketch follows this list.
    • Must supply an initial value
      • overload computeInitialValues
    • Should protect against f′(xₙ) = 0
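    • A sketch of the corresponding evaluateIteration, reconstructed from the formula above (instance variable names follow the class diagrams shown later; this is not necessarily the original code):
    evaluateIteration
        "One Newton step: x(n+1) := x(n) - f(x(n)) / f'(x(n))."
        | delta |
        delta := (functionBlock value: result) / (derivativeBlock value: result).
        result := result - delta.
        ^self precisionOf: delta abs relativeTo: result abs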
  • Newton’s Zero Finding (graph): the tangent y = f(x₁) + f′(x₁) · (x − x₁) at x₁ gives the next iterate x₂, and the process repeats to give x₃.
  • Example of use
    | zeroFinder result |
    zeroFinder := DhbNewtonZeroFinder
                      function: [ :x | x errorFunction - 0.9 ]
                      derivative: [ :x | x normal ].
    zeroFinder initialValue: 1.
    result := zeroFinder evaluate.
    zeroFinder hasConverged
        ifFalse: [ <special case processing> ].
  • Class Diagram: NewtonZeroFinder, a subclass of FunctionalIterator (itself a subclass of IterativeProcess). Methods: evaluateIteration, computeInitialValues (o), defaultDerivativeBlock, initialValue:, setFunction: (o). Instance variables: derivativeBlock (s). The function and its derivative are objects understanding value: (AbstractFunction).
  • Newton’s Zero Finding (graph): effect of a nearly 0 derivative during iterations, illustrated with the iterates x₁, x₂, x₃.
  • Road Map
    • Dealing with numerical data
    • Framework for iterative processes
    • Newton’s zero finding
    • Eigenvalues and eigenvectors
    • Cluster analysis
    • Conclusion
  • Eigenvalues & Eigenvectors
    • Finds the solution to the problem M · v = λ · v, where M is a square matrix and v a non-zero vector.
    • λ is the eigenvalue.
    • v is the eigenvector.
  • Eigenvalues & Eigenvectors
    • Example of uses:
      • Structural analysis (vibrations).
      • Correlation analysis (statistical analysis and data mining).
  • Eigenvalues & Eigenvectors
    • We use the Jacobi method, which is applicable to symmetric matrices only.
    • A 2 x 2 rotation matrix is applied on the matrix to annihilate the largest off-diagonal element.
    • This process is repeated until all off-diagonal elements are negligible.
  • Eigenvalues & Eigenvectors
    • When M is a symmetric matrix, there exists an orthogonal matrix O such that Oᵀ · M · O is diagonal.
    • The diagonal elements are the eigenvalues.
    • The columns of the matrix O are the eigenvectors of the matrix M .
  • Eigenvalues & Eigenvectors
    • Let i and j be the indices of the largest off-diagonal element.
    • The rotation matrix R₁ is chosen such that (R₁ᵀ · M · R₁)ᵢⱼ is 0.
    • The matrix at the next step is M₂ = R₁ᵀ · M · R₁.
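    • For reference, the standard Jacobi rotation (not spelled out on the slides) chooses the rotation angle θ so that tan 2θ = 2 · Mᵢⱼ / (Mᵢᵢ − Mⱼⱼ); up to sign conventions, R₁ is the identity matrix except for (R₁)ᵢᵢ = (R₁)ⱼⱼ = cos θ and (R₁)ᵢⱼ = −(R₁)ⱼᵢ = sin θ.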
  • Eigenvalues & Eigenvectors
    • The process is repeated on the matrix M₂.
    • One can prove that, at each step, one has max |(Mₙ)ᵢⱼ| ≤ max |(Mₙ₋₁)ᵢⱼ| for all i ≠ j.
    • Thus, the algorithm must converge.
    • The matrix O is then the product of all matrices Rₙ.
  • Eigenvalues & Eigenvectors
    • The precision is the absolute value of the largest off-diagonal element.
    • To finalize, eigenvalues are sorted in decreasing order of absolute values.
    • There are two results from this algorithm:
      • A vector containing the eigenvalues,
      • The matrix O containing the corresponding eigenvectors.
  • Example of use
    | matrix jacobi eigenvalues transform |
    matrix := DhbMatrix rows: #( (3 -2 0) (-2 7 1) (0 1 5) ).
    jacobi := DhbJacobiTransformation new: matrix.
    jacobi evaluate.
    jacobi hasConverged
        ifTrue: [
            eigenvalues := jacobi result.
            transform := jacobi transform ]
        ifFalse: [ <special case processing> ].
  • Class Diagram: JacobiTransformation, a subclass of IterativeProcess. Methods: evaluateIteration, largestOffDiagonalIndices, transformAt:and:, finalizeIteration, sortEigenValues, exchangeAt:, printOn:. Instance variables: lowerRows (i), transform (g).
  • Road Map
    • Dealing with numerical data
    • Framework for iterative processes
    • Newton’s zero finding
    • Eigenvalues and eigenvectors
    • Cluster analysis
    • Conclusion
  • Cluster Analysis
    • Is one of the techniques of data mining.
    • Allows extraction of unsuspected similarity patterns from a large collection of objects.
    • Used by marketing strategists (banks).
    • Example: BT used cluster analysis to locate a long distance phone scam.
  • Cluster Analysis
    • Each object is represented by a vector of numerical values.
    • A metric is defined to compute a similarity factor using the vector representing the object.
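    • A minimal sketch of such a metric in plain Smalltalk, here the squared Euclidean distance between two equally sized arrays (an illustration of the idea, not the library's DhbEuclidianMetric class):
    | squaredDistance |
    squaredDistance := [ :a :b |
        (1 to: a size) inject: 0 into: [ :sum :i |
            sum + ((a at: i) - (b at: i)) squared ] ].
    squaredDistance value: #(1 2 3) value: #(4 6 3).    "answers 25"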
  • Cluster Analysis
    • At first, the centers of the clusters are defined by picking random objects.
    • Objects are clustered to the nearest center.
    • A new center is computed for each cluster.
    • Iteration continues until every object remains in the cluster of the preceding iteration (a sketch of one pass follows this list).
    • Empty clusters are discarded.
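    • A rough sketch of one pass of this scheme in plain Smalltalk (literal arrays of coordinates and hypothetical names; the library's ClusterFinder repeats such a pass until no object changes cluster, and discards empty clusters):
    | points centers distance assignments |
    points := #( (1 1) (1 2) (8 8) (9 8) ).
    centers := #( (1 1) (9 8) ).
    distance := [ :a :b |
        (1 to: a size) inject: 0 into: [ :sum :i |
            sum + ((a at: i) - (b at: i)) squared ] ].
    "cluster each object with the nearest center"
    assignments := points collect: [ :p |
        (1 to: centers size) inject: 1 into: [ :best :i |
            (distance value: p value: (centers at: i))
                    < (distance value: p value: (centers at: best))
                ifTrue: [ i ] ifFalse: [ best ] ] ].
    "recompute each center as the mean of its cluster"
    centers := (1 to: centers size) collect: [ :k |
        | members |
        members := OrderedCollection new.
        points with: assignments do: [ :p :a | a = k ifTrue: [ members add: p ] ].
        members isEmpty
            ifTrue: [ centers at: k ]    "an empty cluster would be discarded"
            ifFalse: [ (1 to: points first size) collect: [ :d |
                (members inject: 0 into: [ :sum :p | sum + (p at: d) ]) / members size ] ] ]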
  • Cluster Analysis
    • The similarity metric is essential to the performance of cluster analysis.
    • Depending on the problem, a different metric may be used.
    • Thus, the metric is implemented with a Strategy pattern.
  • Cluster Analysis
    • The precision is the number of objects that changed clusters at each iteration.
    • The result is a grouping of the initial objects, that is, a set of clusters.
    • Some clusters may be better than others.
  • Example of use
    | server finder |
    server := <a subclass of DhbAbstractClusterDataServer> new.
    finder := DhbClusterFinder
                  new: 15
                  server: server
                  type: DhbEuclidianMetric.
    finder evaluate.
    finder tally
  • Class Diagram: ClusterFinder, a subclass of IterativeProcess, with methods evaluateIteration, accumulate:, indexOfNearestCluster:, initializeIteration, finalizeIteration and instance variables dataServer (i), clusters (i,g). Cluster, with methods accumulate:, distance:, isEmpty, reset and instance variables accumulator (i), metric (i), count (g). ClusterDataServer, with methods openDataStream, closeDataStream, dimension, seedClusterIn:, accumulateDataIn:.
  • Conclusion
    • When used with care, numerical data can be processed with Smalltalk.
    • OO programming can take advantage of the mathematical structure.
    • Programming numerical algorithms can take advantage of OO programming.
  • Questions? To reach me: [email_address]