2. What is Scientific Computing?
• What is science?
• When is something scientific?
Faculty of Computer Science, University of Indonesia 2
3. What is Scientific Computing?
• Design and analysis of algorithms for solving mathematical problems
arising in science and engineering numerically
• Also called numerical analysis or computational mathematics
4. The importance of scientific computing
so far ...
• Predictive simulation of natural phenomena
5. The importance of scientific computing
so far ...
• Virtual prototyping of engineering designs
6. The importance of scientific computing
so far ...
• Analyzing data
7. The importance of scientific computing...
8. Mathematical problems
• Given mathematical relationship y = f (x), typical problems include
• Evaluate a function: compute output y for given input x
• Solve an equation: find input x that produces given output y
• Optimize: find x that yields extreme value of y over given domain
• Solution obtained may only approximate that of original problem
• Our goal is to estimate accuracy and ensure that it suffices
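The three problem types can be sketched in a few lines of Python (the sample function, the bisection interval, and the tolerances here are illustrative choices, not from the slides):

```python
import math

def f(x):
    return x * x - 2.0  # a sample relationship y = f(x)

# Evaluate: compute the output y for a given input x
y = f(1.5)  # 1.5^2 - 2 = 0.25

# Solve: find x with f(x) = 0 by bisection on [1, 2]
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid          # root lies in the left half
    else:
        lo = mid          # root lies in the right half
root = 0.5 * (lo + hi)    # approximates sqrt(2)

# Optimize: find x minimizing f on [0, 2] by a crude grid search
xs = [i / 1000 for i in range(2001)]
xmin = min(xs, key=f)     # x^2 - 2 is smallest at x = 0 on this domain
```

The computed `root` only approximates sqrt(2); estimating how accurate it is (and whether that suffices) is exactly the stated goal.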
9. How it works
• Theory + Data -> (Numerical) Model -> (Numerical) Simulation ->
Evaluation
• Toy Example:
• Theory:
• Human population in Indonesia grows exponentially: P(t) = exp(α + β · t)
• Data:
• 1990: 181,436,821
• 1995: 196,957,849
• Model:
• α ≈ −13.65 and β ≈ 0.0164, fitted so the model passes through the two data points
• Simulate:
• Human population in 2000 based on the model:
P(2000) = exp(α + β · 2000) ≈ 213,615,024 (using the fitted values at full precision)
• Evaluate:
• Actual population in 2000: 211,540,429
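The whole Theory → Model → Simulate → Evaluate loop can be reproduced in a short Python sketch (variable names are mine; the data are the slide's):

```python
import math

# Data: two census points
t1, p1 = 1990, 181_436_821
t2, p2 = 1995, 196_957_849

# Model: fit P(t) = exp(alpha + beta*t) through both points
beta = (math.log(p2) - math.log(p1)) / (t2 - t1)   # ~ 0.0164
alpha = math.log(p1) - beta * t1                   # ~ -13.65

# Simulate: predict the year-2000 population
pred = math.exp(alpha + beta * 2000)               # ~ 2.14e8

# Evaluate: compare against the actual 2000 population
actual = 211_540_429
rel_err = abs(pred - actual) / actual              # about 1%
```

The prediction lands within roughly one percent of the actual census figure, which is the "not bad" evaluation the slide reports.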
10. Issues in scientific computing
• Computational Accuracy
• To measure accuracy:
• Error analysis
• Computational efficiency
• Time efficiency:
• Time complexity analysis of the algorithm
• Actual running time
• Space efficiency:
• Space complexity analysis of the algorithm
• Actual memory allocation
11. Error Analysis
• What is error?
12. Error subdivision
• Error from different perspectives:
• Source: data error and computational error
• What is known / evaluated: forward error and backward error
• Measurement context: absolute error and relative error
13. Data and computational error
• Computing F(x) using imperfect data x̂ and an imperfect algorithm F̂:
• Data error is attributed to x̂
• Computational error is attributed to F̂
• Total error = F̂(x̂) − F(x)
= [F̂(x̂) − F(x̂)] + [F(x̂) − F(x)]
= computational error + propagated data error
• Propagated data error has nothing to do with the algorithm.
• Example:
• Computing sin(π/1000):
• Instead of π, use 3 (data error), and instead of sin(x), use x (computational error).
• Computed result: 3/1000 = 0.003. Actual result: sin(π/1000) = 0.00314158748… Not so bad!
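A quick Python check of this decomposition (the names F, F_hat, and so on are illustrative):

```python
import math

x_true = math.pi / 1000          # exact input
x_hat = 3 / 1000                 # data error: 3 is used instead of pi

def F(t):                        # exact function
    return math.sin(t)

def F_hat(t):                    # approximate algorithm: sin(t) ~ t near 0
    return t

total_error = F_hat(x_hat) - F(x_true)           # ~ -1.416e-4
computational_error = F_hat(x_hat) - F(x_hat)    # ~ 4.5e-9, tiny
propagated_data_error = F(x_hat) - F(x_true)     # dominates the total

# The decomposition total = computational + propagated holds (up to roundoff):
assert abs(total_error - (computational_error + propagated_data_error)) < 1e-15
```

Here almost all of the error comes from using 3 in place of π; the sin(t) ≈ t approximation is excellent at this scale.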
14. Forward and backward error (Important!)
• Suppose we want to compute y = f(x), where f : ℝ → ℝ, but we obtain an approximate value ŷ
• Forward error: difference between the computed result ŷ and the true output y:
Δy = ŷ − y
• Backward error: difference between the actual input x and the input x̂ for which the computed result is exactly correct (i.e., f(x̂) = ŷ):
Δx = x̂ − x
• Don't be fooled by the variable names x and y. Focus on what you have and what you want to compute.
• Backward error is especially useful when we do not know the actual solution to our problem.
• Example:
• Want to compute y = √2 (i.e., f(x) = √x with input x = 2)
• Exact solution: y = √2 ≈ 1.41421…
• Suppose the computed solution is ŷ = 1.4
• Forward error: ŷ − y = 1.4 − 1.41421… ≈ −1.4 × 10⁻²
• Backward error: ŷ = 1.4 is the exact square root of x̂ = 1.4² = 1.96, so Δx = 1.96 − 2 = −4 × 10⁻²
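The √2 example, worked in Python (a sketch; the "computed solution" 1.4 is simply posited, as on the slide):

```python
import math

x = 2.0
y_true = math.sqrt(x)       # 1.41421356...
y_hat = 1.4                 # the (approximate) computed solution

forward_error = y_hat - y_true    # ~ -1.4e-2

# x_hat is the input for which y_hat is exactly right: sqrt(x_hat) = 1.4
x_hat = y_hat ** 2                # 1.96
backward_error = x_hat - x        # ~ -4e-2
```

Small backward error says: ŷ is the exact answer to a nearby problem, which we can check without ever knowing √2.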
15. Absolute and relative error
• Let x̂ be the computed version of the actual value x ≠ 0.
• We define two errors (absolute and relative error):
• Absolute error: x̂ − x.
• Relative error: (x̂ − x)/x.
• Useful: if δ is the relative error of x̂ from x, then x̂ = x(1 + δ).
• A typical problem in applied mathematics has the actual value x unknown (e.g., the exact solution of an equation from physics)! Hence, the error can only be estimated.
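A minimal Python illustration (the values 8.0 and 9.0 are arbitrary, chosen so all quantities are exact in binary):

```python
x_true = 8.0       # actual value
x_hat = 9.0        # computed value

abs_err = x_hat - x_true                  # 1.0
rel_err = (x_hat - x_true) / x_true       # 0.125, i.e. 12.5%

# The handy identity: x_hat = x(1 + delta) with delta = rel_err
assert x_hat == x_true * (1 + rel_err)
```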
16. Absolute and relative error
• If a measurement of a physical quantity has absolute error 1 mm, is it
a good measurement or not? Not enough information!
• If you were measuring the thickness of a book, it would not be such a good measurement.
• If you were measuring the distance between Jakarta and Bandung, it would be a pretty good measurement!
• Relative error gives you better insight into the measurement you are performing.
17. Condition number
• Given a function f, the condition number of f at the point x is given by
κ_f(x) = |x f′(x) / f(x)|.
18. Condition number
• Relative error at the input: (x̂ − x)/x
• Relative error at the output: (f(x̂) − f(x))/f(x)
• How big does the relative error at the output become, given the relative error at the input? Take the absolute value of the ratio!
• |(f(x̂) − f(x))/f(x)| ÷ |(x̂ − x)/x| = |x/f(x)| · |(f(x̂) − f(x))/(x̂ − x)| ≈ |x/f(x)| · |f′(x)| = κ_f(x)
19. Condition number
• In words: κ_f(x) is the amplification factor from the relative error at the input to the relative error at the output, i.e.
κ_f(x) ≈ |relative error at the output| / |relative error at the input|
20. Sensitivity
• A large κ_f(x) means that f is sensitive, or ill-conditioned, at x. Otherwise, f is well-conditioned at x.
• Example:
• For f(x) = x², we can compute κ_f(x) = 2 for every x ≠ 0.
• For f(x) = eˣ, we can compute κ_f(x) = |x| for every x.
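Both examples can be checked numerically with a small Python helper (`cond` is my name for it; the perturbation size at the end is an arbitrary illustrative choice):

```python
import math

def cond(f, fprime, x):
    """Condition number kappa_f(x) = |x * f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

# f(x) = x^2: kappa = |x * 2x / x^2| = 2 for any x != 0
assert cond(lambda t: t * t, lambda t: 2 * t, 3.7) == 2.0

# f(x) = e^x: kappa = |x * e^x / e^x| = |x|
assert abs(cond(math.exp, math.exp, 5.0) - 5.0) < 1e-12

# Sanity check against the amplification-factor interpretation:
x, x_hat = 5.0, 5.0 * (1 + 1e-9)
in_rel = (x_hat - x) / x
out_rel = (math.exp(x_hat) - math.exp(x)) / math.exp(x)
assert abs(abs(out_rel / in_rel) - 5.0) < 1e-4   # ratio is ~ kappa = 5
```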
21. Stability
• An algorithm is stable if the result it produces is relatively insensitive to perturbations during the computation
• For a stable algorithm, the effect of computational error is no worse than the effect of a small data error in the input
22. Accuracy
Accuracy: closeness of the computed solution to the true solution (i.e., relative forward error)
Accuracy depends on the conditioning of the problem as well as the stability of the algorithm
Inaccuracy can result from
• applying a stable algorithm to an ill-conditioned problem
• applying an unstable algorithm to a well-conditioned problem
• applying an unstable algorithm to an ill-conditioned problem (yikes!)
• Applying a stable algorithm to a well-conditioned problem yields an accurate solution
23. Computer Arithmetic
• How does a computer do arithmetic?
24. How do computers store real numbers?
• Real numbers cannot, in general, be represented exactly, since that would require infinitely many bits!
• 𝜋 = 3.1415926535 8979323846 2643383279 5028841971
6939937510 5820974944 ...
• Today's computers use a system called floating point.
• Not all numbers can be represented; most are only approximated by rounding or truncation.
• The standard arithmetic operations (+, −, ×, /, ^) are also not exact, but their errors can still be understood.
25. Floating point system
x = ±(d₀ + d₁/β + d₂/β² + ⋯ + d_{p−1}/β^{p−1}) · β^E
where
L ≤ E ≤ U and 0 ≤ dᵢ ≤ β − 1
The digit string d₀.d₁d₂…d_{p−1} is called the mantissa.
The number E is called the exponent.
Parameters (depending on your machine's architecture):
• Base β
• Precision p
• Exponent range [L, U]
26. Floating point system
Machine                  β    p    L       U
IEEE Single Precision    2    24   −126    127
IEEE Double Precision    2    53   −1022   1023
Cray                     2    48   −16383  16384
HP Calculator            10   12   −499    499
IBM mainframe            16   6    −64     63
Most computer systems now use β = 2 and support the IEEE floating-point standard.
27. Floating Point
• Example: IEEE double precision (64 bits): 1 sign bit, 11 exponent bits, 52 mantissa bits
• A floating-point system is normalized if the leading digit d₀ is always nonzero, unless the number represented is zero.
• Reasons for normalization:
• the representation of each number is unique
• no digits are wasted on leading zeros
• the leading bit need not be stored (in a binary system)
• Most floating-point systems today are normalized.
• For β = 2, the leading digit of a normalized number is always 1, so it does not need to be stored. We then say the system has p = 53 bits (with 1 bit implicit).
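The 1/11/52-bit layout can be inspected directly with Python's `struct` module (`double_fields` is an illustrative helper, not a standard function):

```python
import struct

def double_fields(x):
    """Split an IEEE-754 double into its sign, exponent, and fraction bit fields."""
    bits = int.from_bytes(struct.pack(">d", x), "big")
    sign = bits >> 63                    # 1 bit
    exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)    # 52 stored mantissa bits (leading 1 implicit)
    return sign, exponent, fraction

# 1.0 = +1.00...0 * 2^0: biased exponent 0 + 1023, all fraction bits zero
assert double_fields(1.0) == (0, 1023, 0)
# -2.5 = -(1.01)_2 * 2^1: sign 1, biased exponent 1 + 1023, fraction bits 01 then 50 zeros
assert double_fields(-2.5) == (1, 1024, 1 << 50)
```

Note that the stored fraction for −2.5 omits the leading 1, exactly as the slide describes.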
28. Rounding
• If a real number x is not exactly representable, then it is approximated by a "nearby" floating-point number, which we denote by fl(x). This process is called rounding, and the error introduced is called rounding error.
• Two commonly used rounding rules:
1. chop: truncate the base-β expansion of x after the (p − 1)-st digit;
2. round to nearest: fl(x) is the floating-point number nearest to x, choosing the one whose last stored digit is even in case of a tie.
Round to nearest is the most accurate, and is the default rounding rule in IEEE systems.
29. Machine epsilon
• Given a floating-point system and a rounding rule, one can quantify the accuracy of the system using the machine epsilon ε_mach (other terms: machine precision, unit round-off).
• There are several definitions in the textbook:
• The distance between 1 and the smallest normalized floating-point number greater than 1
• The smallest ε such that fl(1 + ε) > 1
• Unfortunately, these definitions give different values of ε_mach (though they differ only by a factor of 1/2).
• Nevertheless, they all aim to measure the accuracy of a floating-point system in representing real numbers.
30. Machine epsilon
• In this course, we use the following value of ε_mach:
• Using chopping: ε_mach = β^{1−p}
• Using rounding to nearest: ε_mach = (1/2) β^{1−p}
• For this value of ε_mach, we have
|fl(x) − x| / |x| ≤ ε_mach
That is, ε_mach bounds the relative error of the floating-point representation of a real number.
• Example:
• IEEE double precision (p = 53): ε_mach = (1/2) · 2^{1−53} = 2^{−53}
• IEEE single precision (p = 24): ε_mach = (1/2) · 2^{1−24} = 2^{−24}
• Note that IEEE systems use rounding to nearest.
• Note that if we use the other definition of ε_mach (the smallest ε such that fl(1 + ε) > 1), then for IEEE double precision we get ε_mach = 2^{−52}.
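These values are easy to verify in Python, whose `float` is an IEEE double:

```python
import sys

eps_mach = 2.0 ** -53    # course value for IEEE double (round to nearest, p = 53)

# 1 + 2^-53 is exactly halfway between 1 and the next float;
# the tie goes to the even mantissa, which is 1.0 itself:
assert 1.0 + 2.0 ** -53 == 1.0
assert 1.0 + 2.0 ** -52 > 1.0

# Python exposes the alternative definition (the gap from 1 to the next float):
assert sys.float_info.epsilon == 2.0 ** -52
```

This shows both textbook definitions side by side: the representation-error bound 2⁻⁵³ and the gap-based value 2⁻⁵², differing by exactly the factor 1/2.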
31. Properties of floating point
• Not equally spaced: the numbers get sparser farther away from 0.
• Finite.
• Not all numbers can be represented. The numbers that can be represented are called machine numbers.
• The number of real numbers that can be represented is 2(β − 1)β^{p−1}(U − L + 1) + 1
• Discrete.
• Between two adjacent floating-point numbers there is no other floating-point number.
• Smallest positive normalized number: UFL = β^L.
• Largest positive normalized number: OFL = β^{U+1}(1 − β^{−p}).
32. Properties of floating point
• Be careful: even though both are small numbers, ε_mach ≠ UFL:
• ε_mach depends on the precision p
• UFL depends on the minimum exponent L
• In most practical systems,
0 < UFL < ε_mach < OFL
33. Example
• All 25 numbers that can be represented by the floating-point system with parameters β = 2, p = 3, L = −1, U = 1:
• OFL = (1.11)₂ × 2¹ = 3.5
• UFL = (1.00)₂ × 2⁻¹ = 0.5
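Enumerating this toy system in Python confirms the count formula and the UFL/OFL values (a sketch; every value here is exactly representable in a double, so the comparisons are exact):

```python
beta, p, L, U = 2, 3, -1, 1

numbers = {0.0}
for sign in (+1, -1):
    for E in range(L, U + 1):
        # normalized mantissas 1.d1d2 with each digit in {0, ..., beta-1}
        for d1 in range(beta):
            for d2 in range(beta):
                m = 1 + d1 / beta + d2 / beta ** 2
                numbers.add(sign * m * beta ** E)

assert len(numbers) == 25                        # 2(b-1)b^{p-1}(U-L+1) + 1 = 25
assert max(numbers) == 3.5                       # OFL = 2^2 (1 - 2^-3)
assert min(n for n in numbers if n > 0) == 0.5   # UFL = 2^-1
```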
34. Special numbers
• The IEEE floating-point standard provides special values to indicate two exceptional situations
• Inf, which stands for “infinity,” results from dividing a finite number by
zero, such as 1/0
• NaN, which stands for “not a number,” results from undefined or
indeterminate operations such as 0/0, 0 ∗ Inf, or Inf/Inf
• Inf and NaN are implemented in IEEE arithmetic through special
reserved values of exponent field
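Python floats mostly follow IEEE here, so both special values are easy to produce (note the caveat about division in the comments):

```python
inf = 1e308 * 10      # overflow: the multiplication yields Inf
assert inf == float("inf")

nan = inf - inf       # indeterminate operation: Inf - Inf gives NaN
assert nan != nan     # NaN compares unequal even to itself

# Caveat: 1.0 / 0.0 raises ZeroDivisionError in Python rather than
# returning Inf, even though the IEEE hardware operation would give Inf.
```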
35. Floating point operations
• Floating point operations introduce rounding errors
• Addition and subtraction
• For addition and subtraction the mantissa has to be shifted until the exponents of the numbers
are equal (Potential loss of significant bits in the smaller number)
• Multiplication
• Mantissas have to be multiplied, yielding theoretically a new mantissa with 2𝑝 digits which has
to be rounded
• Division
• Quotient of mantissas can theoretically have an infinite number of digits which have to be
rounded
• Result of floating-point arithmetic operation may differ from result of
corresponding real arithmetic operation on same operands
• Overflow is usually more serious than underflow because there is no good
approximation to arbitrarily large magnitudes in floating-point system,
whereas zero is often reasonable approximation for arbitrarily small
magnitudes
• On many computer systems overflow is fatal, but an underflow may be
silently set to zero
36. Floating point representations and operations (example)
• Assume β = 2, p = 5 (four stored fraction bits plus one implicit leading bit), L = −7, U = 8, normalized:
±1.d₁d₂d₃d₄ · 2^e, −7 ≤ e ≤ 8.
• Let x = 4.5 and y = 0.92188
• What is fl(x)?
• The binary representation of x is 100.1
• Shift to get a 1 in front of the binary point (normalize): +1.001 · 2²
• No rounding needed: +1.0010 · 2²
• So fl(x) = 4.5, the same as x
• What is fl(y)?
• The binary representation of y is ≈ 0.111011
• Shift to normalize: +1.11011 · 2⁻¹
• Rounding is needed, and since the first discarded digit is 1, the retained part is incremented in the last place, giving +1.1110 · 2⁻¹
• Hence, fl(y) = 0.93750
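The rounding step can be simulated in Python with `math.frexp` (`fl` is an illustrative helper for the slide's toy system; it ignores the exponent limits):

```python
import math

def fl(x, t=4):
    """Round x to a 1.d1...dt binary mantissa (t stored fraction bits),
    round to nearest with ties to even, ignoring exponent limits."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)   # x = m * 2^e with 0.5 <= |m| < 1
    # scale the mantissa to an integer in [2^t, 2^(t+1)), round, and scale back
    return round(m * 2 ** (t + 1)) / 2 ** (t + 1) * 2 ** e

assert fl(4.5) == 4.5          # 100.1_2 = +1.0010 * 2^2 fits exactly
assert fl(0.92188) == 0.9375   # +1.11011... * 2^-1 rounds up to +1.1110 * 2^-1
```

Python's built-in `round` rounds halves to even, which conveniently matches the IEEE tie-breaking rule.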
37. Floating point representations and
operations (example)
• To work out the result of floating-point operations, we can work "theoretically" as if the computer could store more bits than it actually allocates, and then round at the end of the calculation.
• What is fl(x) + fl(y) in floating-point arithmetic?
• Shift so that both operands have the exponent of the larger number:
fl(x) = +1.0010 · 2²
fl(y) = +0.0011110 · 2²
• Add the two mantissas: 1.0010 + 0.0011110 = 1.0101110
• Theoretical result: +1.0101110 · 2²
• Already normalized: +1.0101110 · 2²
• Round to nearest: +1.0110 · 2² (= 5.5)
38. Floating point representations and
operations (example)
• To work out the result of floating-point operations, we can work "theoretically" as if the computer could store more bits than it actually allocates, and then round at the end of the calculation.
• What is fl(x) · fl(y) in floating-point arithmetic?
• Add the exponents: 2 + (−1) = 1
• Multiply the two mantissas: 1.0010 · 1.1110 = 10.00011100
• Theoretical result: 10.00011100 · 2¹
• Normalize: 1.000011100 · 2²
• Round to nearest: 1.0001 · 2² (= 4.25)
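A toy-system helper in Python reproduces both worked results, addition and multiplication, by computing exactly in double precision and rounding once at the end, as described above (`fl` is an illustrative helper, not a library function):

```python
import math

def fl(x, t=4):
    """Round x to t stored fraction bits (round to nearest, ties to even)."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)   # x = m * 2^e with 0.5 <= |m| < 1
    return round(m * 2 ** (t + 1)) / 2 ** (t + 1) * 2 ** e

x, y = fl(4.5), fl(0.92188)    # 4.5 and 0.9375, as on slide 36

assert fl(x + y) == 5.5        # +1.0110 * 2^2, matching slide 37
assert fl(x * y) == 4.25       # +1.0001 * 2^2, matching slide 38
```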
39. Example: Infinite Series
• In mathematics, the series Σ_{n=1}^{∞} 1/n (known as the harmonic series) diverges: the sum is infinite!
• If one implements the usual summation algorithm to compute this value in floating point, the result will converge.
• Why?
• Also, consider evaluating Σ_{n=1}^{10000} 1/n in two ways:
• 1/1 + 1/2 + 1/3 + ⋯ + 1/10000 (summing from 1/1 to 1/10000)
• 1/10000 + 1/9999 + ⋯ + 1/2 + 1/1 (summing from 1/10000 to 1/1)
• Which one do you think will give a better result?
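A low-precision simulator makes both effects visible (`fl` and the 12-bit precision are my illustrative choices; in full double precision the same effects exist but are harder to see):

```python
import math

def fl(x, t=12):
    """Round x to t stored fraction bits: a crude low-precision float simulator."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)
    return round(m * 2 ** (t + 1)) / 2 ** (t + 1) * 2 ** e

def harmonic_sum(ks, t=12):
    s = 0.0
    for k in ks:
        s = fl(s + fl(1.0 / k, t), t)   # round after every operation
    return s

forward = harmonic_sum(range(1, 10001))        # 1/1 + 1/2 + ... + 1/10000
backward = harmonic_sum(range(10000, 0, -1))   # 1/10000 + ... + 1/2 + 1/1
exact = math.fsum(1.0 / k for k in range(1, 10001))   # ~ 9.7876

# Summing the small terms first is markedly more accurate:
assert abs(backward - exact) < abs(forward - exact)

# And the "infinite" series stalls: once a term is below half an ulp
# of the running sum, adding it changes nothing at all.
assert fl(forward + fl(1.0 / 20000)) == forward
```

Forward summation loses the small terms against the large running sum; backward summation lets them accumulate before meeting the large partial sums. This is why the "infinite" harmonic series converges on a computer.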
40. Two issues in computer arithmetics
• Error propagation
• Cancellation (loss of significant digits)
41. Error propagation
• Errors can propagate!!! This can be dangerous in critical systems, e.g. the Ariane 5 rocket failure in 1996, which was caused by an arithmetic overflow error.
• Toy example:
• Say x̂ = x(1 + δ), where δ is the relative error, and we want to compute x².
• Using x̂ instead, we get
x̂² = x²(1 + δ)² = x²(1 + 2δ + δ²) ≈ x²(1 + 2δ)
So the relative error becomes twice as large!
What if we perform a more complex computation?
• We should control the error so that it does not blow up!
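A numeric check of the doubling (the values of x and δ are arbitrary):

```python
x = 10.0
delta = 1e-6                    # relative error of the input
x_hat = x * (1 + delta)

rel_out = (x_hat ** 2 - x ** 2) / x ** 2
# (1 + d)^2 = 1 + 2d + d^2, so squaring roughly doubles the relative error:
assert abs(rel_out - 2 * delta) < 1e-9
```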
42. Cancellation (loss of significant digits)
• A phenomenon in computing x − y where x ≈ y
• Example:
• x = 1.86394 × 10⁻² and y = 1.86284 × 10⁻² are accurate only up to, say, 3 digits
• Hence, the trailing parts 0.00394 × 10⁻² and 0.00284 × 10⁻² do not actually contain much information about x and y (they are not very significant!)
• But when you subtract x and y, what you get is the difference
x − y = 1.86394 × 10⁻² − 1.86284 × 10⁻² = 0.00110 × 10⁻² = 1.10000 × 10⁻⁵,
which is built entirely from those insignificant digits and probably does not tell you much about the true x − y anymore!
• One should avoid this, and usually there is a way to handle it.
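A classical Python illustration of total cancellation (it rewrites the subtraction using the identity 1 − cos x = 2 sin²(x/2), which also previews the first study case on the next slide):

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # cos(1e-8) rounds to exactly 1.0
stable = 2.0 * math.sin(x / 2) ** 2  # same quantity, no subtraction of near-equals

assert naive == 0.0                  # every significant digit was lost
# True value is ~ x^2/2 = 5e-17; the stable form recovers it to high accuracy:
assert abs(stable - x * x / 2) / (x * x / 2) < 1e-8
```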
43. Issue: Cancellation and loss of significant digits
• Study case: Explain how to compute f(x) = 1 − cos(x) for x ≈ 0 in a way that avoids loss of significant digits.
• Study case: There are two formulas to compute the variance of n data points in statistics:
VAR₁ = (1/n) Σ_{i=1}^{n} (xᵢ − x̄)²  vs  VAR₂ = (1/n) Σ_{i=1}^{n} xᵢ² − x̄²
Which one do you think is better in terms of accuracy?
• Study case: To compute the roots of ax² + bx + c = 0 with a ≠ 0, one can use the formula
x₁,₂ = (−b ± √(b² − 4ac)) / (2a).
For which a, b, c is this formula not good? Implement an alternative formula for each such case.
• Study case: Three numbers a, b, c > 0 are the sides of a triangle if they satisfy a + b > c, b + c > a, and c + a > b. One formula to compute the area of a triangle with sides a, b, c is
A = √(s(s − a)(s − b)(s − c)), where s = (1/2)(a + b + c).
This is not good if the triangle is almost flat, e.g. a + b ≈ c. Implement an alternative formula for this case.