Lec03: Floats


FLOATING POINT ARITHMETIC - ERROR ANALYSIS

• Brief review of floating point arithmetic

• Model of floating point arithmetic

• Notation, backward and forward errors

• Read: Section 2.7 of text.

3-1 GvL4 2.7; GvL3 2.4; Ortega 9.2; Heath 1.3; TB 13-15 – Float


Roundoff errors and floating-point arithmetic

➤ The basic problem: The set A of all representable numbers on a given machine is finite, but we would like to use this set to perform the standard arithmetic operations (+, *, -, /) on an infinite set. The usual rules of algebra are no longer satisfied, since the results of operations are rounded.


➤ Basic algebra breaks down in floating point arithmetic.

Example: In floating point arithmetic,

a + (b + c) ≠ (a + b) + c

- Matlab experiment: For 10,000 random numbers, find the number of instances in which the above occurs. Same thing for multiplication.
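The experiment suggested above can be sketched in Python instead of Matlab (the trial count and the uniform range [0, 1) are illustrative choices); it counts how often associativity does hold, which also tells you how often it fails:

```python
import random

random.seed(0)                      # fixed seed so the counts are reproducible
trials = 10_000
add_holds = 0
mul_holds = 0
for _ in range(trials):
    a, b, c = random.random(), random.random(), random.random()
    if a + (b + c) == (a + b) + c:  # associativity of rounded addition
        add_holds += 1
    if a * (b * c) == (a * b) * c:  # same question for multiplication
        mul_holds += 1

print(add_holds, mul_holds)  # both counts are typically below 10000
```

A noticeable fraction of triples fails for both operations, confirming that associativity is lost under rounding.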


Machine precision - machine epsilon

➤ When a number x is very small, there is a point when 1 + x == 1 in a machine sense: the computer no longer makes a difference between 1 and 1 + x.

Definition: the machine epsilon is the smallest number ε such that

fl(1 + ε) ≠ 1

This number is denoted by u, sometimes by eps.


- Matlab experiment: find the machine epsilon on your computer.
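A Python stand-in for this experiment (the halving loop restricts the search to powers of two, which is enough here since machine epsilon is a power of two):

```python
# Halve eps until 1 + eps/2 rounds back to exactly 1.0.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps)  # 2.220446049250313e-16 = 2**-52 in IEEE double precision
```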

➤ There have been many discussions of what conditions/rules should be satisfied by floating point arithmetic. The IEEE standard is a set of standards adopted by many CPU manufacturers.


Rule 1. fl(x) = x(1 + ε), where |ε| ≤ u

Rule 2. For all operations ⊙ (one of +, −, ∗, /),

fl(x ⊙ y) = (x ⊙ y)(1 + ε⊙), where |ε⊙| ≤ u

Rule 3. For the + and ∗ operations,

fl(a ⊙ b) = fl(b ⊙ a)

- Matlab experiment: Verify Rule 3 experimentally with 10,000 randomly generated numbers ai, bi.
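A Python sketch of the suggested experiment (the range of the random numbers is an arbitrary choice):

```python
import random

random.seed(0)
n = 10_000
violations = 0
for _ in range(n):
    a = random.uniform(-1e8, 1e8)
    b = random.uniform(-1e8, 1e8)
    # Rule 3: rounded + and * should be commutative.
    if a + b != b + a or a * b != b * a:
        violations += 1

print(violations)  # 0: IEEE + and * round a symmetric exact result
```

Unlike associativity, commutativity survives rounding: the exact sum and product are symmetric in their operands, so the rounded results agree.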


Example: Consider the sum of 3 numbers: y = a + b + c.

➤ Done as fl(fl(a + b) + c):

η  = fl(a + b) = (a + b)(1 + ε1)
y1 = fl(η + c) = (η + c)(1 + ε2)
   = [(a + b)(1 + ε1) + c](1 + ε2)
   = [(a + b + c) + (a + b)ε1](1 + ε2)
   = (a + b + c) [ 1 + ((a + b)/(a + b + c)) ε1 (1 + ε2) + ε2 ]

So, disregarding the high order term ε1ε2:

fl(fl(a + b) + c) = (a + b + c)(1 + ε3),   ε3 ≈ ((a + b)/(a + b + c)) ε1 + ε2


➤ If we redid the computation as y2 = fl(a + fl(b + c)) we would find

fl(a + fl(b + c)) = (a + b + c)(1 + ε4),   ε4 ≈ ((b + c)/(a + b + c)) ε1 + ε2

➤ The first error is amplified by the factor (a + b)/y in the first case and (b + c)/y in the second case.

➤ In order to sum n numbers more accurately, it is better to start with the small numbers first. [However, sorting before adding is not worth the cost!]

➤ But watch out if the numbers have mixed signs!
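A small Python illustration of why starting with the small numbers helps. The counts and magnitudes are our own choices, picked so that each tiny term falls below half an ulp of the large one:

```python
n = 10**6
tiny = 1e-16   # below half an ulp of 1.0, so 1.0 + tiny rounds back to 1.0

# Large number first: every tiny increment is rounded away.
large_first = 1.0
for _ in range(n):
    large_first += tiny

# Small numbers first: they accumulate among themselves before 1.0 joins.
small_first = 0.0
for _ in range(n):
    small_first += tiny
small_first += 1.0

print(large_first)  # exactly 1.0 -- all n increments were lost
print(small_first)  # about 1.0000000001
```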


The absolute value notation

➤ For a given vector x, |x| is the vector with components |xi|, i.e., |x| is the component-wise absolute value of x.

➤ Similarly for matrices:

|A| = {|aij|},  i = 1, ..., m;  j = 1, ..., n

➤ Obvious result: the basic inequality

|fl(aij) − aij| ≤ u |aij|

translates into

fl(A) = A + E  with  |E| ≤ u |A|

➤ A ≤ B means aij ≤ bij for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.


Error Analysis: Inner product

➤ Inner products are in the innermost parts of many calculations. Their analysis is important.

Lemma: If |δi| ≤ u and nu < 1, then

∏_{i=1}^{n} (1 + δi) = 1 + θn,  where  |θn| ≤ nu / (1 − nu)

➤ Common notation: γn ≡ nu / (1 − nu)

- Prove the lemma. [Hint: use induction.]


➤ We will use the following result, derived from the previous lemma:

Lemma: If |δi| ≤ u and nu < .01, then

∏_{i=1}^{n} (1 + δi) = 1 + γn,  where  |γn| ≤ 1.01 nu

➤ The previous sum of numbers can be written

fl(a + b + c) = a(1 + ε1)(1 + ε2) + b(1 + ε1)(1 + ε2) + c(1 + ε2)
             = a(1 + γ1) + b(1 + γ2) + c(1 + γ3)
             = exact sum of slightly perturbed inputs,

where all γ's satisfy |γ| ≤ 1.01 nu (here n = 3).

➤ This implies the forward bound:

|fl(a + b + c) − (a + b + c)| ≤ |a γ1| + |b γ2| + |c γ3|.


Main result on inner products:

➤ Backward error expression:

fl(xᵀy) = [x .∗ (1 + dx)]ᵀ [y .∗ (1 + dy)]

where ‖d•‖∞ ≤ 1.01 nu for • = x, y.

➤ One can show that the equality remains valid even if one of dx, dy is absent.

➤ Forward error expression:

|fl(xᵀy) − xᵀy| ≤ γn |x|ᵀ|y|

where 0 ≤ γn ≤ 1.01 nu.

➤ Here |x| is the elementwise absolute value and .∗ denotes elementwise multiplication.

➤ The above assumes nu ≤ .01. For u = 2.0 × 10⁻¹⁶, this holds for n ≤ 4.5 × 10¹³.
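The forward bound can be checked numerically in Python; the exact inner product is obtained with rational arithmetic (every float converts exactly to a Fraction). The vector length and entry range are arbitrary choices, and u = 2⁻⁵³ is the unit roundoff of IEEE double precision:

```python
import random
from fractions import Fraction

random.seed(0)
n = 1000
u = 2.0 ** -53  # unit roundoff of IEEE double precision

x = [random.uniform(-1.0, 1.0) for _ in range(n)]
y = [random.uniform(-1.0, 1.0) for _ in range(n)]

# Rounded inner product, accumulated left to right.
fl_dot = 0.0
for xi, yi in zip(x, y):
    fl_dot += xi * yi

# Exact inner product via rationals.
exact = sum(Fraction(xi) * Fraction(yi) for xi, yi in zip(x, y))
err = abs(Fraction(fl_dot) - exact)

# The forward bound gamma_n |x|^T |y|, with gamma_n <= 1.01 n u.
bound = 1.01 * n * u * sum(abs(xi) * abs(yi) for xi, yi in zip(x, y))
print(float(err) <= bound)  # True: the bound holds comfortably
```

In practice the observed error is far below the bound, since the individual rounding errors partially cancel.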


➤ A consequence of the lemma:

|fl(A ∗ B) − A ∗ B| ≤ γn |A| ∗ |B|

➤ Another, less precise, way to write the result is

|fl(xᵀy) − xᵀy| ≤ n u |x|ᵀ|y| + O(u²)


- Assume you use single precision, for which u = 2 × 10⁻⁶. What is the largest n for which γn ≤ 1.01 nu holds? Any conclusions for the use of single precision arithmetic?

- What does the main result on inner products imply for the case y = x? [Contrast the relative accuracy you get in this case vs. the general case y ≠ x.]


Backward and forward errors

➤ Assume the approximation ŷ to y = alg(x) is computed by some algorithm with arithmetic precision ε. One possible analysis: find an upper bound for the forward error

|Δy| = |ŷ − y|

➤ This is not always easy.

Alternative question: find the equivalent perturbation of the initial data x which produces the result ŷ. In other words, for what Δx do we have

alg(x + Δx) = ŷ

➤ The value of |Δx| is called the backward error. An analysis to find an upper bound for |Δx| is called backward error analysis.
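A tiny scalar illustration of the distinction (our own example, not from the slides), using the square root:

```python
import math

x = 2.0
y_hat = math.sqrt(x)       # computed approximation to y = sqrt(x)

# The forward error would compare y_hat with the exact sqrt(2), which is
# irrational.  The backward error instead asks: for what dx does
# sqrt(x + dx) equal y_hat exactly?  Squaring gives dx = y_hat^2 - x
# (evaluated here in floating point, so this is only a close estimate).
dx = y_hat * y_hat - x
print(abs(dx) / abs(x))    # relative backward error, on the order of u
```

So the computed square root is the exact square root of a number differing from x only in its last bits: a very small backward error.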


Example:

A = [ a  b ]      B = [ d  e ]
    [ 0  c ]          [ 0  f ]

Consider the product:

fl(A·B) = [ ad(1 + ε1)   [ae(1 + ε2) + bf(1 + ε3)](1 + ε4) ]
          [ 0            cf(1 + ε5)                        ]

with |εi| ≤ u for i = 1, ..., 5. The result can be written as:

[ a   b(1 + ε3)(1 + ε4) ] [ d(1 + ε1)   e(1 + ε2)(1 + ε4) ]
[ 0   c(1 + ε5)         ] [ 0           f                 ]

➤ So fl(A·B) = (A + EA)(B + EB).

➤ The backward errors EA, EB satisfy:

|EA| ≤ 2u |A| + O(u²) ;  |EB| ≤ 2u |B| + O(u²)


➤ When solving Ax = b by Gaussian elimination, we will see that a bound on ‖ex‖ such that

A(x_computed + ex) = b

holds exactly is much harder to find than bounds on ‖EA‖, ‖eb‖ such that

(A + EA) x_computed = b + eb

holds exactly.

Note: In many instances backward errors are more meaningful than forward errors: if the initial data is accurate only to 4 digits, for example, then my algorithm for computing x need not guarantee a backward error of less than 10⁻¹⁰; a backward error of order 10⁻⁴ is acceptable.


- Show that for any x, y there exist Δx, Δy such that

fl(xᵀy) = (x + Δx)ᵀy,  with |Δx| ≤ γn |x|
fl(xᵀy) = xᵀ(y + Δy),  with |Δy| ≤ γn |y|

- (Continuation) Let A be an m × n matrix, x an n-vector, and y = Ax. Show that there exists a matrix ΔA such that

fl(y) = (A + ΔA)x,  with |ΔA| ≤ γn |A|

- (Continuation) From the above, derive a result about a column of the product of two matrices A and B. Does a similar result hold for the product AB as a whole?


Supplemental notes: Floating Point Arithmetic

In most computing systems, real numbers are represented in two parts: a mantissa and an exponent. If the representation is in base β, then:

x = ±(.d1 d2 · · · dm)_β × β^e

➤ .d1 d2 · · · dm is a fraction in the base-β representation

➤ e is an integer; it can be negative, positive, or zero.

➤ Generally the form is normalized, in that d1 ≠ 0.


Example: In base 10 (for illustration):

1. 1000.12345 can be written as 0.100012345₁₀ × 10⁴

2. 0.000812345 can be written as 0.812345₁₀ × 10⁻³

➤ The problem with floating point arithmetic: we have to live with limited precision.

Example: Assume that we have only 5 digits of accuracy in the mantissa and 2 digits for the exponent (excluding signs):

.d1 d2 d3 d4 d5 | e1 e2


Try to add 1000.2 = .10002e+04 and 1.07 = .10700e+01:

1000.2 = .1 0 0 0 2 | 0 4 ;   1.07 = .1 0 7 0 0 | 0 1

First task: align the decimal points. The number with the smallest exponent is (internally) rewritten so its exponent matches the largest one:

1.07 = 0.000107 × 10⁴

Second task: add the mantissas:

   0. 1 0 0 0 2
+  0. 0 0 0 1 0 7
=  0. 1 0 0 1 2 7


Third task: round the result. The result has 6 digits but we can only use 5, so we can either

➤ Chop the result: .1 0 0 1 2 ;

➤ Round the result: .1 0 0 1 3 ;

Fourth task: normalize the result if needed (not needed here).

Result with rounding: .1 0 0 1 3 | 0 4

- Redo the same computation with 7000.2 + 4000.3, and with 6999.2 + 4000.3.
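The exercise can be checked with Python's decimal module, which lets us emulate a 5-significant-digit decimal machine. Note we assume round-to-nearest with ties to even (the "round" variant above, not chopping):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 5                     # 5 significant digits
getcontext().rounding = ROUND_HALF_EVEN   # round to nearest, ties to even

print(Decimal("1000.2") + Decimal("1.07"))    # 1001.3 (exact: 1001.27)
print(Decimal("7000.2") + Decimal("4000.3"))  # 11000  (exact: 11000.5, tie -> even)
print(Decimal("6999.2") + Decimal("4000.3"))  # 11000  (exact: 10999.5, tie -> even)
```

Both exercises land exactly halfway between two representable values, so the tie-breaking rule decides the answer.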


Some More Examples

➤ Each operation fl(x ⊙ y) proceeds in 4 steps:

1. Line up exponents (for addition & subtraction).
2. Compute a temporary exact answer.
3. Normalize the temporary result.
4. Round to the nearest representable number (round-to-even in case of a tie).

               .40015 e+02       .40010 e+02         .41015 e-98
             + .60010 e+02     + .50001 e-03       − .41010 e-98
temporary     1.00025 e+02      .4001050001 e+02    .00005 e-98
normalize     .100025 e+03      .400105⊕ e+02       .00050 e-99
round         .10002  e+03      .40011  e+02        .00050 e-99

note          round to even:    round to nearest:   too small:
              exactly halfway   ⊕ = not all 0's,    result stays
              between values    closer to the       unnormalized,
                                upper value         exponent at minimum


The IEEE standard

32 bit (single precision):

  ±     8 bits     23 bits
 sign  exponent   mantissa

➤ In binary, the leading one in the mantissa does not need to be represented: one bit is gained. This is the "hidden bit".

➤ Largest exponent: 2⁷ − 1 = 127; smallest: −126. [The stored exponent carries a 'bias' of 127.]


64 bit (double precision):

  ±     11 bits    52 bits
 sign  exponent   mantissa

➤ Bias of 1023: if c is the content of the exponent field, the actual scale factor is 2^(c − 1023).

➤ e + bias = 2047 (all ones) is reserved for special use.

➤ Largest exponent: 1023; smallest: −1022.

➤ With the hidden bit, the mantissa effectively has 53 bits, of which 52 are represented.


- Take the number 1.0 and see what will happen if you add 1/2, 1/4, ..., 2⁻ⁱ. Do not forget the hidden bit!

Hidden bit
 ↓   ← 52 bits →
 1 | 1 0 0 0 · · · 0 0 0 | e e
 1 | 0 1 0 0 · · · 0 0 0 | e e
 1 | 0 0 1 0 · · · 0 0 0 | e e
 .......
 1 | 0 0 0 0 · · · 0 0 1 | e e
 1 | 0 0 0 0 · · · 0 0 0 | e e

➤ Conclusion:

fl(1 + 2⁻⁵²) ≠ 1   but:   fl(1 + 2⁻⁵³) == 1 !!
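This conclusion is easy to check directly in Python on IEEE double hardware:

```python
print(1.0 + 2.0**-52 != 1.0)  # True: 2**-52 is one ulp at 1.0
print(1.0 + 2.0**-53 == 1.0)  # True: exact halfway case, rounds to even (1.0)
```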


Special Values

➤ Exponent field = 00000000000 (smallest possible value): no hidden bit. If all bits == 0, the number is exactly zero.

➤ This allows for unnormalized numbers, leading to gradual underflow.

➤ Exponent field = 11111111111 (largest possible value): the number represented is "Inf", "-Inf", or "NaN".
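These special values can be poked at from Python (the variable names are ours; 5e-324 is the smallest positive subnormal double):

```python
import sys

# Gradual underflow: subnormal numbers fill the gap between 0 and
# the smallest normalized number.
smallest_normal = sys.float_info.min   # about 2.2250738585072014e-308
tiny = 5e-324                          # smallest positive subnormal
print(0.0 < tiny < smallest_normal)    # True
print(tiny / 2.0 == 0.0)               # True: finally underflows to zero

# All-ones exponent field: infinities and NaNs.
inf = float("inf")
nan = float("nan")
print(inf + 1.0 == inf)    # True: Inf absorbs finite additions
print(nan == nan)          # False: NaN compares unequal to everything
```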
