TRANSCRIPT
Differentiation and Richardson Extrapolation
Douglas Wilhelm Harder, M.Math. LEL
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, Ontario, Canada
ece.uwaterloo.ca
© 2012 by Douglas Wilhelm Harder. Some rights reserved.
Outline

This topic discusses numerical differentiation:
– The use of interpolation
– The centred divided-difference approximations of the derivative and second derivative
  • Error analysis using Taylor series
– The backward divided-difference approximation of the derivative
  • Error analysis
– Richardson extrapolation
Outcomes Based Learning Objectives

By the end of this laboratory, you will:
– Understand how to approximate first and second derivatives
– Understand how Taylor series are used to determine the errors of various approximations
– Know how to eliminate higher-order error terms using Richardson extrapolation
– Have programmed a Matlab routine with appropriate error checking and exception handling
Approximating the Derivative

Suppose we want to approximate the derivative:

    u^{(1)}(x) = \lim_{h \to 0} \frac{u(x + h) - u(x)}{h}
Approximating the Derivative

If the limit exists, this suggests that if we choose a very small h,

    u^{(1)}(x) \approx \frac{u(x + h) - u(x)}{h}

Unfortunately, this isn't as easy as it first appears:

>> format long
>> cos(1)
ans = 0.540302305868140
>> for i = 0:20
     h = 10^(-i);
     (sin(1 + h) - sin(1))/h
   end
Approximating the Derivative

At first, the approximations improve:

    h            (sin(1 + h) - sin(1))/h
    1            0.067826442017785
    0.1          0.497363752535389
    0.01         0.536085981011869
    0.001        0.539881480360327
    0.0001       0.540260231418621
    0.00001      0.540298098505865
    0.000001     0.540301885121330
    0.0000001    0.540302264040449
    0.00000001   0.540302302898255

>> cos(1)
ans = 0.540302305868140
Approximating the Derivative

Then it seems to get worse:

    h        (sin(1 + h) - sin(1))/h
    10^-8    0.540302302898255
    10^-9    0.540302358409406
    10^-10   0.540302247387103
    10^-11   0.540301137164079
    10^-12   0.540345546085064
    10^-13   0.539568389967826
    10^-14   0.544009282066327
    10^-15   0.555111512312578
    10^-16   0
    10^-17   0
    10^-18   0
    10^-19   0
    10^-20   0

>> cos(1)
ans = 0.540302305868140
Approximating the Derivative

There are two things that must be explained:
– Why do we, to start with, appear to gain one more digit of accuracy every time we divide h by 10?
– Why, after some point, does the accuracy decrease, ultimately rendering the approximation useless?
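Both regimes can be reproduced with a few lines of code. The following is a minimal sketch in Python (used here only for illustration; the Matlab session above runs the identical experiment):

```python
import math

def forward_difference(u, x, h):
    # 1st-order forward divided-difference approximation of u'(x)
    return (u(x + h) - u(x)) / h

exact = math.cos(1.0)  # the true value of d/dx sin(x) at x = 1

# The error first shrinks roughly tenfold per step, then grows again
# once subtractive cancellation dominates.
for i in range(0, 21):
    h = 10.0 ** (-i)
    approx = forward_difference(math.sin, 1.0, h)
    print(f"h = 1e-{i:02d}   approx = {approx: .15f}   error = {abs(approx - exact):.2e}")
```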
Increasing Accuracy

We will start with why the answer appears to improve:
– Recall Taylor's approximation:

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(\xi) h^2

where \xi \in [x, x + h], that is, \xi is close to x
– Solve this equation for the derivative
Increasing Accuracy

First we isolate the term u^{(1)}(x) h:

    u^{(1)}(x) h = u(x + h) - u(x) - \frac{1}{2} u^{(2)}(\xi) h^2
Increasing Accuracy

Then, divide each side by h:

    u^{(1)}(x) = \frac{u(x + h) - u(x)}{h} - \frac{1}{2} u^{(2)}(\xi) h

– Again, \xi \in [x, x + h], that is, \xi is close to x
Increasing Accuracy

Assuming that u^{(2)}(x) doesn't vary too wildly, the term \frac{1}{2} u^{(2)}(\xi) is approximately a constant:

    u^{(1)}(x) = \frac{u(x + h) - u(x)}{h} - \frac{1}{2} u^{(2)}(\xi) h
Increasing Accuracy

We can easily see this is true from our first example:

    u^{(1)}(x) \approx \frac{u(x + h) - u(x)}{h} - M h

where M = \frac{1}{2} u^{(2)}(x)
Increasing Accuracy

Thus, the absolute error of \frac{u(x + h) - u(x)}{h} as an approximation of u^{(1)}(x) is

    E_{abs} = \left| \frac{u(x + h) - u(x)}{h} - u^{(1)}(x) \right| \approx M h

Therefore:
– If we halve h, the absolute error should drop by approximately half
– If we divide h by 10, the absolute error should drop by approximately a factor of 10
Increasing Accuracy

    h       (sin(1 + h) - sin(1))/h   Absolute Error    (1/2) sin(1) h
    1       0.067826442017785         0.47248           0.42074
    0.1     0.497363752535389         0.042939          0.042074
    0.01    0.536085981011869         0.0042163         0.0042074
    10^-3   0.539881480360327         0.00042083        0.00042074
    10^-4   0.540260231418621         0.000042074       0.000042074
    10^-5   0.540298098505865         0.0000042074      0.0000042074
    10^-6   0.540301885121330         0.00000042075     0.00000042074
    10^-7   0.540302264040449         0.0000000418276   0.000000042074
    10^-8   0.540302302898255         0.0000000029699   0.0000000042074
    10^-9   0.540302358409406         0.000000052541    0.00000000042074

>> cos(1)
ans = 0.540302305868140
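The error model E_abs ≈ Mh with M = ½|u^{(2)}(x)| in the table above can be verified directly; a small illustrative Python sketch:

```python
import math

def forward_difference(u, x, h):
    return (u(x + h) - u(x)) / h

x = 1.0
exact = math.cos(x)           # u'(1) for u = sin
M = 0.5 * abs(-math.sin(x))   # M = (1/2)|u''(1)| = (1/2) sin(1), about 0.42074

for i in range(1, 7):
    h = 10.0 ** (-i)
    err = abs(forward_difference(math.sin, x, h) - exact)
    # The observed error should track the predicted M*h closely
    print(f"h = 1e-{i}   error = {err:.6e}   M*h = {M * h:.6e}")
```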
Increasing Accuracy

Let's try this with something less familiar:
– The Bessel function J_2(x) has the derivatives:

    J_2^{(1)}(x) = J_1(x) - \frac{2}{x} J_2(x)

    J_2^{(2)}(x) = J_0(x) - \frac{3}{x} J_1(x) + \frac{6}{x^2} J_2(x)

– These functions are implemented in Matlab as:
    J0(x)   besselj( 0, x )
    J1(x)   besselj( 1, x )
    J2(x)   besselj( 2, x )
– Bessel functions appear any time you are dealing with electromagnetic fields in cylindrical coordinates
Increasing Accuracy

    h       (J2(6.568 + h) - J2(6.568))/h   Absolute Error     (1/2)|J2^{(2)}(6.568)| h
    1       0.067826442017785               0.133992           0.144008
    0.1     -0.025284847088251              0.0143904          0.0144008
    0.01    -0.038235218035143              0.00144007         0.00144008
    10^-3   -0.039531281976313              0.000144008        0.000144008
    10^-4   -0.039660889397664              0.0000144008       0.0000144008
    10^-5   -0.039673850132926              0.00000144009      0.00000144008
    10^-6   -0.039675146057405              0.000000144166     0.000000144008
    10^-7   -0.039675276397588              0.0000000183257    0.0000000144008
    10^-8   -0.039675285279372              0.00000000494388   0.00000000144008
    10^-9   -0.039675318586063              0.0000000283628    0.000000000144008

>> x = 6.568;
>> besselj( 1, x ) - 2*besselj( 2, x )/x
ans = -0.039675290223248
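If Matlab's besselj is not at hand, the identity J_2^{(1)}(x) = J_1(x) - (2/x) J_2(x) can still be checked with a self-contained Python sketch that sums the standard power series for J_n (an independent illustration, not the slide's Matlab session):

```python
import math

def bessel_j(n, x, terms=40):
    # Power series: J_n(x) = sum_k (-1)^k / (k! (k + n)!) * (x/2)^(2k + n)
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (math.factorial(k) * math.factorial(k + n)) \
                 * (x / 2.0) ** (2 * k + n)
    return total

x = 6.568
exact = bessel_j(1, x) - 2.0 * bessel_j(2, x) / x  # J2'(x) via the identity

# A centred difference of J2 agrees with the identity to O(h^2)
h = 1e-5
approx = (bessel_j(2, x + h) - bessel_j(2, x - h)) / (2.0 * h)
print(exact, approx)
```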
Increasing Accuracy

We could use a rule of thumb: use h = 10^-8
– It appears to work…

Unfortunately:
– It is not always the best approximation
– It may not give us sufficient accuracy
– We still don't understand why our approximation breaks down…
Decreasing Precision

Suppose we want 10 digits of accuracy in our answer:
– If h = 0.01, we need 12 digits when calculating sin(1.01) and sin(1):

      0.846831844618      (sin(1.01))
    - 0.841470984808      (sin(1))
    = 0.005360859810

– If h = 0.00001, we need 15 digits when calculating sin(1.00001) and sin(1):

      0.841476387788881   (sin(1.00001))
    - 0.841470984807896   (sin(1))
    = 0.000005402980985
Decreasing Precision

Suppose we want 10 digits of accuracy in our answer:
– If h = 10^-12, we need 22 digits when calculating sin(1 + h) and sin(1):

      0.8414709848084368089584   (sin(1 + 10^-12))
    - 0.8414709848078965066525   (sin(1))
    = 0.0000000000005403023059

– Matlab, however, uses double-precision floating-point numbers:
  • These have a maximum accuracy of approximately 16 decimal digits:

>> format long
>> sin( 1 + 1e-12 )
ans = 0.841470984808437
>> sin( 1 )
ans = 0.841470984807897

      0.841470984808437
    - 0.841470984807897
    = 0.000000000000540
Decreasing Precision

Because of the limitations of doubles, our approximation is

    \frac{\sin(1 + 10^{-12}) - \sin(1)}{10^{-12}} \approx 0.540

Note: this is not entirely true because Matlab uses base 2 and not base 10, but the analogy is faithful…
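This loss of digits is easy to observe directly (Python doubles are the same IEEE 754 doubles Matlab uses); a short sketch:

```python
import math

h = 1e-12
diff = math.sin(1.0 + h) - math.sin(1.0)

# Only the leading few digits of the true difference survive the
# subtraction, so the quotient has only a few correct digits.
print(f"difference: {diff:.18e}")
print(f"quotient:   {diff / h:.15f}")
print(f"cos(1):     {math.cos(1.0):.15f}")
```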
Decreasing Precision

We can view this using the binary representation of doubles:

>> format hex
>> cos( 1 )
ans = 3fe14a280fb5068c

       3    f    e    1    4    a    2    8    0    f    b    5    0    6    8    c
    0011 1111 1110 0001 0100 1010 0010 1000 0000 1111 1011 0101 0000 0110 1000 1100

    = 1.0001010010100010100000001111101101010000011010001100 × 2^(01111111110_2 - 01111111111_2)
    = 1.0001010010100010100000001111101101010000011010001100 × 2^-1
    = 0.10001010010100010100000001111101101010000011010001100
Decreasing Precision

From this, we see:

    0.10001010010100010100000001111101101010000011010001100

>> format long
>> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
ans = 0.540302306413651
>> cos( 1 )
ans = 0.540302305868140

>> format hex
>> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
ans = 3fe14a2810000000
>> cos(1)
ans = 3fe14a280fb5068c
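The same inspection can be performed without format hex; a Python sketch using the standard struct module to expose the bytes of a double:

```python
import math
import struct

def double_to_hex(x):
    # Reinterpret the 8 bytes of an IEEE 754 double as a hex string
    return struct.pack('>d', x).hex()

def double_to_bits(x):
    # The same 64 bits as a binary string: 1 sign, 11 exponent, 52 fraction
    return format(struct.unpack('>Q', struct.pack('>d', x))[0], '064b')

print(double_to_hex(math.cos(1.0)))   # 3fe14a280fb5068c
bits = double_to_bits(math.cos(1.0))
print('sign    ', bits[0])
print('exponent', bits[1:12])
print('fraction', bits[12:])
```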
Decreasing Precision

    n    Approximation with h = 2^-n (sign, exponent, mantissa)
     0   0 0111111101 10001010111010001001011011110010001010011101011000000
     1   0 0111111110 10011111110001001100000110000100011011000001001110100
     2   0 0111111110 11011100001100000001101111000010000011010010011110000
     3   0 0111111110 11111001000001011101110110001001110100000111000000000
     4   0 0111111111 00000011011111110110111010001101110101110100101110000
     5   0 0111111111 00000110111011011110010010001110111011111011010000000
     6   0 0111111111 00001000101000001111110010011011101000110100000000000
     7   0 0111111111 00001001011110010111100111101010111001011000110000000
     8   0 0111111111 00001001111001010111010000100110110111000100000000000
     9   0 0111111111 00001010000110110110000000010010010001101100000000000
    10   0 0111111111 00001010001101100101000110111000011001000110000000000
    11   0 0111111111 00001010010000111100100101110111001011110000000000000
    12   0 0111111111 00001010010010101000010100010001011101111000000000000
    13   0 0111111111 00001010010011011110001011001101010100110000000000000
    14   0 0111111111 00001010010011111001000110100110111011100000000000000
    15   0 0111111111 00001010010100000110100100010010101010000000000000000
    16   0 0111111111 00001010010100001101010011001000010000000000000000000
    17   0 0111111111 00001010010100010000101010100010111100000000000000000
    18   0 0111111111 00001010010100010010010110010000010000000000000000000
    19   0 0111111111 00001010010100010011001100000111000000000000000000000
    20   0 0111111111 00001010010100010011100111000010000000000000000000000
    21   0 0111111111 00001010010100010011110100100000000000000000000000000
    22   0 0111111111 00001010010100010011111011001110000000000000000000000
    23   0 0111111111 00001010010100010011111110100100000000000000000000000
    24   0 0111111111 00001010010100010100000000010000000000000000000000000
    25   0 0111111111 00001010010100010100000001000000000000000000000000000
    26   0 0111111111 00001010010100010100000001100000000000000000000000000

         0 0111111111 00001010010100010100000001111101101010000011010001100
Decreasing Precision

    n    Approximation with h = 2^-n (sign, exponent, mantissa)
         0 0111111111 00001010010100010100000001111101101010000011010001100
    27   0 0111111111 00001010010100010100000010000000000000000000000000000
    28   0 0111111111 00001010010100010100000010000000000000000000000000000
    29   0 0111111111 00001010010100010100000000000000000000000000000000000
    30   0 0111111111 00001010010100010100000000000000000000000000000000000
    31   0 0111111111 00001010010100010100000000000000000000000000000000000
    32   0 0111111111 00001010010100010100000000000000000000000000000000000
    33   0 0111111111 00001010010100010100000000000000000000000000000000000
    34   0 0111111111 00001010010100010100000000000000000000000000000000000
    35   0 0111111111 00001010010100010100000000000000000000000000000000000
    36   0 0111111111 00001010010100010000000000000000000000000000000000000
    37   0 0111111111 00001010010100010000000000000000000000000000000000000
    38   0 0111111111 00001010010100000000000000000000000000000000000000000
    39   0 0111111111 00001010010100000000000000000000000000000000000000000
    40   0 0111111111 00001010010100000000000000000000000000000000000000000
    41   0 0111111111 00001010010100000000000000000000000000000000000000000
    42   0 0111111111 00001010010000000000000000000000000000000000000000000
    43   0 0111111111 00001010010000000000000000000000000000000000000000000
    44   0 0111111111 00001010000000000000000000000000000000000000000000000
    45   0 0111111111 00001010000000000000000000000000000000000000000000000
    46   0 0111111111 00001010000000000000000000000000000000000000000000000
    47   0 0111111111 00001000000000000000000000000000000000000000000000000
    48   0 0111111111 00001000000000000000000000000000000000000000000000000
    49   0 0111111111 00000000000000000000000000000000000000000000000000000
    50   0 0111111111 00000000000000000000000000000000000000000000000000000
    51   0 0111111111 00000000000000000000000000000000000000000000000000000
    52   0 0111111111 00000000000000000000000000000000000000000000000000000
    53   0 0000000000 00000000000000000000000000000000000000000000000000000
Decreasing Precision

This effect when subtracting two similar numbers is called subtractive cancellation

In industry, it is also referred to as catastrophic cancellation

Ignoring the effects of subtractive cancellation is one of the most significant sources of numerical error
Decreasing Precision

Consequence:
– Unlike calculus, we cannot make h arbitrarily small

Possible solutions:
– Find better formulas
– Use completely different approaches
Better Approximations

Idea: find the line that interpolates the two points

    (x, u(x)) and (x + h, u(x + h))

The slope of this interpolating line is our approximation of the derivative:

    \frac{u(x + h) - u(x)}{h}
Better Approximations

What happens if we find the interpolating quadratic going through the three points

    (x - h, u(x - h)),  (x, u(x)),  (x + h, u(x + h))?

The interpolating quadratic is clearly a local approximation

The slope of the interpolating quadratic is easy to find

The slope of the interpolating quadratic is also closer to the slope of the original function at x
Better Approximations

Without going through the process, finding the interpolating quadratic function gives us a similar formula:

    u^{(1)}(x) \approx \frac{u(x + h) - u(x - h)}{2h}
Better Approximations

Additionally, we can approximate the concavity (2nd derivative) at the point x by finding the concavity of the interpolating quadratic polynomial:

    u^{(2)}(x) \approx \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}
Better Approximations

For those interested, Maple code can be used to derive these formulas (the code shown on the original slide is not reproduced in this transcript)
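The Maple derivation is not reproduced here, but the claim can be checked numerically: differentiate the Lagrange form of the quadratic through (x - h, u(x - h)), (x, u(x)), (x + h, u(x + h)) at t = x. A hedged Python sketch (not the slide's Maple code):

```python
import math

def quadratic_slope_at_x(u, x, h):
    # Derivatives of the three Lagrange basis polynomials at t = x:
    #   L_{-1}'(x) = -1/(2h),  L_0'(x) = 0,  L_{+1}'(x) = +1/(2h)
    return (-u(x - h) / (2.0 * h)) + 0.0 * u(x) + (u(x + h) / (2.0 * h))

def centred_difference(u, x, h):
    return (u(x + h) - u(x - h)) / (2.0 * h)

# The slope of the interpolating quadratic at x reproduces the
# centred divided-difference formula.
a = quadratic_slope_at_x(math.sin, 1.0, 0.1)
b = centred_difference(math.sin, 1.0, 0.1)
print(a, b)
```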
Better Approximations

Question: how much better are these two approximations?

    u^{(1)}(x) \approx \frac{u(x + h) - u(x - h)}{2h}

    u^{(2)}(x) \approx \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}
Better Approximations

Using Taylor series, we have approximations for both u(x + h) and u(x - h):

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 + \frac{1}{6} u^{(3)}(\xi_+) h^3

    u(x - h) = u(x) - u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 - \frac{1}{6} u^{(3)}(\xi_-) h^3

Here, \xi_+ \in [x, x + h] and \xi_- \in [x - h, x]
Better Approximations

Subtracting the second approximation from the first, we get:

    u(x + h) - u(x - h) = 2 u^{(1)}(x) h + \frac{1}{6} u^{(3)}(\xi_+) h^3 + \frac{1}{6} u^{(3)}(\xi_-) h^3
                        = 2 u^{(1)}(x) h + \frac{1}{6} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^3
Better Approximations

Solving the equation

    u(x + h) - u(x - h) = 2 u^{(1)}(x) h + \frac{1}{6} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^3

for the derivative, we get:

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^2
Better Approximations

The critical term is the h^2:

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^2

This says:
– If we halve h, the error goes down by a factor of 4
– If we divide h by 10, the error goes down by a factor of 100
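The factor-of-100 claim is easy to confirm; an illustrative Python sketch:

```python
import math

def centred_difference(u, x, h):
    # 2nd-order centred divided-difference approximation of u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

exact = math.cos(1.0)
e1 = abs(centred_difference(math.sin, 1.0, 1e-2) - exact)
e2 = abs(centred_difference(math.sin, 1.0, 1e-3) - exact)

# Dividing h by 10 divides the error by roughly 100
print(e1, e2, e1 / e2)
```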
Better Approximations

Adding the two approximations

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 + \frac{1}{6} u^{(3)}(x) h^3 + \frac{1}{24} u^{(4)}(\xi_+) h^4

    u(x - h) = u(x) - u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 - \frac{1}{6} u^{(3)}(x) h^3 + \frac{1}{24} u^{(4)}(\xi_-) h^4

we get:

    u(x + h) + u(x - h) = 2 u(x) + u^{(2)}(x) h^2 + \frac{1}{24} \left( u^{(4)}(\xi_+) + u^{(4)}(\xi_-) \right) h^4
Better Approximations

Solving the equation

    u(x + h) + u(x - h) = 2 u(x) + u^{(2)}(x) h^2 + \frac{1}{24} \left( u^{(4)}(\xi_+) + u^{(4)}(\xi_-) \right) h^4

for the 2nd derivative, we get:

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{24} \left( u^{(4)}(\xi_+) + u^{(4)}(\xi_-) \right) h^2
Better Approximations

Again, the term in the error is h^2:

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{24} \left( u^{(4)}(\xi_+) + u^{(4)}(\xi_-) \right) h^2

Thus, both of these formulas are reasonable approximations for the first and second derivatives
Example

We will demonstrate this by finding the approximation of both the derivative and 2nd derivative of u(x) = x^3 e^{-0.5x} at x = 0.8

Using Maple, the correct values to 17 decimal digits are:

    u^{(1)}(0.8) = 1.1154125566033037
    u^{(2)}(0.8) = 2.0163226984752030
Example

    u^{(1)}(0.8) = 1.1154125566033037    u^{(2)}(0.8) = 2.0163226984752030

    h      (u(x+h) - u(x))/h    Error      (u(x+h) - u(x-h))/(2h)   Error       (u(x+h) - 2u(x) + u(x-h))/h^2   Error
    10^-1  1.216270589620254    1.0085e-1  1.115614538793770        2.0200e-4   2.013121016529673               3.2017e-3
    10^-2  1.125495976919111    1.0083e-2  1.115414523410804        1.9668e-6   2.016290701661316               3.1997e-5
    10^-3  1.116420737455270    1.0082e-3  1.115412576266073        1.9663e-8   2.016322378395330               3.2008e-7
    10^-4  1.115513372934029    1.0082e-4  1.115412556799700        1.9340e-10  2.016322686593242               1.1882e-8
    10^-5  1.115422638214847    1.0082e-5  1.115412556604301        9.9676e-13  2.016322109277269               5.8920e-7
    10^-6  1.115413564789503    1.0082e-6  1.115412556651485        4.8181e-11  2.016276035021747               4.6663e-5
    10^-7  1.115412656682580    1.0082e-7  1.115412555929840        6.7346e-10  2.015054789694660               1.2679e-3
    10^-8  1.115412562313622    5.7103e-9  1.115412559538065        2.9348e-9   0.555111512312578               1.4612
    10^-9  1.115412484598011    7.2005e-8  1.115412512353586        4.4250e-8   -55.511151231257820             57.5275
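Any row of the table can be spot-checked; a Python sketch of the h = 10^-3 row for u(x) = x^3 e^{-0.5x} (illustrative only):

```python
import math

def u(x):
    return x ** 3 * math.exp(-0.5 * x)

x = 0.8
du_exact = 1.1154125566033037    # u'(0.8), from the slides
d2u_exact = 2.0163226984752030   # u''(0.8), from the slides

h = 1e-3
forward = (u(x + h) - u(x)) / h
centred = (u(x + h) - u(x - h)) / (2.0 * h)
second = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2

print(abs(forward - du_exact))   # ~1.0e-3, first order
print(abs(centred - du_exact))   # ~2.0e-8, second order
print(abs(second - d2u_exact))   # ~3.2e-7, second order
```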
Better Approximations

To give names to these formulas:

First Derivative:

    \frac{u(x + h) - u(x)}{h}          1st-order forward divided-difference formula

    \frac{u(x + h) - u(x - h)}{2h}     2nd-order centred divided-difference formula

Second Derivative:

    \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}    2nd-order centred divided-difference formula
Better Approximations

Suppose, however, you don't have access to both x + h and x - h
– This is often the case in a time-dependent system:

    u^{(1)}(t) \approx \frac{u(t) - u(t - \Delta t)}{\Delta t}
Better Approximations

Using the same idea: find the interpolating polynomial, but now find the slope at the right-hand point:

    u^{(1)}(t) \approx \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t}
Better Approximations

Using Taylor series, we have approximations for both u(t - \Delta t) and u(t - 2\Delta t):

    u(t - \Delta t) = u(t) - u^{(1)}(t) \Delta t + \frac{1}{2} u^{(2)}(t) \Delta t^2 - \frac{1}{6} u^{(3)}(\tau_1) \Delta t^3

    u(t - 2\Delta t) = u(t) - u^{(1)}(t) (2\Delta t) + \frac{1}{2} u^{(2)}(t) (2\Delta t)^2 - \frac{1}{6} u^{(3)}(\tau_2) (2\Delta t)^3

Here, \tau_1 \in [t - \Delta t, t] and \tau_2 \in [t - 2\Delta t, t]
Better Approximations

Expand the terms (2\Delta t)^2 and (2\Delta t)^3:

    u(t - \Delta t) = u(t) - u^{(1)}(t) \Delta t + \frac{1}{2} u^{(2)}(t) \Delta t^2 - \frac{1}{6} u^{(3)}(\tau_1) \Delta t^3

    u(t - 2\Delta t) = u(t) - 2 u^{(1)}(t) \Delta t + 2 u^{(2)}(t) \Delta t^2 - \frac{4}{3} u^{(3)}(\tau_2) \Delta t^3

Now, to cancel the order \Delta t^2 terms, we must subtract the second equation from four times the first equation
Better Approximations

This leaves us a formula containing the derivative:

    4 u(t - \Delta t) = 4 u(t) - 4 u^{(1)}(t) \Delta t + 2 u^{(2)}(t) \Delta t^2 - \frac{2}{3} u^{(3)}(\tau_1) \Delta t^3

    u(t - 2\Delta t) = u(t) - 2 u^{(1)}(t) \Delta t + 2 u^{(2)}(t) \Delta t^2 - \frac{4}{3} u^{(3)}(\tau_2) \Delta t^3

    4 u(t - \Delta t) - u(t - 2\Delta t) = 3 u(t) - 2 u^{(1)}(t) \Delta t + \left( \frac{4}{3} u^{(3)}(\tau_2) - \frac{2}{3} u^{(3)}(\tau_1) \right) \Delta t^3
Better Approximations

Solving

    4 u(t - \Delta t) - u(t - 2\Delta t) = 3 u(t) - 2 u^{(1)}(t) \Delta t + \left( \frac{4}{3} u^{(3)}(\tau_2) - \frac{2}{3} u^{(3)}(\tau_1) \right) \Delta t^3

for the derivative yields

    u^{(1)}(t) = \frac{3 u(t) - 4 u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \left( \frac{2}{3} u^{(3)}(\tau_2) - \frac{1}{3} u^{(3)}(\tau_1) \right) \Delta t^2

This is the backward divided-difference approximation of the derivative at the point t
Better Approximations

Comparing the error terms, we see that both formulas are second order
– The centred divided-difference formula, however, has a smaller coefficient

Question: is it a factor of ¼ or a factor of ½?

    u^{(1)}(t) = \frac{3 u(t) - 4 u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \left( \frac{2}{3} u^{(3)}(\tau_2) - \frac{1}{3} u^{(3)}(\tau_1) \right) \Delta t^2

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^2
Better Approximations

You will write four functions:

    function [dy] = D1st( u, x, h )
    function [dy] = Dc( u, x, h )
    function [dy] = D2c( u, x, h )
    function [dy] = Db( u, x, h )

that implement, respectively, the formulas

    \frac{u(x + h) - u(x)}{h}

    \frac{u(x + h) - u(x - h)}{2h}

    \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}

    \frac{3u(x) - 4u(x - h) + u(x - 2h)}{2h}

Yes, they're all one line…
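For reference, the same four one-liners written as a Python sketch (the lab deliverables are the Matlab versions; the names here simply mirror the ones above):

```python
import math

def D1st(u, x, h):
    # 1st-order forward divided difference
    return (u(x + h) - u(x)) / h

def Dc(u, x, h):
    # 2nd-order centred divided difference, first derivative
    return (u(x + h) - u(x - h)) / (2.0 * h)

def D2c(u, x, h):
    # 2nd-order centred divided difference, second derivative
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2

def Db(u, x, h):
    # 2nd-order backward divided difference
    return (3.0 * u(x) - 4.0 * u(x - h) + u(x - 2.0 * h)) / (2.0 * h)

print(D1st(math.sin, 1, 0.1))   # 0.497363752535389
print(Dc(math.sin, 1, 0.1))     # 0.539402252169760
print(D2c(math.sin, 1, 0.1))    # -0.840769992687418
print(Db(math.sin, 1, 0.1))     # 0.542307034066392
```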
Better Approximations

For example,

>> format long
>> D1st( @sin, 1, 0.1 )
ans = 0.497363752535389
>> Dc( @sin, 1, 0.1 )
ans = 0.539402252169760
>> D2c( @sin, 1, 0.1 )
ans = -0.840769992687418
>> Db( @sin, 1, 0.1 )
ans = 0.542307034066392

>> D1st( @sin, 1, 0.01 )
ans = 0.536085981011869
>> Dc( @sin, 1, 0.01 )
ans = 0.540293300874733
>> D2c( @sin, 1, 0.01 )
ans = -0.841463972572898
>> Db( @sin, 1, 0.01 )
ans = 0.540320525678883
Richardson Extrapolation

There is something interesting about the error terms of the centred divided-difference formulas for the 1st and 2nd derivatives:
– If you calculate it out, we only have every second term…

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 - \frac{1}{5040} u^{(7)}(x) h^6 - \frac{1}{362880} u^{(9)}(x) h^8 - \cdots

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{12} u^{(4)}(x) h^2 - \frac{1}{360} u^{(6)}(x) h^4 - \frac{1}{20160} u^{(8)}(x) h^6 - \frac{1}{1814400} u^{(10)}(x) h^8 - \cdots
Richardson Extrapolation

Let's see if we can exploit this…
– First, define

    D_c(u, x, h) \overset{def}{=} \frac{u(x + h) - u(x - h)}{2h}

– Therefore, we have

    u^{(1)}(x) = D_c(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 - \frac{1}{5040} u^{(7)}(x) h^6 - \frac{1}{362880} u^{(9)}(x) h^8 - \cdots
Richardson Extrapolation

Let's see if we can exploit this…

    u^{(1)}(x) = D_c(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 + O(h^6)

    u^{(1)}(x) = D_c\left(u, x, \frac{h}{2}\right) - \frac{1}{6} u^{(3)}(x) \left(\frac{h}{2}\right)^2 - \frac{1}{120} u^{(5)}(x) \left(\frac{h}{2}\right)^4 + O(h^6)

A better approximation: ¼ the error
Richardson Extrapolation

Expanding the products:

    u^{(1)}(x) = D_c(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 + O(h^6)

    u^{(1)}(x) = D_c\left(u, x, \frac{h}{2}\right) - \frac{1}{24} u^{(3)}(x) h^2 - \frac{1}{1920} u^{(5)}(x) h^4 + O(h^6)
Richardson Extrapolation

Now, subtract the first equation from four times the second:

    u^{(1)}(x) = D_c(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 + O(h^6)

    u^{(1)}(x) = D_c\left(u, x, \frac{h}{2}\right) - \frac{1}{24} u^{(3)}(x) h^2 - \frac{1}{1920} u^{(5)}(x) h^4 + O(h^6)

    4 u^{(1)}(x) - u^{(1)}(x) = 4 D_c\left(u, x, \frac{h}{2}\right) - D_c(u, x, h) + \frac{1}{160} u^{(5)}(x) h^4 + O(h^6)
Richardson Extrapolation

Solving for the derivative:

    u^{(1)}(x) = \frac{4 D_c\left(u, x, \frac{h}{2}\right) - D_c(u, x, h)}{3} + \frac{1}{480} u^{(5)}(x) h^4 + O(h^6)

By taking a linear combination of two previous approximations, we have an approximation which has an O(h^4) error
Richardson Extrapolation

Let's try this with the sine function at x = 1 with h = 0.01:

    u^{(1)}(1) \approx \frac{4 D_c(\sin, 1, 0.005) - D_c(\sin, 1, 0.01)}{3}

Doing the math,

    D_c(\sin, 1, 0.01) = \frac{\sin(1.01) - \sin(0.99)}{0.02} = 0.540293300874733666

    D_c(\sin, 1, 0.005) = \frac{\sin(1.005) - \sin(0.995)}{0.01} = 0.540300054611346006

    \cos(1.0) = 0.54030230586813971740

we see neither approximation is amazing: only five digits in the second case…
Richardson Extrapolation

If we calculate the linear combination, however, we get:

    \frac{4 D_c(\sin, 1, 0.005) - D_c(\sin, 1, 0.01)}{3} = 0.54030230585688345267

    \cos(1.0) = 0.54030230586813971740

All we did was take a linear combination of two not-so-great approximations, and we get a very good approximation…

Let's reduce h by half:
– The leading error term is O(h^4), so reducing h by half should reduce the error by a factor of 16
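The linear combination is easy to reproduce; a Python sketch (Dc is re-stated here so the fragment is self-contained):

```python
import math

def Dc(u, x, h):
    # 2nd-order centred divided-difference approximation of u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

exact = math.cos(1.0)
better = (4.0 * Dc(math.sin, 1.0, 0.005) - Dc(math.sin, 1.0, 0.01)) / 3.0

print(abs(Dc(math.sin, 1.0, 0.005) - exact))  # ~2.3e-6
print(abs(better - exact))                    # ~1.1e-11
```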
Richardson Extrapolation

Again, we gain additional digits of accuracy…

    \frac{4 D_c(\sin, 1, 0.0025) - D_c(\sin, 1, 0.005)}{3} = 0.54030230586743619800

    \cos(1.0) = 0.54030230586813971740

How small must h be for the centred formula alone to get this accurate an answer?
– The error is given by the formula

    \frac{1}{12} \left( u^{(3)}(\xi_+) + u^{(3)}(\xi_-) \right) h^2

and thus we must solve

    \frac{1}{6} \sin(1) h^2 = 7.035 \times 10^{-13}

to get h = 0.00000224:

    \frac{\sin(1.00000224) - \sin(0.99999776)}{2 \times 0.00000224} = 0.54030230586769
Richardson Extrapolation

As you may guess, we could repeat this again:
– Suppose we are approximating some value f with a formula F
– Suppose the error is O(h^n); then we can write

    f = F(h) + K h^n + o(h^n)

    f = F\left(\frac{h}{2}\right) + K \frac{h^n}{2^n} + o(h^n)

and now we can subtract the first formula from 2^n times the second:

    2^n f - f = 2^n F\left(\frac{h}{2}\right) - F(h) + o(h^n)
Richardson Extrapolation

Solving for f(x), we get

    f(x) = \frac{2^n F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^n - 1} + o(h^n)

Note that the approximation is a weighted average of two other approximations
Richardson Extrapolation

Question:
– Is this formula subject to subtractive cancellation?

    f(x) = \frac{2^n F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^n - 1} + o(h^n)
Richardson Extrapolation

Therefore, if we know the powers of h that appear in the error of the approximation, we may apply the appropriate Richardson extrapolations…
– Given an initial value of h, we can define:

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)
    R_{3,1} = D(u, x, h/2^2)
    R_{4,1} = D(u, x, h/2^3)
    R_{5,1} = D(u, x, h/2^4)
Richardson Extrapolation

If the highest-order error is O(h^2), then each subsequent approximation will have an absolute error approximately ¼ that of the previous
– This applies to both centred divided-difference formulas, for the 1st and 2nd derivatives

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)
    R_{3,1} = D(u, x, h/2^2)
    R_{4,1} = D(u, x, h/2^3)
    R_{5,1} = D(u, x, h/2^4)
Richardson Extrapolation

Therefore, we could now calculate further approximations according to our Richardson extrapolation formula:

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)      R_{2,2} = \frac{4 R_{2,1} - R_{1,1}}{3}
    R_{3,1} = D(u, x, h/2^2)    R_{3,2} = \frac{4 R_{3,1} - R_{2,1}}{3}
    R_{4,1} = D(u, x, h/2^3)    R_{4,2} = \frac{4 R_{4,1} - R_{3,1}}{3}
    R_{5,1} = D(u, x, h/2^4)    R_{5,2} = \frac{4 R_{5,1} - R_{4,1}}{3}
Richardson Extrapolation

These values are now converging according to O(h^4)
– Whatever the error is for R_{2,2}, the error of R_{3,2} is approximately 1/16th of that, and the error for R_{4,2} is reduced by a further factor of 16

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)      R_{2,2} = \frac{4 R_{2,1} - R_{1,1}}{3}
    R_{3,1} = D(u, x, h/2^2)    R_{3,2} = \frac{4 R_{3,1} - R_{2,1}}{3}
    R_{4,1} = D(u, x, h/2^3)    R_{4,2} = \frac{4 R_{4,1} - R_{3,1}}{3}
    R_{5,1} = D(u, x, h/2^4)    R_{5,2} = \frac{4 R_{5,1} - R_{4,1}}{3}
Richardson Extrapolation

Replacing n with 4 in our formula, we get:

    f(x) = \frac{2^4 F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^4 - 1} + o(h^4)

and thus we have:

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)      R_{2,2} = \frac{4 R_{2,1} - R_{1,1}}{3}
    R_{3,1} = D(u, x, h/2^2)    R_{3,2} = \frac{4 R_{3,1} - R_{2,1}}{3}    R_{3,3} = \frac{16 R_{3,2} - R_{2,2}}{15}
    R_{4,1} = D(u, x, h/2^3)    R_{4,2} = \frac{4 R_{4,1} - R_{3,1}}{3}    R_{4,3} = \frac{16 R_{4,2} - R_{3,2}}{15}
    R_{5,1} = D(u, x, h/2^4)    R_{5,2} = \frac{4 R_{5,1} - R_{4,1}}{3}    R_{5,3} = \frac{16 R_{5,2} - R_{4,2}}{15}
Richardson Extrapolation

Again, the errors are now dropping according to O(h^6), and each approximation has approximately 1/64th the error of the previous
– Why not give it another go?

    R_{1,1} = D(u, x, h)
    R_{2,1} = D(u, x, h/2)      R_{2,2} = \frac{4 R_{2,1} - R_{1,1}}{3}
    R_{3,1} = D(u, x, h/2^2)    R_{3,2} = \frac{4 R_{3,1} - R_{2,1}}{3}    R_{3,3} = \frac{16 R_{3,2} - R_{2,2}}{15}
    R_{4,1} = D(u, x, h/2^3)    R_{4,2} = \frac{4 R_{4,1} - R_{3,1}}{3}    R_{4,3} = \frac{16 R_{4,2} - R_{3,2}}{15}    R_{4,4} = \frac{64 R_{4,3} - R_{3,3}}{63}
    R_{5,1} = D(u, x, h/2^4)    R_{5,2} = \frac{4 R_{5,1} - R_{4,1}}{3}    R_{5,3} = \frac{16 R_{5,2} - R_{4,2}}{15}    R_{5,4} = \frac{64 R_{5,3} - R_{4,3}}{63}
Richardson Extrapolation

We could, again, repeat this process:

    R_{5,5} = \frac{256 R_{5,4} - R_{4,4}}{255}

Thus, we would have a matrix of entries

    R_{1,1}
    R_{2,1}  R_{2,2}
    R_{3,1}  R_{3,2}  R_{3,3}
    R_{4,1}  R_{4,2}  R_{4,3}  R_{4,4}
    R_{5,1}  R_{5,2}  R_{5,3}  R_{5,4}  R_{5,5}

of which R_{5,5} is the most accurate
Richardson Extrapolation

You will therefore be required to write a Matlab function

    function [du] = richardson22( D, u, x, h, N_max, eps_abs )

that will implement Richardson extrapolation:
1. Create an (N_max + 1) × (N_max + 1) matrix of zeros
2. Calculate R_{1,1} = D(u, x, h)
3. Next, create a loop that iterates a variable i from 1 to N_max that
   a. Calculates the value R_{i+1,1} = D(u, x, h/2^i), and
   b. Loops to calculate R_{i+1,j+1}, with j running from 1 to i, using

          R_{i+1,j+1} = \frac{4^j R_{i+1,j} - R_{i,j}}{4^j - 1}

   c. If |R_{i+1,i+1} - R_{i,i}| < eps_abs, returns the value R_{i+1,i+1}
4. If the loop finishes and nothing was returned, throw an exception indicating that Richardson extrapolation did not converge
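The four steps above translate almost line for line; a hedged Python sketch of the same algorithm (the required deliverable is the Matlab version):

```python
import math

def richardson22(D, u, x, h, n_max, eps_abs):
    # Richardson extrapolation for formulas whose error has only even
    # powers of h (e.g. the centred divided differences).
    # R is zero-indexed here, so R[i][j] plays the role of R_{i+1,j+1}.
    R = [[0.0] * (n_max + 1) for _ in range(n_max + 1)]
    R[0][0] = D(u, x, h)
    for i in range(1, n_max + 1):
        R[i][0] = D(u, x, h / 2 ** i)
        for j in range(1, i + 1):
            R[i][j] = (4 ** j * R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
        if abs(R[i][i] - R[i - 1][i - 1]) < eps_abs:
            return R[i][i]
    raise ArithmeticError("Richardson extrapolation did not converge")

def Dc(u, x, h):
    # 2nd-order centred divided difference, first derivative
    return (u(x + h) - u(x - h)) / (2.0 * h)

print(richardson22(Dc, math.cos, 1.0, 0.1, 5, 1e-12))  # close to -sin(1)
```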
Richardson Extrapolation

The accuracy is actually quite impressive:

>> richardson22( @Dc, @cos, 1, 0.1, 5, 1e-12 )
ans = -0.841470984807898
>> -sin( 1 )
ans = -0.841470984807897

>> richardson22( @Dc, @cos, 2, 0.1, 5, 1e-12 )
ans = -0.909297426825698
>> -sin( 2 )
ans = -0.909297426825682

>> richardson22( @Dc, @sin, 1, 0.1, 5, 1e-12 )
ans = 0.540302305868148
>> cos( 1 )
ans = 0.540302305868140

>> richardson22( @Dc, @sin, 2, 0.1, 5, 1e-12 )
ans = -0.416146836547144
>> cos( 2 )
ans = -0.416146836547142
Richardson Extrapolation

In reality, expecting an error as small as 10^-12 is sometimes asking too much:

>> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-12 )
??? Error using ==> richardson22 at 20
Richard extrapolation did not converge
>> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-10 )
ans = -0.540302305869316
>> -cos( 1 )
ans = -0.540302305868140

>> richardson22( @D2c, @cos, 2, 0.1, 5, 1e-10 )
ans = 0.416146836545719
>> -cos( 2 )
ans = 0.416146836547142

>> richardson22( @D2c, @sin, 1, 0.1, 5, 1e-12 )
ans = -0.841470984807975
>> -sin( 1 )
ans = -0.841470984807897

>> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-12 )
??? Error using ==> richardson22 at 35
Richard extrapolation did not converge
>> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-10 )
ans = -0.909297426827381
>> -sin( 2 )
ans = -0.909297426825682
Richardson Extrapolation

The Taylor series for the backward divided-difference formula

    u^{(1)}(t) \approx \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t}

does not drop off so quickly:

    u^{(1)}(t) = \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \frac{1}{3} u^{(3)}(t) \Delta t^2 - \frac{1}{4} u^{(4)}(t) \Delta t^3 + \frac{7}{60} u^{(5)}(t) \Delta t^4 - \frac{1}{24} u^{(6)}(t) \Delta t^5 + \frac{31}{2520} u^{(7)}(t) \Delta t^6 - \cdots

Once you finish richardson22, it will be trivial to write richardson21, which is identical except that it uses the formula:

    R_{i+1,j+1} = \frac{2^j R_{i+1,j} - R_{i,j}}{2^j - 1}
Richardson Extrapolation

Question:
– What happens if an error is larger than that expected by Richardson extrapolation? Will this significantly affect the answer?
– Fortunately, each step is just a linear combination with significant weight placed on the more accurate answer
  • It won't be worse than just calling, for example, Dc( u, x, h/2^N_max )
Summary

In this topic, we've looked at approximating the derivative:
– We saw the effect of subtractive cancellation
– We found the centred divided-difference formulas:
  • Found an interpolating function
  • Differentiated that interpolating function
  • Evaluated it at the point where we wish to approximate the derivative
– We also found one backward divided-difference formula
– We then applied Richardson extrapolation