Numerical Methods in Computation: Study Guide
Post on 14-Jul-2016
Taylor Polynomial: $P_N(x) = \sum_{k=0}^{N} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \dots$
Taylor Remainder (truncation error): $R_N(x) = \frac{f^{(N+1)}(\xi)}{(N+1)!}(x - x_0)^{N+1}$, $\xi \in [x_0, x]$
Error Growth, two types. Linear: $\varepsilon_n \sim k\,n\,\varepsilon_0$; Exponential: $\varepsilon_n \sim k^n \varepsilon_0$. (Classify linear vs. exponential for recursive equations.)
Intermediate Value Theorem: If $f \in C[a,b]$ and $k$ is between $f(a)$ and $f(b)$, then there exists a number $c \in (a,b)$ for which $f(c) = k$.
Mean Value Theorem: If $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$, then there exists a number $c \in (a,b)$ such that $f'(c) = \frac{f(b) - f(a)}{b - a}$ (average secant slope).
Bisection Method: Bracket the root with $p_0, p_1$ such that $f(p_0)f(p_1) < 0$, then cut the interval in half and choose the half containing the sign change. Continue until within the accuracy limit.
Fixed-Point Iteration: $p$ is a fixed point of $g(x)$ if $g(p) = p$. Provable that $p$ exists on $[a,b]$ if:
(i) $g(x)$ is continuous on $[a,b]$ (with (ii), sufficient but not necessary; not uniqueness)
(ii) $g(x)$ maps $[a,b]$ into $[a,b]$ (with (i), sufficient but not necessary; not uniqueness)
(iii) $|g'(x)| < 1$ for all $x \in [a,b]$ (guarantees uniqueness)
FPI Convergence: by the MVT, $g'(c) = \frac{g(p_{n-1}) - g(p)}{p_{n-1} - p} = \frac{\varepsilon_n}{\varepsilon_{n-1}}$, so $\varepsilon_n = |g'(c)|\,\varepsilon_{n-1}$ with $c$ between $p_{n-1}$ and $p$.
(i) If $|g'(x)| < 1$ for all $x \in [a,b]$, converges for any $p_0 \in [a,b]$. (ii) If $|g'(p)| < 1$, converges for some $p_0$ sufficiently close to $p$.
Identify convergence of an FP method: Taylor expand $g(p_n)$ about $p$ in $\varepsilon_{n+1} = |g(p_n) - g(p)|$:
$\varepsilon_{n+1} = g'(p)\varepsilon_n + \frac{g''(p)}{2!}\varepsilon_n^2 + \frac{g'''(p)}{3!}\varepsilon_n^3 + \dots$, and keep the term with the lowest-order derivative of $g$ that is nonzero at $p$,
discarding the higher-order terms (small as $n \to \infty$).
Convergence Rates: $\lim_{n\to\infty} \frac{|\varepsilon_{n+1}|}{|\varepsilon_n|^{\alpha}} = \lambda$, where {α: order of convergence, λ: asymptotic error constant}.
Newton’s Method: Derived by letting $g(x) = x - \phi(x)f(x)$ and setting $g'(p) = 0$:
$p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)}$; quadratic convergence is guaranteed for some $p_0 \in [a,b]$ provided $f'(p) \ne 0$. If
$f'(p) = 0$, then convergence is linear because $g'(p) = \frac{m-1}{m} \ne 0$.
Modified Newton’s Method: Let $u(x) = \frac{f(x)}{f'(x)}$, where $f(x)$ has a root of multiplicity $m$ at $x = p$ and
$u'(p) \ne 0$. Apply Newton’s Method to $u(x)$, thus $p_{n+1} = p_n - \frac{u(p_n)}{u'(p_n)}$.
Secant Method: Newton’s Method with an approximate derivative: $p_{n+1} = p_n - \frac{f(p_n)(p_n - p_{n-1})}{f(p_n) - f(p_{n-1})}$
Requires two starting values; does not need to bracket the root; super-linear convergence,
$\alpha = \frac{1 + \sqrt{5}}{2} \approx 1.618$.
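The iterations above (fixed-point, Newton, secant) can be sketched in Python. This is an illustrative sketch, not code from the notes; the function names, test function $f(x) = x^2 - 2$, and tolerances are this example's own.

```python
# Illustrative sketches of fixed-point iteration, Newton's method, and the
# secant method, all applied to f(x) = x^2 - 2 (root sqrt(2) ~ 1.41421356).
import math

def fixed_point(g, p0, tol=1e-12, max_iter=200):
    """Iterate p_{n+1} = g(p_n); converges linearly when |g'(p)| < 1."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """p_{n+1} = p_n - f(p_n)/f'(p_n); quadratic when f'(p) != 0."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

def secant(f, p0, p1, tol=1e-12, max_iter=50):
    """Newton with the derivative replaced by a difference quotient;
    two starting values, superlinear order (1 + sqrt(5))/2 ~ 1.618."""
    for _ in range(max_iter):
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    return p1

f = lambda x: x * x - 2.0
# g(x) = x - f(x)/4 is a contraction near sqrt(2): g'(x) = 1 - x/2.
print(fixed_point(lambda x: x - f(x) / 4.0, 1.0))  # ~1.41421356
print(newton(f, lambda x: 2.0 * x, 1.0))           # ~1.41421356
print(secant(f, 1.0, 2.0))                         # ~1.41421356
```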
Aitken’s $\Delta^2$ Method: $\hat{p}_n = p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n} = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}$, where $\Delta p_n = p_{n+1} - p_n$. Compute
$\hat{p}_0$ from $p_0, p_1, p_2$; $\hat{p}_1$ from $p_1, p_2, p_3$; etc. Faster convergence, $\lambda = |g'(p)|^2$, but still linear ($\alpha = 1$). Steffensen’s Method: Same formula as Aitken’s, but get $\hat{p}_0$ from $p_0, p_1, p_2$, then $p_3 = g(\hat{p}_0)$. Next get $\hat{p}_1$ from $p_3, p_4, p_5$, then $p_6 = g(\hat{p}_1)$. Converges quadratically ($\alpha = 2$). Newton’s Method for a System of Non-Linear Equations: $\mathbf{x}_{n+1} = \mathbf{g}(\mathbf{x}_n) = \mathbf{x}_n - J^{-1}(\mathbf{x}_n)\,\mathbf{f}(\mathbf{x}_n)$, where
$J = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}$
Polynomial Interpolation: $P_N(x) = \sum_{j=0}^{N} f(x_j)\,b_j(x)$, where $b_j(x_i) = \delta_{i,j} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$ (Kronecker delta)
Lagrange Polynomial: $P_N(x) = \sum_{j=0}^{N} f(x_j)\,L_{N,j}(x)$, where $L_{N,j}(x) = \prod_{k=0,\,k\ne j}^{N} \frac{x - x_k}{x_j - x_k}$ satisfies the Kronecker delta property.
Easy to use, but inefficient: a weighted sum of $N+1$ polynomials of order $N$.
Error term: $f(x) - P_N(x) = \frac{f^{(N+1)}(\xi)}{(N+1)!} \prod_{i=0}^{N} (x - x_i)$
Chebyshev Optimal Points: A minimal maximum error exists on the interval $[-1,1]$ if we use the
roots of the $(N+1)$th-order Chebyshev polynomial for the $\{x_i\}$: $x_i = \cos\frac{(2i+1)\pi}{2(N+1)}$, which can be mapped to
the interval $[a,b]$ via $\tilde{x}_i = \frac{a+b}{2} + \frac{b-a}{2}\,x_i$.
Neville’s Method: Iteratively find the interpolating polynomial by evaluating lower-order polynomials:
$p_{i,i}(x) = f(x_i)$, $\quad p_{i,j}(x) = \frac{p_{i+1,j}(x)(x - x_i) - p_{i,j-1}(x)(x - x_j)}{x_j - x_i}$; the final solution is $p_{0,N}(x)$. Ex:
$p_{0,0} = f(x_0)$, $\quad p_{1,1} = f(x_1)$, $\quad p_{2,2} = f(x_2)$
$p_{0,1} = \frac{p_{1,1}(x - x_0) - p_{0,0}(x - x_1)}{x_1 - x_0}$, $\quad p_{1,2} = \frac{p_{2,2}(x - x_1) - p_{1,1}(x - x_2)}{x_2 - x_1}$
$p_{0,2} = \frac{p_{1,2}(x - x_0) - p_{0,1}(x - x_2)}{x_2 - x_0}$
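Lagrange evaluation and Neville's recursion can be sketched as follows. Illustrative code, not from the notes; the sample points come from a known quadratic so the result can be checked by hand.

```python
# Evaluate the unique degree-N interpolating polynomial through (x_j, f_j)
# at a query point x, two ways: the Lagrange form and Neville's recursion.

def lagrange_eval(xs, fs, x):
    """P_N(x) = sum_j f_j * prod_{k != j} (x - x_k)/(x_j - x_k)."""
    total = 0.0
    for j, (xj, fj) in enumerate(zip(xs, fs)):
        L = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                L *= (x - xk) / (xj - xk)
        total += fj * L
    return total

def neville_eval(xs, fs, x):
    """Neville: start from p_{i,i} = f_i, combine neighbors level by level
    via p_{i,j} = [p_{i+1,j}(x - x_i) - p_{i,j-1}(x - x_j)]/(x_j - x_i)."""
    p = list(fs)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            j = i + level
            p[i] = (p[i + 1] * (x - xs[i]) - p[i] * (x - xs[j])) / (xs[j] - xs[i])
    return p[0]  # p_{0,N}(x)

xs, fs = [0.0, 1.0, 2.0], [1.0, 3.0, 11.0]  # samples of 3x^2 - x + 1
print(lagrange_eval(xs, fs, 1.5))  # 6.25
print(neville_eval(xs, fs, 1.5))   # 6.25
```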
Hermite Polynomial: Matches $f(x)$ and $f'(x)$ at $\{x_i\}$, $i = 0 \dots N$. The solution is of order $2N+1$:
$P_{2N+1}(x) = \sum_{j=0}^{N} f(x_j)\,H_{2N+1,j}(x) + \sum_{j=0}^{N} f'(x_j)\,\hat{H}_{2N+1,j}(x)$, where:
$H_{2N+1,j}(x) = [1 - 2(x - x_j)L'_{N,j}(x_j)]\,L_{N,j}^2(x)$, satisfying $H_{2N+1,j}(x_i) = \delta_{i,j}$ and $H'_{2N+1,j}(x_i) = 0$
$\hat{H}_{2N+1,j}(x) = (x - x_j)\,L_{N,j}^2(x)$, satisfying $\hat{H}_{2N+1,j}(x_i) = 0$ and $\hat{H}'_{2N+1,j}(x_i) = \delta_{i,j}$
Error term: $\frac{f^{(2N+2)}(\xi)}{(2N+2)!} \prod_{j=0}^{N} (x - x_j)^2$
Cubic Spline: Make the 1st and 2nd derivatives continuous across the interval. Conditions:
1. $s_j(x_j) = f(x_j)$, $j = 0, 1, \dots, N-1$
2. $s_j(x_{j+1}) = f(x_{j+1})$, $j = 0, 1, \dots, N-1$
3. $s'_j(x_{j+1}) = s'_{j+1}(x_{j+1})$, $j = 0, 1, \dots, N-2$
4. $s''_j(x_{j+1}) = s''_{j+1}(x_{j+1})$, $j = 0, 1, \dots, N-2$
Clamped Cubic Spline (most accurate): $s'_0(x_0) = f'(x_0)$, $\; s'_{N-1}(x_N) = f'(x_N)$
Natural Cubic Spline (most common): $s''_0(x_0) = 0$, $\; s''_{N-1}(x_N) = 0$
First Forward Difference Approximation (2 pts): $f'_j = \frac{f_{j+1} - f_j}{h} + O(h)$, $\quad O(h) = -\frac{h}{2}f''(\xi)$
First Forward Difference Approximation (3 pts): $f'_j = \frac{-f_{j+2} + 4f_{j+1} - 3f_j}{2h} + O(h^2)$, $\quad O(h^2) = \frac{h^2}{3}f'''(\xi)$
First Centered Difference Approximation: $f'_j = \frac{f_{j+1} - f_{j-1}}{2h} + O(h^2)$, $\quad O(h^2) = -\frac{h^2}{6}f'''(\xi)$
Second Forward Difference Approximation: $f''_j = \frac{f_{j+2} - 2f_{j+1} + f_j}{h^2} + O(h)$, $\quad O(h) = -h\,f'''(\xi)$
Second Centered Difference Approximation: $f''_j = \frac{f_{j+1} - 2f_j + f_{j-1}}{h^2} + O(h^2)$, $\quad O(h^2) = -\frac{h^2}{12}f^{(iv)}(\xi)$
Truncation and Round-Off Error (for the first centered difference):
$\varepsilon_{total} = \varepsilon_{truncation} + \varepsilon_{round\text{-}off} = O(h^m) + \left|\frac{\varepsilon_{j+1} - \varepsilon_{j-1}}{2h}\right| \le \frac{Mh^2}{6} + \frac{\varepsilon}{h}$
Both $\lim_{h\to 0} \varepsilon_{total} = \infty$ and $\lim_{h\to\infty} \varepsilon_{total} = \infty$, so find $h_{optimal}$ by setting $\frac{d\varepsilon_{total}}{dh} = 0$.
Polynomial Curve Fitting: Minimize the error $E = \sum_{i=1}^{m} [y_i - \tilde{p}(x_i)]^2$ for the polynomial
$\tilde{p}(x) = p_N(x) = a_N x^N + a_{N-1}x^{N-1} + \dots + a_1 x + a_0$ by setting $\frac{\partial E}{\partial a_0} = \frac{\partial E}{\partial a_1} = \dots = \frac{\partial E}{\partial a_N} = 0$. This gives $N+1$ normal equations:
$\sum_{k=0}^{N} a_k \left[\sum_{i=1}^{m} x_i^{\,j+k}\right] = \sum_{i=1}^{m} y_i x_i^{\,j}$, $\; j = 0, \dots, N$; in matrix form:
$\begin{pmatrix} m & \sum x_i & \cdots & \sum x_i^N \\ \sum x_i & \sum x_i^2 & \cdots & \vdots \\ \vdots & & \ddots & \vdots \\ \sum x_i^N & \cdots & \cdots & \sum x_i^{2N} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum y_i x_i \\ \vdots \\ \sum y_i x_i^N \end{pmatrix}$
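The normal equations can be sketched in Python. Illustrative code, not from the notes: the helper name `polyfit_normal` is this example's own, and it builds the moment matrix above and solves it with plain Gaussian elimination.

```python
# Least-squares polynomial fit via the normal equations
# sum_k a_k * sum_i x_i^(j+k) = sum_i y_i * x_i^j,  j = 0..N.

def polyfit_normal(xs, ys, N):
    """Return coefficients [a_0, ..., a_N] of the degree-N least-squares fit."""
    n = N + 1
    A = [[sum(x ** (j + k) for x in xs) for k in range(n)] for j in range(n)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    # Back substitution.
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 5.0, 10.0, 17.0]   # exactly y = x^2 + 1
print(polyfit_normal(xs, ys, 2))   # ~[1.0, 0.0, 1.0]
```

For large degree this moment matrix is famously ill-conditioned; the sketch is fine for small $N$ like the cheat-sheet examples.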
Non-polynomial Curve Fitting (ex.): Linearize the data and create a linear fit (e.g. $y = be^{ax} \Rightarrow \ln y = \ln b + ax$), or solve the normal equations directly using a nonlinear system-of-equations root-finding method.
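The log-linearization of $y = be^{ax}$ can be sketched as follows (illustrative code with synthetic data; the helper name `linear_fit` is this example's own):

```python
# Fit y = b*e^(a*x) by taking logs: ln(y) = ln(b) + a*x is linear in x,
# so an ordinary linear least-squares fit recovers ln(b) and a.
import math

def linear_fit(xs, ys):
    """Least-squares line y = c0 + c1*x via the 2x2 normal equations."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    c1 = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    c0 = (sy - c1 * sx) / m
    return c0, c1

# Synthetic data sampled exactly from y = 2*e^(0.5*x).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
lnb, a = linear_fit(xs, [math.log(y) for y in ys])
print(math.exp(lnb), a)   # ~2.0, ~0.5
```

Note the caveat from the section above: fitting in log space minimizes the error of $\ln y$, not of $y$ itself, which is why solving the nonlinear normal equations directly is the alternative.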
Numerical Quadrature: $I = \int_a^b f(x)\,dx = h f_a + \frac{h^2}{2!}f'_a + \frac{h^3}{3!}f''_a + \dots = \sum_{i=0}^{N} a_i f_i + (\text{error term})$. Look at the
order of polynomial that can be integrated exactly to determine the order of the error: the error term contains $f^{(N+1)}$, and $P_N^{(N+1)}(x) = 0$ for any polynomial of degree $N$.
Newton-Cotes Closed Formulas: $h = \frac{b-a}{N}$, where $N+1$ points are used.
Trapezoid Rule: Use the forward difference approximation for $f'_a$ to keep two terms:
$I = h f_a + \frac{h^2}{2!}f'_a + \frac{h^3}{3!}f''_a + \dots = \frac{h}{2}(f_a + f_b) - \frac{h^3}{12}f''(\xi)$, error $O(h^3)$. Can integrate a linear polynomial exactly.
Simpson’s Rule: Use the centered difference approximation, with the first derivative eliminated by expanding about the midpoint $c$ (here $h = \frac{b-a}{2}$):
$I(b) = I(c) + h f_c + \frac{h^2}{2!}f'_c + \frac{h^3}{3!}f''_c + \frac{h^4}{4!}f'''_c + \frac{h^5}{5!}f^{(iv)}_c + \dots$
$I(a) = I(c) - h f_c + \frac{h^2}{2!}f'_c - \frac{h^3}{3!}f''_c + \frac{h^4}{4!}f'''_c - \frac{h^5}{5!}f^{(iv)}_c + \dots$
Subtracting: $I(b) - I(a) = 2h f_c + \frac{2h^3}{3!}f''_c + \frac{2h^5}{5!}f^{(iv)}(\xi)$; substituting the second centered difference for $f''_c$ gives
$I = \frac{h}{3}[f_a + 4f_c + f_b] - \frac{h^5}{90}f^{(iv)}(\xi)$
Newton-Cotes Open Formulas: $h = \frac{b-a}{N+2}$, where $N+1$ points are used.
Midpoint Rule: $I = 2h f_0 + \frac{h^3}{3}f''(\xi)$; can integrate a linear polynomial exactly.
Other Rule ($N = 1$): $I = \frac{3h}{2}[f_0 + f_1] + \frac{3h^3}{4}f''(\xi)$; can still only integrate a linear polynomial exactly.
Newton-Cotes Rule: $\int P_{N+1}$ can be integrated exactly when $N$ is even; $\int P_N$ when $N$ is odd.
Composite Integration: Use low-order Newton-Cotes formulas on subintervals ("panels").
Composite Trapezoid Rule: $I = \frac{h}{2}\left(f_0 + 2\sum_{j=1}^{N-1} f_j + f_N\right) - \frac{b-a}{12}h^2 f''(\xi)$
The error becomes $E_{truncation} = \sum_{j=1}^{N}\left(-\frac{h^3}{12}f''(\xi_j)\right) = -\frac{h^3}{12}N f''(\xi) = -\frac{b-a}{12}h^2 f''(\xi)$, i.e. $O(h^2)$: a factor of $h$ is lost
when the rule is applied $N$ times (composite).
Composite Simpson’s Rule: $I = \frac{h}{3}\left(f_0 + 2\sum_{j=1}^{N/2-1} f_{2j} + 4\sum_{j=1}^{N/2} f_{2j-1} + f_N\right) - \frac{b-a}{180}h^4 f^{(iv)}(\xi)$, i.e. $O(h^4)$
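The composite rules above can be sketched as follows (illustrative code; `N` is the number of panels and must be even for Simpson):

```python
# Composite trapezoid (error O(h^2)) and composite Simpson (error O(h^4))
# on N equal panels of width h = (b - a)/N.
import math

def comp_trapezoid(f, a, b, N):
    """h/2 * (f_0 + 2*sum(interior) + f_N)."""
    h = (b - a) / N
    s = f(a) + f(b) + 2.0 * sum(f(a + j * h) for j in range(1, N))
    return h * s / 2.0

def comp_simpson(f, a, b, N):
    """h/3 * (f_0 + 4*sum(odd nodes) + 2*sum(even interior nodes) + f_N)."""
    h = (b - a) / N
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + j * h) for j in range(1, N, 2))
    s += 2.0 * sum(f(a + j * h) for j in range(2, N, 2))
    return h * s / 3.0

# Integral of sin(x) from 0 to pi is exactly 2; Simpson's O(h^4) error
# is visibly smaller than the trapezoid's O(h^2) error at the same N.
print(comp_trapezoid(math.sin, 0.0, math.pi, 64))  # ~1.99960
print(comp_simpson(math.sin, 0.0, math.pi, 64))    # ~2.0000000
```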
Numerical Integration Round-Off Error: The optimal error is always obtained with the smallest $h$, since round-off does not grow as $h$ shrinks:
$E_{round} = \left|\sum_{i=0}^{N} a_i \varepsilon_i\right| \le \varepsilon \sum_{i=0}^{N} a_i = \varepsilon\,(b - a)$, where $\varepsilon = \max(\varepsilon_i)$ and $\sum_{i=0}^{N} a_i = \int_a^b dx = b - a$.
Romberg Integration: Iteratively compute the quadrature, each time combining quadratures of different
panel sizes to cancel the leading error term: $R_{m,j} = \frac{4^j R_{m,j-1} - R_{m-1,j-1}}{4^j - 1}$, error $E \sim O\!\left(\left(\frac{h}{2^m}\right)^{2j+2}\right)$,
where $m$ = number of panel doublings and $j$ = number of error removals.
$h$: $R_{0,0} = R_N$; $\quad h/2$: $R_{1,0} = R_{2N}$; $\quad h/4$: $R_{2,0} = R_{4N}$; $\quad h/8$: $R_{3,0} = R_{8N}$
$R_{1,1} = \frac{4R_{1,0} - R_{0,0}}{4 - 1}$, $\quad R_{2,1} = \frac{4R_{2,0} - R_{1,0}}{4 - 1}$, $\quad R_{3,1} = \frac{4R_{3,0} - R_{2,0}}{4 - 1}$
$R_{2,2} = \frac{4^2 R_{2,1} - R_{1,1}}{4^2 - 1}$, $\quad R_{3,2} = \frac{4^2 R_{3,1} - R_{2,1}}{4^2 - 1}$, $\quad R_{3,3} = \frac{4^3 R_{3,2} - R_{2,2}}{4^3 - 1}$
Adaptive Quadrature: Continually subdivide an interval in half, computing a quadrature rule on each piece, until the criterion is satisfied (ex. Simpson’s Rule):
$\left|S_{i,j} - S_{i,j+1} - S_{i+1,j+1}\right| < M\varepsilon\left(\tfrac{1}{2}\right)^j$, where $\varepsilon$ is the tolerance, $M$ is specific to the quadrature rule, $i$ is
the section index, and $j$ is the refinement level.
Gaussian Quadrature: The set of optimal points for approximating $\int_a^b f(x)\,dx$, in that it integrates
exactly the maximum-order polynomial with the minimum number of points: a polynomial of order $2N - 1$ can be integrated exactly with $N$ points.
$\int_a^b f(x)\,dx = \frac{b-a}{2}\sum_{i=1}^{N} a_i f(x_i)$, where $x_i = \frac{a+b}{2} + \frac{b-a}{2}\tilde{x}_i$; the $\tilde{x}_i$ are the Gauss points on $[-1,1]$ and the $a_i$ are the Gaussian weights.
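The Romberg tableau can be sketched as follows (illustrative code built on the composite trapezoid rule; the halting level is fixed rather than adaptive for brevity):

```python
# Romberg integration: column j of the tableau removes the next error term
# via R[m][j] = (4^j * R[m][j-1] - R[m-1][j-1]) / (4^j - 1), where row m
# uses the composite trapezoid rule on 2^m panels.
import math

def trapezoid(f, a, b, N):
    """Composite trapezoid rule on N panels."""
    h = (b - a) / N
    return h * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, N))) / 2.0

def romberg(f, a, b, levels):
    """Return the triangular Romberg tableau after `levels` panel doublings."""
    R = [[trapezoid(f, a, b, 2 ** m)] for m in range(levels + 1)]
    for m in range(1, levels + 1):
        for j in range(1, m + 1):
            R[m].append((4 ** j * R[m][j - 1] - R[m - 1][j - 1]) / (4 ** j - 1))
    return R

# Integral of e^x from 0 to 1 is exactly e - 1 = 1.718281828...
R = romberg(math.exp, 0.0, 1.0, 4)
print(R[0][0])   # plain trapezoid, 1 panel: crude
print(R[4][4])   # ~e - 1, accurate to many digits
```

Four doublings start from only 16 panels, yet the extrapolated entry $R_{4,4}$ is far more accurate than the underlying trapezoid values, which is the point of cancelling the leading error terms.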