Chapter 7: Numerical Methods for the Solution of Systems of Equations
Introduction
This chapter covers techniques for solving both linear and nonlinear systems of equations.
Two important problems from linear algebra:
– The linear systems problem: given an $n \times n$ matrix $A$ and a vector $b$, find $x$ such that $Ax = b$.
– The nonlinear systems problem: given a function $f : \mathbb{R}^n \to \mathbb{R}^n$, find $x$ such that $f(x) = 0$.
7.1 Linear Algebra Review
Theorem 7.1 and Corollary 7.1
Singular vs. nonsingular matrices
Tridiagonal Matrices
– Tridiagonal: $a_{ij} = 0$ whenever $|i - j| > 1$.
– Upper triangular: $a_{ij} = 0$ whenever $i > j$.
– Lower triangular: $a_{ij} = 0$ whenever $i < j$.
Symmetric matrices, positive definite matrices, and the concepts of independence/dependence, spanning, basis, vector space/subspace, dimension, and orthogonal/orthonormal vectors should also be reviewed.
7.2 Linear Systems and Gaussian Elimination
As in Section 2.6, a linear system can be written as a single augmented matrix $[A \mid b]$.
Elementary row operations are then used to solve the linear systems problem.
Row equivalent: two matrices are said to be row equivalent if one can be obtained from the other using only elementary row operations.
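To make the procedure concrete, here is a minimal sketch of naive Gaussian elimination with back substitution (an illustration in the spirit of Algorithm 7.1, not the book's exact code):

```python
import numpy as np

def naive_gauss_solve(A, b):
    """Solve Ax = b by naive Gaussian elimination (no pivoting).

    A minimal sketch: assumes every pivot A[k, k] is nonzero.
    """
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(naive_gauss_solve(A, b))   # [0.8, 1.4]
```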
Theorem 7.2
Example 7.1
Example 7.1 (cont.)
Partial Pivoting
The Problem of Naive Gaussian Elimination
The problem with naive Gaussian elimination is the potential for division by a zero pivot.
For example, consider the following system:
The exact solution:
What happens when we solve this system using the naive algorithm and the pivoting algorithm?
Discussion
– Using the naive algorithm: the computed solution is incorrect.
– Using the pivoting algorithm: the computed solution is correct.
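A classic illustration of this failure mode (a hypothetical system, not necessarily the one in the text): take a tiny pivot $\epsilon$ in

$$ \begin{pmatrix} \epsilon & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, $$

whose exact solution is $x_1 = 1/(1-\epsilon)$, $x_2 = (1-2\epsilon)/(1-\epsilon)$, both $\approx 1$ for small $\epsilon$. Naive elimination forms the huge multiplier $1/\epsilon$, and in floating point the entry $1 - 1/\epsilon$ swamps the original data, so the computed $x_1$ can be badly wrong. Exchanging the two rows first (partial pivoting) keeps every multiplier at most 1 in magnitude and recovers the solution accurately.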
7.3 Operation Counts
You can trace Algorithms 7.1 and 7.2 to evaluate their computational cost.
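For reference, the counts that such a trace yields are standard: step $k$ of the elimination updates $n-k$ rows of length about $n-k$, so the elimination step costs

$$ \sum_{k=1}^{n-1} (n-k)(n-k+1) = \frac{n^3}{3} + O(n^2) \quad \text{multiplications}, $$

while back substitution costs only about $n^2/2$ multiplications.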
7.4 The LU Factorization
Our goal in this section is to develop a matrix factorization that allows us to save the work from the elimination step when we solve repeatedly with the same coefficient matrix.
Why don't we just compute $A^{-1}$ (to check whether $A$ is nonsingular)?
– The answer is that it is not cost-effective to do so.
– The total cost works out to be substantially higher than elimination itself (Exercise 7).
What we will do is show that we can factor the matrix $A$ into the product of a lower triangular and an upper triangular matrix: $A = LU$.
The LU Factorization
Example 7.2
Example 7.2 (cont.)
The Computational Cost
The total cost of the above process is dominated by the factorization step, about $n^3/3$ multiplications.
If we have already done the factorization, then the cost of the two triangular solution steps is only about $n^2$ multiplications.
Constructing the LU factorization is surprisingly easy.
– The LU factorization is nothing more than a very slight reorganization of the same Gaussian elimination algorithm we studied earlier in this chapter.
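A minimal sketch of that reorganization (Doolittle-style: the multipliers are stored where the eliminated entries used to be; assumes no pivoting is needed):

```python
import numpy as np

def lu_factor(A):
    """Factor A = LU in place: U is the upper triangle of the result,
    and the stored multipliers (strict lower triangle) define L,
    which has a unit diagonal.  Assumes all pivots are nonzero.
    """
    A = A.astype(float)
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i, k] /= A[k, k]                  # store multiplier l_ik
            A[i, k + 1:] -= A[i, k] * A[k, k + 1:]
    return A

def lu_solve(LU, b):
    """Solve LUx = b: forward solve Ly = b, then back solve Ux = y."""
    n = len(b)
    y = b.astype(float)
    for i in range(1, n):                       # forward substitution
        y[i] -= LU[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # back substitution
        x[i] = (y[i] - LU[i, i + 1:] @ x[i + 1:]) / LU[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
LU = lu_factor(A)
print(lu_solve(LU, np.array([5.0, -2.0, 9.0])))   # [1. 1. 2.]
```

Once the factorization is in hand, each additional right-hand side costs only the two $O(n^2)$ triangular solves.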
The LU Factorization: Algorithms 7.5 and 7.6
Example 7.3
Example 7.3 (cont.): the resulting factors $L$ and $U$.
Pivoting and the LU Decomposition
Can we pivot in the LU decomposition without destroying the algorithm?
– Because of the triangular structure of the LU factors, we can implement pivoting almost exactly as we did before.
– The difference is that we must keep track of how the rows are interchanged in order to properly apply the forward and backward solution steps.
Example 7.4
Example 7.4 (cont.)
We need to keep track of the row interchanges.
Discussion
How do we keep track of the row interchanges?
– By using an index array $J$.
– For example, Example 7.4 produces the final index array $J$ shown there; you can check that it is correct.
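A sketch of how such an index array can drive the solves (illustrative code, not the book's Algorithms 7.5/7.6: here J[i] names the original row that serves as the i-th pivot row, and the LU factors are stored in place without physically swapping rows):

```python
def plu_solve(LU, J, b):
    """Solve Ax = b given in-place LU factors and an index array J,
    where J[i] is the original row used as the i-th pivot row.

    The storage convention is an assumption for illustration only.
    """
    n = len(b)
    y = [0.0] * n
    for i in range(n):                   # forward solve Ly = Pb via J
        y[i] = b[J[i]] - sum(LU[J[i]][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back solve Ux = y
        s = sum(LU[J[i]][k] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / LU[J[i]][i]
    return x
```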
7.5 Perturbation, Conditioning, and Stability
Example 7.5
7.5.1 Vector and Matrix Norms
For example:
– Infinity norm: $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$
– Euclidean 2-norm: $\|x\|_2 = \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$
Matrix Norm
The defining properties of an (induced) matrix norm include:
(1) $\|Ax\| \le \|A\| \, \|x\|$
(2) $\|AB\| \le \|A\| \, \|B\|$
For example:
– The matrix infinity norm: $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|$ (the maximum absolute row sum)
– The matrix 2-norm: $\|A\|_2 = \sqrt{\lambda_{\max}(A^T A)}$ (the square root of the largest eigenvalue of $A^T A$)
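These are easy to verify numerically; numpy's `norm` implements all four (a quick check, with the hand-computed values in the comments):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
A = np.array([[1.0, -2.0], [3.0, 4.0]])

print(np.linalg.norm(x, np.inf))   # 4.0   = max |x_i|
print(np.linalg.norm(x, 2))        # sqrt(9 + 16 + 1) ~ 5.099
print(np.linalg.norm(A, np.inf))   # 7.0   = max row sum |3| + |4|
print(np.linalg.norm(A, 2))        # largest singular value ~ 5.117
```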
Example 7.6
7.5.2 The Condition Number and Perturbations
Note that the condition number is defined as $\kappa(A) = \|A\| \, \|A^{-1}\|$.
Definition 7.3 and Theorem 7.3
Since $AA^{-1} = I$, we have $1 = \|I\| \le \|A\| \, \|A^{-1}\| = \kappa(A)$, so $\kappa(A) \ge 1$.
Theorem 7.4
Theorems 7.5 and 7.6
Theorem 7.7
Definition 7.4
For an example, see Example 7.7.
Theorem 7.9
Discussion
Is Gaussian elimination with partial pivoting a stable process?
– For a sufficiently accurate computer (unit roundoff $u$ small enough) and a sufficiently small problem ($n$ small enough), Gaussian elimination with partial pivoting will produce solutions that are stable and accurate.
7.5.3 Estimating the Condition Number
Singular matrices are perhaps something of a rarity, and every singular matrix is arbitrarily close to nonsingular matrices.
If the solution to a linear system changes a great deal when the problem changes only very slightly, then we suspect that the matrix is ill conditioned (nearly singular).
The condition number is an important indicator for detecting an ill conditioned matrix.
Estimating the Condition Number
How can we estimate the condition number without the expense of computing $A^{-1}$?
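One cheap approach (a sketch of the general idea; the text's estimator may differ in detail): since $\|A^{-1}\| \ge \|y\| / \|c\|$ whenever $Ay = c$, solving the system for a few right-hand sides and taking the largest ratio gives a lower bound on $\kappa(A) = \|A\| \, \|A^{-1}\|$:

```python
import numpy as np

def estimate_condition(A, trials=5):
    """Lower-bound estimate of the infinity-norm condition number.

    Uses ||A^{-1}|| >= ||y|| / ||c|| for any c with Ay = c.  A real
    code would reuse an existing LU factorization of A here instead
    of calling np.linalg.solve repeatedly.
    """
    rng = np.random.default_rng(0)
    norm_A = np.linalg.norm(A, np.inf)
    est = 0.0
    for _ in range(trials):
        c = rng.uniform(-1.0, 1.0, A.shape[0])
        y = np.linalg.solve(A, c)
        est = max(est, np.linalg.norm(y, np.inf) / np.linalg.norm(c, np.inf))
    return norm_A * est

A = np.array([[1.0, 0.99], [0.99, 0.98]])   # nearly singular: det = -1e-4
print(estimate_condition(A))                 # a large lower bound
print(np.linalg.cond(A, np.inf))             # exact value: 39601.0
```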
Example 7.8
7.5.4 Iterative Refinement
Since Gaussian elimination can be adversely affected by rounding error, especially if the matrix is ill conditioned, the iterative refinement (iterative improvement) algorithm can be used to improve the accuracy of a computed solution.
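A minimal sketch of the loop (ideally the residual $r = b - A\hat{x}$ is accumulated in higher precision; this sketch stays in double precision and uses a generic solver where a real code would reuse the LU factors of $A$):

```python
import numpy as np

def refine(A, b, x, steps=3):
    """One or more sweeps of iterative refinement."""
    for _ in range(steps):
        r = b - A @ x               # residual of the current solution
        d = np.linalg.solve(A, r)   # correction: solve A d = r
        x = x + d                   # improved solution
    return x
```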
Example 7.9
Example 7.9 (cont.)
Compare the computed solutions before and after refinement.
7.6 SPD Matrices and the Cholesky Decomposition
SPD matrices: symmetric, positive definite matrices.
You can prove this theorem (the existence of the Cholesky factorization for SPD matrices) by induction.
The Cholesky Decomposition
There are a number of different ways of actually constructing the Cholesky decomposition.
All of these constructions are equivalent, because the Cholesky factorization is unique.
One common scheme uses the following formulas (in the standard form, writing $A = GG^T$ with $G$ lower triangular):
$$ g_{jj} = \Bigl( a_{jj} - \sum_{k=1}^{j-1} g_{jk}^2 \Bigr)^{1/2}, \qquad g_{ij} = \frac{1}{g_{jj}} \Bigl( a_{ij} - \sum_{k=1}^{j-1} g_{ik} g_{jk} \Bigr), \quad i > j. $$
This is a very efficient algorithm. You can read Section 9.22 to learn more about the Cholesky method.
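A direct transcription of those formulas (a sketch; assumes $A$ is SPD and does not guard against a negative value under the square root):

```python
import numpy as np

def cholesky(A):
    """Return lower triangular G with A = G @ G.T."""
    n = A.shape[0]
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            # Off-diagonal entry: subtract the already-computed products.
            G[i, j] = (A[i, j] - G[i, :j] @ G[j, :j]) / G[j, j]
        # Diagonal entry: requires positive definiteness.
        G[i, i] = np.sqrt(A[i, i] - G[i, :i] @ G[i, :i])
    return G

A = np.array([[4.0, 2.0], [2.0, 3.0]])
G = cholesky(A)
print(np.allclose(G @ G.T, A))   # True
```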
7.7 Iterative Methods for Linear Systems: A Brief Survey
If the coefficient matrix is very large and sparse, then Gaussian elimination may not be the best way to solve the linear system problem.
Why?
– Even though $A$ is sparse, the individual factors $L$ and $U$ in $A = LU$ may not be as sparse as $A$.
Example 7.10
Example 7.10 (cont.)
Splitting Methods (for details, see Chapter 9)
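The general pattern (in standard notation, which may differ slightly from the book's): split $A = M - N$ with $M$ easy to invert, and iterate

$$ M x^{(k+1)} = N x^{(k)} + b, \qquad k = 0, 1, 2, \ldots $$

Since the exact solution satisfies $Mx = Nx + b$, the error obeys $e^{(k+1)} = M^{-1} N e^{(k)}$, and the iteration converges for every initial guess exactly when the iteration matrix $M^{-1}N$ has spectral radius less than 1.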
Theorem 7.13
Definition 7.6
Theorem 7.14
Conclusion:
Example of Splitting Methods--Jacobi Iteration
Jacobi iteration:
In this method, the splitting matrix is $M = D$, the diagonal part of $A$.
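In component form this is $x_i^{(k+1)} = \bigl( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \bigr) / a_{ii}$; a minimal sketch:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi iteration: M = D, the diagonal of A.

    A sketch; assumes a_ii != 0, and convergence is guaranteed if,
    for example, A is strictly diagonally dominant.
    """
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # off-diagonal part of A
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # solve D x_new = b - R x
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b, np.zeros(2)))          # approx [1/6, 1/3]
```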
Example 7.12
Example 7.12 (cont.)
Example of Splitting Methods--Gauss-Seidel Iteration
Gauss-Seidel iteration:
In this method, $M = L$, where $L$ here denotes the lower triangular part of $A$ (including the diagonal).
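The only change from the Jacobi sketch above is that each updated component is used immediately within the sweep:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: M = the lower triangular part of A.

    Same assumptions as the Jacobi sketch above.
    """
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # New values x[:i] are used immediately; old values for j > i.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x
```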
Example 7.13
Theorem 7.15
Example of Splitting Methods--SOR Iteration
SOR: successive over-relaxation iteration
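In its usual component form (with relaxation parameter $\omega$; taking $\omega = 1$ recovers Gauss-Seidel):

$$ x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}} \Bigl( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \Bigr). $$

Choosing $\omega \in (1, 2)$ can accelerate convergence substantially for the right class of problems.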
Example 7.14
Theorem 7.16