Iterative Solution Methods
1
Iterative Solution Methods

Start with an initial approximation for the solution vector (x^(0)).
At each iteration, update the x vector by using the system Ax = b.
During the iterations the matrix A is not changed, so sparsity is preserved.
Each iteration involves a matrix-vector product.
If A is sparse, this product can be done efficiently.
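The last point can be made concrete: if A is stored in a sparse format such as CSR (compressed sparse row), the matrix-vector product touches only the nonzero entries, costing O(nnz) instead of O(n²). A minimal sketch (the 3×3 matrix is an arbitrary illustration, not one from these slides):

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR-stored sparse A: only nonzero entries are touched."""
    y = []
    for i in range(len(indptr) - 1):           # loop over rows
        start, end = indptr[i], indptr[i + 1]  # range of nonzeros of row i
        y.append(sum(data[k] * x[indices[k]] for k in range(start, end)))
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]] stored in CSR form:
data    = [4.0, 1.0, 3.0, 2.0, 5.0]   # nonzero values, row by row
indices = [0, 2, 1, 0, 2]             # column index of each value
indptr  = [0, 2, 3, 5]                # where each row starts in data
y = csr_matvec(data, indices, indptr, [1.0, 2.0, 3.0])  # -> [7.0, 6.0, 17.0]
```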
2
Iterative solution procedure

Write the system Ax = b in an equivalent form x = Ex + f (like x = g(x) for fixed-point iteration).
Starting with x^(0), generate a sequence of approximations {x^(k)} iteratively by x^(k+1) = Ex^(k) + f.
The representation of E and f depends on the type of method used.
For every method, E and f are obtained from A and b, but in a different way.
3
Convergence

As k → ∞, the sequence {x^(k)} converges to the solution vector under some conditions on the E matrix.
This imposes different conditions on the A matrix for different methods.
For the same A matrix, one method may converge while another diverges.
Therefore, for each method the relation between A and E must be found to decide on convergence.
4
Different iterative methods

Jacobi iteration
Gauss-Seidel iteration
Successive Over-Relaxation (SOR)

SOR is a method used to accelerate the convergence.
Gauss-Seidel iteration is a special case of the SOR method.
5
Jacobi iteration

For the system

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
⋮
aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ = bₙ

start from an initial guess x^(0) = (x₁^(0), x₂^(0), …, xₙ^(0)) and solve the i-th equation for xᵢ, using the previous iterate for all other unknowns:

x₁^(k+1) = (b₁ − a₁₂x₂^(k) − ⋯ − a₁ₙxₙ^(k)) / a₁₁
x₂^(k+1) = (b₂ − a₂₁x₁^(k) − a₂₃x₃^(k) − ⋯ − a₂ₙxₙ^(k)) / a₂₂
⋮
xₙ^(k+1) = (bₙ − aₙ₁x₁^(k) − ⋯ − a_{n,n−1}x_{n−1}^(k)) / aₙₙ

In general,

$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1,\; j\ne i}^{n} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, \dots, n$
6
x^(k+1) = Ex^(k) + f iteration for the Jacobi method

A can be written as A = L + D + U (a splitting, not a decomposition), where D is the diagonal of A, L its strictly lower triangular part, and U its strictly upper triangular part; e.g. for n = 3:

$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0\\ a_{21} & 0 & 0\\ a_{31} & a_{32} & 0 \end{bmatrix} + \begin{bmatrix} a_{11} & 0 & 0\\ 0 & a_{22} & 0\\ 0 & 0 & a_{33} \end{bmatrix} + \begin{bmatrix} 0 & a_{12} & a_{13}\\ 0 & 0 & a_{23}\\ 0 & 0 & 0 \end{bmatrix}$

Ax = b ⇒ (L + D + U)x = b
Dx^(k+1) = −(L + U)x^(k) + b
x^(k+1) = −D⁻¹(L + U)x^(k) + D⁻¹b

E = −D⁻¹(L + U)
f = D⁻¹b
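In component form, one Jacobi sweep can be sketched in a few lines of plain Python (the 3×3 system used here is the diagonally dominant example solved later in these slides):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every component is updated from the previous iterate x."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = jacobi_step(A, b, [0.0, 0.0, 0.0])  # -> [1.75, 2.625, 3.0]
```

Note that the sweep builds a whole new vector from the old one, so the component updates are independent and could be done in parallel.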
7
Gauss-Seidel (GS) iteration

Again start from an initial guess x^(0), but now use the latest updates as soon as they are available:

x₁^(k+1) = (b₁ − a₁₂x₂^(k) − ⋯ − a₁ₙxₙ^(k)) / a₁₁
x₂^(k+1) = (b₂ − a₂₁x₁^(k+1) − a₂₃x₃^(k) − ⋯ − a₂ₙxₙ^(k)) / a₂₂
⋮
xₙ^(k+1) = (bₙ − aₙ₁x₁^(k+1) − ⋯ − a_{n,n−1}x_{n−1}^(k+1)) / aₙₙ

In general,

$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, \dots, n$
8
x^(k+1) = Ex^(k) + f iteration for Gauss-Seidel

$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k)}\Big)$

Ax = b ⇒ (L + D + U)x = b
(D + L)x^(k+1) = −Ux^(k) + b
x^(k+1) = −(D + L)⁻¹Ux^(k) + (D + L)⁻¹b

E = −(D + L)⁻¹U
f = (D + L)⁻¹b
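A sketch of one Gauss-Seidel sweep; unlike Jacobi, the vector is overwritten component by component, so later updates already see the new earlier components (same example system as in the Jacobi sketch):

```python
def gauss_seidel_step(A, b, x):
    """One GS sweep: components are overwritten as soon as they are computed."""
    n = len(A)
    x = list(x)  # work on a copy; updated entries are reused within the sweep
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = gauss_seidel_step(A, b, [0.0, 0.0, 0.0])  # -> [1.75, 3.5, 3.0]
```

Compare with the Jacobi sweep from zero, which gives (1.75, 2.625, 3.0): the second and third components already use the updated x₁.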
9
Comparison

Gauss-Seidel iteration usually converges more rapidly than Jacobi iteration, since it uses the latest updates.
But there are cases where Jacobi iteration converges while Gauss-Seidel does not.
To accelerate the Gauss-Seidel method even further, the successive over-relaxation method can be used.
10
Successive Over Relaxation Method

The GS iteration can also be written as the previous iterate plus a correction term:

$x_i^{(k+1)} = x_i^{(k)} + \underbrace{\frac{1}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij}\, x_j^{(k)}\Big)}_{\text{correction term}}$

[Figure: successive iterates x_i^(0), x_i^(1), x_i^(2), x_i^(3) approaching the solution, with and without the correction scaled.]

Multiplying the correction term with a factor ω > 1 can give faster convergence.
11
SOR

$x_i^{(k+1)} = x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij}\, x_j^{(k)}\Big) = (1-\omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k)}\Big)$

1 < ω < 2: over-relaxation (faster convergence)
0 < ω < 1: under-relaxation (slower convergence)
There is an optimum value for ω; find it by trial and error (usually around 1.6).
12
x^(k+1) = Ex^(k) + f iteration for SOR

$x_i^{(k+1)} = (1-\omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k)}\Big)$

In matrix form:

Dx^(k+1) = (1 − ω)Dx^(k) + ωb − ωLx^(k+1) − ωUx^(k)
(D + ωL)x^(k+1) = [(1 − ω)D − ωU]x^(k) + ωb

E = (D + ωL)⁻¹[(1 − ω)D − ωU]
f = ω(D + ωL)⁻¹b
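The componentwise SOR sweep is the GS sweep with the correction scaled by ω; ω = 1 reduces exactly to Gauss-Seidel. A sketch (the system is again the slides' later example):

```python
def sor_step(A, b, x, omega):
    """One SOR sweep: x_i <- (1 - omega) * x_i + omega * (GS update for x_i)."""
    n = len(A)
    x = list(x)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        gs = (b[i] - s) / A[i][i]          # plain Gauss-Seidel value
        x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = sor_step(A, b, [0.0, 0.0, 0.0], 1.0)   # omega = 1 gives the GS sweep
```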
13
The Conjugate Gradient Method

d^(0) = r^(0) = b − Ax^(0)

$\alpha_i = \frac{r^{(i)T} r^{(i)}}{d^{(i)T} A\, d^{(i)}}$
$x^{(i+1)} = x^{(i)} + \alpha_i d^{(i)}$
$r^{(i+1)} = r^{(i)} - \alpha_i A d^{(i)}$
$\beta_{i+1} = \frac{r^{(i+1)T} r^{(i+1)}}{r^{(i)T} r^{(i)}}$
$d^{(i+1)} = r^{(i+1)} + \beta_{i+1} d^{(i)}$

• Converges if A is a symmetric positive definite matrix
• Convergence is faster
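The update equations above translate almost line for line into code. A minimal dense-matrix sketch (the 2×2 SPD test system is an arbitrary illustration):

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=1000):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]        # r0 = b - A x0
    d = list(r)                                          # d0 = r0
    for _ in range(max_iter):
        rr = dot(r, r)
        if rr ** 0.5 < tol:                              # stop on residual norm
            break
        Ad = matvec(d)
        alpha = rr / dot(d, Ad)                          # step length
        x = [xi + alpha * di for xi, di in zip(x, d)]    # x_{i+1}
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)] # r_{i+1}
        beta = dot(r, r) / rr                            # beta_{i+1}
        d = [ri + beta * di for ri, di in zip(r, d)]     # new search direction
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])  # exact solution is (1/11, 7/11)
```

In exact arithmetic CG reaches the solution of an n×n SPD system in at most n steps; in practice it is used as an iterative method and stopped on the residual norm.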
14
Convergence of Iterative Methods

Define the solution vector as x̂ (it satisfies x̂ = Ex̂ + f).
Define an error vector as e^(k) = x^(k) − x̂.
Substituting x^(k) = e^(k) + x̂ into x^(k+1) = Ex^(k) + f:

e^(k+1) + x̂ = E(e^(k) + x̂) + f = Ee^(k) + (Ex̂ + f) = Ee^(k) + x̂

e^(k+1) = Ee^(k) = E²e^(k−1) = ⋯ = E^(k+1)e^(0)
15
Convergence of Iterative Methods

e^(k+1) = E^(k+1)e^(0)

Convergence condition:

lim_{k→∞} e^(k+1) = 0 if lim_{k→∞} E^(k+1) = 0
(on the left k+1 is an iteration index, on the right it is a matrix power)

The iterative method will converge for any initial vector if this condition is satisfied.
16
Norm of a vector

A vector norm should satisfy these conditions:

‖x‖ > 0 for every nonzero vector x
‖x‖ = 0 iff x is the zero vector
‖αx‖ = |α|‖x‖ for any scalar α
‖x + y‖ ≤ ‖x‖ + ‖y‖

Vector norms can be defined in different forms as long as the norm definition satisfies these conditions.
17
Commonly used vector norms

Sum norm or ℓ1 norm: ‖x‖₁ = |x₁| + |x₂| + ⋯ + |xₙ|
Euclidean norm or ℓ2 norm: ‖x‖₂ = (x₁² + x₂² + ⋯ + xₙ²)^{1/2}
Maximum norm or ℓ∞ norm: ‖x‖∞ = maxᵢ |xᵢ|
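The three norms in code (a trivial sketch):

```python
def norm_1(x):            # sum norm
    return sum(abs(v) for v in x)

def norm_2(x):            # Euclidean norm
    return sum(v * v for v in x) ** 0.5

def norm_inf(x):          # maximum norm
    return max(abs(v) for v in x)

v = [3.0, -4.0]
# norm_1(v) = 7.0, norm_2(v) = 5.0, norm_inf(v) = 4.0
```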
18
Norm of a matrix

A matrix norm should satisfy these conditions:

‖A‖ > 0 for every nonzero matrix A
‖A‖ = 0 iff A is the zero matrix
‖αA‖ = |α|‖A‖ for any scalar α
‖A + B‖ ≤ ‖A‖ + ‖B‖

Important identity (consistency with vector norms):
‖Ax‖ ≤ ‖A‖‖x‖, where x is a vector
19
Commonly used matrix norms

Maximum column-sum norm or ℓ1 norm: $\|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^{m} |a_{ij}|$
Spectral norm or ℓ2 norm: $\|A\|_2 = \sqrt{\text{maximum eigenvalue of } A^T A}$
Maximum row-sum norm or ℓ∞ norm: $\|A\|_\infty = \max_{1\le i\le m} \sum_{j=1}^{n} |a_{ij}|$
20
Example

Compute the ℓ1 and ℓ∞ norms of the matrix

$A = \begin{bmatrix} 3 & 9 & 5\\ 7 & 2 & 4\\ 6 & 8 & 1 \end{bmatrix}$

Row sums: 17, 13, 15 → ‖A‖∞ = 17
Column sums: 16, 19, 10 → ‖A‖₁ = 19
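The same computation in code, using the 3×3 matrix of this example:

```python
def matrix_norm_1(A):     # maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def matrix_norm_inf(A):   # maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in A)

A = [[3, 9, 5],
     [7, 2, 4],
     [6, 8, 1]]
# matrix_norm_1(A) = 19 (second column), matrix_norm_inf(A) = 17 (first row)
```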
21
Convergence condition

lim_{k→∞} e^(k+1) = 0 if lim_{k→∞} E^(k+1) = 0

Express E in terms of its modal matrix P and Λ, the diagonal matrix with the eigenvalues of E on the diagonal:

E = PΛP⁻¹
E^(k+1) = (PΛP⁻¹)(PΛP⁻¹)⋯(PΛP⁻¹) = PΛ^(k+1)P⁻¹
Λ^(k+1) = diag(λ₁^(k+1), λ₂^(k+1), …, λₙ^(k+1))

lim_{k→∞} E^(k+1) = 0 iff lim_{k→∞} Λ^(k+1) = 0 iff lim_{k→∞} λᵢ^(k) = 0 for i = 1, 2, …, n,
i.e., |λᵢ| < 1 for i = 1, 2, …, n
22
Sufficient condition for convergence

If the magnitudes of all eigenvalues of the iteration matrix E are less than 1, then the iteration is convergent. It is easier to compute the norm of a matrix than to compute its eigenvalues, and for any eigenpair Ex = λx:

|λ|‖x‖ = ‖λx‖ = ‖Ex‖ ≤ ‖E‖‖x‖  ⇒  |λ| ≤ ‖E‖

Therefore ‖E‖ < 1 is a sufficient condition for convergence.
23
Convergence of Jacobi iteration

E = −D⁻¹(L + U):

$E = \begin{bmatrix} 0 & -\frac{a_{12}}{a_{11}} & -\frac{a_{13}}{a_{11}} & \cdots & -\frac{a_{1n}}{a_{11}} \\ -\frac{a_{21}}{a_{22}} & 0 & -\frac{a_{23}}{a_{22}} & \cdots & -\frac{a_{2n}}{a_{22}} \\ \vdots & & \ddots & & \vdots \\ -\frac{a_{n1}}{a_{nn}} & -\frac{a_{n2}}{a_{nn}} & \cdots & -\frac{a_{n,n-1}}{a_{nn}} & 0 \end{bmatrix}$
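Building E for Jacobi and checking its infinity norm directly can be sketched as follows (the example matrix is the diagonally dominant system used later in these slides):

```python
def jacobi_iteration_matrix(A):
    """E = -D^-1 (L + U): zero diagonal, -a_ij / a_ii off the diagonal."""
    n = len(A)
    return [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
            for i in range(n)]

def matrix_norm_inf(E):
    return max(sum(abs(v) for v in row) for row in E)

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
E = jacobi_iteration_matrix(A)
# absolute row sums of E: 0.5, 0.625, 0.6 -> ||E||_inf = 0.625 < 1,
# so the Jacobi iteration converges for this matrix
```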
24
Convergence of Jacobi iteration

Evaluate the infinity (maximum row-sum) norm of E:

$\|E\|_\infty = \max_{1\le i\le n} \sum_{j=1,\; j\ne i}^{n} \frac{|a_{ij}|}{|a_{ii}|}$

This norm is less than 1 when $\sum_{j=1,\; j\ne i}^{n} |a_{ij}| < |a_{ii}|$ for i = 1, 2, …, n, i.e., when A is a diagonally dominant matrix.

If A is a diagonally dominant matrix, then the Jacobi iteration converges for any initial vector.
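The row-wise check in code (strict diagonal dominance; the two test systems are those of the examples that follow, as reconstructed here):

```python
def is_diagonally_dominant(A):
    """Strict row diagonal dominance: |a_ii| > sum over j != i of |a_ij|."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

ok  = is_diagonally_dominant([[4, -1, 1], [4, -8, 1], [-2, 1, 5]])   # True
bad = is_diagonally_dominant([[-2, 1, 5], [4, -8, 1], [4, -1, 1]])   # False
```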
25
Stopping Criteria

For the system Ax = b, the residual term at iteration k is

r^(k) = b − Ax^(k)

Check the norm of the residual term, ‖b − Ax^(k)‖; if it is less than a threshold value, stop.
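As code, the residual check is (a sketch; the test system is the example solved next):

```python
def residual_norm(A, b, x):
    """||b - A x||_2, the quantity monitored at each iteration."""
    n = len(A)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(v * v for v in r) ** 0.5

def converged(A, b, x, tol=1e-8):
    return residual_norm(A, b, x) < tol

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
r0 = residual_norm(A, b, [0.0, 0.0, 0.0])  # = ||b||_2 = sqrt(715) ~ 26.7395
```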
26
Example 1 (Jacobi Iteration)

$\begin{bmatrix} 4 & -1 & 1\\ 4 & -8 & 1\\ -2 & 1 & 5 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=\begin{bmatrix} 7\\ -21\\ 15 \end{bmatrix}, \qquad x^{(0)}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}$

The matrix is diagonally dominant.

x₁^(1) = (7 + x₂^(0) − x₃^(0))/4 = 7/4 = 1.75
x₂^(1) = (21 + 4x₁^(0) + x₃^(0))/8 = 21/8 = 2.625
x₃^(1) = (15 + 2x₁^(0) − x₂^(0))/5 = 15/5 = 3.0

‖b − Ax^(0)‖₂ = 26.7395
‖b − Ax^(1)‖₂ = 10.0452
27
Example 1 continued...

x₁^(2) = (7 + 2.625 − 3.0)/4 = 1.65625
x₂^(2) = (21 + 4(1.75) + 3.0)/8 = 3.875
x₃^(2) = (15 + 2(1.75) − 2.625)/5 = 3.175

‖b − Ax^(2)‖₂ = 1.8061

x₁^(3) = (7 + 3.875 − 3.175)/4 = 1.925
x₂^(3) = (21 + 4(1.65625) + 3.175)/8 = 3.85
x₃^(3) = (15 + 2(1.65625) − 3.875)/5 = 2.8875

‖b − Ax^(3)‖₂ = 1.0027

The matrix is diagonally dominant, and the Jacobi iterations are converging.
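The iterates can be reproduced in a few lines, using the same Jacobi sweep sketched earlier:

```python
def jacobi_step(A, b, x):
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]

x = [0.0, 0.0, 0.0]
history = [x]
for _ in range(3):
    x = jacobi_step(A, b, x)
    history.append(x)
# history[1] -> [1.75, 2.625, 3.0]
# history[2] -> approximately [1.65625, 3.875, 3.175]
# history[3] -> approximately [1.925, 3.85, 2.8875]
```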
28
Example 2

The equations of Example 1 in a different order:

$\begin{bmatrix} -2 & 1 & 5\\ 4 & -8 & 1\\ 4 & -1 & 1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=\begin{bmatrix} 15\\ -21\\ 7 \end{bmatrix}, \qquad x^{(0)}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}$

The matrix is not diagonally dominant.

x₁^(1) = (x₂^(0) + 5x₃^(0) − 15)/2 = −7.5
x₂^(1) = (21 + 4x₁^(0) + x₃^(0))/8 = 21/8 = 2.625
x₃^(1) = 7 − 4x₁^(0) + x₂^(0) = 7.0

‖b − Ax^(0)‖₂ = 26.7395
‖b − Ax^(1)‖₂ = 54.8546
29
Example 2 continued...

x₁^(2) = (2.625 + 5(7.0) − 15)/2 = 11.3125
x₂^(2) = (21 + 4(−7.5) + 7.0)/8 = −0.25
x₃^(2) = 7 − 4(−7.5) + 2.625 = 39.625

‖b − Ax^(2)‖₂ = 208.3761

The residual term is increasing at each iteration, so the iterations are diverging. Note that the matrix is not diagonally dominant.
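Running the same Jacobi sweep on this reordered, non-diagonally-dominant system shows the residual norm growing:

```python
def jacobi_step(A, b, x):
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def residual_norm(A, b, x):
    n = len(A)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(v * v for v in r) ** 0.5

A = [[-2.0, 1.0, 5.0], [4.0, -8.0, 1.0], [4.0, -1.0, 1.0]]
b = [15.0, -21.0, 7.0]

x = [0.0, 0.0, 0.0]
norms = [residual_norm(A, b, x)]
for _ in range(2):
    x = jacobi_step(A, b, x)
    norms.append(residual_norm(A, b, x))
# norms grow: ~26.74, ~54.85, ~208.38 -> divergence
```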
30
Convergence of Gauss-Seidel iteration

GS iteration converges for any initial vector if A is a diagonally dominant matrix.
GS iteration also converges for any initial vector if A is a symmetric positive definite matrix.
Matrix A is positive definite if x^T A x > 0 for every nonzero vector x.
31
Positive Definite Matrices

A symmetric matrix is positive definite if all its eigenvalues are positive.
A symmetric diagonally dominant matrix with positive diagonal entries is positive definite.
If a matrix is positive definite:
- all the diagonal entries are positive;
- the largest (in magnitude) element of the whole matrix must lie on the diagonal.
32
Positive Definiteness Check

$\begin{bmatrix} 20 & 12 & 25\\ 12 & 15 & 2\\ 25 & 2 & 5 \end{bmatrix}$  Not positive definite: the largest element (25) is not on the diagonal.

$\begin{bmatrix} 20 & 12 & 5\\ 12 & 15 & 2\\ 5 & 2 & -25 \end{bmatrix}$  Not positive definite: not all diagonal entries are positive.

$\begin{bmatrix} 20 & 12 & 5\\ 12 & 15 & 2\\ 5 & 2 & 25 \end{bmatrix}$  Positive definite: symmetric, diagonally dominant, all diagonal entries positive.
33
Positive Definiteness Check

$\begin{bmatrix} 20 & 12 & 5\\ 12 & 15 & 2\\ 8 & 2 & 25 \end{bmatrix}$

A decision cannot be made just by inspecting the matrix: it is diagonally dominant and all diagonal entries are positive, but it is not symmetric. To decide, check whether all the eigenvalues are positive (for a nonsymmetric matrix, those of its symmetric part (A + Aᵀ)/2, since xᵀAx = xᵀ((A + Aᵀ)/2)x).
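Since xᵀAx depends only on the symmetric part S = (A + Aᵀ)/2, one way to decide is to apply Sylvester's criterion (all leading principal minors positive) to S. A small-matrix sketch using a cofactor determinant (fine for 3×3, far too slow for large n):

```python
def det(M):
    """Cofactor expansion along the first row (OK for small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_positive_definite(A):
    n = len(A)
    S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    # Sylvester: every leading principal minor of the symmetric part is > 0
    return all(det([row[:k] for row in S[:k]]) > 0 for k in range(1, n + 1))

A = [[20, 12, 5], [12, 15, 2], [8, 2, 25]]  # the nonsymmetric example above
# is_positive_definite(A) -> True
```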
34
Example (Gauss-Seidel Iteration)

Same system as Example 1 (diagonally dominant):

$\begin{bmatrix} 4 & -1 & 1\\ 4 & -8 & 1\\ -2 & 1 & 5 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=\begin{bmatrix} 7\\ -21\\ 15 \end{bmatrix}, \qquad x^{(0)}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}$

x₁^(1) = (7 + x₂^(0) − x₃^(0))/4 = 7/4 = 1.75
x₂^(1) = (21 + 4x₁^(1) + x₃^(0))/8 = 28/8 = 3.5
x₃^(1) = (15 + 2x₁^(1) − x₂^(1))/5 = 15/5 = 3.0

‖b − Ax^(0)‖₂ = 26.7395
‖b − Ax^(1)‖₂ = 3.0414 (Gauss-Seidel) vs 10.0452 (Jacobi iteration)
35
Example 1 continued...

x₁^(2) = (7 + 3.5 − 3.0)/4 = 1.875
x₂^(2) = (21 + 4(1.875) + 3.0)/8 = 3.9375
x₃^(2) = (15 + 2(1.875) − 3.9375)/5 = 2.9625

‖b − Ax^(2)‖₂ = 0.4765 (Gauss-Seidel) vs 1.8061 (Jacobi iteration)

When both the Jacobi and Gauss-Seidel iterations converge, Gauss-Seidel converges faster.
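Reproducing the GS iterates with the sweep sketched earlier:

```python
def gauss_seidel_step(A, b, x):
    n = len(A)
    x = list(x)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 1.0], [4.0, -8.0, 1.0], [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]

x1 = gauss_seidel_step(A, b, [0.0, 0.0, 0.0])  # -> [1.75, 3.5, 3.0]
x2 = gauss_seidel_step(A, b, x1)               # -> [1.875, 3.9375, 2.9625]
```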
36
Convergence of SOR method

If 0 < ω < 2, the SOR method converges for any initial vector if the matrix A is symmetric and positive definite.
If ω ≥ 2, the SOR method diverges.
If 0 < ω < 1, the SOR method converges, but the convergence rate is slower (deceleration) than that of the Gauss-Seidel method.
37
Operation count

The operation count for Gaussian elimination or LU decomposition is O(n³), order of n³.
For iterative methods, the number of scalar multiplications is O(n²) at each iteration.
If the total number of iterations required for convergence is much less than n, then iterative methods are more efficient than direct methods.
Iterative methods are also well suited to sparse matrices.