power method
TRANSCRIPT
Numerical Solution of Eigenvalue Problems
Presented by: Nashat Al-ghrairi
Subject: Mathematics
Supervisor: Assoc. Prof. Murat Altekin
Date: 23 December 2015
Eigenvalue problems
Eigenvalue problems occur in many areas of science and engineering, such as structural analysis.
Eigenvalues are also important in analyzing numerical methods.
Theory and algorithms apply to complex matrices as well as real matrices.
With complex matrices, we use the conjugate transpose, A^H, instead of the usual transpose, A^T.
Formulation
If Ax = λx for a nonzero vector x, then x is an eigenvector of A and λ is the corresponding eigenvalue.
The matrix expands or shrinks any vector lying in the direction of an eigenvector by a scalar factor.
The expansion or contraction factor is given by the corresponding eigenvalue.
Eigenvalues and eigenvectors decompose the complicated behavior of a general linear transformation into simpler actions.
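This stretching action can be checked directly. A minimal sketch in plain Python, using a hypothetical 2 × 2 matrix whose eigenvector (1, 1) has eigenvalue 5 (the matrix is an assumption chosen for illustration):

```python
def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

# Hypothetical example: v = (1, 1) is an eigenvector of A with eigenvalue 5,
# so multiplying by A stretches v by a factor of 5 without rotating it.
A = [[2.0, 3.0],
     [3.0, 2.0]]
v = [1.0, 1.0]           # eigenvector of A
Av = matvec(A, v)        # -> [5.0, 5.0], i.e. 5 * v
```

A vector not lying along an eigenvector, such as (1, 0), is both stretched and rotated, which is exactly the "complicated behavior" the eigen-decomposition breaks into simpler actions.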
Properties of eigenvalue problems
Properties of an eigenvalue problem affecting the choice of algorithm and software:
◦ Are all eigenvalues needed, or only a few?
◦ Are only eigenvalues needed, or are corresponding eigenvectors also needed?
◦ Is the matrix real or complex?
◦ Is the matrix relatively small and dense, or large and sparse?
◦ Does the matrix have any special properties, such as symmetry, or is it a general matrix?
The condition of an eigenvalue problem is the sensitivity of its eigenvalues and eigenvectors to changes in the matrix.
The conditioning of an eigenvalue problem is not the same as the conditioning of the solution to a linear system for the same matrix.
Different eigenvalues and eigenvectors are not necessarily equally sensitive to perturbations in the matrix.
Methods for eigenvalue problems
Linear Algebra and Eigenvalues
Orthogonal Matrices and Similarity Transformations
The Power Method
Householder's Method
The QR Algorithm
Singular Value Decomposition
Survey of Methods and Software
Power method
The Power method is an iterative technique used to determine the dominant eigenvalue of a matrix, that is, the eigenvalue with the largest magnitude. By modifying the method slightly, it can also be used to determine other eigenvalues. One useful feature of the Power method is that it produces not only an eigenvalue but also an associated eigenvector. In fact, the Power method is often applied to find an eigenvector for an eigenvalue that is determined by some other means.
Outline of power method
* Iterative Solutions:
  Highest Eigenvalue: Power Method
  Lowest Eigenvalue: Inverse Power Method
  Other Eigenvalues: Eigenvalue Substitution
Power method
To apply the Power method, we assume that the n × n matrix A has n eigenvalues λ1, λ2, . . . , λn with an associated collection of linearly independent eigenvectors {v(1), v(2), v(3), . . . , v(n)}. Moreover, we assume that A has precisely one eigenvalue, λ1, that is largest in magnitude, so that

|λ1| > |λ2| ≥ |λ3| ≥ · · · ≥ |λn| ≥ 0.

If x is any vector in Rn, the fact that {v(1), v(2), v(3), . . . , v(n)} is linearly independent implies that constants β1, β2, . . . , βn exist with

x = β1 v(1) + β2 v(2) + · · · + βn v(n).
Power Method formulas
Target: Ax = λx.
Start with an all-ones vector: x(0) = [1, 1, … , 1]^T.

Iteration 1:
y(1) = A x(0)
λ(1) = element of y(1) with the highest absolute value
x(1) = y(1) / λ(1)

Iteration 2:
y(2) = A x(1)
λ(2) = element of y(2) with the highest absolute value
x(2) = y(2) / λ(2)

Repeating, y(k) = A x(k−1); as k → ∞, λ(k) → λ1 and x(k) ≈ v(1), the dominant eigenvector.
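The recurrence can be sketched in plain Python (a hedged illustration, not the slides' own code; the 2 × 2 test matrix with eigenvalues 4 and −1 is an assumption chosen for demonstration):

```python
def power_method(A, n_iter=50):
    """Power method: repeatedly apply A and rescale by the entry of
    largest absolute value; that entry converges to the dominant
    eigenvalue and the iterate to the corresponding eigenvector."""
    x = [1.0] * len(A)                         # x(0) = [1, 1, ..., 1]
    lam = 0.0
    for _ in range(n_iter):
        y = [sum(a * xj for a, xj in zip(row, x)) for row in A]  # y = A x
        lam = max(y, key=abs)                  # element of highest |.|
        x = [yi / lam for yi in y]             # x(k) = y(k) / lam(k)
    return lam, x

A = [[1.0, 2.0],
     [3.0, 2.0]]                # assumed test matrix, eigenvalues 4 and -1
lam, v = power_method(A)        # lam converges to 4.0
```

The returned eigenvector is scaled so its largest entry is 1, matching the normalization used in the iteration above.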
Power Method: Dominant Eigenvalue Proof
Assume x = β1 v1 + β2 v2 + … + βn vn, where v1, v2, … , vn are linearly independent eigenvectors with A vi = λi vi.
Then
Ax = β1 λ1 v1 + β2 λ2 v2 + … + βn λn vn.
After k iterations:
A^k x = β1 λ1^k v1 + β2 λ2^k v2 + … + βn λn^k vn
      = λ1^k [ β1 v1 + β2 (λ2/λ1)^k v2 + … + βn (λn/λ1)^k vn ].
If |λ1| is considerably higher than |λ2|, … , |λn|, then each (λj/λ1)^k → 0 and A^k x → λ1^k β1 v1.
Inverse Power Method: Smallest Absolute Eigenvalue
Target: y = A⁻¹x, i.e. y = Bx with B = A⁻¹.
At iteration k, x(k) converges to the dominant eigenvector of A⁻¹; the dominant eigenvalue α of A⁻¹ is equivalent to the smallest-magnitude eigenvalue λ of A.
Use LU factorization to solve A y(k) = x(k−1) for y(k), rather than forming A⁻¹ explicitly.
Find the dominant element of y(k) as α.
Keep iterating; then the least λ = 1/α.
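These steps can be sketched in plain Python. This is an illustration under assumptions: a hypothetical 2 × 2 matrix with eigenvalues 4 and −1, and a direct Cramer's-rule solve standing in for the LU solve used on larger systems. The eigenvalue estimate uses the ratio y[p]/x[p] at a pivot entry so that a negative eigenvalue keeps its sign:

```python
def solve2(A, b):
    """Solve the 2x2 system A y = b by Cramer's rule (a stand-in for
    the LU factorization used in practice, so A^-1 is never formed)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def inverse_power_method(A, n_iter=60):
    """Power method applied to A^-1: its dominant eigenvalue alpha is the
    reciprocal of the smallest-magnitude eigenvalue of A."""
    x = [1.0, 1.0]
    p = 0                                        # pivot index (x[p] == 1)
    alpha = 1.0
    for _ in range(n_iter):
        y = solve2(A, x)                         # y = A^-1 x via a solve
        alpha = y[p] / x[p]                      # sign-correct estimate
        p = max(range(len(y)), key=lambda i: abs(y[i]))
        x = [yi / y[p] for yi in y]              # renormalize: x[p] = 1
    return 1.0 / alpha                           # smallest-magnitude eigenvalue of A

A = [[1.0, 2.0],
     [3.0, 2.0]]                     # assumed test matrix, eigenvalues 4 and -1
lam_min = inverse_power_method(A)    # converges to -1.0
```

Tracking the pivot index across iterations (rather than simply taking the largest element of y as the estimate) is what preserves the sign when the smallest eigenvalue is negative.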
Example on Inverse Power Method: lowest Eigenvalue
Find the smallest eigenvalue of the following matrix:

A = [3 7
     4 5],  det(A) = 3·5 − 7·4 = −13

Solution: the eigenvalues of A are λ1 ≈ 9.385 and λ2 ≈ −1.385. The inverse power method converges to λ2, the eigenvalue of smallest magnitude (|λ2| ≈ 1.385).
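The numbers in this example can be checked against the characteristic polynomial λ² − tr(A)·λ + det(A) = 0 (a quick verification sketch, not part of the original slides):

```python
import math

# A = [[3, 7], [4, 5]] from the example above.
a, b, c, d = 3.0, 7.0, 4.0, 5.0
det = a * d - b * c                  # determinant: -13.0
tr = a + d                           # trace: 8.0
# Roots of the characteristic polynomial lambda^2 - tr*lambda + det = 0:
disc = math.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0             # ~  9.385 (dominant eigenvalue)
lam2 = (tr - disc) / 2.0             # ~ -1.385 (smallest magnitude)
```

Note that lam1 · lam2 = det(A) = −13, a useful sanity check on the two roots.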
Example on Highest Eigenvalue (in magnitude, sign ignored)
Summary
The power method can be used to compute the dominant (real) eigenvalue and a corresponding eigenvector.
Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (shift).
General projection methods consist of approximating the eigenvectors of a matrix with vectors belonging to a subspace of approximants whose dimension is smaller than the dimension of the matrix.
The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.
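The shift variant mentioned in the summary can be sketched in plain Python: to find the eigenvalue closest to a shift σ, apply inverse iteration to (A − σI); if α is the dominant eigenvalue of (A − σI)⁻¹, then λ ≈ σ + 1/α. The 2 × 2 matrix and the Cramer's-rule solve are illustrative assumptions:

```python
def solve2(M, b):
    """Solve a 2x2 system M y = b by Cramer's rule (stand-in for LU)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def shifted_inverse_power(A, sigma, n_iter=60):
    """Inverse iteration with shift: converges to the eigenvalue of A
    closest to sigma."""
    B = [[A[0][0] - sigma, A[0][1]],
         [A[1][0], A[1][1] - sigma]]            # B = A - sigma*I
    x = [1.0, 1.0]
    p = 0                                        # pivot index (x[p] == 1)
    alpha = 1.0
    for _ in range(n_iter):
        y = solve2(B, x)                         # y = (A - sigma*I)^-1 x
        alpha = y[p] / x[p]                      # sign-correct estimate
        p = max(range(len(y)), key=lambda i: abs(y[i]))
        x = [yi / y[p] for yi in y]
    return sigma + 1.0 / alpha

A = [[1.0, 2.0],
     [3.0, 2.0]]                              # eigenvalues 4 and -1
lam = shifted_inverse_power(A, sigma=3.5)     # converges to 4.0, closest to 3.5
```

With σ = 0 this reduces to the plain inverse power method; choosing σ near a target eigenvalue makes the corresponding eigenvalue of (A − σI)⁻¹ strongly dominant, so convergence is fast.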