Analysis and numerical solution of eigenvalue problems
Volker Mehrmann, TU Berlin, Institut für Mathematik
Research Center MATHEON "Mathematics for key technologies"
Chemnitz School on Applied Analysis, 21.9.-25.9.2015
Outline of lectures
. Lecture I: Applications and Modeling.
. Lecture II: Linear Algebra and Analysis
. Lecture III: Numerical analysis.
Eigenvalue problems 2 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application: brake squeal
15 Adaptive finite elements for evps
16 Automated multilevel substructuring
Nonlinear Eigenvalue Problems
Determine eigenvalues λ and right eigenvectors x of

    f(λ, α) x = 0,  x ∈ F^n,  λ ∈ F,

where F is a field, typically F = R or F = C,

    f : F × F^p → F^{ℓ,n},

and α denotes a set of p parameters. Sometimes we also want left eigenvectors y ∈ F^ℓ such that

    y^* f(λ, α) = 0.
Typical functions
. Linear evps f = λI + A0.
. Generalized linear evps f = λA1 + A0.
. Quadratic evps f = λ^2 A2 + λA1 + A0.
. Polynomial evps f = Σ_{i=0}^{k} λ^i Ai with coefficient matrices Ai ∈ F^{ℓ,n}.
. Rational evps f = Σ_{i=-j}^{k} λ^i Ai with coefficient matrices Ai ∈ F^{ℓ,n}.
. General nonlinear evps f, e.g. f = exp(Σ_{i=-j}^{k} λ^i Ai).
Typical questions
. Find all eigenvalues λ and associated eigenvectors x for a given parameter value α.
. Find some important eigenvalues λ and associated eigenvectors x for a given parameter α.
. Find all eigenvalues in a given subset of C for a given parameter α.
. Optimize eigenvalue positions over the parameter set.
. . . .
Application areas
. Structural analysis of buildings, vehicles, machines;
. Energy levels of molecules and atoms;
. Stability analysis of dynamical systems;
. Wave propagation in different media: electromagnetic waves, sound, water, crystals;
. Optimal control of dynamical systems;
. Model reduction.
Eigenproblems in structural analysis
The analysis of vibrations of structures or vehicles under external forces requires the numerical solution of large, nonlinear, parameter-dependent linear systems and eigenvalue problems.
. Such systems have been solved for decades!
. The mathematics is well known and used in industrial engineering every day!
. Numerical methods are available in (commercial) software! (NASTRAN, ANSYS, ABAQUS)
. We just buy bigger computers to handle higher complexity?
. Do we still need to talk about it?
. Do we need improved numerical methods?
. Do we understand the problems and their perturbation theory?
Millennium Bridge London
. On 01.01.2000 in London the new Millennium Bridge was opened, and closed after the first day.
. The step frequency of 0.9 Hz was close to a resonance frequency of the bridge.
. A sideward swaying mode was ignored in the model.
. Rigid connections were replaced by stiff springs.
. The pedestrians started to synchronize with the bridge vibrations and made things worse. → film
. Cost of building in new dampers: 5 million pounds.
Can we avoid such problems?
. We can avoid fancy designs. But: → film: Volgograd bridge
. We can use improved models.
. We can design better, faster and more accurate numerical methods.
. We can do better analysis of models and methods.
. We can try to match the structure of the physical problem, the mathematical model and the numerical method.
. Will this really help, when technology goes to further extremes?
Optimality through mathematics
. We would like to design optimal buildings, cars, airplanes.
. There is an increasing demand for optimal solutions: minimal energy consumption, minimal noise, pollution, waste.
. Optimal solutions need mathematical techniques, such as model based optimization/control.
. To optimize and control we need good reduced order models.
. The solution of large scale nonlinear evps is a key to success in all these areas.
Resonances in rail tracks
Project with the company SFE in Berlin, 2004/2006. Diploma thesis Andreas Hilliges.
Tasks
. Modeling of the excitation of the tracks by the train. Influence noise emissions and move resonances.
. Discretization of rail and gravel bed with finite elements.
. Parametric eigenvalue problem for excitation frequencies.
. Optimization of parameters.
. Goal: higher safety and reduction of noise.
Rail element
Sleeper bay model
Tie model
Discrete FEM modeling
. There is a PDE in the background, but:
. the model is generated as a discrete model in space;
. pre-built models for beams and plates are connected to form the finite element model;
. no adaptivity or error control in the discrete modeling;
. we get the coefficient matrices directly.
The dynamical system
Under the assumption of an infinite rail, FEM in space leads to

    M z̈ + D ż + K z = F,

with symmetric infinite block tridiagonal coefficient matrices (operators) M, D, K, where M, z, F are given by

    M = [ ⋱        ⋱          ⋱                                    ;
          ⋱    M_{j-1,0}    M_{j,1}                                ;
               M_{j,1}^T    M_{j,0}      M_{j+1,1}                 ;
                            M_{j+1,1}^T  M_{j+1,0}   M_{j+2,1}     ;
                                         ⋱           ⋱         ⋱  ],

    z = [ ⋮ ; z_{j-1} ; z_j ; z_{j+1} ; ⋮ ],   F = [ ⋮ ; F_{j-1} ; F_j ; F_{j+1} ; ⋮ ].

The operators D, K have the same structure as M. Furthermore, M_{j,0} > 0 and D_{j,0}, K_{j,0} ≥ 0.
Block-structure
. Each block corresponds to a finite element model of a patch in one sleeper bay;
. mass, damping and stiffness matrices are obtained from the discrete finite elements;
. the excitation is modeled as an outer force/load;
. the blocks have very different scalings, from 10^1 to 10^11.
Solution ansatz
Fourier expansion

    F_j = F̂_j e^{iωt},  z_j = ẑ_j e^{iωt},

where ω is the excitation frequency. Using periodicity and combining l parts into one vector

    y_j = [ z_j^T  z_{j+1}^T  …  z_{j+l}^T ]^T

gives a difference equation

    A(ω) y_{j+1} + B(ω) y_j + A(ω)^T y_{j-1} = G_j,

with B(ω) = B(ω)^T block tridiagonal and A(ω) singular of rank smaller than n/2.
The associated nonlinear evp
The ansatz y_{j+1} = κ y_j leads to the palindromic rational eigenvalue problem

    R(κ) x = (κ A(ω) + B(ω) + (1/κ) A(ω)^T) x = 0.

Alternative representation as a palindromic polynomial eigenvalue problem:

    P(λ) x = (λ^2 A(ω) + λ B(ω) + A(ω)^T) x = 0.
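The palindromic structure forces the spectral symmetry λ ↔ 1/λ. The sketch below makes this visible with generic random stand-ins for A(ω) and B(ω) (not the rail data; in particular this A is nonsingular, so no 0/∞ eigenvalues occur).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # stand-in for A(omega)
B = rng.standard_normal((n, n))
B = B + B.T                       # B(omega) must be symmetric

# companion linearization of P(lambda) = lambda^2 A + lambda B + A^T
L1 = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
L0 = np.block([[B, A.T], [-np.eye(n), np.zeros((n, n))]])
evals = np.linalg.eigvals(np.linalg.solve(L1, -L0))

# palindromic spectral symmetry: with every eigenvalue its reciprocal occurs too
for lam in evals:
    assert np.min(np.abs(evals - 1.0 / lam)) < 1e-6
```

The pairing follows from λ^2 P(1/λ)^T = P(λ) for symmetric B.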
Why did SFE need help?
. Even for a very coarse discretization (of the track and gravel bed), commercial programs needed several hours of cpu time to solve the evp for the whole frequency range.
. Commercial packages delivered no correct digit in the evs.
. Accuracy went down when the discretization was made finer.
. Optimization of parameters was not possible with the current tools.
Problems
. Matrices are badly scaled.
. The connecting beam element on the bottom of the rail determined the achievable accuracy.
. The leading and trailing matrices are highly singular, i.e. there are many eigenvalues at 0 and ∞.
. There are very large and very small evs (in modulus).
. The infinite and large evs destroy the accuracy of the finite eigenvalues.
. The commercial solvers eliminate all interior nodes (condensation) via a block Gaussian elimination, which is very inaccurate.
. One should use the structure.
Acoustic field computation
Industrial project with the company SFE in Berlin, 2007/2010.
. Computation of the acoustic field inside a car.
. SFE has its own parameterized FEM model which allows geometry and topology changes. (→ film)
. The problem is needed within an optimization loop that changes geometry, topology, damping material, etc.
. Model reduction and reduced order models for optimization.
. Ultimate goal: minimize noise in important regions in the car interior.
Modeling is similar to the rail problem.
Acoustic field
Figure: SFE AKUSMOD acoustic field computation (SFE GmbH, Berlin). DLOAD 1 = symmetrical excitation, DLOAD 2 = antimetrical excitation; unit force = 1 N mm; grid-IDs 31010 and 31011.
Tasks in the project
. Numerical methods for large scale structured parameter-dependent polynomial eigenvalue problems.
. Compute eigenvalues in a trapezoidal region around 0.
. Determine projectors onto important spectral subspaces for model reduction.
. Model reduction for the parameterized model.
. Optimization of frequencies.
. Implementation of a parallel solver in SFE Concept.
Full model

    [ M_s      0   ] [ üd ]   [ D_s  0   ] [ u̇d ]   [ K_s(ω)  D_sf ] [ ud ]   [ f_s ]
    [ D_sf^T  M_f  ] [ p̈d ] + [ 0    D_f ] [ ṗd ] + [ 0       K_f  ] [ pd ] = [ 0   ].

. M_s, M_f, K_f are real symmetric positive semidefinite mass/stiffness matrices of structure and air; M_s is singular and diagonal; M_s is a factor 1000-10000 larger than M_f.
. K_s(ω) = K_s(ω)^T = K_1(ω) + i K_2.
. D_s, D_f are real symmetric damping matrices.
. D_sf is the real coupling matrix between structure and air.
. Parts depend on geometry, topology and material parameters.
Eigenvalue problem

    ( λ^2 [ M_s     0   ]     [ D_s  0   ]   [ K_s(ω)  D_sf ] ) [ x_s ]
    (     [ D_sf^T  M_f ] + λ [ 0    D_f ] + [ 0       K_f  ] ) [ x_f ] = 0,

or, after scaling the second block row with λ^{-1} and the second block column with λ, one has the complex symmetric quadratic evp

    ( λ^2 [ M_s  0   ]     [ D_s     D_sf ]   [ K_s(ω)  0   ] ) [ x_s        ]
    (     [ 0    M_f ] + λ [ D_sf^T  D_f  ] + [ 0       K_f ] ) [ λ^{-1} x_f ] = 0.
Brake Squeal
. Disc brake squeal is a frequent and annoying phenomenon (with cars, trains, bikes).
. Important for customer satisfaction, even if not a safety risk.
. A nonlinear effect that is hard to detect in experiments.
. The car industry has been trying for decades to improve this by changing the designs of brake and disc.
Can we do this model based?
Model based approach
Interdisciplinary project with car manufacturers + SMEs, supported by the German Ministry of Economics via the AIF foundation.
University: N. Gräbner, U. von Wagner, TU Berlin, Mechanics; N. Hoffmann, TU Hamburg-Harburg, Mechanics; S. Quraishi, C. Schröder, TU Berlin, Mathematics.
Goals:
. Develop a model of the brake system with all effects that may cause squeal (friction, circulatory, gyroscopic effects, etc.).
. Simulate brake behavior for many different parameters (disk speed, material and geometry parameters).
. Our task: model reduction, solution of eigenvalue problems.
. Long term: stability/bifurcation analysis for a given parameter region.
Experiment
. Experiments indicate nonlinear behavior (subcritical Hopf bifurcation) → film.
(Institute f. Mechanics, TU Berlin)
Modeling in industrial practice, macroscale
Multi-body system based on Finite Element Modeling (FEM).
. Write displacements of the structure z(x, t) as a linear combination of basis functions (e.g., but not always, piecewise polynomials),

    z(x, t) ≈ Σ_{i=1}^{N} q_i(t) φ_i(x, t).

. Integrate against test functions (Petrov-Galerkin) → discretized model for the vibrations in weak form.
. Add friction and damping as a macroscopic surrogate model fitted from experimental data.
. Simplifications: remove some nonlinearities, asymptotic analysis for small parameters, etc.
. Produce a reduced order model for a large parameter set?
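A toy instance of this ansatz (not the brake model): assembling piecewise-linear hat functions on a uniform 1D mesh for -u'' = ω^2 u with homogeneous Dirichlet boundary gives the generalized evp K q = ω^2 M q with the standard tridiagonal mass and stiffness matrices.

```python
import numpy as np

# interior nodes of a uniform mesh on (0, 1); h is the element length
N, h = 5, 1.0 / 6

# consistent mass (M) and stiffness (K) matrices for piecewise-linear hats
M = np.zeros((N, N))
K = np.zeros((N, N))
for i in range(N):
    M[i, i] = 2 * h / 3
    K[i, i] = 2 / h
    if i + 1 < N:
        M[i, i + 1] = M[i + 1, i] = h / 6
        K[i, i + 1] = K[i + 1, i] = -1 / h

# the discretized evp K q = omega^2 M q approximates the eigenvalues (k*pi)^2
omega2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
assert 9.0 < omega2[0] < 11.0   # pi^2 ~ 9.87 is approximated from above
```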
Brake pad
Figure: View of the brake model (Institute f. Mechanics, TU HH)
Mathematical model details
Large differential-algebraic equation (DAE) system and evp depending on parameters (here only the disk speed is displayed):

    M q̈ + (C_1 + (ω_r/ω) C_R + (ω/ω_r) C_G) q̇ + (K_1 + K_R + (ω/ω_r)^2 K_G) q = f,

. M symmetric, positive semidefinite, singular matrix (constraints),
. C_1 symmetric matrix, material damping,
. C_G skew-symmetric matrix, gyroscopic effects,
. C_R symmetric matrix, friction induced damping (phenomenological),
. K_1 symmetric stiffness matrix,
. K_R nonsymmetric matrix modeling circulatory effects,
. K_G symmetric geometric stiffness matrix,
. ω rotational speed of the disk with reference velocity ω_r.
Nature of FE matrices
n = 842,638, ω_r = 5, ω = 17 × 2π

matrix | pattern | 2-norm | structural rank
M      | symm    | 5e-2   | 842,623
C_1    | symm    | 1e-19  | 160
C_G    | skew    | 1.5e-1 | 217,500
C_R    | symm    | 7e-2   | 2,120
K_1    | symm    | 2e13   | full
K_R    | -       | 3e4    | 2,110
K_G    | symm    | 40     | 842,623
Linear quadratic optimal control
Minimize the cost functional

    J(x, u) = (1/2) x(T)^T M x(T) + (1/2) ∫_{t_0}^{T} (x^T W x + 2 x^T S u + u^T R u) dt,

    W = W^T ∈ R^{n,n}, S ∈ R^{n,m}, R = R^T ∈ R^{m,m}, M = M^T ∈ R^{n,n},

subject to the constraint

    E ẋ = A x + B u + f,  x(t_0) = x_0,

    E ∈ R^{n,n}, A ∈ R^{n,n}, B ∈ R^{n,m}, f ∈ C^0(I, R^n), x_0 ∈ R^n.

Note that E is usually singular.
Here: determine optimal controls u ∈ U = C^0(I, R^m).
Industrial example
Software based control of an automatic transmission.
Project with Daimler AG (Dissertation: Peter Hamann). → film
A current half-toroid model
Technological application, Tasks
. Modeling of the multi-physics model: multi-body system, elasticity, hydraulics, friction, . . . .
. Real time simulation of transmission.
. Development of control methods for coupled system.
. Model reduction and observer design.
. Real time control of transmission.
Ultimate goals: decrease fuel consumption, improve smoothness of switching.
Calculus of variations for ODEs (E = I)
Introduce a Lagrange multiplier function λ(t) and couple the constraint into the cost function, i.e. minimize

    J(x, u, λ) = (1/2) x(T)^T M x(T) + (1/2) ∫_{t_0}^{T} (x^T W x + 2 x^T S u + u^T R u) + λ^T (ẋ − A x − B u − f) dt.

Consider x + δx, u + δu and λ + δλ. For a minimum the cost function has to go up in the neighborhood, so we get optimality conditions (Euler-Lagrange equations):
Optimality system
Well known result from the calculus of variations.

Theorem
If (x, u) is a solution to the optimal control problem, then there exists a Lagrange multiplier function λ ∈ C^1(I, R^n), such that (x, λ, u) satisfy the optimality boundary value problem

    ẋ = A x + B u + f,  x(t_0) = x_0,
    −λ̇ = W x + S u + A^T λ,  λ(T) = M x(T),
    0 = S^T x + R u + B^T λ.

The same result holds also in the case T = ∞.
Naive idea for DAEs
Replace the identity in front of ẋ by E and then do the analysis in the same way. For DAEs the formal optimality system then could be

    (a)  E ẋ = A x + B u + f,  x(t_0) = x_0,
    (b)  −(d/dt)(E^T λ) = W x + S u + A^T λ,  (E^T λ)(T) = M x(T),
    (c)  0 = S^T x + R u + B^T λ.

This does not work in general.
The formal optimality system
Theorem (Kunkel/M. 2011)
Let all data of the given optimal control problem be sufficiently smooth and let the formal necessary optimality conditions have a solution (x, u, λ). Then there exists a function η replacing λ such that (x, u, η) solves the true necessary optimality conditions.

Thus, we can try to solve the formal optimality system. If it has a solution, then we just ignore the Lagrange multiplier λ. The same result holds also in the case T = ∞.
Alternating pencil
The associated matrix pencil

    L(λ) = λ N − M = λ [ 0     E  0 ;     [ 0    A    B ;
                         −E^T  0  0 ;   −   A^T  W    S ;
                         0     0  0 ]       B^T  S^T  R ]

with skew-symmetric N and symmetric M is called an even pencil, since L(λ) = L^T(−λ). It is a special case of an alternating pencil, which is either even, or odd if L(λ) = −L^T(−λ).
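The defining symmetry is easy to verify numerically. The sketch below builds the pencil from generic random stand-in data E, A, B, W, S, R (not from an actual control problem) and checks L(λ) = L(−λ)^T.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
E = rng.standard_normal((n, n))
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
W = rng.standard_normal((n, n)); W = W + W.T   # W symmetric
S = rng.standard_normal((n, m))
R = np.eye(m)                                  # R symmetric

Z = lambda p, q: np.zeros((p, q))
N = np.block([[Z(n, n), E, Z(n, m)],
              [-E.T, Z(n, n), Z(n, m)],
              [Z(m, n), Z(m, n), Z(m, m)]])
M = np.block([[Z(n, n), A, B],
              [A.T, W, S],
              [B.T, S.T, R]])

assert np.allclose(N, -N.T)   # N skew-symmetric
assert np.allclose(M, M.T)    # M symmetric

def L(lam):
    return lam * N - M

# even pencil property: L(lambda) = L(-lambda)^T
lam = 0.7 + 0.3j
assert np.allclose(L(lam), L(-lam).T)
```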
Reduced optimality system
If E = I and R is positive definite, then we can solve for u and obtain the reduced optimality system:

    ẋ = (A − B R^{-1} S^T) x − B R^{-1} B^T λ + f,  x(t_0) = x_0,
    −λ̇ = (W − S R^{-1} S^T) x + (A − B R^{-1} S^T)^T λ,  λ(T) = M x(T).

We can write the differential equation as

    J [ λ̇ ; ẋ ] = A [ λ ; x ] + [ f ; 0 ],  x(t_0) = x_0,  λ(T) = M x(T),

with

    J = [ 0  I ; −I  0 ],
    A = [ −B R^{-1} B^T          A − B R^{-1} S^T ;
          (A − B R^{-1} S^T)^T   W − S R^{-1} S^T ].
Hamiltonian system
Typically this is written as a Hamiltonian system

    [ ẋ ; λ̇ ] = H [ x ; λ ] + [ f ; 0 ],  x(t_0) = x_0,  λ(T) = M x(T),

with the Hamiltonian matrix

    H = A J^T = [ A − B R^{-1} S^T   −B R^{-1} B^T ;
                  W − S R^{-1} S^T   −(A − B R^{-1} S^T)^T ].

The flow Φ (fundamental solution matrix) of this Hamiltonian system is symplectic, i.e. Φ^T J Φ = J.
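Hamiltonian structure means that J H is symmetric, and it forces the spectrum to be symmetric under λ → −λ. A randomized sketch with generic stand-in data (not a real plant model) checks both for the H built from the formulas above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
W = rng.standard_normal((n, n)); W = W + W.T
S = rng.standard_normal((n, m))
R = np.eye(m)
Rinv = np.linalg.inv(R)

F = A - B @ Rinv @ S.T          # A - B R^{-1} S^T
G = B @ Rinv @ B.T              # B R^{-1} B^T, symmetric
Q = W - S @ Rinv @ S.T          # W - S R^{-1} S^T, symmetric
H = np.block([[F, -G], [Q, -F.T]])

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Hamiltonian structure: J H is symmetric
assert np.allclose((J @ H).T, J @ H)

# spectral symmetry: with every eigenvalue, -lambda is also an eigenvalue
evals = np.linalg.eigvals(H)
for lam in evals:
    assert np.min(np.abs(evals + lam)) < 1e-6
```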
Optimal solution
In the case T = ∞ the optimal (stabilizing) solution (under controllability assumptions) is a linear feedback

    u(t) = −R^{-1} B^T X x,

where X is the positive semidefinite solution of the algebraic Riccati equation

    0 = A^T X + X A + W − (X B + S) R^{-1} (B^T X + S^T),  G = B R^{-1} B^T.

In the case T < ∞ the result is analogous, but X is the solution of the Riccati differential equation

    −Ẋ = A^T X + X A + W − (X B + S) R^{-1} (B^T X + S^T),  X(T) = M.
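For small dense problems the algebraic Riccati equation can be solved directly; a minimal sketch using SciPy's ARE solver on a hypothetical stable single-input system (illustrative matrices only, not the transmission model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# hypothetical stable single-input system
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
W = np.eye(2)            # state weight
R = np.array([[1.0]])    # control weight
S = np.zeros((2, 1))     # cross term (zero here)

X = solve_continuous_are(A, B, W, R, s=S)

# X solves 0 = A^T X + X A + W - (X B + S) R^{-1} (B^T X + S^T)
res = A.T @ X + X @ A + W - (X @ B + S) @ np.linalg.inv(R) @ (B.T @ X + S.T)
assert np.allclose(res, np.zeros((2, 2)), atol=1e-8)

# the resulting feedback stabilizes the closed loop
K = np.linalg.inv(R) @ (B.T @ X + S.T)
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```

As noted on the next slides, for large n one works with the Lagrangian invariant subspace of the Hamiltonian matrix instead of the dense X.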
Riccati equations / Lagrangian invariant subspaces

Theorem
If the Riccati solution X = X^T is positive semidefinite, then the columns of

    [ I_n ; −X ]

span the Lagrangian invariant subspace associated with the eigenvalues in the left half plane of the Hamiltonian matrix

    H = [ A  G ; W  −F^T ].

Under some further controllability and observability assumptions, the converse holds as well.

The subspace approach works also in the finite T case.
Challenges
. Compute the Lagrangian invariant subspace of the Hamiltonian matrix.
. Problems if the dimension is very large: X has n(n + 1)/2 independent elements and is not sparse.
. Low rank approximation to the Riccati solution.
. One can use the Lagrangian subspace rather than the Riccati equation.
. The extension to the case E ≠ I is nontrivial.
Summary Lecture I
Many real world engineering and science problems lead to large scale linear or nonlinear evps with or without structure:
. vibrations of structures,
. acoustic field computation,
. brake squeal,
. optimal control problems,
. . . .
References
. A. Hilliges, C. Mehl and V. Mehrmann. On the solution of palindromic eigenvalue problems. Proceedings of the 4th ECCOMAS, Jyväskylä, Finland, 2004.
. V. Mehrmann and C. Schröder. Nonlinear eigenvalue and frequency response problems in industrial practice. J. Math. in Industry, 1:7, 2011.
. N. Gräbner, V. Mehrmann, S. Quraishi, C. Schröder, and U. von Wagner. Numerical methods for parametric model reduction in the simulation of disc brake squeal. Preprint, TU Berlin, 2015.
Outline of lectures
. Lecture I: Modeling and Applications.
. Lecture II: Linear Algebra and Analysis
. Lecture III: Numerical analysis.
Typical functions
. Linear evps f = λI + A0.
. Generalized linear evps f = λA1 + A0.
. Quadratic evps f = λ^2 A2 + λA1 + A0.
. Polynomial evps f = Σ_{i=0}^{k} λ^i Ai with coefficient matrices Ai ∈ F^{ℓ,n}.
. Rational evps f = Σ_{i=-j}^{k} λ^i Ai with coefficient matrices Ai ∈ F^{ℓ,n}.
. General nonlinear evps f, e.g. f = exp(Σ_{i=-j}^{k} λ^i Ai).

Normal forms exist for linear evps (Jordan, Kronecker) and for rational evps (Smith, McMillan).
Jordan canonical form

Theorem (Jordan canonical form (JCF))
For every A0 ∈ C^{n,n} there exists a nonsingular Q ∈ C^{n,n} such that

    Q^{-1} A0 Q = diag(J_{ρ_1}, …, J_{ρ_v}),
    J_{ρ_j} = [ λ_j  1       ;
                     ⋱   ⋱  ;
                          λ_j ] ∈ C^{ρ_j, ρ_j}.

Eigenvalues, algebraic and geometric multiplicities, the minimal polynomial, the characteristic polynomial, left and right eigenvectors and principal vectors, and invariant subspaces can be read off. In the real case there is a real Jordan form.
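For tiny problems the JCF can be computed in exact arithmetic, which sidesteps the numerical instability discussed later. A sketch with an illustrative matrix containing one 2 × 2 Jordan block:

```python
from sympy import Matrix

# illustrative 3x3 matrix with a 2x2 Jordan block for eigenvalue 2
A0 = Matrix([[2, 1, 0],
             [0, 2, 0],
             [0, 0, 3]])

Q, J = A0.jordan_form()               # A0 = Q J Q^{-1}
assert A0 == Q * J * Q.inv()

# eigenvalues (with algebraic multiplicities) sit on the diagonal of J
assert sorted(J[i, i] for i in range(3)) == [2, 2, 3]
# exactly one superdiagonal 1, i.e. one 2x2 Jordan block
assert sum(J[i, i + 1] for i in range(2)) == 1
```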
Kronecker canonical form (KCF)

Theorem (Kronecker canonical form (KCF))
For every pair (A1, A0), Ai ∈ C^{ℓ,n}, there exist nonsingular P ∈ C^{ℓ,ℓ}, Q ∈ C^{n,n} such that

    P (λA1 + A0) Q = diag(L_{ε_1}, …, L_{ε_p}, M_{η_1}, …, M_{η_q}, J_{ρ_1}, …, J_{ρ_v}, N_{σ_1}, …, N_{σ_w}),

with

    L_{ε_j} = λ [ 0 1 ; ⋱ ⋱ ; 0 1 ] + [ 1 0 ; ⋱ ⋱ ; 1 0 ]  (size ε_j × (ε_j + 1)),

    M_{η_j} = λ [ 1 ; 0 ⋱ ; ⋱ 1 ; 0 ] + [ 0 ; 1 ⋱ ; ⋱ 0 ; 1 ]  (size (η_j + 1) × η_j),

    J_{ρ_j} = λ I + [ λ_j 1 ; ⋱ ⋱ ; λ_j ],   N_{σ_j} = λ [ 0 1 ; ⋱ ⋱ ; 0 ] + I.
Regularity, index

Definition
A matrix pencil λA1 + A0, A0, A1 ∈ C^{ℓ,n}, is called regular if ℓ = n and if

    P(λ) = det(λA1 + A0)

does not vanish identically; otherwise it is called singular.
The size ν_d of the largest nilpotent (N-)blocks in the KCF is called the index of λA1 + A0.
Values λ ∈ C such that rank(λA1 + A0) < min(ℓ, n) are called finite eigenvalues of λA1 + A0.
The eigenvalue μ = 0 of A1 + μA0 is called the infinite eigenvalue of λA1 + A0.
Deflating subspaces
Definition
A subspace L ⊂ C^n is called a deflating subspace for the pencil λA1 + A0 if for a matrix X_L ∈ C^{n,r} with full column rank and range X_L = L there exist matrices Y_L ∈ C^{n,r}, R_L ∈ C^{r,r}, and S_L ∈ C^{r,r} such that

    A1 X_L = Y_L R_L,  A0 X_L = Y_L S_L.
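A one-dimensional illustration with random data: any eigenvector of a regular pencil spans a deflating subspace, with Y_L = A1 x and 1 × 1 matrices R_L, S_L (the concrete choices below are for illustration only).

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)
n = 4
A1 = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n))

# generalized eigenpair A0 x = mu A1 x, i.e. (lambda A1 + A0) x = 0 with lambda = -mu
mu, V = eig(A0, A1)
x = V[:, [0]]                 # n x 1, full column rank
lam = -mu[0]

# L = span{x} is deflating: A1 X_L = Y_L R_L and A0 X_L = Y_L S_L
X_L = x
Y_L = A1 @ x                  # nonzero for generic (nonsingular) A1
R_L = np.array([[1.0]])
S_L = np.array([[-lam]])
assert np.linalg.norm(A1 @ X_L - Y_L @ R_L) < 1e-8
assert np.linalg.norm(A0 @ X_L - Y_L @ S_L) < 1e-8
```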
What do we learn from the KCF?
. Finite eigenvalues and infinite eigenvalues.
. Algebraic and geometric multiplicities.
. Index, Kronecker indices (sizes of singular blocks).
. Regularity, non-regularity.
. Eigenvectors, principal vectors, deflating subspaces, reducing subspaces (subspaces associated with singular blocks).
Evaluation of the normal form approach
. Normal forms are essential for the theoretical analysis of evps.
. They allow linear stability/bifurcation analysis of dynamical systems; the points where ranks change are a superset of the set of critical points.
. They are typically not numerically stably computable, since the structure can be changed by arbitrarily small perturbations (discontinuous behavior).
. The classical normal forms do not preserve structure.
To preserve even or palindromic structure, we use congruence (real case)

    λ N − M ↦ λ U^T N U − U^T M U,

with nonsingular U. (Replace T by * in the complex case.)
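That congruence preserves the even structure (while a general equivalence P(λN − M)Q would not) can be checked directly; random stand-in matrices suffice:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
N = rng.standard_normal((n, n)); N = N - N.T   # skew-symmetric
M = rng.standard_normal((n, n)); M = M + M.T   # symmetric
U = rng.standard_normal((n, n))                # generic nonsingular U

Ns, Ms = U.T @ N @ U, U.T @ M @ U

# congruence keeps the even pencil structure
assert np.allclose(Ns, -Ns.T)
assert np.allclose(Ms, Ms.T)
```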
Why not just λQNU + QMU?
Example. Consider a 3 × 3 even pencil with matrices

    N = Q [ 0 1 0 ; −1 0 0 ; 0 0 0 ] Q^T,   M = Q [ 0 0 1 ; 0 1 0 ; 1 0 0 ] Q^T,

where Q is a random real orthogonal matrix. The pencil is equivalent to

    λ [ 0 1 0 ; 0 0 1 ; 0 0 0 ] − [ 1 0 0 ; 0 1 0 ; 0 0 1 ].

For different randomly generated orthogonal matrices Q the QZ algorithm in MATLAB produced all variations of eigenvalues that are possible for a general 3 × 3 pencil.
Structured Kronecker form for even pencils

Theorem (Thompson 91)
If N, M ∈ R^{n,n} with N = −N^T, M = M^T, then there exists a nonsingular matrix X ∈ C^{n,n} such that

    X^T (λN + M) X = diag(B_S, B_I, B_Z, B_F)

is in structured Kronecker form, where

    B_S = diag(O_η, S_{ξ_1}, …, S_{ξ_k}),
    B_I = diag(I_{2ε_1+1}, …, I_{2ε_l+1}, I_{2δ_1}, …, I_{2δ_m}),
    B_Z = diag(Z_{2σ_1+1}, …, Z_{2σ_r+1}, Z_{2ρ_1}, …, Z_{2ρ_s}),
    B_F = diag(R_{φ_1}, …, R_{φ_t}, C_{ψ_1}, …, C_{ψ_u}).

This structured Kronecker canonical form is unique up to permutation of the blocks.
Properties of blocks
1. O_η = λ 0_η + 0_η;
2. Each S_{ξ_j} is a (2ξ_j + 1) × (2ξ_j + 1) block that combines a right singular block and a left singular block, both of minimal index ξ_j. It has the form

    λ [ 0          F_{ξ_j} ;     [ 0          G_{ξ_j} ;
        −F_{ξ_j}^T  0      ] +     G_{ξ_j}^T   0      ],

with the bidiagonal blocks

    F_ξ = [ 1 0 ; ⋱ ⋱ ; 1 0 ],   G_ξ = [ 0 1 ; ⋱ ⋱ ; 0 1 ] ∈ R^{ξ, ξ+1};

3. Each I_{2ε_j+1} is a (2ε_j + 1) × (2ε_j + 1) block that contains a single block corresponding to the eigenvalue ∞ with index 2ε_j + 1. It has the form

    λ [ 0          F_{ε_j} ;     [ 0          G_{ε_j}      ;
        −F_{ε_j}^T  0      ] +     G_{ε_j}^T   s e_1 e_1^T ],

with F, G bidiagonal as in 2., where s ∈ {1, −1} is the sign-index or sign-characteristic of the block;
4. Each I_{2δ_j} is a 4δ_j × 4δ_j block that combines two 2δ_j × 2δ_j infinite eigenvalue blocks of index δ_j. It has the form

    λ [ 0     F ;     [ 0  I ;
        −F^T  0 ] +     I  0 ],

where I = I_{2δ_j} and F is the nilpotent Jordan block of size 2δ_j;
5. Each Z_{2σ_j+1} is a (4σ_j + 2) × (4σ_j + 2) block that combines two (2σ_j + 1) × (2σ_j + 1) Jordan blocks corresponding to the eigenvalue 0. It has the form

    λ [ 0   I ;     [ 0    N ;
        −I  0 ] +     N^T  0 ],

where I = I_{2σ_j+1} and N is the nilpotent Jordan block of size 2σ_j + 1;
6. Each Z_{2ρ_j} is a 2ρ_j × 2ρ_j block that contains a single Jordan block corresponding to the eigenvalue 0. It has the form

    λ [ 0   I ;     [ 0    N           ;
        −I  0 ] +     N^T  s e_1 e_1^T ],

where I = I_{ρ_j}, N is the nilpotent Jordan block of size ρ_j, and s ∈ {1, −1} is the sign characteristic of this block;
7. Each R_{φ_j} is a 2φ_j × 2φ_j block that combines two φ_j × φ_j Jordan blocks corresponding to nonzero real eigenvalues a_j and −a_j. It has the form

    λ [ 0   I ;     [ 0      B_j ;
        −I  0 ] +     B_j^T  0   ],

where I = I_{φ_j} and B_j is the φ_j × φ_j Jordan block with eigenvalue a_j.
8 a. Either C_{ψ_j} is a 2ψ_j × 2ψ_j block combining two ψ_j × ψ_j Jordan blocks with purely imaginary eigenvalues i b_j, −i b_j (b_j > 0). It has the form

    λ [ 0   I ;       [ 0      B_j ;
        −I  0 ] + s     B_j^T  0   ],

where I = I_{ψ_j}, B_j is the ψ_j × ψ_j Jordan block with eigenvalue b_j, and s ∈ {1, −1} is the sign characteristic.
8 b. or C_{ψ_j} is a 4ψ_j × 4ψ_j block combining ψ_j × ψ_j Jordan blocks for each of the complex eigenvalues a_j + i b_j, a_j − i b_j, −a_j + i b_j, −a_j − i b_j (with a_j ≠ 0 and b_j ≠ 0). In this case it has the form

    λ [ 0   D ;     [ 0      T_j ;
        −D  0 ] +     T_j^T  0   ],

with D = diag(Ω, …, Ω), T_j the block bidiagonal matrix with Λ_j on the block diagonal and Ω on the block superdiagonal, and

    Ω = [ 0 1 ; 1 0 ],   Λ_j = [ −b_j  a_j ; a_j  b_j ].

Analogous results exist for complex even pencils.
Structured KCF for palindromic pencils

Theorem (Horn/Sergejchuk 06, Schröder 06)
If A ∈ R^{n,n}, then there exists a nonsingular matrix X ∈ R^{n,n} such that

    λ X^T A^T X + X^T A X = diag(λ A_1^T + A_1, …, λ A_ℓ^T + A_ℓ)

is in structured Kronecker form. This structured Kronecker canonical form is unique up to permutation of blocks, i.e., the kind, size and number of the blocks as well as the sign characteristics are characteristic of the pencil λA^T + A.
Properties of blocks

    S_p = [ 0_{p+1}  E ; F  0_p ] ∈ R^{2p+1,2p+1},  p ∈ N_0,

where E ∈ R^{p+1,p} carries ones on its antidiagonal and F = [ 1 0 ; ⋱ ⋱ ; 1 0 ] ∈ R^{p,p+1};

    L_{1,p}(λ) = [ 0_p  R_p(λ) ; E_p  0_p ] ∈ R^{2p,2p},

where p ∈ N, λ ∈ R, |λ| < 1, R_p(λ) carries λ on its antidiagonal and ones on the antidiagonal above it, and E_p is the p × p flip matrix;
    L_{2,p}(α, β) = [ 0_{2p}  R ; E  0_{2p} ] ∈ R^{4p,4p},

where p ∈ N, R carries Λ on its block antidiagonal and I_2 on the block antidiagonal above it, E is the block flip matrix built from I_2 blocks,

    Λ = [ α  −β ; β  α ],  α, β ∈ R \ {0},  β < 0,  |α + iβ| < 1;

    σ U_{1,p} = σ times the p × p anti-triangular matrix with ones on the antidiagonal and ones on the antidiagonal above it, with zero blocks 0_{⌊p/2⌋} in the upper left and lower right corners, where p ∈ N is odd, σ ∈ {1, −1};
    U_{2,p} = L_{1,p}(1) ∈ R^{2p,2p},  where p ∈ N is even;

    U_{3,p} = L_{1,p}(−1) ∈ R^{2p,2p},  where p ∈ N is odd;
    σ U_{4,p} = σ times the p × p anti-triangular matrix with entries ±1 on the antidiagonal (−1 in the upper half, 1 in the lower half) and ones on the antidiagonal above it, with zero blocks 0_{p/2} in the corners, where p ∈ N is even, σ ∈ {1, −1};
    σ U_{5,p}(α, β) = σ times the 2p × 2p block anti-triangular matrix with Λ on its block antidiagonal, I_2 on the block antidiagonal above it, the middle block replaced by Λ_{1/2}, and zero blocks 0_{2⌊p/2⌋} in the corners, where p ∈ N is odd, |α + iβ| = 1, β < 0, Λ = [ α  −β ; β  α ], Λ_{1/2} is the rotation matrix with rotation angle φ/2 ∈ (0, π), where φ = arctan(β/α) is the rotation angle of the rotation matrix Λ, and σ ∈ {1, −1};
    σ U_{6,p}(α, β) = σ times the 2p × 2p block anti-triangular matrix with Λ on its block antidiagonal and I_2 on the block antidiagonal above it, with zero blocks 0_{2(p/2)} in the corners, where p ∈ N is even, |α + iβ| = 1, β < 0, Λ = [ α  −β ; β  α ], and σ ∈ {1, −1}.
Other structured KCF
. Analogous results exist for symmetric pencils.
. Analogous results exist for complex even pencils.
. Analogous results exist for complex palindromic pencils.
. Hermitian pencils and complex T-symmetric pencils can be treated like complex even pencils (set λ = iμ).
Consequences
. Even, palindromic and symmetric KCFs for pencils exist.
. But the transformation matrix X may be arbitrarily ill-conditioned.
. The even, palindromic, symmetric KCF cannot be computed reliably with finite precision algorithms.
. The information given in the even, palindromic, symmetric KCF is essential for the understanding of the computational problems.
. We need alternatives from which we can derive the information.
Structured staircase form for even pencils

Theorem (Byers/M./Xu 07)
For λN + M with N = −N^T, M = M^T ∈ R^{n,n}, there exists a real orthogonal matrix U ∈ R^{n,n} such that

    U^T N U = [ N_{11}        ⋯  N_{1,m}       N_{1,m+1}      N_{1,m+2}  ⋯  N_{1,2m}  0 ;
                ⋮             ⋱  ⋮             ⋮              ⋮          ⋰            ;
                −N_{1,m}^T    ⋯  N_{m,m}       N_{m,m+1}      0                       ;
                −N_{1,m+1}^T  ⋯  −N_{m,m+1}^T  N_{m+1,m+1}                            ;
                −N_{1,m+2}^T  ⋯  0                                                    ;
                ⋮             ⋰                                                       ;
                −N_{1,2m}^T                                                           ;
                0                                                                     ],

    U^T M U = [ M_{11}        ⋯  M_{1,m}      M_{1,m+1}    M_{1,m+2}  ⋯  ⋯  M_{1,2m+1} ;
                ⋮             ⋱  ⋮            ⋮            ⋮          ⋰                ;
                M_{1,m}^T     ⋯  M_{m,m}      M_{m,m+1}    M_{m,m+2}                   ;
                M_{1,m+1}^T   ⋯  M_{m,m+1}^T  M_{m+1,m+1}                              ;
                M_{1,m+2}^T   ⋯  M_{m,m+2}^T                                           ;
                ⋮             ⋰                                                       ;
                M_{1,2m+1}^T                                                           ],

with block row and column partitioning n_1, …, n_m, l, q_m, …, q_1, where

    q_1 ≥ n_1 ≥ q_2 ≥ n_2 ≥ … ≥ q_m ≥ n_m,
    N_{j,2m+1−j} ∈ R^{n_j, q_{j+1}},  1 ≤ j ≤ m − 1,
    N_{m+1,m+1} = [ Δ  0 ; 0  0 ],  Δ = −Δ^T ∈ R^{2p,2p},
    M_{j,2m+2−j} = [ Γ_j  0 ] ∈ R^{n_j, q_j},  Γ_j ∈ R^{n_j, n_j},  1 ≤ j ≤ m,
    M_{m+1,m+1} = [ Σ_{11}  Σ_{12} ; Σ_{12}^T  Σ_{22} ],  Σ_{11} = Σ_{11}^T ∈ R^{2p,2p},  Σ_{22} = Σ_{22}^T ∈ R^{l−2p,l−2p},

and the blocks Σ_{22}, Δ and Γ_j, j = 1, …, m, are nonsingular.
. The middle block

    λ N_{m+1,m+1} + M_{m+1,m+1} = λ [ Δ  0 ; 0  0 ] + [ Σ_{11}  Σ_{12} ; Σ_{12}^T  Σ_{22} ]

contains all the blocks associated with finite eigenvalues and the 1 × 1 blocks associated with the eigenvalue ∞.
. The finite spectrum is obtained from the even pencil λΔ + Σ = λΔ + (Σ_{11} − Σ_{12} Σ_{22}^{-1} Σ_{12}^T) with Δ invertible.
. The matrix Δ has a skew-Cholesky factorization Δ = L J L^T, with J = [ 0  I ; −I  0 ].
. The spectral information can be obtained from the Hamiltonian matrix H = J L^{-1} Σ L^{-T}.
Consequences
. Similar staircase forms exist for palindromic and symmetric pencils.
. All information about the invariants (Kronecker indices) can be read off; Byers/M./Xu 05.
. Singularities and high order blocks for the ev ∞ can be deflated.
. Production code Brüll/M. 2008.
. The treatment of the infinite eigenvalues in the middle block λN_{m+1,m+1} + M_{m+1,m+1} has been analyzed in M./Xu 2015. It can be done in a unitary way; surprisingly, sometimes Schur complements give better results.
. The Hamiltonian part associated with the finite eigenvalues in the middle block can be treated with the structured methods for Hamiltonian problems: Benner/M./Xu 98, Byers/Benner/M./Xu 02, Chu/Liu/M. 04. See Benner/Kressner's HAPACK. Survey: Benner/Losse/M./Voigt 15.
Computational procedure
. The procedure consists of a recursive sequence of singular value decompositions.
. The staircase form essentially determines a least generic even pencil within the rounding error cloud surrounding λN + M.
. Rank decisions face the usual difficulties and have to be adapted to the recursive procedure.
. Similar difficulties as in the standard staircase form, GUPTRI; Demmel/Kågström 93.
. What to do in case of doubt? In applications, assume the worst case; see Mattheij/Wijckmans 98.
. Perturbation analysis is essentially open for singular and higher order blocks associated with ∞: Adhikari, Ahmad, Alam, Bora, Godunov, Sadkane, M., Mehl, Ran, Rodman, et al.
Eigenvalue problems 89 / 219
Example revisited
Our MATLAB implementation of the structured staircase algorithm determined that in the cloud of rounding-error small perturbations of each even λN + M, there is an even pencil with structured staircase form

  λ [0 1 0; −1 0 0; 0 0 0] − [0 0 1; 0 1 0; 1 0 0],

with one block I_3 with sign-characteristic 1.
The algorithm successfully located a least generic even pencil within the cloud.
Eigenvalue problems 90 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 91 / 219
Polynomial Evps
We study polynomial evps
P(λ)x = (∑_{i=0}^k λ^i A_i)x = 0,

where
. A_i ∈ F^{ℓ,n};
. x is a real or complex eigenvector;
. λ is a real or complex eigenvalue;
. and P(λ) may have some further structure.
Unfortunately there is no Jordan or Kronecker like form.
Eigenvalue problems 92 / 219
Unimodular transformations
Definition
A matrix function S(λ): F → F^{n,n} is called unimodular if det S(λ) = c ≠ 0 is constant in λ.

For unimodular matrix functions P(λ), Q(λ) we have

det(λA_1 + A_0) = 0 if and only if det(P(λ)(λA_1 + A_0)Q(λ)) = 0.

Thus, the regularity of a pencil is invariant under unimodular transformations.
The canonical form under unimodular transformations is the Smith form for matrix polynomials or the Smith–McMillan form for rational matrix functions.
Eigenvalue problems 93 / 219
’Linearization’
Definition: For a matrix polynomial P(λ), a matrix pencil L(λ) = λX + Y is called a linearization of P(λ) if there exist unimodular matrices (i.e., of constant nonzero determinant) S(λ), T(λ) such that

  S(λ)L(λ)T(λ) = diag(P(λ), I_n, ..., I_n).

The reversal of P(λ) is the polynomial

  rev P(λ) := λ^k P(1/λ) = ∑_{i=0}^k λ^i A_{k−i}.

If L(λ) is a linearization for P(λ) and rev L(λ) is a linearization for rev P(λ), then L(λ) is said to be a strong linearization for P(λ).
Eigenvalue problems 94 / 219
Properties of linearization
. Linearization preserves the algebraic and geometricmultiplicities of all finite eigenvalues, but not necessarily thoseof infinite eigenvalues.
. Strong linearization preserves the algebraic and geometricmultiplicities of all finite and infinite eigenvalues.
. The geometric multiplicity of the eigenvalue∞ and the sizesof singular blocks are not invariant under general unimodulartransformations.
Eigenvalue problems 95 / 219
Companion linearization
The classical companion linearization for polynomial eigenvalue problems

  P(λ)x = ∑_{i=0}^k λ^i A_i x = 0

is to introduce new variables

  y = [y_1^T, y_2^T, ..., y_k^T]^T = [x^T, λx^T, ..., λ^{k−1}x^T]^T

and to turn it into a generalized linear eigenvalue problem

  L(λ)y := (λX + Y)y = 0

of size nk × nk.
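As a concrete illustration, the companion construction can be sketched in a few lines of NumPy (an illustrative sketch, not code from the lecture; the diagonal test matrices are made up so that the exact eigenvalues are known):

```python
import numpy as np

def companion_pencil(coeffs):
    """Build a companion linearization L(lam) = lam*X + Y of
    P(lam) = sum_i lam^i A_i, given coeffs = [A_0, A_1, ..., A_k]:
    X = diag(A_k, I, ..., I), the first block row of Y holds
    A_{k-1}, ..., A_0, and -I blocks below the diagonal of Y
    encode the new variables y_{j+1} = lam * y_j."""
    k, n = len(coeffs) - 1, coeffs[0].shape[0]
    X = np.eye(k * n)
    X[:n, :n] = coeffs[k]
    Y = np.zeros((k * n, k * n))
    for j in range(k):
        Y[:n, j*n:(j+1)*n] = coeffs[k - 1 - j]
    for j in range(k - 1):
        Y[(j+1)*n:(j+2)*n, j*n:(j+1)*n] = -np.eye(n)
    return X, Y

# diagonal quadratic: the blocks decouple into the scalar polynomials
# (l+1)(l+2) and (l+2)(l+3), so the eigenvalues are {-1, -2, -2, -3}
A0, A1, A2 = np.diag([2.0, 6.0]), np.diag([3.0, 5.0]), np.eye(2)
X, Y = companion_pencil([A0, A1, A2])
lam = np.linalg.eigvals(np.linalg.solve(X, -Y))   # lam*X*y + Y*y = 0
print(sorted(lam.real))                           # -1, -2, -2, -3
```

The 2n × 2n pencil reproduces the 2n eigenvalues of the quadratic polynomial, as the linearization theory predicts.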
Eigenvalue problems 96 / 219
Difficulties with companion form
Example: The even quadratic eigenvalue problem

  (λ^2 M + λG − K)x = 0

with M = M^T, K = K^T > 0, G = −G^T has a spectrum that is symmetric with respect to both axes, but in the companion linearization

  [0 I; −K −G] [x; y] = λ [I 0; 0 M] [x; y],

one does not see this structure.
. Numerical methods destroy eigenvalue symmetries in finite arithmetic!
. Perturbation theory requires structured perturbations for stability near the imaginary axis.
. We need structure preserving linearizations.
Eigenvalue problems 97 / 219
Optimal Linearizations
Goal: Find a large class of linearizations for which:
. the linear pencil is easily constructed;
. structure preserving linearizations exist;
. the conditioning of the linear problem can be characterized and optimized;
. evs/evecs of the original problem are easily read off;
. a structured perturbation analysis is possible.
Eigenvalue problems 98 / 219
Vector spaces of potential linearizations
Notation: Λ := [λ^{k−1}, λ^{k−2}, ..., λ, 1]^T, ⊗ the Kronecker product.

Definition Mackey²/Mehl/M. 2006. For a given n × n matrix polynomial P(λ) of degree k define the sets:

  V_P = { v ⊗ P(λ) : v ∈ F^k }, v is called right ansatz vector,
  W_P = { w^T ⊗ P(λ) : w ∈ F^k }, w is called left ansatz vector,
  L_1(P) = { L(λ) = λX + Y : X, Y ∈ F^{kn×kn}, L(λ)(Λ ⊗ I_n) ∈ V_P },
  L_2(P) = { L(λ) = λX + Y : X, Y ∈ F^{kn×kn}, (Λ^T ⊗ I_n)L(λ) ∈ W_P },
  DL(P) = L_1(P) ∩ L_2(P).
Eigenvalue problems 99 / 219
Properties of these sets
Lemma
For any n × n matrix polynomial P(λ) of degree k,
L_1(P) is a vector space of dimension k(k − 1)n^2 + k,
L_2(P) is a vector space of dimension k(k − 1)n^2 + k,
DL(P) is a vector space of dimension k.
Not every pencil in these spaces is a linearization, but they form a large class.
Eigenvalue problems 100 / 219
Example
Consider the cubic matrix polynomial (Antoniou and Vologiannidis 2004), P(λ) = λ^3 A_3 + λ^2 A_2 + λA_1 + A_0. Then the pencil

  L(λ) = λ [0 A_3 0; I A_2 0; 0 0 I] + [−I 0 0; 0 A_1 A_0; 0 −I 0]

is a linearization for P but is neither in L_1(P) nor in L_2(P).
Recently other classes of linearizations such as this, called Fiedler linearizations, have become popular.
Eigenvalue problems 101 / 219
Characterization of L1(P)
Theorem
Let P(λ) = ∑_{i=0}^k λ^i A_i be an n × n matrix polynomial, and v ∈ F^k any vector. Then the set of pencils in L_1(P) with right ansatz vector v consists of all L(λ) = λX + Y such that

  X = [ v ⊗ A_k   −W ]  and  Y = [ W + v ⊗ [A_{k−1} ··· A_1]   v ⊗ A_0 ],

with W ∈ F^{kn×(k−1)n} chosen arbitrarily.

Corollary
L_2(P) = [L_1(P^T)]^T.
Eigenvalue problems 102 / 219
A linearization check
Procedure for the linearization condition for L_1(P).
1) Suppose P(λ) is regular and L(λ) = λX + Y ∈ L_1(P) has ansatz vector 0 ≠ v ∈ F^k, i.e., L(λ)·(Λ ⊗ I_n) = v ⊗ P(λ).
2) Select any nonsingular matrix M such that Mv = αe_1.
3) Form L̃(λ) := (M ⊗ I_n)L(λ), which must be of the form

  L̃(λ) = λX̃ + Ỹ = λ [X̃_{11} X̃_{12}; 0 −Z] + [Ỹ_{11} Ỹ_{12}; Z 0],

where X̃_{11} and Ỹ_{12} are n × n.
4) Check det Z ≠ 0, the linearization condition for L(λ).
This procedure can be implemented as a numerical algorithm: choose M to be unitary, then use a rank revealing factorization to check if Z is nonsingular.
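Both the characterization of L_1(P) and this check are easy to verify numerically. The following NumPy sketch (random data, purely illustrative) builds a pencil from the theorem's formula and runs steps 2)–4) with a Householder reflector as M:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 3                               # a random cubic 3x3 polynomial
A = [rng.standard_normal((n, n)) for _ in range(k + 1)]   # A[i] = A_i
v = rng.standard_normal(k)                # right ansatz vector
W = rng.standard_normal((k * n, (k - 1) * n))

# pencil L(lam) = lam*X + Y in L1(P), built from the characterization
X = np.hstack([np.kron(v[:, None], A[k]), -W])
row = np.hstack([A[k - 1 - j] for j in range(k - 1)])     # [A_{k-1} ... A_1]
Y = np.hstack([W + np.kron(v[:, None], row), np.kron(v[:, None], A[0])])

# sanity check of the ansatz identity L(lam)(Lambda kron I_n) = v kron P(lam)
lam = 0.7
Lam = np.array([lam**(k - 1 - i) for i in range(k)])
P = sum(lam**i * A[i] for i in range(k + 1))
L = lam * X + Y
assert np.allclose(L @ np.kron(Lam[:, None], np.eye(n)),
                   np.kron(v[:, None], P))

# steps 2)-4): unitary M with M v = alpha*e_1 (Householder reflector)
alpha = np.linalg.norm(v)
u = v - alpha * np.eye(k)[0]
u /= np.linalg.norm(u)
M = np.eye(k) - 2.0 * np.outer(u, u)
Xt = np.kron(M, np.eye(n)) @ X
Yt = np.kron(M, np.eye(n)) @ Y
Z = Yt[n:, :(k - 1) * n]                  # the [Z 0] block of Y-tilde
assert np.allclose(Xt[n:, :n], 0) and np.allclose(Xt[n:, n:], -Z)
print("Z nonsingular:", np.linalg.matrix_rank(Z) == (k - 1) * n)
```

With generic W the block Z is nonsingular, so this randomly chosen pencil is (with probability one) a linearization.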
Eigenvalue problems 103 / 219
Eigenvector Recovery Property
Theorem
Let P(λ) be an n × n matrix polynomial of degree k, and let L(λ) be any pencil in L_1(P) with ansatz vector v ≠ 0.
Then x ∈ C^n is a right eigenvector for P(λ) with finite eigenvalue λ ∈ C if and only if Λ ⊗ x is a right eigenvector for L(λ) with eigenvalue λ.
If in addition P is regular, i.e. det P(λ) ≢ 0, and L ∈ L_1(P) is a linearization, then every eigenvector of L with finite eigenvalue λ is of the form Λ ⊗ x for some eigenvector x of P.
A similar result holds for the space L_2(P) and also for the eigenvalues 0, ∞.
Eigenvalue problems 104 / 219
Strong Linearization Property
Theorem
Let P(λ) be a regular matrix polynomial and L(λ) ∈ L_1(P) (or L(λ) ∈ L_2(P)). Then the following statements are equivalent.
(i) L(λ) is a linearization for P(λ).
(ii) L(λ) is a regular pencil.
(iii) L(λ) is a strong linearization for P(λ).
Eigenvalue problems 105 / 219
Example
The first and second companion forms
C_1(λ) := λ [A_k 0 ··· 0; 0 I_n ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ··· 0 I_n] + [A_{k−1} A_{k−2} ··· A_0; −I_n 0 ··· 0; ⋮ ⋱ ⋱ ⋮; 0 ··· −I_n 0],

C_2(λ) := λ [A_k 0 ··· 0; 0 I_n ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ··· 0 I_n] + [A_{k−1} −I_n ··· 0; A_{k−2} 0 ⋱ ⋮; ⋮ ⋮ ⋱ −I_n; A_0 0 ··· 0]

are strong linearizations in L_1(P) and L_2(P), respectively.
Eigenvalue problems 106 / 219
Characterization of DL(P)
Lemma
Consider an n × n matrix polynomial P(λ) of degree k. Then, for v = (v_1, ..., v_k)^T and w = (w_1, ..., w_k)^T in F^k, the associated pencil satisfies L(λ) = λX + Y ∈ DL(P) if and only if v = w.

Once an ansatz vector v has been chosen, a pencil from DL(P) is uniquely determined.
Eigenvalue problems 107 / 219
Are the classes large enough?
Theorem
For any regular n × n matrix polynomial P(λ) of degree k, almost every pencil in L_1(P) (L_2(P)) is a linearization for P(λ).
For any regular matrix polynomial P(λ), pencils in DL(P) are linearizations of P(λ) for almost all v ∈ F^k.

'Almost every' means for all but a closed, nowhere dense set of measure zero.
Eigenvalue problems 108 / 219
Linearization property for DL(P)
Theorem
Consider an n × n matrix polynomial P(λ) of degree k. Then for a given ansatz vector v = w = [v_1, ..., v_k]^T the associated linear pencil in DL(P) is a linearization if and only if no root of the v-polynomial

  p(v; x) := v_1 x^{k−1} + ··· + v_{k−1}x + v_k

is an eigenvalue of P.
Eigenvalue problems 109 / 219
Example
Consider P(λ) = λ^3 A + λ^2 B + λC + D and

  L(λ) = λ [A 0 −A; 0 −A−C −B−D; −A −B−D −C] + [B A+C D; A+C B+D 0; D 0 −D]

in DL(P) with ansatz vector v = [1 0 −1]^T, p(v; x) = x^2 − 1.
Using the check one easily finds that

  det [A+C B+D; B+D A+C] ≠ 0

is the linearization condition for L(λ).
This is equivalent to saying that neither −1 nor +1 is an eigenvalue of the matrix polynomial P(λ).
Eigenvalue problems 110 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 111 / 219
Polynomial evps with structure
P(λ) x = 0,
where
. P(λ) is a polynomial matrix valued function;
. x is a real or complex eigenvector;
. λ is a real or complex eigenvalue;
. and P(λ) has some further structure.
Eigenvalue problems 112 / 219
Which structures?
Definition
A matrix polynomial P(λ) of degree k is called
. T-even (H-even) if P(λ) = P^T(−λ) (P(λ) = P^H(−λ));
. T-palindromic (H-palindromic) if P(λ) = P^T(λ^{−1})λ^k (P(λ) = P^H(λ^{−1})λ^k);
. T-symmetric (Hermitian) if P(λ) = P^T(λ) (P(λ)^H = P(λ)).
Eigenvalue problems 113 / 219
Examples
. A T-even quadratic problem has the form λ^2 M + λG + K with M = M^T, K = K^T, G = −G^T.
. An H-palindromic cubic problem has the form λ^3 A_3 + λ^2 A_2 + λA_1 + A_0 where A_3 = A_0^H, A_2 = A_1^H.
. A quadratic symmetric problem has the form λ^2 M + λD + K, with M = M^T, K = K^T, D = D^T.
Eigenvalue problems 114 / 219
Even Matrix Polynomials
. Singularities (cracks) in anisotropic materials as functions of material or geometry parameters
. Optimal control of differential equations
. Gyroscopic systems
. Optimal Waveguide Design,
. H∞ control
Eigenvalue problems 115 / 219
Palindromic Matrix Polynomials
. Excitation of rail tracks by high speed trains
. Periodic surface acoustic wave filters
. Control of (high order) difference equations
. Computation of the Crawford number
Eigenvalue problems 116 / 219
Symmetric/Hermitian Matrix Polynomials
. Mass, spring, damper systems, dynamic simulation of structures.
. Acoustic field problem.
. Quantum dot simulation.
. . . .
Eigenvalue problems 117 / 219
Companion form
Example: The even quadratic eigenvalue problem

  (λ^2 M + λG + K)x = 0

with M = M^T, K = K^T > 0, G = −G^T has structure, but the companion linearization

  [0 I; K −G] [x; y] = λ [I 0; 0 M] [x; y],

does not preserve this structure.
. Numerical methods destroy eigenvalue symmetries in finite arithmetic!
. Perturbation theory requires structured perturbations for stability near the imaginary axis.
. We need structure preserving linearizations.
Eigenvalue problems 118 / 219
Example
Let
  P(λ) = λ^2 M + λG + K
be even, i.e. M = M^T, G = −G^T, K = K^T.
If K is singular, then the commonly used even linear pencil (obtained with v = e_1)

  λ [M 0; 0 K] + [G K; −K 0]

is not a linearization, since L(λ) is not regular.
Eigenvalue problems 119 / 219
Examples
v | L(λ) ∈ DL(P) for given v | Linearization condition
[1; 0] | λ[A 0; 0 −C] + [B C; C 0] | det(C) ≠ 0
[0; 1] | λ[0 A; A B] + [−A 0; 0 C] | det(A) ≠ 0
[1; 1] | λ[A A; A B−C] + [B−A C; C C] | det(A − B + C) = det[P(−1)] ≠ 0
[α; β] | λ[αA βA; βA βB−αC] + [αB−βA αC; αC βC] | det(β^2 A − αβB + α^2 C) ≠ 0; equivalently, det[P(−β/α)] ≠ 0
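The first row of the table is easy to verify numerically. In this NumPy sketch (diagonal test matrices, made up so the eigenvalues are known), the DL(P) pencil for v = [1, 0]^T reproduces the eigenvalues of the quadratic P(λ) = λ^2 A + λB + C:

```python
import numpy as np

# P(lam) = lam^2 A + lam B + C, diagonal so that the exact eigenvalues are
# the roots of (l+1)(l+4) and (l+2)(l+3): {-1, -2, -3, -4}
A = np.eye(2)
B = np.diag([5.0, 5.0])
C = np.diag([4.0, 6.0])

# DL(P) pencil for ansatz vector v = [1, 0]^T; linearization iff det(C) != 0
Z2 = np.zeros((2, 2))
X = np.block([[A, Z2], [Z2, -C]])
Y = np.block([[B, C], [C, Z2]])

lam = np.linalg.eigvals(np.linalg.solve(X, -Y))   # lam*X*y + Y*y = 0
print(sorted(lam.real))                           # -1, -2, -3, -4
```

Note that here det(C) = 24 ≠ 0, so the table's linearization condition is satisfied.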
Eigenvalue problems 120 / 219
Example
Let
  P(λ) = λ^2 M + λG + K
be even and take v = e_1.
Then p(v; x) = 1·x + 0 = x has the root 0, which is an eigenvalue of P iff K is singular.
Eigenvalue problems 121 / 219
Structured linearization
Lemma
Consider an n × n even matrix polynomial P(λ) of degree k.
For an ansatz vector v = (v_1, ..., v_k)^T ∈ F^k the linearization L(λ) = λX + Y ∈ DL(P) is even, i.e. X = X^T and Y = −Y^T, if and only if p(v; x) is even.
Consider an n × n palindromic matrix polynomial P(λ) of degree k.
Then, for a vector v = (v_1, ..., v_k)^T ∈ F^k the linearization L(λ) = λX + Y ∈ DL(P) is (a permutation of) a palindromic pencil if and only if p(v; x) is palindromic.
Eigenvalue problems 122 / 219
Example
For the palindromic polynomial
  P(λ)y = (λ^2 A_1^T + λA_0 + A_1)y = 0,  A_0 = A_0^T,

a palindromic vector v = [α, α]^T, α ≠ 0 leads to a palindromic pencil

  (κZ + Z^T)z = 0,  Z = [A_1^T  A_0 − A_1; A_1^T  A_1^T].

This is a linearization if and only if −1 is not an eigenvalue of P(λ).
Eigenvalue problems 123 / 219
Symmetric linearizations
For symmetric P, a simple argument shows that every pencil inDL(P) is also symmetric:L ∈ DL(P) with ansatz vector v implies that LT is also in DL(P)with the same ansatz vector v , and then L = LT follows from theuniqueness.
Eigenvalue problems 124 / 219
Deflation of ’bad eigenvalues’
To have structure preserving linearizations we need to deflate bad eigenvalues and singular blocks first.
. For linear problems we can use staircase forms; extension to matrix polynomials Byers/M./Xu 2008.
. Open Question: Can we linearize first and then deflate in the linear problem? M./Xu 2015, de Teran/Mackey 2015.
Eigenvalue problems 125 / 219
Summary lecture II
. Structured canonical and staircase forms for structured pencils.
. We have discussed large classes of potential linearizations.
. They are easily constructed and the linearization property can be checked.
. They can be used to study the conditioning of the linearizations.
. They can be used to get structure preserving linearizations.
Eigenvalue problems 126 / 219
References
. E. N. Antoniou and S. Vologiannidis. A new family of companion forms of polynomial matrices. Electron. J. Linear Algebra, 11:78–87, 2004.
. R. Byers, V. M., and H. Xu. Trimmed linearization for structured matrix polynomials. LAA, Vol. 429, 2373–2400, 2008.
. P. Benner, P. Losse, V. M., and M. Voigt. Numerical Linear Algebra Methods for Linear Differential-Algebraic Equations: A Survey. DAE Forum, 2015.
. D. S. Mackey, N. Mackey, C. Mehl, and V. M. Vector spaces of linearizations for matrix polynomials. SIMAX, Vol. 28, 971–1004, 2006.
. D. S. Mackey, N. Mackey, C. Mehl, and V. M. Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations. SIMAX, Vol. 28, 1029–1051, 2006.
. V. M. and H. Xu. Structure preserving deflation of infinite eigenvalues in structured pencils. ETNA, Vol. 44, 1–24, 2014.
Many more: see works by de Teran, Dopico, Mackey, Mackey, Mehl, M., Xu, ...
Eigenvalue problems 127 / 219
Outline of lectures
. Lecture I: Modeling and Applications.
. Lecture II: Linear Algebra and Analysis
. Lecture III: Numerical Methods.
Eigenvalue problems 128 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 129 / 219
Small scale problems
Problems of size n up to several 1000; compute all the eigenvalues.
. QR algorithm (LAPACK, MATLAB (eig), ...) for P(λ) = λI + A_0.
. QZ algorithm (LAPACK, MATLAB (eig), ...) for P(λ) = λA_1 + A_0 regular, square.
. GUPTRI for P(λ) = λA_1 + A_0 general.
. QZ, GUPTRI algorithm (LAPACK, MATLAB (quadeig, polyeig), ...) for P(λ) = ∑_{i=0}^k λ^i A_i via linearization.
. As long as there is enough storage, this works for full dense matrices of dimension several 10000.
Eigenvalue problems 130 / 219
The QZ algorithm
Given a regular pencil λA1 + A0
. First determine unitary/orthogonal P, Q such that PA_1Q = T_1 is upper triangular and PA_0Q = H_0 is upper Hessenberg.
. Deflate eigenvalues ∞.
. Apply implicitly the QR algorithm to the upper Hessenberg matrix T_1^{−1}H_0 without ever forming it.
. After convergence we have PA_1Q = S_1 and PA_0Q = S_0 (quasi-)upper triangular.
Eigenvalue problems 131 / 219
Summary full dense methods
. Good linearization plus full dense QZ algorithm solves many problems.
. As a quick solution it may be the best thing to do.
. For larger problems quadeig, polyeig, or a good linearization and eigs are good choices.
. Special methods for even and palindromic pencils.
. Special implementations for larger scale full problems, Kågström/Kressner 2008.
Eigenvalue problems 132 / 219
Newton’s method
Consider P(λ)x = 0 where P is an arbitrary matrix function.
Kublanovskaya 1969: Use a QR decomposition with column pivoting P(λ)Π(λ) = Q(λ)R(λ), where Π(λ) is such that |r_{11}(λ)| ≥ |r_{22}(λ)| ≥ ··· ≥ |r_{nn}(λ)|.
Then λ is an eigenvalue if and only if r_{nn}(λ) = 0.
Applying Newton's method, one obtains

  λ_{k+1} = λ_k − 1 / (e_n^H Q(λ_k)^H P′(λ_k) Π(λ_k) R(λ_k)^{−1} e_n)

for approximations to an eigenvalue.
Approximations to left and right eigenvectors can be obtained from

  y_k = Q(λ_k)e_n and x_k = Π(λ_k)R(λ_k)^{−1}e_n.
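A minimal SciPy sketch of this iteration (illustrative only; the test polynomial is diagonal and made up so the limit is known, and everything is kept real — for complex problems conjugates would be needed):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def kublanovskaya_newton(P, dP, lam, steps=10):
    """Newton iteration on r_nn(lam) of the pivoted QR factorization
    P(lam) Pi(lam) = Q(lam) R(lam); each step costs one QR of P(lam)."""
    n = P(lam).shape[0]
    en = np.eye(n)[:, -1]
    for _ in range(steps):
        Q, R, piv = qr(P(lam), pivoting=True)    # P(lam)[:, piv] = Q @ R
        if abs(R[-1, -1]) < 1e-14 * max(abs(R[0, 0]), 1.0):
            break                                # r_nn ~ 0: lam is an eigenvalue
        u = solve_triangular(R, en)              # u = R^{-1} e_n
        x = np.zeros(n)
        x[piv] = u                               # x = Pi R^{-1} e_n
        y = Q[:, -1]                             # y = Q e_n
        lam = lam - 1.0 / (y @ dP(lam) @ x)      # Newton update from the slide
    return lam

# diagonal quadratic test polynomial: eigenvalues are the roots of
# l^2+3l+2 and l^2+5l+6, i.e. {-1, -2, -2, -3}
P  = lambda l: np.diag([l**2 + 3*l + 2, l**2 + 5*l + 6])
dP = lambda l: np.diag([2*l + 3, 2*l + 5])
print(kublanovskaya_newton(P, dP, -0.9))         # converges to -1
```

In the scalar case the update reduces exactly to λ − p(λ)/p′(λ), which is a quick consistency check of the formula.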
Eigenvalue problems 133 / 219
Analysis
. Method converges quadratically.
. This relatively simple idea is not efficient, since it computes eigenvalues one at a time and needs several O(n^3) factorizations per eigenvalue.
. It is, however, useful in the context of iterative refinement of computed eigenvalues and eigenvectors.
Eigenvalue problems 134 / 219
Nonlinear inverse iteration
. For Ax = λx inverse iteration is equivalent to Newton's method for

  [Ax − λx; v^H x − 1] = 0,

  where v ∈ C^n is suitably chosen.
. For the nonlinear problem

  [P(λ)x; v^H x − 1] = 0

  one step of Newton's method gives

  [P(λ_k) P′(λ_k)x_k; v^H 0] [x_{k+1} − x_k; λ_{k+1} − λ_k] = − [P(λ_k)x_k; v^H x_k − 1].

. This gives u_{k+1} := P(λ_k)^{−1}P′(λ_k)x_k, and with v^H x_{k+1} = v^H x_k, then λ_{k+1} = λ_k − (v^H x_k)/(v^H u_{k+1}).
Eigenvalue problems 135 / 219
Inverse iteration
1: Start with λ_0, x_0 such that v^H x_0 = 1
2: for k = 0, 1, 2, ... until convergence do
3:   solve P(λ_k)u_{k+1} = P′(λ_k)x_k for u_{k+1}
4:   λ_{k+1} = λ_k − (v^H x_k)/(v^H u_{k+1})
5:   normalize x_{k+1} = u_{k+1}/(v^H u_{k+1})
6: end for
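A direct NumPy transcription of this algorithm (an illustrative sketch; the quadratic test matrices are made up, and since convergence is only local the starting guess matters):

```python
import numpy as np

def nonlinear_inverse_iteration(P, dP, lam, x, v, steps=15, tol=1e-12):
    """Nonlinear inverse iteration: one linear solve with P(lam_k) per
    step, with the normalization v^H x = 1 kept throughout."""
    x = x / (v @ x)
    for _ in range(steps):
        u = np.linalg.solve(P(lam), dP(lam) @ x)   # P(l_k) u = P'(l_k) x_k
        lam_new = lam - (v @ x) / (v @ u)
        x = u / (v @ u)
        done = abs(lam_new - lam) < tol * max(1.0, abs(lam_new))
        lam = lam_new
        if done:
            break
    return lam, x

# made-up symmetric quadratic test problem P(l) = l^2 I + l D + K
D = np.array([[3.0, 1.0], [1.0, 5.0]])
K = np.array([[2.0, 0.5], [0.5, 6.0]])
P  = lambda l: l * l * np.eye(2) + l * D + K
dP = lambda l: 2 * l * np.eye(2) + D

lam, x = nonlinear_inverse_iteration(P, dP, -0.8, np.ones(2), np.ones(2))
print(lam, np.linalg.norm(P(lam) @ x) / np.linalg.norm(x))   # residual ~ 0
```

The relative residual ‖P(λ)x‖/‖x‖ serves as the convergence certificate, independently of which eigenvalue the local iteration found.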
Eigenvalue problems 136 / 219
Analysis
. This algorithm is a variant of Newton's method. It converges locally and quadratically to some (x, λ).
. It was suggested by Ruhe 1973 to use v_k = P(λ_k)^H y_k for the normalization, where y_k is an approximation to a left eigenvector.
. Then the update for λ becomes

  λ_{k+1} = λ_k − (y_k^H P(λ_k)x_k)/(y_k^H P′(λ_k)x_k),

  which is the nonlinear Rayleigh functional Lancaster 2002.
. This functional can be interpreted as one Newton step for solving the equation f_k(λ) := y_k^H P(λ)x_k = 0.
. For linear Hermitian problems this gives cubic convergence if λ_k is updated by the Rayleigh quotient Crandall 1951.
. The same holds for symmetric nonlinear problems if we set in Step 4 λ_{k+1} = p(u_{k+1}), where p(u_{k+1}) denotes the real root of u_{k+1}^H P(λ)u_{k+1} = 0 closest to λ_k.
Eigenvalue problems 137 / 219
Simplified Newton
. Inverse iteration needs a large number of factorizations.
. The obvious idea then is to fix the shift σ, i.e. to use x_{k+1} = (A − σI)^{−1}x_k.
. However, in general this method does not converge in the nonlinear case.
. Neumaier 1985: Assume that P(λ) is twice continuously differentiable; then inverse iteration gives

  x_k − x_{k+1} = x_k + (λ_{k+1} − λ_k)P(λ_k)^{−1}P′(λ_k)x_k
               = P(λ_k)^{−1}(P(λ_k) + (λ_{k+1} − λ_k)P′(λ_k))x_k
               = P(λ_k)^{−1}P(λ_{k+1})x_k + O(|λ_{k+1} − λ_k|^2).

. Neglecting the second order term one gets

  x_{k+1} = x_k − P(λ_k)^{−1}P(λ_{k+1})x_k.
Eigenvalue problems 138 / 219
Residual inverse iteration
1: Let v be a normalization vector and start with approximations σ and x_1 to an eigenvalue and corresponding eigenvector such that v^H x_1 = 1
2: for k = 1, 2, ... until convergence do
3:   solve v^H P(σ)^{−1}P(λ_{k+1})x_k = 0 for λ_{k+1}, or x_k^H P(λ_{k+1})x_k = 0 if P(λ) is Hermitian and λ_{k+1} is real
4:   compute the residual r_k = P(λ_{k+1})x_k
5:   solve P(σ)d_k = r_k for d_k
6:   set z_{k+1} = x_k − d_k
7:   normalize x_{k+1} = z_{k+1}/(v^H z_{k+1})
8: end for
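A NumPy sketch of residual inverse iteration for a symmetric quadratic polynomial (illustrative; the matrices and starting data are made up, P(σ)^{−1} is formed once here as a stand-in for the single factorization one would reuse in practice, and the eigenvalue step uses the Hermitian variant of Step 3):

```python
import numpy as np

def residual_inverse_iteration(M, D, K, sigma, x0, steps=40, tol=1e-14):
    """Residual inverse iteration for the symmetric quadratic polynomial
    P(l) = l^2 M + l D + K. Only P(sigma) is inverted (once); the
    eigenvalue update solves the scalar quadratic x^H P(l) x = 0,
    taking the real root closest to the current iterate."""
    P = lambda l: l * l * M + l * D + K
    Pinv = np.linalg.inv(P(sigma))        # in practice: factor once, reuse
    v = x0 / np.linalg.norm(x0)
    x = x0 / (v @ x0)
    lam = sigma
    for _ in range(steps):
        a, b, c = x @ M @ x, x @ D @ x, x @ K @ x
        disc = np.sqrt(b * b - 4 * a * c)           # real for this example
        lam_new = min([(-b + disc) / (2 * a), (-b - disc) / (2 * a)],
                      key=lambda r: abs(r - lam))   # Step 3 (Hermitian)
        r = P(lam_new) @ x                          # Step 4: residual
        d = Pinv @ r                                # Step 5: solve with P(sigma)
        x = (x - d) / (v @ (x - d))                 # Steps 6-7
        done = abs(lam_new - lam) < tol * max(1.0, abs(lam_new))
        lam = lam_new
        if done:
            break
    return lam, x

D = np.array([[3.0, 1.0], [1.0, 5.0]])    # made-up symmetric test data
K = np.array([[2.0, 0.5], [0.5, 6.0]])
M = np.eye(2)
lam, x = residual_inverse_iteration(M, D, K, sigma=-0.8,
                                    x0=np.array([1.0, 0.2]))
res = np.linalg.norm((lam*lam*M + lam*D + K) @ x) / np.linalg.norm(x)
print(lam, res)                           # res ~ 0
```

With σ close to the target eigenvalue the eigenvector error contracts by O(|σ − λ̂|) per step, as the theorem on the next slide of the deck quantifies.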
Eigenvalue problems 139 / 219
Analysis
Theorem (Neumaier 1985)
If P(λ) is twice continuously differentiable, λ̂ a simple zero of det P(λ) = 0, and if x̂ is an eigenvector normalized by v^H x̂ = 1, then the residual inverse iteration converges for all σ sufficiently close to λ̂, and one has the estimate

  ‖x_{k+1} − x̂‖ / ‖x_k − x̂‖ = O(|σ − λ̂|)  and  |λ_{k+1} − λ̂| = O(‖x_k − x̂‖^q),

where q = 2 if P(λ) is Hermitian, λ̂ is real, and λ_{k+1} solves x_k^H P(λ_{k+1})x_k = 0 in Step 3, and q = 1 otherwise.
Eigenvalue problems 140 / 219
Method of successive linear problems
1: Start with an approximation λ_1 to an eigenvalue
2: for k = 1, 2, ... until convergence do
3:   solve the linear eigenproblem P(λ_k)u = θP′(λ_k)u
4:   choose an eigenvalue θ smallest in modulus
5:   λ_{k+1} = λ_k − θ
6: end for
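A NumPy sketch of the method (illustrative test matrices; the generalized eigenproblem of Step 3 is solved here as a standard one via P′(λ_k)^{−1}P(λ_k), which assumes P′(λ_k) is nonsingular):

```python
import numpy as np

def successive_linear_problems(P, dP, lam, steps=25, tol=1e-12):
    """Method of successive linear problems: in each step solve the linear
    eigenproblem P(l_k) u = theta * P'(l_k) u and subtract the
    eigenvalue theta of smallest modulus."""
    for _ in range(steps):
        theta = np.linalg.eigvals(np.linalg.solve(dP(lam), P(lam)))
        t = theta[np.argmin(np.abs(theta))]
        lam = lam - t
        if abs(t) < tol * max(1.0, abs(lam)):
            break
    return lam

# made-up symmetric quadratic test problem P(l) = l^2 I + l D + K
D = np.array([[3.0, 1.0], [1.0, 5.0]])
K = np.array([[2.0, 0.5], [0.5, 6.0]])
P  = lambda l: l * l * np.eye(2) + l * D + K
dP = lambda l: 2 * l * np.eye(2) + D

lam = successive_linear_problems(P, dP, -0.8)
print(lam, np.linalg.det(P(lam)))        # det(P(lam)) ~ 0
```

Since θ measures the distance to the nearest eigenvalue in the linearized model, |θ| also serves as a natural stopping criterion.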
Eigenvalue problems 141 / 219
Analysis
Theorem (Ruhe 1973)
If P is twice continuously differentiable, and λ is an eigenvaluesuch that P ′(λ) is nonsingular and 0 is an algebraically simpleeigenvalue of P ′(λ)−1P(λ), then the method of successive linearproblems converges quadratically to λ.
Eigenvalue problems 142 / 219
Newton and inverse iteration
. The discussed Newton and inverse iteration methods can be used for general nonlinear evps.
. For Hermitian problems and real eigenvalues they converge faster if the eigenvalue approximations are updated using the Rayleigh functional.
. One typically gets only one eigenvalue/eigenvector at a time.
. Sometimes the methods repeatedly converge to the same ev.
. Deflation is problematic.
. We do not have a guarantee that we find all eigenvalues in a given set.
. Matrix factorizations are needed to solve the linear systems. With sparse solvers this can be done (if not too often) for very large sizes: MUMPS, UMFPACK, PARDISO, ...
Eigenvalue problems 143 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 144 / 219
Safeguarded iteration
For Hermitian problems that allow a variational characterization of their eigenvalues we can use the safeguarded iteration Werner 1970, Voss/Werner 1982.
. Let J ⊂ R be an open interval and assume that F(λ) ∈ C^{n,n} is a family of Hermitian matrices, where the elements are differentiable in λ.
. Assume that for every x ∈ C^n \ {0} the real equation

  f(λ, x) := x^H F(λ)x = 0

  has at most one solution λ ∈ J.
. Then f defines a functional ρ on some subset D ⊂ C^n, called the Rayleigh functional of the nonlinear evp, which generalizes the Rayleigh quotient for linear pencils F(λ) = λA_1 + A_0.
. Assume further that x^H F′(ρ(x))x > 0 for every x ∈ D; then differentiating the identity x^H F(ρ(x))x = 0 one obtains that the eigenvectors are stationary points of ρ.
Eigenvalue problems 145 / 219
Minimax principle
Under the described assumptions a minimax principle for the nonlinear eigenproblem holds if the eigenvalues are enumerated appropriately.
. A value λ ∈ J is an eigenvalue of F(λ)x = 0 if and only if µ = 0 is an eigenvalue of the matrix F(λ), and by Poincaré's max-min principle there exists m ∈ N such that

  0 = max_{dim V = m} min_{x ∈ V, x ≠ 0} (x^H F(λ)x)/‖x‖^2.

. One assigns this m to λ as its number and calls λ an m-th eigenvalue of the problem.
Eigenvalue problems 146 / 219
Minimax theorem
Theorem (Voss/Werner 1982)
Under the above assumptions, for every m ∈ {1, ..., n}, F(λ)x = 0 has at most one m-th eigenvalue in J, given by

  λ_m = min_{dim V = m, D ∩ V ≠ ∅} sup_{v ∈ D ∩ V} ρ(v).

Conversely, if

  λ_m := inf_{dim V = m, D ∩ V ≠ ∅} sup_{v ∈ D ∩ V} ρ(v) ∈ J,

then λ_m is an m-th eigenvalue of F(λ)x = 0.
The minimum is attained by the invariant subspace of F(λ_m) corresponding to its m largest eigenvalues, and the supremum is attained by any eigenvector of F(λ_m) corresponding to µ = 0.
Eigenvalue problems 147 / 219
Safeguarded iteration
The enumeration of eigenvalues and the fact that the eigenvectors are the stationary vectors of the Rayleigh functional suggest the following algorithm.

1: Start with an approximation σ_1 to the m-th eigenvalue
2: for k = 1, 2, ... until convergence do
3:   determine an eigenvector x_k corresponding to the m-th largest eigenvalue of F(σ_k)
4:   solve x_k^H F(σ_{k+1})x_k = 0 for σ_{k+1}
5: end for
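A NumPy sketch for an overdamped quadratic F(λ) = λ^2 M + λC + K (the matrices are made up and chosen so that λ = −2 is an exact eigenvalue; for such problems the Rayleigh functional is the larger root of the scalar quadratic x^H F(λ)x = 0):

```python
import numpy as np

def safeguarded_iteration(M, C, K, sigma, m=1, steps=50, tol=1e-12):
    """Safeguarded iteration for an overdamped quadratic
    F(l) = l^2 M + l C + K with M, C, K symmetric positive definite:
    x_k is an eigenvector of the m-th largest eigenvalue of F(sigma_k),
    and sigma_{k+1} = rho(x_k) is the larger root of the scalar
    quadratic x^H F(l) x = 0."""
    for _ in range(steps):
        w, V = np.linalg.eigh(sigma * sigma * M + sigma * C + K)
        x = V[:, -m]                         # eigh sorts eigenvalues ascending
        a, b, c = x @ M @ x, x @ C @ x, x @ K @ x
        new = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma

M = np.eye(2)                                # made-up overdamped test problem
C = np.array([[10.0, 1.0], [1.0, 12.0]])
K = np.array([[9.0, 2.0], [2.0, 20.0]])
lam1 = safeguarded_iteration(M, C, K, sigma=-1.0, m=1)
print(lam1)                                  # converges to lam_1 = -2
```

For m = 1 this reproduces the global convergence to λ_1 = inf ρ stated in the analysis.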
Eigenvalue problems 148 / 219
Analysis
. If λ_1 := inf_{x∈D} ρ(x) ∈ J and x_1 ∈ D then the iteration converges globally to λ_1.
. If λ_m ∈ J is an m-th eigenvalue which is simple, then the iteration converges locally and quadratically to λ_m.
. Let F(λ) be twice continuously differentiable, and assume that F′(λ) is positive definite for λ ∈ J. If, in Step 3 of the algorithm, x_k is chosen to be an eigenvector corresponding to the m-th largest eigenvalue of the generalized evp F(σ_k)x = µF′(σ_k)x, then the convergence is even cubic.
Eigenvalue problems 149 / 219
Evaluation
. Whenever the problem has a variational background and the eigenvalues are real this is the best method.
. One gets information about the eigenvalues that no other method provides.
. It can be easily combined with grid refinement and multilevel approaches.
. It is used in a huge number of applications with great success.
. For more information and applications see H. Voss's website.
Eigenvalue problems 150 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 151 / 219
Nonlinear Newton evp solver
Nonlinear evp F(λ)x = 0. Apply Newton to the function

  f_w(x, λ) = [F(λ)x; w^H x − 1] = 0.

The Newton system for λ_{k+1} = λ_k + µ_k and x_{k+1} = x_k + s_k is

  [F(λ_k) F′(λ_k)x_k; w^H 0] [s_k; µ_k] = − [F(λ_k)x_k; w^H x_k − 1]

or

  λ_{k+1} = λ_k − 1/(w^H F(λ_k)^{−1}F′(λ_k)x_k),
  x_{k+1} = (λ_k − λ_{k+1})F(λ_k)^{−1}F′(λ_k)x_k.
Eigenvalue problems 152 / 219
Difficulties
. In many applications we want all evs in a given set.
. How do we guarantee that we find all of them?
. Deflation of computed evs.
. Need to use sparse solvers.
. Need to get into convergence intervals for Newton.
. No global analysis and no easy-to-use industrial implementation.
Eigenvalue problems 153 / 219
Finding all evs in a region R of C
For the general case F(λ)x = 0 and most iterative methods it is an open problem to guarantee that we find all eigenvalues in a given set R ⊂ C.
. We can use Bendixson's theorem or Gersgorin type results to analyze the number of eigenvalues.
. The computation can in principle be done with any solver and many start points.
. We could use the sign function method (not for large problems) or the Cauchy integral theorem (Beyn 2009).
. Homotopy or path following seem to be the only option.
. None of these methods is really satisfactory.
Eigenvalue problems 154 / 219
Homotopy
. Replace F(λ) by P(λ) + g(t)Q(λ), where the problem P(λ)x = 0 is 'easy' and where g is a monotonically increasing function of t with g(0) = 0, g(1) = 1.
. Compute all the eigenvalues λ_i(0) of P in the given set R and possibly the associated eigenvectors.
. Large potential for parallelism.
. Follow the eigenvalue curves λ_i(t).
. Determine eigenvalues that leave R.
. Determine eigenvalues that come into R from outside.
. Determine bifurcation points.
. Use step size control to guarantee that no ev is missed.
. Use Newton's method for the fully nonlinear problem.
Eigenvalue problems 155 / 219
Difficulties
. If the 'hard part' is large, then evs move a lot.
. Small homotopy steps may be needed to track evs of the nonlinear problem.
. Many factorizations may be needed.
. Need to use out-of-core sparse solvers.
. Need to get into convergence intervals for Newton.
. Need to update search directions in a clever way to keep stepsizes small.
Eigenvalue problems 156 / 219
Nonlinear parametric evp
f (λ, α)x = 0, x ∈ Fn, λ ∈ F
where F is a field, typically F = R or F = C.
f : F× F` → Fk ,
and α denotes a set of ` parameters.
Eigenvalue problems 157 / 219
Example: Acoustics evp.
P(λ) := λ^2 [M_s 0; 0 M_f] + λ [D_s D_{sf}^T; D_{sf} D_f] + [K_s(λ) 0; 0 K_f],

with complex symmetric coefficients.
. M_s, M_f, K_f are real symm. pos. semidef. mass/stiffness matrices of structure and air; M_s is singular and diagonal, M_f is sparse. M_s is a factor 1000–10000 larger than M_f.
. K_s(ω) = K_s(ω)^T = K_1(ω) + ıK_2.
. D_s, D_f are real symmetric pos. semidef. damping matrices.
. D_{sf} is a real coupling matrix.
. Part of the matrices depend on parameters α.
. Compute all eigenvalues in a given region R.
. Project the problem into the subspace spanned by these eigenvectors (model reduction).
Eigenvalue problems 158 / 219
Model/modal reduction
. After the desired evs and deflating subspaces U = [u_1, ..., u_k] have been computed, the projected system

  U^T M(α)U z̈ + U^T D(α)U ż + U^T K(α)U z = U^T f

  is formed and optimization is done on this system.
. The original decoupled projection (fluid and structure separately) does not work.
. We really need nonlinear model reduction (open problem).
. We need to use the fact that only a small part of the system is changed in every optimization step.
. We need to integrate ev computation, gradient computation, discretization.
. An adaptive multilevel approach would be great (reduced order modeling).
Eigenvalue problems 159 / 219
Eigenvalue tracking
[Figure: eigenvalue tracking in the complex plane; axes roughly −200..100 (real part) and 0..600 (imaginary part), shifts marked 1, 2, 3.]
We use Krylov subspace methods with many different shifts: a typical trapezoidal region – within which all eigenvalues are sought – at the beginning and after three shifts have been processed.
Eigenvalue problems 160 / 219
Outline
1 Introduction
2 Resonances in rail tracks
3 Acoustic field computation
4 Brake squeal
5 Optimal control
6 Normal forms
7 Structured normal forms
8 Linearizations
9 Polynomial Evps with structure
10 Numerical methods for small problems
11 Minimax principle
12 Newton's method
13 Iterative projection methods
14 Application Brake squeal
15 Adaptive Finite Elements for evp
16 Automated multilevel substructuring
Eigenvalue problems 161 / 219
Projection methods, linear evps.
. For large sparse linear evps Ax = λx, iterative projection methods like the Lanczos, Arnoldi, rational Krylov or Jacobi–Davidson method are well established.
. Basic idea: Construction of a search space (typically a Krylov subspace) followed by projection into this subspace.
. This gives a small dense problem, handled by a dense solver, and the eigenvalues of the projected problem are used as approximations.
. Main features: Matrix factorizations are avoided as much as possible (except in the context of preconditioning), and the generation of the search space is usually done via an iterative procedure based on matrix vector products that can be cheaply obtained.
Eigenvalue problems 162 / 219
Basic types
Two basic types:
. Methods which expand the subspaces independently of the eigenpair of the projected problem and which use Krylov subspaces of A or (A − σI)^{−1} for some shift σ. These methods include the Arnoldi, Lanczos or rational Krylov method.
. Methods that aim at a particular eigenpair and choose the expansion q such that it has a high approximation potential for a desired eigenvalue/eigenvector or invariant subspace. An example for this approach is the Jacobi–Davidson method.
Krylov subspace methods
. For the Arnoldi and other Krylov subspace methods, the search space is a Krylov space
K_k(A, v_1) = span{v_1, Av_1, A²v_1, …, A^{k−1}v_1},
where v_1 is an appropriately chosen initial vector.
. Arnoldi produces an orthogonal basis V_k of K_k(A, v_1) such that the projected matrix H_k is upper Hessenberg and satisfies
A V_k = V_k H_k + f_k e_k^T,
where e_k ∈ R^k is the k-th unit vector and f_k is orthogonal to the columns of V_k, i.e. V_k^H f_k = 0.
. The orthogonality of V_k implies that V_k^H A V_k = H_k is the orthogonal projection of A to K_k(A, v_1).
Ritz pairs
. If (y, θ) is an eigenpair of the projected problem, and x = V_k y is the corresponding approximation to an eigenvector of Ax = λx (called a Ritz vector corresponding to the Ritz value θ), then the residual satisfies
r := Ax − θx = AV_k y − θV_k y = V_k H_k y + f_k e_k^H y − θV_k y = (e_k^H y) f_k.
. Hence, one obtains an error indicator ‖r‖ = |e_k^H y| · ‖f_k‖ for the eigenpair approximation (x, θ) without actually computing the Ritz vector x.
. If A is Hermitian then this is even an error bound.
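The Arnoldi relation and the cheap error indicator are easy to check numerically; a minimal NumPy sketch (random test matrix, one-pass Gram–Schmidt, no restarting — not the production variant discussed later):

```python
import numpy as np

def arnoldi(A, v1, k):
    """One-pass Arnoldi: returns orthonormal V (n x k), the projected
    upper Hessenberg H (k x k) and the residual f with
    A V = V H + f e_k^T and V^T f = 0."""
    n = A.shape[0]
    V = np.zeros((n, k))
    H = np.zeros((k, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        w = A @ V[:, j]
        h = V[:, :j + 1].T @ w            # Gram-Schmidt coefficients
        w = w - V[:, :j + 1] @ h          # orthogonalize against V_j
        H[:j + 1, j] = h
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    return V, H, w                        # w is the residual f_k

rng = np.random.default_rng(0)
n, k = 60, 8
A = rng.standard_normal((n, n))
V, H, f = arnoldi(A, rng.standard_normal(n), k)

ek = np.zeros(k); ek[-1] = 1.0
print(np.linalg.norm(A @ V - V @ H - np.outer(f, ek)))   # Arnoldi relation, ≈ 0

# error indicator ||A x - theta x|| = |e_k^H y| * ||f|| without forming x
theta, Y = np.linalg.eig(H)
for i in range(k):
    x = V @ Y[:, i]                                      # Ritz vector
    r = np.linalg.norm(A @ x - theta[i] * x)
    print(abs(r - abs(Y[-1, i]) * np.linalg.norm(f)))    # ≈ 0 for each pair
```

In practice one adds reorthogonalization, restarts, and shift-and-invert, as on the following slides.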
Evaluation
. The Arnoldi method together with its variants, such asshift-and-invert and implicit restart, is today a standard solver.
. It is implemented in the package ARPACK and the MATLABcommand eigs.
. It typically converges to the extreme eigenvalues first.
. If one is interested in eigenvalues in the interior of thespectrum, or in eigenvalues close to a given focal point τ , thenone can apply the method in a shift-and-invert fashion, i.e. tothe matrix (A− τ I)−1 or an approximation of it.
. In this case one has to determine a factorization of A− τ I,which, however, may be prohibitive for very large problems.
. One may use a preconditioned iterative solver here.
Jacobi-Davidson method
An alternative is the Jacobi–Davidson method.
. Let (x, θ) be an approximation to an eigenpair obtained by a projection method with subspace V.
. We assume that ‖x‖ = 1, θ = x^H Ax and r := Ax − θx ⊥ x.
. Then the most desirable orthogonal correction z solves the equation
A(x + z) = λ(x + z), z ⊥ x.
. As z ⊥ x, the operator A can be restricted to the subspace orthogonal to x, yielding Ã := (I − xx^H)A(I − xx^H), and from θ = x^H Ax it follows that
A = Ã + Axx^H + xx^H A − θxx^H.
Correction equation
. Hence, (Ã − λI)z = −r + (λ − θ − x^H Az)x.
. Since both the left hand side and r are orthogonal to x, it follows that the factor λ − θ − x^H Az must vanish, and therefore the correction z has to satisfy (Ã − λI)z = −r.
. Since λ is unknown, it is replaced by the Ritz approximation θ, and one ends up with the correction equation
(I − xx^H)(A − θI)(I − xx^H)z = −r.
. The expanded space [V, z] for the Jacobi–Davidson method contains u = (A − θI)^{−1}x, obtained by one step of inverse iteration.
. One can expect similar approximation properties, i.e. quadratic or even cubic convergence if the problem is Hermitian.
. The Jacobi–Davidson method aims at a particular eigenvalue (close to the shift θ).
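A dense toy version of one such correction step for a Hermitian matrix, with the projected equation solved exactly via least squares (in practice a few steps of a preconditioned Krylov solver are used instead); all sizes and matrices are made up:

```python
import numpy as np

# One exact Jacobi-Davidson-type correction step for a Hermitian A:
#   (I - x x^H)(A - theta I)(I - x x^H) z = -r,  z orthogonal to x
rng = np.random.default_rng(2)
n = 100
A = rng.standard_normal((n, n)); A = (A + A.T) / 2

x = rng.standard_normal(n); x /= np.linalg.norm(x)
for _ in range(15):
    theta = x @ A @ x                       # Rayleigh quotient
    r = A @ x - theta * x                   # residual, orthogonal to x
    P = np.eye(n) - np.outer(x, x)          # projector onto the complement of x
    z, *_ = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)
    x = x + P @ z                           # expand, keeping z orthogonal to x
    x /= np.linalg.norm(x)

print(np.linalg.norm(A @ x - (x @ A @ x) * x))   # converged to an eigenpair
```

With the exact solve, x + z is proportional to the inverse iteration iterate (A − θI)^{−1}x, which is why the convergence is so fast in this Hermitian toy case.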
Comparison Arnoldi, JD
. Both the shift-and-invert Arnoldi method and the Jacobi-Davidson method have to solve a large linear system.
. In the Arnoldi method this system in general needs to besolved very accurately to get fast convergence.
. In the Jacobi–Davidson method it suffices to solve this systemapproximately to maintain fast convergence.
. Typically only a small number of steps of a preconditionediterative method are sufficient to obtain a good expansion z forthe search space V .
. Implementations of JD in FORTRAN and MATLAB can bedownloaded fromhttp://www.math.ruu.nl/people/sleijpen.
Struct. proj. methods
. Structure preserving linearizations plus structure preservingArnoldi methods for linear problems are available for manystructures. (Even, palindromic, . . . ).
. The use of structure saves computing time and one gets moreaccurate results.
. One has to design specific spaces and specific projections.Apel/M./Watkins 2002, M./Schröder/Simoncini 2009.
. Many open problems.
Proj. methods for nonl. evps
Consider F(λ)x = 0.
. Expand the search space by directions that have a high approximation potential for the next desired eigenvector.
. Assume that V is an orthonormal basis of the current search space.
. Let (θ, y) be a solution of the projected problem V^H F(λ)V y = 0, and let x = Vy be the Ritz vector.
. Two candidates for expanding V: v = x − F(σ)^{−1}F(θ)x, motivated by residual inverse iteration, and v̂ = F(θ)^{−1}F′(θ)x, corresponding to inverse iteration.
. Expanding the search space V by v results in Arnoldi type methods.
. Expanding it by v̂ requires the solution of a large linear system in every iteration step. This can be avoided by a Jacobi–Davidson approach.
Nonlinear Arnoldi
. We consider the expansion of V by v = x − F(σ)^{−1}F(θ)x, where σ is a fixed shift (not too far away from the focal point).
. The new search direction is orthonormalized against the previous ansatz vectors.
. Since the Ritz vector x is contained in the span of V, one may choose the new direction v = F(σ)^{−1}F(θ)x as well.
. For the linear problem F(λ) = A_0 + λA_1 this is exactly the Cayley transformation with pole σ and zero θ:
(A_0 + σA_1)^{−1}(A_0 + θA_1) = I + (θ − σ)(A_0 + σA_1)^{−1}A_1.
. Since Krylov spaces are shift-invariant, the resulting projection method expanding V by v is the shift-and-invert Arnoldi method.
. If it is too expensive to solve the linear system F(σ)v = F(θ)x for v, one may choose v = MF(θ)x with M ≈ F(σ)^{−1}.
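A sketch of the expansion x ← x − F(σ)^{−1}F(θ)x as a pure residual inverse iteration for a quadratic F(λ) = λ²M + λC + K (matrices made up; θ is updated here from the scalar equation x^H F(θ)x = 0, and a dense linearization is used only to place the shift and to check the result):

```python
import numpy as np

# residual inverse iteration x <- x - F(sigma)^{-1} F(theta) x for
# F(lam) = lam^2 M + lam C + K, theta the root of x^H F(theta) x = 0
# nearest the fixed shift sigma
rng = np.random.default_rng(3)
n = 40
M = np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
R = rng.standard_normal((n, n)); K = R @ R.T + n * np.eye(n)
F = lambda lam: lam**2 * M + lam * C + K

# reference eigenvalue from a dense linearization, only for shift and check
evs = np.linalg.eigvals(np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]]))
lam_star = evs[np.argmin(np.abs(evs - 7j))]
sigma = lam_star + 0.01                     # fixed shift near an eigenvalue

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(25):
    a, b, c = x.conj() @ M @ x, x.conj() @ C @ x, x.conj() @ K @ x
    theta = min(np.roots([a, b, c]), key=lambda t: abs(t - sigma))
    x = x - np.linalg.solve(F(sigma), F(theta) @ x)
    x /= np.linalg.norm(x)

print(abs(theta - lam_star))                # converged to the nearby eigenvalue
```

In the nonlinear Arnoldi method these directions are accumulated in V instead of being iterated on a single vector.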
Nonlinear Arnoldi, Ruhe 1998
1: start with an initial shift σ and an initial basis V, V^H V = I
2: determine a preconditioner M ≈ F(σ)^{−1}
3: for m = 1, 2, …, number of wanted eigenvalues do
4:   compute ev µ and evec y of F_V(µ)y := V^H F(µ)V y = 0
5:   determine Ritz vector u = Vy and residual r = F(µ)u
6:   if ‖r‖/‖u‖ < ε then
7:     accept approximate eigenpair λ_m = µ, x_m = u
8:     if m == number of desired eigenvalues then STOP end if
9:     choose new shift σ and precond. M ≈ F(σ)^{−1} if indicated
10:    restart if necessary
11:    choose approximations µ and u
12:    determine residual r = F(µ)u
13:  end if
14:  v = Mr
15:  v = v − VV^H v, v = v/‖v‖, V = [V, v]
16:  reorthogonalize if necessary
17: end for
Discussion
. In Step 1 any pre-information such as known approximateeigenvectors should be used.
. If no information on eigenvectors is at hand, and one isinterested in evs near τ ∈ D, choose an initial vector atrandom, execute a few Arnoldi steps for the linear evpF (τ)u = θu or F (τ)u = θF ′(τ)u, and choose V byorthogonalizing evecs. Starting with a random vector withoutthis preprocessing does not lead to convergence.
. The preconditioner in Step 2 should be chosen on the basis ofthe underlying problem. If this is not available, then use full orincomplete sparse LU decompositions of F (σ).
. Update the preconditioner if the convergence measured bythe quotient of the last two residual norms before convergencehas become too slow.
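An incomplete sparse LU factorization of F(σ) as the preconditioner M ≈ F(σ)^{−1} from Step 2 can be sketched with SciPy's `spilu` (the sparse quadratic family here is made up):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

# step 2: incomplete LU of F(sigma) as the preconditioner M ~ F(sigma)^{-1}
n = 400
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
C = 0.01 * sp.eye(n)
Mm = sp.eye(n)
F = lambda lam: (lam**2 * Mm + lam * C + K).tocsc()

sigma = 0.5 + 0.5j
prec = spilu(F(sigma), drop_tol=1e-4, fill_factor=10)

r = np.random.default_rng(4).standard_normal(n)     # a residual vector
v = prec.solve(r)                                   # step 14: v = M r
print(np.linalg.norm(F(sigma) @ v - r) / np.linalg.norm(r))  # small => good M
```

Tightening `drop_tol` trades memory and factorization time for preconditioner quality; if the quotient of successive residual norms stagnates, the shift and the factorization should be refreshed as stated above.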
Nonlinear Jacobi–Davidson
. Suppose that the columns of V ⊂ C^n form an orthonormal basis of the current search space, and let (x, θ) be a Ritz pair, i.e. V^H F(θ)V y = 0, x = Vy.
. Consider the correction equation
(I − p x^H/(x^H p)) F(θ) (I − x x^H/(x^H x)) z = −r, z ⊥ x,
where p := F′(θ)x and r := F(θ)x.
. Rewrite this as F(θ)z − αp = −r, where α has to be chosen such that z ⊥ x.
. Solving for z we obtain
z = −x + αF(θ)^{−1}p = −x + αF(θ)^{−1}F′(θ)x,
and x = Vy yields that ẑ := F(θ)^{−1}F′(θ)x ∈ span[V, z].
Discussion
. The new search space span[V, z] contains the vector obtained by one step of inverse iteration with shift θ and initial vector x.
. We expect quadratic or even cubic convergence if the correction equation is solved exactly.
. Usually a few steps of a Krylov solver with an appropriate preconditioner suffice to obtain a good expansion direction.
. If a Krylov solver is used and the initial approximation is orthogonal to x, then all iterates are orthogonal to x as well.
. The operator F(θ) is restricted to map the subspace x^⊥ to (F′(θ)x)^⊥.
. If K ≈ F(θ) is a preconditioner of F(θ), then a preconditioner for the iterative solver is
K̃ := (I − p x^H/(x^H p)) K (I − x x^H/(x^H x)).
Template
1: start with an initial shift σ and an initial basis V, V^H V = I
2: determine a preconditioner M ≈ F(σ)^{−1}
3: for m = 1, 2, …, number of wanted eigenvalues do
4:   compute ev µ and evec y of F_V(µ)y := V^H F(µ)V y = 0
5:   determine Ritz vector u = Vy and residual r = F(µ)u
6:   if ‖r‖/‖u‖ < ε then
7:     accept approximate eigenpair λ_m = µ, x_m = u
8:     if m == number of desired evs then STOP end if
9:     choose new shift σ and a precond. M ≈ F(σ)^{−1}
10:    restart if necessary
11:    choose approx. µ and u
12:    determine residual r = F(µ)u
13:  end if
14:  find an approximate solution t of (I − F′(µ)u u^H/(u^H F′(µ)u)) F(µ) (I − u u^H/(u^H u)) t = −r
15:  v = t − VV^H t, v = v/‖v‖, V = [V, v]
16:  reorthogonalize if necessary
17: end for
Rational Krylov
. Linearize the nonlinear family F(λ) by Lagrange interpolation between two points µ and σ:
F(λ) = (λ − µ)/(σ − µ) F(σ) + (λ − σ)/(µ − σ) F(µ) + higher order terms.
. Keep σ fixed for several steps, iterate on µ, neglect the remainder in the Lagrange interpolation, and multiply by F(σ)^{−1} from the left:
F(σ)^{−1}F(λ_{j−1})w = θw with θ = (λ_j − λ_{j−1})/(λ_j − σ).
. This predicts a singularity at
λ_j = λ_{j−1} + θ/(1 − θ) (λ_{j−1} − σ).
. For large and sparse matrices combine this with the linear Arnoldi process.
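For a linear family the prediction is exact: one update step lands on an eigenvalue of the pencil, which can be verified numerically (random illustrative matrices):

```python
import numpy as np
from scipy.linalg import eig

# For linear F(lam) = A0 + lam*A1 the update
#   lam_new = lam_old + theta/(1 - theta)*(lam_old - sigma),
# with theta an eigenvalue of F(sigma)^{-1} F(lam_old), is exact.
rng = np.random.default_rng(5)
n = 30
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))

sigma, lam0 = 0.3, 1.7
thetas = np.linalg.eigvals(np.linalg.solve(A0 + sigma * A1, A0 + lam0 * A1))
lam_new = lam0 + thetas / (1 - thetas) * (lam0 - sigma)

evs = eig(A0, -A1, right=False)              # roots of det(A0 + lam*A1) = 0
err = max(np.min(np.abs(l - evs)) / (1 + abs(l)) for l in lam_new)
print(err)  # ≈ 0: every updated value is an eigenvalue of the pencil
```

For genuinely nonlinear F the neglected interpolation remainder makes this only a prediction, which the inner iteration on the following slides refines.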
Rational Krylov
. After j steps, approximate evs λ_1, …, λ_j, an orthonormal V_j = [v_1, …, v_j], and an upper Hessenberg H_{j,j−1} ∈ C^{j,j−1} are generated, with
F(σ)^{−1}F(λ_{j−1})V_{j−1} = V_j H_{j,j−1}.
. Updating the matrix H_{j,j−1} according to the linear theory yields
H_{j+1,j} = [ H_{j,j−1}  k_j ; 0  ‖r_⊥‖ ],
where k_j = V_j^H r_j, r_j = F(λ_j)v_j, and r_⊥ = r_j − V_j V_j^H r_j.
. Use Lagrange interpolation to satisfy the next Arnoldi relation via
G(λ_j) ≈ (λ_j − σ)/(λ_{j−1} − σ) G(λ_{j−1}) − (λ_j − λ_{j−1})/(λ_{j−1} − σ) I = 1/(1 − θ) G(λ_{j−1}) − θ/(1 − θ) I,
where G(λ) := F(σ)^{−1}F(λ), and update H according to
H_{j+1,j} = [ 1/(1−θ) H_{j,j−1} − θ/(1−θ) I_{j,j−1}   k_j ; 0  ‖r_⊥‖ ].
Discussion
. This first version of the rational Krylov method is not very efficient.
. Ruhe 2000 suggested to modify λ and H in an inner iteration until the residual r = F(σ)^{−1}F(λ)V_j s is orthogonal to V_j.
. Expand the search space only after the inner iteration has converged.
Rational Krylov algorithm
1: V = [v_1] with ‖v_1‖ = 1, initial λ, σ; j = 1; h_j = 0; s = e_j; x = v_j
2: compute r = F(σ)^{−1}F(λ)x and k_j = V_j^H r
3: while ‖k_j‖ > ResTol do
4:   orthogonalize r = r − V_j k_j
5:   set h_j = h_j + k_j s_j^{−1}
6:   θ = min eig H_{j,j} with corresponding eigenvector s
7:   x = V_j s
8:   update λ = λ + θ/(1−θ) (λ − σ)
9:   update H_{j,j} = 1/(1−θ) H_{j,j} − θ/(1−θ) I
10:  compute r = F(σ)^{−1}F(λ)x and k_j = V_j^H r
11: end while
12: compute h_{j+1,j} = ‖r‖
13: if |h_{j+1,j} s_j| > EigTol then
14:   v_{j+1} = r/h_{j+1,j}; j = j + 1; GOTO 2
15: end if
16: accept eigenvalue λ_i = λ and eigenvector x_i = x
17: if more eigenvalues are wanted, choose next θ and s, and GOTO 8
Variation of Rat. Krylov
1: start with initial vector V = [v_1] with ‖v_1‖ = 1, initial λ and σ
2: for j = 1, 2, … until convergence do
3:   solve the projected eigenproblem V^H F(σ)^{−1}F(λ)V s = 0 for (λ, s) by inner iteration
4:   compute Ritz vector x = Vs and residual r = F(σ)^{−1}F(λ)x
5:   orthogonalize r = r − VV^H r
6:   expand search space V = [V, r/‖r‖]
7: end for
Spectral transformation
Consider the full problem P_ω(λ)x = 0.
. Set λ_τ(ω) = λ(ω) − τ, where τ is such that det(P_ω(τ)) ≠ 0.
. New parametric QEP
P_{ω,τ}(λ(ω))x(ω) = (λ_τ(ω)²M_τ + λ_τ(ω)C_τ(ω) + K_τ(ω))x(ω) = 0,
where M_τ = M, C_τ = 2τM + C and K_τ = τ²M + τC + K is nonsingular.
. The shift point τ is chosen in the right half plane, ideally near the expected eigenvalue location.
. Consider the reverse polynomial; then evs near τ become large in modulus, while evs far away from τ become small.
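A quick numerical check of the shifted coefficients M_τ, C_τ, K_τ (random dense matrices, purely illustrative):

```python
import numpy as np

# check: with lam = lam_tau + tau, lam^2 M + lam C + K equals
# lam_tau^2 M_tau + lam_tau C_tau + K_tau for the shifted coefficients
rng = np.random.default_rng(6)
n = 20
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

tau = 0.7 + 2.1j
Mt, Ct, Kt = M, 2 * tau * M + C, tau**2 * M + tau * C + K

lam = 0.4 - 1.3j
lt = lam - tau
P = lam**2 * M + lam * C + K
Pt = lt**2 * Mt + lt * Ct + Kt
print(np.linalg.norm(P - Pt))  # ≈ 0
```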
Linearization, first order form.
We use the classical companion linearization (first order form)
A_τ(ω)v(ω) = µ_τ B_τ(ω)v(ω)
with
[ K_τ(ω) 0 ; 0 I_n ] [ v(ω) ; µ_τ(ω)v(ω) ] = µ_τ(ω) [ −C_τ(ω) −M_τ ; I_n 0 ] [ v(ω) ; µ_τ(ω)v(ω) ].
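A sketch of this companion linearization for a constant quadratic pencil (random matrices standing in for M_τ, C_τ(ω), K_τ(ω) at a fixed ω):

```python
import numpy as np
from scipy.linalg import eig

# companion pencil: [K 0; 0 I][v; mu v] = mu [-C -M; I 0][v; mu v]
rng = np.random.default_rng(7)
n = 15
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[K, Z], [Z, I]])
B = np.block([[-C, -M], [I, Z]])

vals, vecs = eig(A, B)
res = []
for mu, w in zip(vals, vecs.T):
    if np.isfinite(mu):
        v = w[:n]                       # first block is the QEP eigenvector
        res.append(np.linalg.norm((mu**2 * M + mu * C + K) @ v)
                   / np.linalg.norm(v))
print(max(res))  # ≈ 0: each pencil eigenpair solves the quadratic problem
```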
Shift and invert Arnoldi
. Compute ev and evec approximations near the shift τ via the shift-and-invert Arnoldi method.
. Given v_0 ∈ C^n and W ∈ C^{n×n}, the Krylov subspace of C^n of order k associated with W is
K_k(W, v_0) = span{v_0, Wv_0, W²v_0, …, W^{k−1}v_0}.
. Arnoldi obtains an orthonormal basis V_k of this space and
W V_k = V_k H_k + f e_k^*.
. The columns of V_k approximate a k-dimensional invariant subspace of W.
. Evs of H_k approximate evs of W associated to V_k.
. Apply with shift τ and frequency ω to W = B_τ(ω)^{−1}A_τ(ω). Per step we multiply with A_τ(ω) and solve a system with B_τ(ω).
SVD projection
. Construct a measurement matrix V ∈ R^{n,km} containing 'unstable' evecs for a set of ω_i:
V = [V(ω_1), V(ω_2), V(ω_3), …, V(ω_k)].
. Perform a (partial) SVD
V = UΣZ^H = [u_1, u_2, …, u_{km}] diag(σ_1, σ_2, …, σ_{km}) [z_1, z_2, …, z_{km}]^H
with U, Z unitary.
Compression
. Use the approximation
V ≈ [u_1, u_2, …, u_d] diag(σ_1, σ_2, …, σ_d) [z_1, z_2, …, z_d]^H
obtained by deleting the small singular values σ_{d+1}, σ_{d+2}, …, σ_{km}. (Actually these are not even computed.)
. Choose Q = [u_1, u_2, …, u_d] to project P_ω(λ) or the dynamical system.
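The compression step in NumPy (a synthetic snapshot matrix with built-in low-rank structure; all dimensions are made up):

```python
import numpy as np

# POD compression of a snapshot matrix of eigenvectors
rng = np.random.default_rng(8)
n, k, m = 300, 10, 6                    # dof, omega samples, vectors each

basis = np.linalg.qr(rng.standard_normal((n, 8)))[0]   # hidden 8-dim structure
V = np.column_stack([basis @ rng.standard_normal((8, m)) for _ in range(k)])
V += 1e-8 * rng.standard_normal(V.shape)               # small noise

U, s, Zh = np.linalg.svd(V, full_matrices=False)
d = int(np.sum(s > 1e-6 * s[0]))        # truncate at the singular value gap
Q = U[:, :d]                            # projection basis

print(d)                                                      # recovers 8
print(np.linalg.norm(V - Q @ (Q.T @ V)) / np.linalg.norm(V))  # ≈ 0
```

In the industrial setting a partial (truncated) SVD is used, so the discarded σ's are indeed never computed.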
Assessing ’accuracy of evs’
Do we believe we have a good space?
. Forward error: ∆_f = |λ_exact − λ_computed|.
. Backward error: the smallest (in norm) perturbation ∆_b to M, C, K such that (λ, v) satisfies the QEVP defined by the perturbed matrices M̃, C̃, K̃.
. Computation of the backward error: ∆_b(λ, v) = ‖(λ²M + λC + K)v‖ / ((|λ|²‖M‖ + |λ|‖C‖ + ‖K‖)‖v‖).
. The pseudospectrum gives the level curves of ∆_b(λ).
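The backward error formula can be evaluated directly; a small sketch (random matrices, with the 'exact' eigenpair taken from a companion linearization):

```python
import numpy as np
from scipy.linalg import eig

def qep_backward_error(lam, v, M, C, K):
    """Normwise backward error of an approximate eigenpair (lam, v) of
    (lam^2 M + lam C + K) v = 0, with spectral norms as weights."""
    r = np.linalg.norm((lam**2 * M + lam * C + K) @ v)
    scale = (abs(lam)**2 * np.linalg.norm(M, 2)
             + abs(lam) * np.linalg.norm(C, 2) + np.linalg.norm(K, 2))
    return r / (scale * np.linalg.norm(v))

rng = np.random.default_rng(9)
n = 10
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

# an eigenpair computed via a companion linearization: tiny backward error
I, Z = np.eye(n), np.zeros((n, n))
vals, vecs = eig(np.block([[K, Z], [Z, I]]), np.block([[-C, -M], [I, Z]]))
print(qep_backward_error(vals[0], vecs[:n, 0], M, C, K))        # ≈ eps
print(qep_backward_error(vals[0] + 1e-6, vecs[:n, 0], M, C, K)) # much larger
```

Evaluating ∆_b on a grid of λ values and drawing its level curves gives exactly the pseudospectrum plots shown on the next slides.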
Pseudospectrum of a toy brake model
Brake model with 5000 dof; one of the springs had stiffness 10^18.
Pseudospectrum of a toy brake model
Brake model corrected by modeling the high stiffness as a rigid link.
Results with new POD method
Industrial model, 1 million dof.
. Solution for every ω:
I Solution with 300 dimensional TRAD subspace ∼ 30 sec
I Solution with 100 dimensional POD subspace ∼ 10 sec
Summary
. Many methods for (nonlinear) evps.
. Careful analysis needed before starting.
. Modeling with very stiff springs is not advisable.
. New POD approach captures modal information better than the traditional one, but is slower.
. Current black box methods are not efficient (in particular those in commercial codes).
. Discrete FE and quasi-uniform grids followed by expensive model reduction is really a waste.
. Combine FE modeling and eigenvalue computation.
. Can we get error estimates and adaptivity? (AFEM, AMLS)
. Can we do better than uniform meshes and brute force linear algebra?
Adaptive Finite Element Method
. Adaptive Finite Element methods refine the mesh where necessary, and coarsen it where the solution is well represented.
. They use a priori and a posteriori error estimators to getinformation about the discretization error.
. They are well established for PDE boundary value problems.
. But here we want to use them for PDE eigenvalue problems,which is much harder.
. And in the brake problem we do not have a PDE.
. Furthermore we have a parametric problem.
Adaptive FEM
Solve→ Estimate→ Mark→ Refine
Model problem: Elliptic PDE evp
Consider a model problem like the disk brake without damping, gyroscopic and circulatory terms and with a reasonable geometry:
−∆u = λu in Ω,
u = 0 on ∂Ω.
This is just the traditional approach that is used in industry. (Note the −λ² in the brake problem.)
Weak formulation
Weak formulation:
Determine an ev/e.-function pair (λ, u) ∈ R × V := R × H¹(Ω;R) with b(u, u) = 1 and
a(u, v) = λb(u, v) for all v ∈ V,
where the bilinear forms a(·,·) and b(·,·) are defined by
a(u, v) := ∫_Ω ∇u · ∇v dx,  b(u, v) := ∫_Ω uv dx  for u, v ∈ V.
Induced norms |||·||| := |·|_{H¹(Ω)} on V and ‖·‖ := ‖·‖_{L²(Ω)} on L²(Ω).
Discrete/algebraic evp
Determine an ev./e.-function pair (λ_ℓ, u_ℓ) ∈ R × V_ℓ with b(u_ℓ, u_ℓ) = 1 and
a(u_ℓ, v_ℓ) = λ_ℓ b(u_ℓ, v_ℓ) for all v_ℓ ∈ V_ℓ.
Use a coordinate representation to get the finite-dimensional generalized evp
A_ℓ x_ℓ = λ_ℓ B_ℓ x_ℓ
with stiffness matrix A_ℓ = [a(ϕ_i, ϕ_j)]_{i,j=1,…,N_ℓ} and mass matrix B_ℓ = [b(ϕ_i, ϕ_j)]_{i,j=1,…,N_ℓ} in the nodal basis V_ℓ = span{ϕ_1, …, ϕ_{N_ℓ}}.
Discrete eigenvector: x_ℓ =: [x_{ℓ,1}, …, x_{ℓ,N_ℓ}]^T.
Approximated eigenfunction:
u_ℓ = Σ_{k=1}^{N_ℓ} x_{ℓ,k} ϕ_k ∈ V_ℓ.
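A minimal 1D analogue of the assembled pencil A_ℓ x_ℓ = λ_ℓ B_ℓ x_ℓ (assuming Ω = (0,1) and P1 elements on a uniform mesh, which is not the 2D setting of the slides):

```python
import numpy as np
from scipy.linalg import eigh

# P1 finite elements for -u'' = lam*u on (0,1), u(0) = u(1) = 0:
# stiffness A = (1/h) tridiag(-1, 2, -1), mass B = (h/6) tridiag(1, 4, 1)
N = 200
h = 1.0 / (N + 1)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
B = (4 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) * h / 6

lam = eigh(A, B, eigvals_only=True)          # generalized evp A x = lam B x
exact = (np.arange(1, N + 1) * np.pi) ** 2
print(lam[:3])                               # ≈ pi^2, 4 pi^2, 9 pi^2
print(np.max(np.abs(lam[:3] - exact[:3]) / exact[:3]))
```

As a conforming Rayleigh–Ritz discretization, the computed eigenvalues are upper bounds for the exact ones.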
Error estimation
This approach includes several errors:
. Model error (PDE model vs. physics)
. Discretization error (finite-dimensional subspace)
. Error in the eigenvalue solver (iterative method)
. Roundoff errors in finite precision arithmetic
An error estimator η_ℓ is called efficient and reliable if there exist mesh-size independent constants C_eff, C_rel such that
C_eff η_ℓ ≤ |||u − u_ℓ||| ≤ C_rel η_ℓ.
A posteriori error estimate
Estimate the error a posteriori via
|λ − λ_ℓ| + |||u − u_ℓ|||² ≲ η_ℓ² := |||u_{ℓ−1} − u_ℓ|||².
Here ≲ denotes an inequality that holds up to a multiplicative constant.
A posteriori error estimators for the Laplace eigenvalue problem: Grubisic/Ovall 2009, M./Miedlar 2011, Neymeyr 2002.
AFEMLA
M./Miedlar 2011
. Compute an approximate eigenpair (λ_H, x_H) on the coarse mesh,
. use an iterative solver, i.e. a Krylov subspace method,
. but do not solve very accurately: stop after a few steps or when the tolerance tol is reached.
. Balance the residual vector and the error estimate Miedlar 2011.
[Figure: the standard AFEM loop and the AFEMLA loop side by side.]
Conv. history AFEMLA
Conv. first 3 evs, L-shape domain.
Intermediate Conclusion
. For purely elliptic problems we can compute evs and efunctions very efficiently.
. Can be used to compute the subspace for the traditional approach.
. We have a priori/a posteriori error estimates which allow us to adapt the mesh to the solution behavior.
. With the AFEMLA approach we can even work in a purely algebraic way if the underlying PDE is not available.
. It works also for several evs at a time (invariant subspaces).
. Proof of convergence M./Miedlar 2011 if the saturation property holds; proof Carstensen/Gedicke/M./Miedlar 2013.
. So we can do the traditional approach also with adaptivity and tune in to the dominant evs.
. But we want this for the full model.
Non-selfadjoint problems
. Can we modify ideas for general problem?
. We need to deal with left and right evecs, complex evs, Jordanblocks.
. What are the right spaces and norms?
. Let us bring the nonsymmetry in via homotopy.
H(t) = (1− t)L0 + tL1 for t ∈ [0,1],
where L_0 u := −∆u.
Discrete homotopy for the model eigenvalue problem:
H_ℓ(t) = (1 − t)A_ℓ + t(A_ℓ + C_ℓ) = A_ℓ + tC_ℓ.
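A toy version of the discrete homotopy H_ℓ(t) = A_ℓ + tC_ℓ (1D Laplacian plus a small skew convection-like term, both made up), scanning the eigenvalue of smallest real part along the path:

```python
import numpy as np

# homotopy H(t) = A + t*C: symmetric Laplacian toward a (mildly)
# nonsymmetric convection-diffusion matrix
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
C = 0.05 * (np.eye(n, k=1) - np.eye(n, k=-1))          # skew convection term

path = []
for t in np.linspace(0.0, 1.0, 11):
    evs = np.linalg.eigvals(A + t * C)
    path.append(evs[np.argmin(evs.real)])

print(path[0].real, path[-1].real)   # the eigenvalue moves continuously in t
```

In the adaptive algorithms below, such a path is followed with stepsize control instead of a fixed grid in t, and the full eigensolve is replaced by an iterative update.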
A non-self-adjoint model problem
Carstensen/Gedicke/M./Miedlar 2012
Convection-diffusion eigenvalue problem:
−∆u + γ · ∇u = λu in Ω and u = 0 on ∂Ω.
Discrete weak primal and dual problems:
a(u_ℓ, v_ℓ) + c(u_ℓ, v_ℓ) = λ_ℓ b(u_ℓ, v_ℓ) for all v_ℓ ∈ V_ℓ,
a(w_ℓ, u*_ℓ) + c(w_ℓ, u*_ℓ) = λ*_ℓ b(w_ℓ, u*_ℓ) for all w_ℓ ∈ V_ℓ.
Generalized algebraic eigenvalue problem:
(A_ℓ + C_ℓ)u_ℓ = λ_ℓ B_ℓ u_ℓ and (u*_ℓ)^H(A_ℓ + C_ℓ) = λ*_ℓ (u*_ℓ)^H B_ℓ.
The ev of smallest real part is simple and well separated Evans '00.
A posteriori error estimator
Theorem (Carstensen/Gedicke/M./Miedlar 2012)
For the model problem, the difference between the approximate ev λ_ℓ(t) in the homotopy H_ℓ(t) and the ev λ(1) of the original problem can be estimated via
|λ(1) − λ_ℓ(t)| ≲ ν(λ_ℓ(t), u_ℓ(t), u*_ℓ(t)) + η²(λ_ℓ(t), u_ℓ(t), u*_ℓ(t)) + µ²(λ_ℓ(t), u_ℓ(t), u*_ℓ(t))
in terms of
ν(λ_ℓ(t), u_ℓ(t), u*_ℓ(t)) := (1 − t)‖γ‖_∞ (|||u_ℓ(t)||| + |||u*_ℓ(t)|||) + (1 − t)‖γ‖_∞ (η(λ_ℓ(t), u_ℓ(t), u*_ℓ(t)) + µ(λ_ℓ(t), u_ℓ(t), u*_ℓ(t))).
Adaptive homotopy algorithms
Algorithm 1: balances the homotopy, discretization and iteration errors, but uses a fixed stepsize in the homotopy.
Algorithm 2: adaptivity in homotopy and iteration via stepsize control; the discretization error is not decreased.
Algorithm 3: adaptivity in the homotopy error, the discretization error and the iteration error, including stepsize control.
Figure: Conv. history of Algorithm 1, 2 and 3 with respect to #DOF.
Figure: Conv. history of Algorithm 1, 2 and 3 with respect to CPU time.
Intermediate Conclusions
. Extension of backward error analysis to PDE case Miedlar2011/2014
. Error estimates for hp-finite elements for non-self-adjoint PDEevps Giani/Grubisic/Miedlar/Ovall 2014
. Multiple evs in the self-adjoint case Gallistl 2014
. No results on multiple, complex evs, Jordan blocks innon-self-adjoint case.
. Highly oscillatory eigenfunctions can only be captured withfine grids.
. Can we enrich the ansatz space with these eigenfunctions?
AMLS
Compute the smallest evs of the self-adjoint evp (λM − K)x = 0 with M, K pos. def., as in the traditional approach. Bennighof/Lehoucq 2004
. Use a symmetric reordering of the matrix to block form, or use directly a domain decomposition partition, (λM − K)x = 0 with structure.
. Compute the block Cholesky factorization M = LDL^T and form K̃ = L^{−1}K L^{−T}.
. Compute the smallest evs and evecs of the 'substructure' evps (λD_ii − K̃_ii)x_i and project the large problem (modal truncation).
. Solve the projected evp.
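The congruence and truncation steps can be sketched for two substructures (made-up tridiagonal mass and stiffness matrices); the congruence preserves the eigenvalues exactly, and the projected smallest ev is an upper bound by Rayleigh–Ritz:

```python
import numpy as np
from scipy.linalg import eigh

# block LDL^T congruence of AMLS for two substructures:
# M = L D L^T with block-diagonal D, Ktil = L^{-1} K L^{-T};
# the pencil (Ktil, D) is congruent to (K, M) -> same eigenvalues
n, n1 = 80, 40
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)            # stiffness
M = (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0    # mass

M11, M12 = M[:n1, :n1], M[:n1, n1:]
L = np.eye(n)
L[n1:, :n1] = np.linalg.solve(M11, M12).T    # block factor M21 M11^{-1}
D = M.copy()
D[n1:, n1:] -= M[n1:, :n1] @ np.linalg.solve(M11, M12)  # Schur complement
D[:n1, n1:] = 0.0
D[n1:, :n1] = 0.0

Linv = np.linalg.inv(L)
Ktil = Linv @ K @ Linv.T
ev1 = eigh(K, M, eigvals_only=True)
ev2 = eigh(Ktil, D, eigvals_only=True)
print(np.max(np.abs(ev1 - ev2)))             # ≈ 0 (congruence)

# modal truncation: keep p substructure modes of (Ktil_ii, D_ii) each
p = 12
Q = np.zeros((n, 2 * p))
for i, sl in enumerate([slice(0, n1), slice(n1, n)]):
    _, X = eigh(Ktil[sl, sl], D[sl, sl])
    Q[sl, i * p:(i + 1) * p] = X[:, :p]
lam = np.sort(eigh(Q.T @ Ktil @ Q, Q.T @ D @ Q, eigvals_only=True))
print(lam[0], ev1[0])     # projected smallest ev vs exact smallest ev
```

Real AMLS uses many substructure levels and sparse factorizations; this flat two-block version only illustrates the algebra of the slide.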
Analysis of AMLS
. This produces locally global (spectral) ansatz functions insubstructure.
. This is a domain decomposition approach, where efunctionsare used in substructures.
. Substructure efunctions are sparsely represented in FE basis.
. Analysis only for self-adjoint case and real simple evs.
. Works extremely well for mechanical structures with littledamping.
. Does not work for brake problem.
Conclusions
. Eigenvalue problems are everywhere.
. Although it is taught in the first semester, it is a widely open area of mathematics.
. Modeling, analysis and numerics must go hand in hand.
. How to bring physical intuition and mathematical intuitionbetter together?
. There are many things to do.
References
. T. Betcke, N.J. Higham, V. M., C. Schröder, and F. Tisseur. NLEVP: A collection of nonlinear eigenvalue problems. ACM Transactions on Mathematical Software (TOMS), Vol. 39, 7:1–7:28, 2013.
. C. Conrads, V. M., and A. Miedlar. Adaptive Numerical Solution of Eigenvalue Problems arising from Finite Element Models. AMLS vs. AFEM. Contemporary Mathematics, American Mathematical Society, 2015.
. V. Mehrmann and H. Voss. Nonlinear Eigenvalue Problems: A Challenge for Modern Eigenvalue Methods. Mitteilungen der Gesellschaft für Angewandte Mathematik und Mechanik, 27:121–151, 2005.
. F. Tisseur and K. Meerbergen. The quadratic eigenvalueproblem. SIAM Review, 43:234–286, 2001.
Thanks
Thank you very much for your attention
and my sponsors for their support
. ERC Advanced Grant MODSIMCONMP
. (DFG) Research center MATHEON
. German Ministry of Economics via AIF foundation.
. Industrial funding from several SMEs and car manufacturers.
Details: http://www.math.tu-berlin.de/~mehrmann/
Advertisement