TRANSCRIPT
University of Sydney
MATH3063 LECTURE NOTES
Shane Leviton 6-7-2016
Contents
Introduction
Order of ODE
Explicit ODEs
Notation
Example
Systems of ODEs
Example 0: f(x) = 0
Eg 2: f(x) = b (or f(x) = b(t))
But what when f(x) is a standard function
Matrix ODEs (lecture 2)
Linear equation of x
Solving: finding the eigenvalues/eigenvectors!
Why do we care about eigenvectors/values?
Matrix exponentials
When does e^A exist?
exp(A) first class: full set of linearly independent eigenvectors
Diagonalizable or semi-simple matrix
How big of a restriction is a full set of LI eigenvectors?
Matrix and other term exponentials
Complex numbers and matrices
Real normal form of matrices
Repeated eigenvalues
Generalized eigenspace
Primary decomposition theorem
Theorems
General algorithm for finding e^A
Derivative of e^{At}
Phase Plots
Example
Phase plot/phase line
Remarks
Constant solution
Spectrum
Definition of spectrum
Spectrally stable definition
Stable/unstable/centre subspaces
Primary decomposition theorem
Generally
Hyperbolic
Bounded for forward time: definition
Linearly stable definition
Asymptotically linearly stable definition
Restricted dynamics
Generally
The theory
Classification of 2D linear systems
Our approach
Discriminant
Overall picture
Linear ODEs with non-constant coefficients
Principal fundamental solution matrix
Liouville's/Abel's formula (theorem)
PART 2: METRIC SPACES
Start of thinking
Key idea
Formal definition of metric spaces
Metric (distance function)
Metric space
Topology of metric spaces
Open ball
Examples
Open subset of metric space
Closed subset of metric space
Interior
Closure
Boundary
Limit point/accumulation point
Sequences and completeness
Definition: complete sequence
Contraction mapping principle
A contraction is a map
Definition: Lipschitz function
Definition: locally Lipschitz
Existence and uniqueness of solutions of ODEs
Equation *
Solution
Dependence on initial conditions and parameters
Theorem (P-L again)
Theorem: families of solutions
Maximal intervals of existence
Definition: maximal interval of existence
PART 3: NON-LINEAR PLANAR AUTONOMOUS ODES
Terminology
Function vector
Talking about what solutions are
Eg: pendulum equation
Phase portraits
Isoclines
Linearisation
Generally for linearisation
Hyperbolic good, non-hyperbolic = bad
More qualitative results
Theorem: unique phase curves
Closed phase curves/periodic solutions
Hamiltonian systems
Lotka-Volterra/predator-prey system
Definition of predator-prey
Analysis of system
Modifications to equations
Epidemics
SIR model
Abstraction, global existence and reparametrisation
Definition: 1-parameter family
Definition: orbit/trajectory
Existence of complete flow
Flows on ℝ
Theorem: types of flows in ℝ
α (alpha) and ω (omega) limit sets
Asymptotic limit point
Non-linear stability (of critical points)
Lyapunov stable definition
Asymptotically stable: definition
Lyapunov functions
LaSalle's invariance principle
Hilbert's 16th problem and Poincaré-Bendixson theorem
Question: Hilbert's 16th problem
Poincaré-Bendixson theorem proof
Proof
Definitions of sets: disconnected
Proof of Poincaré-Bendixson theorem
Bendixson's negative criterion
INDEX THEORY (Bonus)
Set up
Theta
Index: definition
Theorem: on index
Poincaré index
Index at infinity
Computing the index at infinity
Theorem: Poincaré-Hopf index theorem
Part 4: Bifurcation theory
Let's start with an example in 1D
Bifurcations
Bifurcation diagram
Bifurcation
Example 2 (in 2D) (saddle-node) bifurcation
Transcritical bifurcation
1D example
Supercritical pitchfork bifurcation
Example
Logistic growth with harvesting
Hopf bifurcation
Definition of Hopf
Some more examples of bifurcations
FIREFLIES AND FLOWS ON A CIRCLE
Non-uniform oscillator
Glycolysis
Model: Sel'kov 1968
Catastrophe
Eg 1: fold bifurcation
Eg 2: swallowtail catastrophe
Hysteresis and van der Pol oscillator
Van der Pol oscillator example
MATH3963 NON LINEAR DIFFERENTIAL EQUATIONS WITH APPLICATIONS (biomaths) NOTES
- Robert Marangell [email protected]
Assessment:
- Midterm (Thursday April 21) 25%
- 2 assignments 10% each (due 4/4 and 30/5)
- 55% exam
Assignments are to be typed (e.g. in LaTeX)
PART 1: LINEAR SYSTEMS

Introduction:
- What is an ODE; how to understand them

A DE is a functional relationship between a function and its derivatives.
An ODE is when the function has only 1 independent variable.
Eg: $t$ independent; $x(t), y(t)$ dependent.
Not just scalar, but also vector-valued functions (systems of ODEs):
$\vec{x}(t) \in \mathbb{R}^n$, $\vec{x}(t) = (x_1(t), x_2(t), \dots, x_n(t))$, with each $x_i : \mathbb{R} \to \mathbb{R}$.
Order of ODE:
- Highest derivative appearing in the equation.

If you cannot isolate the highest derivative, the ODE is implicit:
$F\left(t, \vec{x}(t), \frac{d\vec{x}(t)}{dt}, \dots, \frac{d^{(n)}\vec{x}(t)}{dt^{(n)}}\right) = 0$

Explicit ODEs
If you can isolate the highest derivative, it is an explicit ODE:
$\frac{d^{(n)}\vec{x}(t)}{dt^{(n)}} = G\left(t, \vec{x}(t), \frac{d\vec{x}(t)}{dt}, \dots, \frac{d^{(n-1)}\vec{x}(t)}{dt^{(n-1)}}\right)$
(explicit is easier; implicit is very hard)
Notation: Use dot:
$\dot{x} = \frac{dx}{dt}$

Example: $\dddot{y} + 2\ddot{y}y + 3(\dot{y})^2 + y = 0$

You can reduce the order of an ODE (to 1), in exchange for making it a system of ODEs:
- Introduce new variables into the system:
$y_0 = y$
$y_1 = \dot{y}_0 = \dot{y}$
$y_2 = \dot{y}_1 = \ddot{y}$
$\Rightarrow \dot{y}_0 = y_1$
$\dot{y}_1 = y_2$
$\dot{y}_2 = -2 y_2 y_0 - 3 y_1^2 - y_0$

Which is a first-order system in 3 variables, equivalent to the original 3rd-order ODE.
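The reduction above is exactly how you hand a higher-order ODE to a numerical solver. A minimal sketch (my addition, not from the notes; the initial conditions are made up for illustration), using scipy:

```python
# Integrate the reduced first-order system for
#   y''' + 2*y''*y + 3*(y')**2 + y = 0
# Y = (y0, y1, y2) = (y, y', y'').
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, Y):
    y0, y1, y2 = Y
    return [y1, y2, -2*y2*y0 - 3*y1**2 - y0]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])   # the value of y at t = 1
```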
Systems of ODEs
- We can play the same game for systems of ODEs:
$\ddot{\vec{y}} + A\vec{y} = 0 \quad \left(\text{with } \vec{y} \in \mathbb{R}^2;\ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\right)$
$\Rightarrow \begin{pmatrix} \ddot{y}_1 \\ \ddot{y}_2 \end{pmatrix} + \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = 0$
$\Rightarrow$ introduce:
$y_3 = \dot{y}_1$
$y_4 = \dot{y}_2$
$\dot{y}_3 = \ddot{y}_1 = -a y_1 - b y_2$
$\dot{y}_4 = \ddot{y}_2 = -c y_1 - d y_2$
$\Rightarrow \begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \\ \dot{y}_4 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -a & -b & 0 & 0 \\ -c & -d & 0 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$
Let $\vec{y} = (y_1, y_2, y_3, y_4)$. The system of equations is now:
$\dot{\vec{y}} = B\vec{y}$
which is of the general form $\dot{\vec{x}} = f(\vec{x})$.
Example 0: $f(\vec{x}) = 0 \Rightarrow \dot{\vec{x}} = 0$
$\Rightarrow \dot{x}_i = 0\ \forall i \Rightarrow \vec{x} = \text{const} = \vec{c},\ \vec{c} \in \mathbb{R}^n$
This is all the solutions to $\dot{\vec{x}} = 0$; and the solutions form an $n$-dimensional vector space.

Eg 2: $f(\vec{x}) = \vec{b}$ (or $f = \vec{b}(t)$) $\Rightarrow \dot{\vec{x}} = \vec{b}$
$\Rightarrow \vec{x}(t) = \vec{b}t + \vec{c}$
But what when $f(\vec{x})$ is a standard function
- If $f(\vec{x})$ is linear in $\vec{x}$.
Remember: "linear" if:
$f(c\vec{x} + \vec{y}) = c f(\vec{x}) + f(\vec{y})$
Recall any linear map can be represented by a matrix, by choosing a basis:
$\Rightarrow \dot{\vec{x}} = A(t)\vec{x}$
If $A(t) = A$ (independent of $t$), it is called a constant coefficient system:
$\dot{\vec{x}} = A\vec{x}$
Eg:
$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
$\Rightarrow \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}$
$\Rightarrow \dot{x}_1 = x_2$
$\dot{x}_2 = x_1 + x_3$
$\dot{x}_3 = x_2 + x_4$
$\dot{x}_4 = x_3$
is the system of ODEs which we now have to solve.
Matrix ODEs (lecture 2)
$\dot{\vec{x}} = f(\vec{x})$
Linear equation of $\vec{x}$:
$\dot{\vec{x}} = A\vec{x} \quad (\vec{x} \in \mathbb{R}^n;\ A \in M_n(\mathbb{R}))$
Eg:
$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$
$\dot{\vec{x}} = A\vec{x};\ \vec{x} = (x_1, x_2, x_3)$
$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$
$\Rightarrow \dot{x}_1 = x_2$
$\dot{x}_2 = x_1 + x_3$
$\dot{x}_3 = x_2$
So a solution is a triple of functions which solves this equation.
Solving: FINDING THE EIGENVALUES/EIGENVECTORS!
Recall:
A value $\lambda \in \mathbb{C}$ is an eigenvalue of an $n \times n$ matrix $A$ if there is a non-zero vector $\vec{v} \in \mathbb{C}^n$ such that
$A\vec{v} = \lambda\vec{v}$
$\vec{v}$ is the corresponding eigenvector.
$\Leftrightarrow (A - \lambda I)\vec{v} = 0 \Leftrightarrow \vec{v} \in \ker(A - \lambda I)$
$\Leftrightarrow A - \lambda I$ is not invertible
$\Leftrightarrow \det(A - \lambda I) = 0$
$\det(A - \lambda I)$ is a polynomial in $\lambda$ of degree $n$ called the characteristic polynomial of $A$:
$p_A(\lambda)$
Eigenvalues are roots of $p_A(\lambda) = 0$.
Fundamental theorem of algebra:
$p_A(\lambda)$ has exactly $n$ (possibly complex) roots counted according to algebraic multiplicity.
Algebraic multiplicity:
The algebraic multiplicity of a root $\mu$ of $p_A(\lambda) = 0$ is the largest $k$ such that $p_A(\lambda) = (\lambda - \mu)^k q(\lambda)$ and $q(\mu) \neq 0$.
The algebraic multiplicity of an eigenvalue $\lambda$ of $A$ is its algebraic multiplicity as a root of $p_A(\lambda)$.
Geometric multiplicity:
The geometric multiplicity of an eigenvalue is the maximum number of linearly independent eigenvectors.
Algebraic vs geometric multiplicity:
$\text{algebraic multiplicity} \geq \text{geometric multiplicity}$
Returning to example:
$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$
$\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 0 \\ 1 & -\lambda & 1 \\ 0 & 1 & -\lambda \end{vmatrix} = -\lambda(\lambda^2 - 2) = 0$
$\Rightarrow \lambda = 0, \pm\sqrt{2}$
Eigenvectors are found by finding the kernel $\ker(A - \lambda I)$ for each eigenvalue, giving eigenvectors:
$\lambda_1 = 0 \Rightarrow \vec{v}_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}; \quad \lambda_2 = \sqrt{2} \Rightarrow \vec{v}_2 = \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix}; \quad \lambda_3 = -\sqrt{2} \Rightarrow \vec{v}_3 = \begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
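A quick numerical check of these eigenvalues and eigenvectors (my addition, not from the notes), using numpy:

```python
# Verify the eigen-data of the worked example numerically.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
evals, evecs = np.linalg.eig(A)

# A is symmetric, so the eigenvalues are real: 0 and +/- sqrt(2).
assert np.allclose(sorted(evals), [-np.sqrt(2), 0.0, np.sqrt(2)])

# Each column of evecs satisfies A v = lambda v.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)
```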
Why do we care about eigenvectors/values?
$\dot{\vec{x}} = A\vec{x}$
Suppose that $\vec{v}$ is an eigenvector of $A \in M_n(\mathbb{R})$, with eigenvalue $\lambda$.
Consider the vector of functions:
$\vec{x}(t) = f(t)\vec{v}$
where $f$ is some unknown function $f: \mathbb{R} \to \mathbb{R}$.
If $\vec{x}$ solves $\dot{\vec{x}} = A\vec{x}$, then
$\dot{f}(t)\vec{v} = A f(t)\vec{v} = f(t)A\vec{v} = \lambda f(t)\vec{v}$ (as $\vec{v}$ is an eigenvector)
As $\vec{v} \neq 0$,
$\Rightarrow \dot{f}(t) = \lambda f(t)$
which is a differential equation in one variable:
$\Rightarrow f(t) = c e^{\lambda t}$
SO:
$\vec{x}(t) = e^{\lambda t}\vec{v}$
is a solution vector to $\dot{\vec{x}} = A\vec{x}$.
So we have a 1D subspace of solutions $\vec{x}(t) = c_0 e^{\lambda t}\vec{v}$.
Notes:
1. Solutions to $\dot{\vec{x}} = A\vec{x}$ of the form $\vec{x} = c_0 e^{\lambda t}\vec{v}$, where $\lambda$ is an eigenvalue of $A$ and $\vec{v}$ is the corresponding eigenvector, are called EIGENSOLUTIONS.
2. What we just did has a name: "ansatz" (guess what the solution could be, and check that it is correct).
Back to example:
$\Rightarrow \vec{x}(t) = e^{0t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix};\ \vec{y}(t) = e^{\sqrt{2}t}\begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix};\ \vec{z}(t) = e^{-\sqrt{2}t}\begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
So, as the system is linear, any linear combination of these is a solution:
$\vec{x}(t) = c_1 e^{0t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + c_2 e^{\sqrt{2}t}\begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix} + c_3 e^{-\sqrt{2}t}\begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
IS OUR SOLUTION!
The set of solutions $\vec{x}(t)$ is actually a 3D vector space.
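As a sanity check (my addition, not from the notes; the constants $c_1, c_2, c_3$ are arbitrary), one can confirm numerically that this combination of eigensolutions satisfies $\dot{\vec{x}} = A\vec{x}$, by comparing a central-difference derivative against $A\vec{x}(t)$:

```python
import numpy as np

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
v1 = np.array([1.0, 0.0, -1.0])
v2 = np.array([1.0, np.sqrt(2), 1.0])
v3 = np.array([1.0, -np.sqrt(2), 1.0])
c1, c2, c3 = 0.5, -1.0, 2.0   # arbitrary constants

def x(t):
    # c1*e^{0t} v1 + c2*e^{sqrt(2)t} v2 + c3*e^{-sqrt(2)t} v3
    return c1*v1 + c2*np.exp(np.sqrt(2)*t)*v2 + c3*np.exp(-np.sqrt(2)*t)*v3

t, h = 0.7, 1e-6
xdot = (x(t + h) - x(t - h)) / (2*h)   # numerical derivative of x at t
assert np.allclose(xdot, A @ x(t), atol=1e-6)
```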
Matrix exponentials
$A \in M_{n \times n}$; what is $e^A$ ($\exp(A)$)?
For a matrix, DEFINE:
$e^A = \exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!}A^k = I + A + \frac{1}{2}A^2 + \frac{1}{3!}A^3 + \cdots$
If the entries converge, then $e^A$ IS an $n \times n$ matrix.
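A sketch of this definition in code (my addition, not from the notes): a truncated power series agrees with scipy's built-in matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Truncated power series I + A + A^2/2! + ... + A^(terms-1)/(terms-1)!"""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # term is now A^k / k!
        result = result + term
    return result

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
assert np.allclose(expm_series(A), expm(A))
```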
When does $e^A$ exist?
$\exp(A)$ first class: full set of linearly independent eigenvectors
- Hypothesis: Suppose $A$ has a full set of linearly independent eigenvectors.
- Assume that hypothesis for the rest of today (we'll look at what happens if it fails later).
Let $\vec{v}_1, \dots, \vec{v}_n$ be the eigenvectors, with eigenvalues $\lambda_1, \dots, \lambda_n$.
Form the following matrix:
$P = (\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n)$
(whose columns are the eigenvectors)
$\Rightarrow AP = (A\vec{v}_1, A\vec{v}_2, \dots, A\vec{v}_n) = (\lambda_1\vec{v}_1, \dots, \lambda_n\vec{v}_n)$ (by eigen-definition)
$= P\,\mathrm{diag}(\lambda_1, \dots, \lambda_n) = P\Lambda$
$\Rightarrow P^{-1}AP = \Lambda\ (= \mathrm{diag}(\lambda_1, \dots, \lambda_n))$
$\Rightarrow e^\Lambda = I + \Lambda + \frac{1}{2}\Lambda^2 + \cdots = \mathrm{diag}(e^{\lambda_1}, \dots, e^{\lambda_n})$
On the LHS we have $e^{P^{-1}AP}$.
Lemma:
$(P^{-1}AP)^k = P^{-1}A^k P$
So:
$\Rightarrow e^A = P e^\Lambda P^{-1}$, where $e^\Lambda = \mathrm{diag}(e^{\lambda_1}, \dots, e^{\lambda_n})$
AND: the eigenvalues of $e^A$ are $e^{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $A$.
Example:
$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$, with eigenvectors
$\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix};\ \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix};\ \begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
$P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2} & -\sqrt{2} \\ -1 & 1 & 1 \end{pmatrix}, \quad \Lambda = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & 0 & -\sqrt{2} \end{pmatrix}$
$\Rightarrow e^A = P e^\Lambda P^{-1} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2} & -\sqrt{2} \\ -1 & 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{\sqrt{2}} & 0 \\ 0 & 0 & e^{-\sqrt{2}} \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2} & -\sqrt{2} \\ -1 & 1 & 1 \end{pmatrix}^{-1}$
(ugly, but fine)

Diagonalizable or semi-simple matrix
A matrix $A$ for which there exists an invertible matrix $P$ such that
$P^{-1}AP = \Lambda$
(where $\Lambda$ is diagonal) is called diagonalizable or semisimple.
This says: if $A$ is semisimple, then $e^A$ exists, and we can compute it.
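The diagonalization recipe for this example can be sketched in a few lines (my addition, not from the notes), checking the result against scipy:

```python
# e^A = P e^Lambda P^{-1} for the worked 3x3 example.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
s = np.sqrt(2)
P = np.array([[1.0, 1.0, 1.0],
              [0.0,   s,  -s],
              [-1.0, 1.0, 1.0]])       # columns = eigenvectors
lam = np.array([0.0, s, -s])           # matching eigenvalues

eA = P @ np.diag(np.exp(lam)) @ np.linalg.inv(P)
assert np.allclose(eA, expm(A))
```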
How big of a restriction is a full set of LI eigenvectors?
Defective matrices
If $A$ does not have a full set of linearly independent eigenvectors, it is called defective.
- How big of a restriction is being semisimple?
- Are there many semisimple matrices?
Proposition:
Suppose $\lambda_1 \neq \lambda_2$ are eigenvalues of a matrix $A$ with eigenvectors $\vec{v}_1$ and $\vec{v}_2$. Then $\vec{v}_1$ and $\vec{v}_2$ are linearly independent.
Proof:
If there were a dependence relation, there would exist scalars $c_1, c_2$ such that
$c_1\vec{v}_1 + c_2\vec{v}_2 = 0 \quad (c_1, c_2 \neq 0)$
Applying $A$:
$A(c_1\vec{v}_1 + c_2\vec{v}_2) = 0$
$c_1 A\vec{v}_1 + c_2 A\vec{v}_2 = 0 \Rightarrow c_1\lambda_1\vec{v}_1 + c_2\lambda_2\vec{v}_2 = 0$ (as eigenvectors)
Subtracting $\lambda_1$ times the dependence relation: $-\lambda_1 c_1\vec{v}_1 - \lambda_1 c_2\vec{v}_2 = 0$
$\Rightarrow (\lambda_2 - \lambda_1)c_2\vec{v}_2 = 0$
$\Rightarrow c_2 = 0$ ($\Rightarrow$ contradiction)
Example:
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$
$p_A(\lambda) = \lambda^2 - (a + d)\lambda + ad - bc = \lambda^2 - \tau\lambda + \delta$
(with $\tau = a + d$ the trace, $\delta = ad - bc$ the determinant; in general $\tau = \mathrm{trace}(A) = \sum_{i=1}^{n} a_{ii}$, the sum of the diagonal entries)
$\Rightarrow \lambda_\pm = \frac{\tau \pm \sqrt{\tau^2 - 4\delta}}{2}$
The roots coincide, $\lambda_+ = \lambda_-$, iff $\sqrt{\tau^2 - 4\delta} = 0$
$\Rightarrow \tau^2 - 4\delta = 0 \Rightarrow \delta = \frac{\tau^2}{4}$
Therefore: any matrix which does not lie on this parabola in $(\tau, \delta)$ space has distinct eigenvalues, and hence is diagonalizable.
In terms of the entries:
$\tau^2 - 4\delta = (a - d)^2 + 4bc$
so the set of matrices with repeated eigenvalues is the locus of points with
$(a - d)^2 + 4bc = 0$
a 3D hypersurface in 4D space.
So what if $A$ is defective? Can we compute $e^A$?
- Yes.
Two $n \times n$ matrices $A, B$ commute if
$AB = BA$
Commutator
We define the commutator as:
$[A, B] \equiv AB - BA$ (which is $0$ iff $A$ and $B$ commute)
Proposition
If $A$ and $B$ commute, then
$e^{A+B} = e^A e^B$
- Aside: if $e^{A+B} = e^A e^B$ and $A$ and $B$ are symmetric, then $AB = BA$.
Proof (comparing terms to second order):
$e^{A+B} = I + (A + B) + \frac{1}{2}(A + B)^2 + \cdots$
$= I + A + B + \frac{1}{2}(A^2 + AB + BA + B^2) + \cdots$
$= I + A + B + \frac{1}{2}A^2 + AB + \frac{1}{2}B^2 + \cdots$ (as $A$ and $B$ commute)
And:
$e^A e^B = \left(I + A + \frac{1}{2}A^2 + \cdots\right)\left(I + B + \frac{1}{2}B^2 + \cdots\right)$
$= I + (A + B) + \frac{1}{2}A^2 + AB + \frac{1}{2}B^2 + \cdots$
The terms agree (and the same happens at every order).
Example (non-commuting):
$A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}\ (a \neq b); \quad B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$
$\Rightarrow e^A = \begin{pmatrix} e^a & 0 \\ 0 & e^b \end{pmatrix};\ e^B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$
$e^{A+B} = \begin{pmatrix} e^a & \frac{e^a - e^b}{a - b} \\ 0 & e^b \end{pmatrix}, \quad e^A e^B = \begin{pmatrix} e^a & e^a \\ 0 & e^b \end{pmatrix}$
so $e^A e^B = e^{A+B} \Leftrightarrow (a - b)e^a = e^a - e^b$.
If $x = a - b$:
$\Rightarrow x = 1 - e^{-x}$
One example of a (complex) solution is
$x \approx -6.6022 - 736.693i$
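Numerically, the failure of $e^{A+B} = e^A e^B$ for this pair is easy to see (my addition, not from the notes; I pick the concrete values $a = 1$, $b = 0$):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # a = 1, b = 0
B = np.array([[0.0, 1.0], [0.0, 0.0]])

assert not np.allclose(A @ B, B @ A)                    # A, B do not commute
assert not np.allclose(expm(A + B), expm(A) @ expm(B))  # so the identity fails

# Top-right entries: (e^a - e^b)/(a - b) = e - 1 for e^{A+B}, versus e^a = e.
assert np.isclose(expm(A + B)[0, 1], np.e - 1)
assert np.isclose((expm(A) @ expm(B))[0, 1], np.e)
```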
Nilpotent matrix:
Definition:
A matrix $N$ is nilpotent if some power of it is $0$ ($N^k = 0$ for some $k$).
Eg: $N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \Rightarrow N^3 = 0$ (nilpotent)
This means that the power series for $e^N$ terminates:
$e^N = I + N + \frac{1}{2}N^2 + 0 + 0 + \cdots$
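For a nilpotent matrix the exponential is therefore a finite sum, computable exactly. A sketch (my addition, not from the notes) for the example above:

```python
# e^N = I + N + N^2/2 exactly, since N^3 = 0.
import numpy as np
from scipy.linalg import expm

N = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # N is nilpotent

eN = np.eye(3) + N + N @ N / 2      # the terminated power series
assert np.allclose(eN, expm(N))
```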
So how do we compute the exponential in general?
Jordan Canonical Form:
Let $A$ be an $n \times n$ matrix with complex entries. Then there exists a basis of $\mathbb{C}^n$ and an invertible matrix $B$ (of basis vectors) such that
$B^{-1}AB = S + N$
where $S$ is diagonal (semisimple), $N$ is nilpotent, and $S$ and $N$ commute ($SN - NS = 0$).
So this means that $e^A$ exists for all matrices:
$B^{-1}AB = S + N$
$\Rightarrow B^{-1}e^A B = e^S e^N$
$e^A = B e^S e^N B^{-1}$
This is cool.
Matrix and other term exponentials:
So now consider $At$, where $A \in M_n(\mathbb{R})$ and $t \in \mathbb{R}$:
$e^{At} = \sum_{k=0}^{\infty} \frac{t^k}{k!}A^k$
For a fixed $A$, the exponential is a map from the real numbers to matrices:
$e^{At}: \mathbb{R} \to M_n(\mathbb{R})$
$\exp_A(t): t \mapsto e^{At}$
$\exp_A(t + s) = e^{A(t+s)} = e^{At}e^{As} = \exp_A(t)\exp_A(s)$
$e^{At}$ is invertible! Its inverse is $e^{-At}$.
So, what is the derivative of this map from the real numbers to matrices?
Complex numbers and matrices:
Let $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
$\Rightarrow J^2 = -I;\ J^3 = -J;\ J^4 = I$
$e^{Jt} = I + Jt + \frac{1}{2}J^2t^2 + \cdots = I + Jt - \frac{1}{2}t^2 I - \frac{1}{3!}t^3 J + \cdots$
$= \left(1 - \frac{t^2}{2} + \frac{t^4}{4!} + \cdots\right)I + \left(t - \frac{t^3}{3!} + \cdots\right)J$
$= \cos t\, I + \sin t\, J = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$
which is the rotation matrix.
So: the matrix $J$ plays the role of the complex number $i$, and we have written Euler's formula:
$e^{it} = \cos t + i\sin t$
- We can actually take any complex number $z = x + iy$ and find its corresponding matrix:
$M_z \equiv xI + yJ = \begin{pmatrix} x & -y \\ y & x \end{pmatrix}$
(the technical term is a field isomorphism)
(the technical term is a field isomorphism)
Complex number algebra:
ðð§1+ð§2 = ðð§1 +ðð§2
ðð§1ð§2 = ðð§1ðð§2 = ðð§2ðð§1 = ðð§2ð§1
Complex exponentials:
e^{M_z} = M_{e^z}
For z = x + iy:
e^z = e^x \cos y + i e^x \sin y
\Rightarrow e^{M_z} = e^x \begin{pmatrix} \cos y & -\sin y \\ \sin y & \cos y \end{pmatrix} = e^{xI} e^{yJ}
Complex number things:
\det(M_z) = x^2 + y^2 = |z|^2
M_{\bar z} = \begin{pmatrix} x & y \\ -y & x \end{pmatrix} = M_z^T
M_a = aI \text{ (for } a \in \mathbb{R})
M_{\bar z} M_z = M_{|z|^2} = |z|^2 I
If z \neq 0, then M_z is invertible, and
M_z^{-1} = \frac{1}{|z|^2} M_{\bar z} = \frac{1}{\det(M_z)} M_z^T
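These claims are easy to spot-check numerically; a sketch (my own; the sample values of z are arbitrary):

```python
import numpy as np

def M(z):
    """Matrix representative of the complex number z = x + iy."""
    x, y = z.real, z.imag
    return np.array([[x, -y],
                     [y,  x]])

z1, z2 = 2.0 - 1.0j, 0.5 + 3.0j           # arbitrary sample values
assert np.allclose(M(z1 + z2), M(z1) + M(z2))            # additive
assert np.allclose(M(z1 * z2), M(z1) @ M(z2))            # multiplicative
assert np.isclose(np.linalg.det(M(z1)), abs(z1) ** 2)    # det = |z|^2
assert np.allclose(M(z1).T, M(z1.conjugate()))           # transpose = conjugate
print("field isomorphism checks pass")
```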
So let's compute e^{Jt} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} by diagonalising:
Eigenvalues of Jt are \pm it
Eigenvectors are \begin{pmatrix} \pm i \\ 1 \end{pmatrix}
T = \begin{pmatrix} i & -i \\ 1 & 1 \end{pmatrix}; \quad T^{-1} = \frac{1}{2i} \begin{pmatrix} 1 & i \\ -1 & i \end{pmatrix}
\Lambda = \begin{pmatrix} it & 0 \\ 0 & -it \end{pmatrix}
\Rightarrow e^{Jt} = \frac{1}{2i} \begin{pmatrix} i & -i \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix} \begin{pmatrix} 1 & i \\ -1 & i \end{pmatrix}
= \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}
So we got what we started with! (which is good)
Real normal form of matrices
Moving towards a real "normal" form for matrices with real entries but complex eigenvalues.
Fact: if A \in M_n(\mathbb{R}) and \lambda is an eigenvalue, then \bar\lambda is also an eigenvalue (complex conjugate theorem). The eigenvectors also come in conjugate pairs:
Av = \lambda v \Rightarrow A \bar v = \bar\lambda \bar v
- If \lambda = a + ib is an eigenvalue, then \bar\lambda = a - ib is also an eigenvalue
- If v = u + iw \; (u, w \in \mathbb{R}^n), then \bar v = u - iw
So:
2u = v + \bar v
A(2u) = \lambda v + \bar\lambda \bar v = 2(au - bw)
\Rightarrow Au = au - bw
Similarly,
Aw = bu + aw
So if P is the n \times 2 matrix
P = (u \;\; w)
then
\Rightarrow AP = P \begin{pmatrix} a & b \\ -b & a \end{pmatrix}
(we are close to finding the algorithm for computing the matrix exponential of any matrix)
Eg:
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}
The characteristic polynomial is
x^4 + 3x^2 + 1
so the eigenvalues of A are \pm i\varphi and \pm i/\varphi (with \varphi = \frac{1+\sqrt{5}}{2} the golden ratio).
Eigenvectors: for \lambda_1 = i\varphi,
v_1 = \begin{pmatrix} i \\ -\varphi \\ -i\varphi \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -\varphi \\ 0 \\ 1 \end{pmatrix} + i \begin{pmatrix} 1 \\ 0 \\ -\varphi \\ 0 \end{pmatrix} = u_1 + i w_1
and for \lambda_2 = i/\varphi,
v_2 = \begin{pmatrix} -i \\ 1/\varphi \\ -i/\varphi \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1/\varphi \\ 0 \\ 1 \end{pmatrix} + i \begin{pmatrix} -1 \\ 0 \\ -1/\varphi \\ 0 \end{pmatrix} = u_2 + i w_2
So, with P_1 = (u_1 \;\; w_1),
A P_1 = P_1 \begin{pmatrix} 0 & \varphi \\ -\varphi & 0 \end{pmatrix}
and with P = (u_1 \;\; w_1 \;\; u_2 \;\; w_2),
P^{-1} A P = \begin{pmatrix} 0 & \varphi & 0 & 0 \\ -\varphi & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/\varphi \\ 0 & 0 & -1/\varphi & 0 \end{pmatrix}
This is called a block diagonal matrix (it has blocks of matrices down the diagonal, and 0s everywhere else); here it is
(-\varphi J) \oplus (-\varphi^{-1} J)
i.e. copies of the rotation generator J, scaled by \varphi and 1/\varphi (up to sign, since J^T = -J).
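A numerical spot-check of the eigenvalues in this example (my own sketch): they should be purely imaginary, with magnitudes φ and 1/φ.

```python
import numpy as np

A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [-1.0,  0.0,  1.0,  0.0],
              [ 0.0, -1.0,  0.0,  1.0],
              [ 0.0,  0.0, -1.0,  0.0]])

phi = (1 + np.sqrt(5)) / 2                # the golden ratio
evals = np.linalg.eigvals(A)

assert np.allclose(evals.real, 0)         # A is skew-symmetric: imaginary spectrum
mags = np.sort(np.abs(evals))
print(np.allclose(mags, [1 / phi, 1 / phi, phi, phi]))  # True
```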
Direct sum:
A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}
A and B don't need to be the same size.
Lemma: if
A = B \oplus C
then
e^A = e^B \oplus e^C
Proof:
A = \begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}, \text{ where the two summands commute}
\Rightarrow e^A = e^{\begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}} e^{\begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}} = \begin{pmatrix} e^B & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & e^C \end{pmatrix} = \begin{pmatrix} e^B & 0 \\ 0 & e^C \end{pmatrix} = e^B \oplus e^C
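The lemma can be verified numerically (my own sketch; scipy's `block_diag` builds the direct sum, and the sample blocks are my choice):

```python
import numpy as np
from scipy.linalg import expm, block_diag

B = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])               # sample 2x2 block (rotation generator)
C = np.array([[2.0]])                     # sample 1x1 block; sizes may differ

A = block_diag(B, C)                      # A = B (+) C
print(np.allclose(expm(A), block_diag(expm(B), expm(C))))  # True
```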
Repeated eigenvalues
So what do we do if we have repeated eigenvalues? How do we calculate the matrix exponential?
Generalized eigenspace: Definition
Suppose \lambda_j is an eigenvalue of a matrix A with algebraic multiplicity n_j. We define the generalized eigenspace of \lambda_j as:
E_j := \ker[(A - \lambda_j I)^{n_j}]
There are basically two reasons why this definition is important. The first is that these generalized eigenspaces are INVARIANT UNDER MULTIPLICATION BY A. That is: if v \in E_j, then Av is as well.
Proposition (invariance of generalized eigenspaces):
Suppose that E_j is a generalized eigenspace of the matrix A; then if v \in E_j, so is Av.
Proof of proposition:
- If v \in E_j for some eigenvalue \lambda, then v \in \ker((A - \lambda I)^m), say for some m. Then we have
(A - \lambda I)^m A v = (A - \lambda I)^m (A - \lambda I + \lambda I) v = (A - \lambda I)^{m+1} v + \lambda (A - \lambda I)^m v = 0
so Av \in E_j as well.
The second reason we care about this is that it will give us our set of linearly independent generalized eigenvectors, which we can use to find e^A.
Primary decomposition theorem
Let T : V \to V be a linear transformation of an n-dimensional vector space over the complex numbers. If \lambda_1, \ldots, \lambda_k are the distinct eigenvalues (k not necessarily equal to n), and E_j is the generalized eigenspace for \lambda_j, then \dim(E_j) = the algebraic multiplicity of \lambda_j, and the generalized eigenvectors span V. In terms of vector spaces, we have:
V = E_1 \oplus E_2 \oplus \cdots \oplus E_k
Combining this with the previous proposition: T decomposes the vector space into a direct sum of invariant subspaces.
The next thing to do is to explicitly determine the semisimple-nilpotent decomposition. Before, we had the matrix \Lambda, which was
- Block diagonal
- In normal form for complex eigenvalues
- Diagonal for real eigenvalues
Suppose we have a real matrix A with n eigenvalues \lambda_1, \ldots, \lambda_n listed with multiplicity. Break them up into complex ones and real ones, so we have \lambda_1, \bar\lambda_1, \ldots, \lambda_k, \bar\lambda_k complex and \lambda_{2k+1}, \ldots, \lambda_n real. Write \lambda_j = a_j + i b_j; then
\Lambda := \begin{pmatrix} a_1 & b_1 \\ -b_1 & a_1 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} a_k & b_k \\ -b_k & a_k \end{pmatrix} \oplus \mathrm{diag}(\lambda_{2k+1}, \ldots, \lambda_n)
Let v_1, \bar v_1, \ldots, v_k, \bar v_k, v_{2k+1}, \ldots, v_n be a basis of \mathbb{C}^n of generalised eigenvectors; for the complex ones, write
v_j = u_j + i w_j
Let
P := (u_1 \;\; w_1 \;\; u_2 \;\; w_2 \;\; \ldots \;\; u_k \;\; w_k \;\; v_{2k+1} \;\; \ldots \;\; v_n)
Returning to our construction: the matrix P is invertible (because of the primary decomposition theorem), so we can define the matrix
S = P \Lambda P^{-1}
and a matrix
N = A - S
Theorem: Let A, S, N, \Lambda and P be the matrices defined above. Then:
1. A = S + N
2. S is semisimple
3. S, N and A all commute
4. N is nilpotent
Proofs:
1. From the definition of N.
2. From the construction of S.
3. [S, A] = 0: suppose that v is a generalized eigenvector of A associated to the eigenvalue \lambda. By construction, v is a genuine eigenvector of S, and the generalized eigenspace is invariant under A, so S(Av) = \lambda (Av). Hence
[S, A]v = SAv - ASv = \lambda Av - A(\lambda v) = 0
Since this holds on a basis, S and A commute; N = A - S then commutes with both, from the definition of N and the fact that A commutes with S.
4. Suppose the maximum algebraic multiplicity of an eigenvalue of A is m. Then for any v \in E_j, a generalized eigenspace corresponding to the eigenvalue \lambda_j, we have (using that S and A commute, and that S acts on E_j as \lambda_j I):
N^m v = (A - S)^m v = (A - \lambda_j I)^m v = 0
so N^m = 0.
General algorithm for finding e^A:
1. Find the eigenvalues.
2. Find the generalized eigenspace for each eigenvalue; the eigenvalues give \Lambda.
3. Turn these vectors into P.
4. S = P \Lambda P^{-1}
5. N = A - S
6. e^{At} = e^{St} e^{Nt} = P e^{\Lambda t} P^{-1} e^{(A-S)t}
Example:
A = \begin{pmatrix} -1 & 1 \\ -4 & 3 \end{pmatrix}
Eigenvalues:
\det \begin{pmatrix} -1-\lambda & 1 \\ -4 & 3-\lambda \end{pmatrix} = 0 \Rightarrow \lambda = 1, 1
\Lambda = \mathrm{diag}(1,1) = I
Generalized eigenspace:
\ker((A - \lambda I)^{n_j}) = \ker((A - I)^2) = \ker \begin{pmatrix} -2 & 1 \\ -4 & 2 \end{pmatrix}^2 = \ker \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = \mathbb{R}^2
So our vectors must span \mathbb{R}^2; choose
\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}
P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I
\Rightarrow S = I \Lambda I^{-1} = I
\Rightarrow N = A - S = A - I = \begin{pmatrix} -2 & 1 \\ -4 & 2 \end{pmatrix}
So then:
e^A = e^S e^N = e(I + N), \text{ since } N^2 = 0
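A numerical check of this example (my own sketch). Note the coincidence here: I + N = I + (A − I) = A, so e^A = eA.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [-4.0, 3.0]])               # the worked example
S = np.eye(2)                             # eigenvalue 1 with multiplicity 2
N = A - S

assert np.allclose(N @ N, 0)              # N^2 = 0, so e^N = I + N
eA = np.exp(1) * (np.eye(2) + N)          # e^S e^N = e (I + N) = e A
print(np.allclose(eA, expm(A)))           # True
```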
Last time:
Algorithm for computing e^A and e^{At}
Eg: let
A = \begin{pmatrix} 1 & 0 & 0 & 2 \\ -2 & -2 & 3 & 4 \\ -1 & -2 & 3 & 2 \\ -1 & -1 & 1 & 4 \end{pmatrix}
Eigenvalues are \lambda = 2, 2, 1, 1
Generalized eigenspace for \lambda = 2:
E_2 = \ker((A - 2I)^2)
E_2 is spanned by:
\begin{pmatrix} 2 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}
E_1 = \ker((A - I)^2), spanned by
\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}
\Lambda = \mathrm{diag}(2, 2, 1, 1); \quad S = P \Lambda P^{-1}
P = \begin{pmatrix} 2 & -2 & 1 & -1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}
P^{-1} = \cdots
S = P \Lambda P^{-1}
N = A - S
Note: if not using a computer:
- check that N is nilpotent (N^k = 0 for some integer k no bigger than the size of the matrix),
- check that NS - SN = 0 (that they commute).
\Rightarrow At = (S + N)t = St + Nt
\Rightarrow e^{At} = e^{St} e^{Nt} = P e^{\Lambda t} P^{-1} e^{Nt} = P e^{\Lambda t} P^{-1} (I + Nt)
(the series for e^{Nt} stops at I + Nt because N^2 = 0 here)
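The whole computation can be checked numerically; a sketch (my own, assuming the matrix A and the basis matrix P of generalized eigenvectors as in this example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 1.0,  0.0, 0.0, 2.0],
              [-2.0, -2.0, 3.0, 4.0],
              [-1.0, -2.0, 3.0, 2.0],
              [-1.0, -1.0, 1.0, 4.0]])
P = np.array([[2.0, -2.0, 1.0, -1.0],     # columns: generalized eigenvectors
              [0.0,  1.0, 0.0,  1.0],
              [0.0,  0.0, 1.0,  0.0],
              [1.0,  0.0, 0.0,  0.0]])
lam = np.array([2.0, 2.0, 1.0, 1.0])

S = P @ np.diag(lam) @ np.linalg.inv(P)
N = A - S
assert np.allclose(N @ N, 0)              # N nilpotent (N^2 = 0)
assert np.allclose(N @ S, S @ N)          # N and S commute

eA = P @ np.diag(np.exp(lam)) @ np.linalg.inv(P) @ (np.eye(4) + N)
print(np.allclose(eA, expm(A)))           # True
```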
e^{At} : \mathbb{R} \to GL(n, \mathbb{R}) (the group of invertible n \times n matrices with real entries)
Derivative of e^{At}
Proposition:
\frac{d}{dt}(e^{At}) = A e^{At} = e^{At} A
(A commutes with e^{At} because e^{At} is a power series in A, and A commutes with its own powers.)
Proof 1:
\frac{d}{dt}(e^{At}) = \frac{d}{dt}\left( I + tA + \frac{1}{2} t^2 A^2 + \frac{t^3}{3!} A^3 + \cdots \right)
= A + t A^2 + \frac{t^2}{2} A^3 + \frac{t^3}{3!} A^4 + \cdots
= A \left( I + tA + \frac{t^2}{2} A^2 + \frac{t^3}{3!} A^3 + \cdots \right)
(we will show the series converges uniformly, and so we can differentiate it term by term)
= A e^{At}
Proof 2:
\frac{d}{dt}(e^{At}) = \lim_{h \to 0} \frac{e^{A(t+h)} - e^{At}}{h}
= \lim_{h \to 0} \frac{e^{At}(e^{Ah} - I)}{h}
= e^{At} \lim_{h \to 0} \frac{Ah + \frac{A^2 h^2}{2} + \cdots}{h}
= e^{At} \lim_{h \to 0} \left( A + \frac{A^2}{2} h + \cdots \right)
= e^{At} A
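A finite-difference spot-check of the proposition (my own sketch; the random A, the time t and the step size h are sample choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))           # arbitrary fixed matrix
t, h = 0.4, 1e-6                          # sample time and step size

# central-difference approximation of d/dt e^{At}
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(deriv, A @ expm(A * t), atol=1e-4)
assert np.allclose(deriv, expm(A * t) @ A, atol=1e-4)
print("d/dt e^{At} = A e^{At} = e^{At} A")
```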
This proposition implies that e^{At} is a solution to the matrix differential equation:
\dot\Phi(t) = A \Phi(t)
(note: A is a constant matrix, independent of t)
A cannot be a function of t: the matrices A(t) at different times need not commute, so the term-by-term manipulations above (and the candidate solution e^{\int A(s)\,ds}) break down.
Comparing this to the 1 \times 1 differential equation:
\dot y = a y \Rightarrow y = C e^{at}