intro to method of applied math
TRANSCRIPT
-
8/19/2019 Intro to method of applied math
1/187
Methods of Applied Mathematics I
Math 583, Fall 2015
Jens Lorenz
December 16, 2015
Department of Mathematics and Statistics, UNM, Albuquerque, NM 87131
Contents

1 A Simple Boundary Value Problem 5
   1.1 Existence, Uniqueness, Green’s Function 5
   1.2 Formalization 9
2 Perturbed Equations and the Contraction Mapping Theorem 11
   2.1 An Example 11
   2.2 The Contraction Mapping Theorem 11
   2.3 Example: extended 13
   2.4 The Mean Value Theorem in Integral Form 15
   2.5 Completeness of (C(Ω), | · |∞) 16
   2.6 Completeness of (C²[0, l], | · |∞,2) 18
3 Fredholm’s Alternative and Green’s Function 19
   3.1 Linear Second–Order BVPs 19
   3.2 The Green’s Function 21
   3.3 Estimates for the Integral Operator G 23
   3.4 The Formal Adjoint 25
4 Introduction to Distribution Theory 28
   4.1 Motivation 28
   4.2 Testfunctions and Distributions 30
   4.3 Examples of Regular and Singular Distributions 31
   4.4 Differentiation of Distributions 32
   4.5 The Simplest Differential Equations for Distributions 33
   4.6 A Simple Equation Involving Distributions 37
   4.7 The Formal Adjoint 38
   4.8 Further Examples 40
   4.9 The Green’s Function Revisited 42
5 Three Types of Linear Problems 46
   5.1 Overview 46
      5.1.1 Inhomogeneous Problems 46
      5.1.2 Spectral Problems 46
      5.1.3 Problems of Evolution; Semi–Group Theory 47
   5.2 BVPs as Operator Equations 47
   5.3 Inhomogeneous Problems 49
   5.4 Spectral Problems 49
   5.5 Problems of Evolution: Semigroup Theory 49
6 Linear Operators: Riesz Index, Projectors, Resolvent 50
   6.1 Nullspaces and Ranges 50
   6.2 The Neumann Series 56
   6.3 Spectral Theory of Matrices 58
      6.3.1 Eigenvalues and Resolvent 58
      6.3.2 Jordan Normal Form and Riesz Index 59
      6.3.3 Bases for N^r and R^r 60
      6.3.4 Projectors 61
      6.3.5 Integral Representation of the Projector P 62
      6.3.6 Direct Sum Decompositions for Different Eigenvalues 63
7 Riesz–Schauder Theory of Linear Compact Operators 66
   7.1 Auxiliary Results 66
   7.2 Null Spaces 69
   7.3 Range Spaces 71
   7.4 The Spectrum of Linear Compact Operators 76
8 Hilbert–Schmidt Kernels and Compact Operators 79
   8.1 Integral Operators With Continuous Kernels 79
   8.2 Limits of Compact Operators 80
   8.3 Hilbert–Schmidt Kernels 81
9 Best Approximations in Inner Product Spaces 84
   9.1 The Gram–Schmidt Process 84
   9.2 Best Approximations 84
   9.3 The Decomposition H = V ⊕ V⊥ 86
   9.4 Riesz Representation Theorem 87
10 Spectral Theory of Compact Hermitian Operators 89
   10.1 Motivation: The Matrix Case 89
   10.2 Use of the Quadratic Form (u, Au) 90
   10.3 Existence of an Orthonormal Sequence of Eigenvectors 95
   10.4 Density of C∞₀ in L² 99
   10.5 Application to Ordinary BVPs 102
   10.6 The Sign of Eigenvalues 105
   10.7 Lower Bound for Eigenvalues 107
   10.8 Expansion of the Green’s Function 110
   10.9 Two Formulas for Green’s Functions: An Example 112
      10.9.1 First Formula for the Green’s Function: Eigenfunction Expansion 112
      10.9.2 Second Formula for the Green’s Function: Via Delta–Function 114
11 Initial–Boundary Value Problems on a Finite x–Interval 117
   11.1 The Formal Solution Process for an IBVP 117
   11.2 A Particular Problem 119
   11.3 An IBVP on 0 ≤ x ≤ l 122
      11.3.1 Solution Via Separation of Variables 122
      11.3.2 Solution Via Laplace Transform in Time 122
12 A Problem on the Half–Line 0 ≤ x < ∞ 124
   12.1 An ODE for 0 ≤ x < ∞ 124
   12.2 Transformation to a First–Order System 124
   12.3 The Green’s Function 128
   12.4 The Case 0 < λ < ∞ 129
   12.5 Estimate of u′ by u and u″ 134
   12.6 Summary 135
13 An Initial–Boundary Value Problem 137
   13.1 Application of Laplace Transform 137
   13.2 Fourier and Laplace Transform 138
   13.3 Inverse Laplace Transform: The Simplest Example 139
   13.4 Inverse Laplace Transform: The Case a = 0 in Theorem 13.1 143
   13.5 Inverse Laplace Transform: Proof of Theorem 13.1 145
   13.6 Solution Formula and Example 147
   13.7 The Strip Problem Reconsidered 149
14 Fourier Transformation: Application to the Equation u′ − λu = f 152
   14.1 Fourier Transformation 152
   14.2 The Equation u′(x) − λu(x) = f(x) for x ∈ R 153
      14.2.1 The Case λ ∉ iR 154
      14.2.2 The Case λ ∈ iR 155
   14.3 Extension of a Bounded Linear Operator 156
15 The Cauchy Problem for the Heat Equation 159
   15.1 Fourier Transform in Space 159
   15.2 Laplace Transform in Time 161
   15.3 Transform in Space and Time 162
   15.4 Remarks on the Heat Kernel and Non–Uniqueness 165
16 The Cauchy Problem for the Wave Equation 169
   16.1 The 1D Case 169
      16.1.1 The 1D Case with g ≡ 0 169
      16.1.2 The 1D Case with f ≡ 0 171
   16.2 The 3D Case 172
      16.2.1 The 3D Case with f ≡ 0 172
      16.2.2 The 3D Case with g ≡ 0 175
      16.2.3 Light Cones 176
   16.3 The 2D Wave Equation: Method of Descent 177
17 Elliptic Equations with Constant Coefficients 179
18 Remarks on Hyperbolic Equations 185
1 A Simple Boundary Value Problem
1.1 Existence, Uniqueness, Green’s Function
Let f ∈ C[0, l] denote a given function. We try to find a function u ∈ C²[0, l] satisfying

−u″(x) = f(x) for 0 ≤ x ≤ l, u(0) = u(l) = 0 .  (1.1)

Here the differential equation −u″ = f is supplemented by two homogeneous Dirichlet boundary conditions:
u(0) = u(l) = 0 .
Uniqueness: Suppose that u1 and u2 solve the BVP (1.1). We set u = u1 − u2 and obtain that

u″(x) = 0 for 0 ≤ x ≤ l, u(0) = u(l) = 0 .

It follows that u(x) = A + Bx, and the boundary conditions yield A = B = 0. Therefore, the BVP has at most one solution.
Existence: By integration, we first obtain a special solution of the differential equation,

usp(x) = −∫_0^x ∫_0^t f(s) ds dt .
Since the general solution of the homogeneous differential equation −u″ = 0 is

uhom(x) = A + Bx ,

we obtain that

u(x) = usp(x) + A + Bx

is the general solution of the equation −u″ = f. We now impose the boundary conditions to obtain A and B:

0 = u(0) = A, 0 = u(l) = usp(l) + Bl .

This yields the following solution of the BVP:

u(x) = usp(x) − usp(l) · x/l .
Motivation for a Green’s Function: We want to find a continuous function

g : [0, l] × [0, l] → R

with the following property: For every f ∈ C[0, l] the solution u(x) of the BVP (1.1) is given by
u(x) = ∫_0^l g(x, y) f(y) dy .  (1.2)
If this holds then we have

0 = u(0) = ∫_0^l g(0, y) f(y) dy for all f ∈ C[0, l] ,

which implies that

g(0, y) ≡ 0 .

Similarly,

g(l, y) ≡ 0 .

If we differentiate (1.2) twice and formally put the derivatives under the integral sign, then we obtain
f(x) = −u″(x) = −∫_0^l gxx(x, y) f(y) dy .

Since, formally,

∫_0^l δ(x − y) f(y) dy = f(x) ,

we are led to require

−gxx(x, y) = δ(x − y) .

This motivates the following construction.
Construction of a Solution Via a Green’s Function: Fix a point 0 < x0 < l and try to find a function g(x) with

−g″(x) = δ(x − x0), g(0) = g(l) = 0 .

Here we first take an intuitive view of the δ–function and think of the right–hand side of the above equation as a function satisfying

∫_0^l δ(x − x0) dx = 1, δ(x − x0) = 0 for x ≠ x0 .
Thus, the conditions for g(x) are the boundary conditions

g(0) = g(l) = 0

and

g″(x) = 0 for x ≠ x0 and 1 = −∫_0^l g″(x) dx = g′(0) − g′(l) .
In addition, we require that g ∈ C[0, l]. The differential equation g″(x) = 0 for x ≠ x0 yields

g(x) =
    Ax + B for 0 ≤ x < x0
    Cx + D for x0 < x ≤ l
The boundary conditions imply

B = 0 and D = −Cl .

Therefore,

g(x) =
    Ax for 0 ≤ x < x0
    C(x − l) for x0 < x ≤ l

The continuity condition g ∈ C[0, l] yields that

Ax0 = C(x0 − l) ,

thus

C = Ax0/(x0 − l) .
We obtain:

g(x) =
    Ax for 0 ≤ x ≤ x0
    (Ax0/(x0 − l)) (x − l) for x0 ≤ x ≤ l

and

g′(x) =
    A for 0 ≤ x < x0
    Ax0/(x0 − l) for x0 < x ≤ l

Therefore,

g′(0) − g′(l) = A (1 − x0/(x0 − l)) = A · (−l)/(x0 − l) .

The condition g′(0) − g′(l) = 1 yields that

A = (l − x0)/l .
We have obtained:

g(x) = (1/l) ·
    x(l − x0) for 0 ≤ x ≤ x0
    x0(l − x) for x0 ≤ x ≤ l

Clearly, the function depends on x as well as on x0. With a change of notation, let

g(x, y) = (1/l) ·
    x(l − y) for 0 ≤ x ≤ y
    y(l − x) for y ≤ x ≤ l

We then have that g ∈ C([0, l] × [0, l]) and
−gxx(x, y) = δ(x − y), g(0, y) = g(l, y) = 0 .
The function g(x, y) is called the Green’s function for the BVP (1.1). It is trivial, but important, to note that the function g(x, y) does not depend on the inhomogeneous term f(x) in the differential equation.
Note that

gx(x, y) = (1/l) ·
    l − y for 0 ≤ x < y
    −y for y < x ≤ l
Theorem 1.1 For any f ∈ C[0, l] the solution of the BVP (1.1) is

u(x) = ∫_0^l g(x, y) f(y) dy .  (1.3)

It is easy to check that the function u(x) defined in (1.3) satisfies u(0) = u(l) = 0. Thus, it remains to show that −u″ = f.
Formal Argument: If we formally differentiate under the integral sign, we obtain

−u″(x) = ∫_0^l (−gxx(x, y)) f(y) dy
        = ∫_0^l δ(x − y) f(y) dy
        = f(x)
Rigorous Argument: From

u(x) = ∫_0^l g(x, y) f(y) dy

we obtain that

u′(x) = ∫_0^l gx(x, y) f(y) dy
      = ∫_0^x (−y/l) f(y) dy + (1/l) ∫_x^l (l − y) f(y) dy

and

u″(x) = −(x/l) f(x) − (1/l)(l − x) f(x) = −f(x) ,

thus −u″ = f.
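The argument above can also be checked numerically. The following sketch (our own illustration, not part of the notes) approximates u(x) = ∫_0^l g(x, y) f(y) dy by the trapezoid rule for the assumed sample right–hand side f(y) = sin(πy/l), for which the exact solution of (1.1) is u(x) = (l/π)² sin(πx/l):

```python
import numpy as np

l = 2.0

def g(x, y):
    # Green's function of -u'' = f, u(0) = u(l) = 0, from the construction above
    return np.where(x <= y, x * (l - y), y * (l - x)) / l

def solve(f, x, n=4001):
    # u(x) = \int_0^l g(x, y) f(y) dy, approximated by the trapezoid rule
    y = np.linspace(0.0, l, n)
    vals = g(x, y) * f(y)
    h = y[1] - y[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

f = lambda y: np.sin(np.pi * y / l)                   # assumed sample f
u_exact = lambda x: (l / np.pi) ** 2 * np.sin(np.pi * x / l)

for x in (0.3, 1.0, 1.7):
    assert abs(solve(f, x) - u_exact(x)) < 1e-6       # matches the exact solution
assert abs(solve(f, 0.0)) < 1e-12                     # boundary condition u(0) = 0
```

Since g(0, y) ≡ 0, the quadrature reproduces the boundary condition exactly, while the interior values agree with the closed-form solution up to quadrature error.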
1.2 Formalization
Let

V = C[0, l]

denote the space of right–hand sides f of the differential equation −u″ = f in the BVP. Let

U = {u ∈ C²[0, l] : u(0) = u(l) = 0}

denote the solution space. Note that the boundary conditions are taken care of by imposing them on the functions in the solution space.
We have shown that the differential operator

L : U → V, u ↦ −u″

is 1–1 and onto. The inverse of L is the integral operator

G : V → U, f ↦ ∫_0^l g(x, y) f(y) dy = (Gf)(x) .

On V = C[0, l] define the norm

|f|∞ = max_{0≤x≤l} |f(x)| for f ∈ V .

On C²[0, l] define the norm

|u|∞,2 = |u|∞ + |u′|∞ + |u″|∞ for u ∈ C²[0, l] .

Since U ⊂ C²[0, l], the norm | · |∞,2 is also a norm on U.
Theorem 1.2 1) For all f ∈ V = C[0, l] the estimate

|Gf|∞ ≤ C0 |f|∞  (1.4)

holds with

C0 = max_{0≤x≤l} ∫_0^l g(x, y) dy .

2) Let f ∈ V and set u = Gf. There is a constant C > 0, independent of f, so that the estimate

|u|∞ + |u′|∞ + |u″|∞ ≤ C |f|∞  (1.5)

holds.
Proof: 1) Let u = Gf. We have for all x ∈ [0, l]:

|u(x)| ≤ ∫_0^l |g(x, y)| |f(y)| dy ≤ |f|∞ ∫_0^l |g(x, y)| dy .

Taking the maximum over 0 ≤ x ≤ l and noting that g(x, y) ≥ 0, the first estimate follows.
2) By 1) we have |u|∞ ≤ C0|f|∞. Also, since −u″ = f we have |u″|∞ = |f|∞. It remains to estimate |u′|∞. Since

u(0) = u(l) = 0 ,

there exists 0 ≤ ξ ≤ l with u′(ξ) = 0. Then we have

u′(x) = u′(x) − u′(ξ) = ∫_ξ^x u″(s) ds ,

thus

|u′(x)| ≤ |x − ξ| |u″|∞ ≤ l |u″|∞ = l |f|∞ .

Therefore,

|u|∞,2 ≤ (1 + l + C0) |f|∞ .
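As a sanity check on part 1), the constant C0 can be tabulated for the Green’s function of Section 1.1. Evaluating the integral in closed form gives ∫_0^l g(x, y) dy = x(l − x)/2 (our own computation, not stated in the notes), so C0 = l²/8; a short numerical sketch confirming this:

```python
import numpy as np

l = 1.0
x = np.linspace(0.0, l, 1001)
y = np.linspace(0.0, l, 1001)
X, Y = np.meshgrid(x, y, indexing="ij")
G = np.where(X <= Y, X * (l - Y), Y * (l - X)) / l   # g(x, y) >= 0 on the square

h = y[1] - y[0]
# row-wise trapezoid rule: \int_0^l g(x, y) dy for each grid value of x
row_int = h * (G.sum(axis=1) - 0.5 * (G[:, 0] + G[:, -1]))
C0 = row_int.max()

# closed form (assumed): \int_0^l g(x, y) dy = x(l - x)/2, maximized at x = l/2
assert abs(C0 - l**2 / 8) < 1e-4
```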
2 Perturbed Equations and the Contraction Mapping Theorem
2.1 An Example
Consider the nonlinear boundary value problem

−u″(x) = f(x) + ε cos u(x) for 0 ≤ x ≤ l, u(0) = u(l) = 0 ,  (2.1)

where f ∈ V := C[0, l] is a given function.
Applying the operator G introduced in Section 1.2 yields the fixed point equation

u = Gf + εG cos u .
For fixed f ∈ V, define the map

Φ : V → U ⊂ V, u ↦ Φ(u) = Gf + εG cos u .

Using Theorem 1.2 we obtain for all u1, u2 ∈ V:

|Φ(u1) − Φ(u2)|∞ ≤ |ε| C0 |cos u1 − cos u2|∞ ≤ |ε| C0 |u1 − u2|∞
The second estimate follows from the mean value theorem and the bound
|sin ξ| ≤ 1 for all real ξ .

Therefore, if |ε| C0 < 1, the contraction mapping theorem implies that there exists a unique u ∈ V with

u = Φ(u) .

It then follows that u ∈ U, and u is the unique solution of the BVP (2.1).
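The fixed point iteration u_{n+1} = Gf + εG cos(u_n) suggested by this equation can be carried out numerically. The sketch below (our own illustration, with an assumed sample f ≡ 1) discretizes G by the trapezoid rule; since |ε| C0 = 0.5 · l²/8 < 1 here, the iteration contracts and the discrete fixed point equation is satisfied to machine precision:

```python
import numpy as np

l, eps, n = 1.0, 0.5, 401
x = np.linspace(0.0, l, n)
h = x[1] - x[0]
# trapezoid-rule discretization of (Gv)(x) = \int_0^l g(x, y) v(y) dy
G = np.where(x[:, None] <= x[None, :],
             x[:, None] * (l - x[None, :]),
             x[None, :] * (l - x[:, None])) / l

def apply_G(v):
    w = G * v[None, :]
    return h * (w.sum(axis=1) - 0.5 * (w[:, 0] + w[:, -1]))

f = np.ones(n)                       # assumed sample right-hand side
u = np.zeros(n)
for _ in range(60):                  # |eps| * C0 < 1: contraction
    u = apply_G(f) + eps * apply_G(np.cos(u))

residual = u - (apply_G(f) + eps * apply_G(np.cos(u)))
assert np.max(np.abs(residual)) < 1e-12
assert u[0] == 0.0 and u[-1] == 0.0  # boundary conditions hold exactly
```

Because g(0, y) = g(l, y) = 0, the boundary conditions are built into the iteration; only the differential equation needs the fixed point argument.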
2.2 The Contraction Mapping Theorem
Metric Spaces. Let X denote any nonempty set. A function

d : X × X → [0, ∞), (u, v) ↦ d(u, v)

is called a metric on X if the following holds for all u, v, w ∈ X:

1) d(u, v) = d(v, u);
2) d(u, w) ≤ d(u, v) + d(v, w);
3) d(u, v) = 0 if and only if u = v.

A pair (X, d), where X is a nonempty set and d is a metric on X, is called a metric space.

A sequence un ∈ X is called a Cauchy sequence if for all ε > 0 there exists an integer N(ε) so that
d(um, un) < ε for all m, n ≥ N (ε) .
A metric space (X, d) is called complete if every Cauchy sequence in X has a limit in X.

If (U, ‖ · ‖) is a normed space then

d(u, v) = ‖u − v‖ for u, v ∈ U

defines a metric on U. If the corresponding metric space is complete, then (U, ‖ · ‖) is called a Banach space.

Example: Let Ω ⊂ R^N denote a compact set and let C(Ω) denote the vector space of all continuous functions u : Ω → R. Define the maximum norm

|u|∞ = max_{x∈Ω} |u(x)|, u ∈ C(Ω) .

One can show that (C(Ω), | · |∞) is a Banach space.
Definition: Let (X, d) denote a metric space. A mapping Φ : X → X is called a contraction if there exists 0 ≤ q < 1 so that

d(Φ(u), Φ(v)) ≤ q d(u, v) for all u, v ∈ X .

Theorem 2.1 (Contraction Mapping Theorem) Let (X, d) denote a complete metric space and let Φ : X → X denote a contraction with contraction constant q. Then Φ has a unique fixed point u∗ ∈ X, i.e., a unique u∗ with Φ(u∗) = u∗. Moreover, for every u0 ∈ X the iterates defined by un+1 = Φ(un) converge to u∗ and satisfy the error estimate

d(u∗, um) ≤ (q^m/(1 − q)) d(u1, u0) .  (2.2)

Proof: By induction one obtains
d(un+1, un) ≤ q^n d(u1, u0) .

Therefore, if n > m:

d(un, um) ≤ d(un, un−1) + . . . + d(um+1, um)
          ≤ (q^{n−1} + q^{n−2} + . . . + q^m) d(u1, u0)
          ≤ q^m (1 + q + . . .) d(u1, u0)
          = (q^m/(1 − q)) d(u1, u0)
Since q^m → 0 as m → ∞, it follows that un is a Cauchy sequence in X, thus un → u∗ for some u∗ ∈ X.

In the equation

un+1 = Φ(un)

we let n → ∞. Using the continuity of Φ we obtain that

u∗ = Φ(u∗) ,

i.e., u∗ is a fixed point. Furthermore, in the estimate

d(un, um) ≤ (q^m/(1 − q)) d(u1, u0)

one can let n go to infinity. Then un → u∗ and the error estimate (2.2) follows.
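The iteration in the proof is directly implementable. The sketch below (our own illustration) runs the Banach iteration for the scalar contraction Φ(u) = cos u on [0, 1], using the a priori bound (2.2) as a stopping criterion; the contraction constant q = sin(1) is an assumption justified by |Φ′(u)| = |sin u| ≤ sin 1 on [0, 1]:

```python
import math

def fixed_point(phi, u0, q, tol=1e-12, max_iter=10_000):
    # Banach iteration with the a priori bound d(u*, u_m) <= q**m/(1-q) * d(u1, u0)
    u_prev, u = u0, phi(u0)
    d10 = abs(u - u_prev)
    m = 1
    while q**m / (1.0 - q) * d10 > tol and m < max_iter:
        u_prev, u = u, phi(u)
        m += 1
    return u, m

# Phi(u) = cos u maps [0, 1] into itself and |Phi'| <= q = sin(1) < 1 there
u_star, m = fixed_point(math.cos, 0.5, q=math.sin(1.0))
assert abs(u_star - math.cos(u_star)) < 1e-11   # u_star is (nearly) a fixed point
```

The stopping test uses only computable quantities (q and d(u1, u0)), which is exactly what makes the error estimate (2.2) useful in practice.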
2.3 Example: extended
Let

φ : [0, l] × R³ → R

denote a C¹ function and consider BVPε:

−u″(x) = f(x) + ε φ(x, u(x), u′(x), u″(x)), u(0) = u(l) = 0 ,  (2.3)

where f ∈ V = C[0, l] is a given function. Define the operator Q : C²[0, l] → C[0, l] by

Q(u)(x) = φ(x, u(x), u′(x), u″(x)) for 0 ≤ x ≤ l .

As above, we apply the operator G to the differential equation (2.3), and BVPε becomes the fixed point equation

u = Gf + ε G Q(u), u ∈ C²[0, l] .

If ε = 0 then BVP0 has the unique solution
u0 := Gf .
We want to show that for |ε| sufficiently small BVPε has a solution uε close to u0, and that this solution is locally unique.

On the space U we use the norm

|u|∞,2 = |u|∞ + |u′|∞ + |u″|∞ .

The estimate (1.5) becomes

|Gh|∞,2 ≤ CG |h|∞ for all h ∈ V = C[0, l] .

On C²[0, l] define the operator Φε by

Φε(u) = u0 + ε G Q(u) .

We obtain that Φε : C²[0, l] → U, and BVPε is equivalent to the fixed point equation

u = Φε(u), u ∈ C²[0, l] .

If the derivatives of the function φ are unbounded, one cannot expect the operator Φε to be globally a contraction.
We will restrict Φε to a finite ball centered at u0. In fact, let
B1 = {u ∈ U : |u − u0|∞,2 ≤ 1} .
For u ∈ B1 we have

|Φε(u) − u0|∞,2 = |ε| |G Q(u)|∞,2 ≤ |ε| CG |Q(u)|∞ .

It is not difficult to prove that there is a constant MQ so that

|Q(u)|∞ ≤ MQ for all u ∈ B1 .

In fact, one can choose
MQ = max{ |φ(x, α, β, γ)| : 0 ≤ x ≤ l, |α − u0(x)| + |β − u0′(x)| + |γ − u0″(x)| ≤ 1 } .
Then the above estimate for |Φε(u) − u0|∞,2 yields that

|Φε(u) − u0|∞,2 ≤ |ε| CG MQ for u ∈ B1 .

Therefore, if

|ε| CG MQ ≤ 1 ,

then the operator Φε maps the ball B1 into itself.
For u1, u2 ∈ B1 we have
|Φε(u1) − Φε(u2)|∞,2 ≤ |ε| CG |Q(u1) − Q(u2)|∞ ≤ |ε| CG CQ |u1 − u2|∞,2

for a suitable constant CQ. The second estimate follows from the mean value theorem; see Section 2.4.
If |ε| is small enough then we obtain by the contraction mapping theoremthat BV P ε has a unique solution uε ∈ B1. (A solution exists in B1 and isunique in B1, but other solutions outside of B1 may also exist.)
We can apply the error estimate (2.2) of the contraction mapping theorem with
u∗ = uε, m = 0, u1 = Φε(u0) = u0 + εGQ(u0) .
Since |u1 − u0|∞,2 ≤ C |ε| the bound (2.2) yields that
|uε − u0|∞,2 = O(ε) .
This must be expected since BVPε differs from BVP0 by an O(ε)–perturbation term.
2.4 The Mean Value Theorem in Integral Form
A standard form of the mean value theorem of calculus is the following: If f : [a, b] → R is a C¹ function, then there exists ξ ∈ [a, b] so that

f(b) − f(a) = f′(ξ)(b − a) .

In analysis, it is often better to express the value f′(ξ) in integral form, i.e., as an average of derivatives.
Let H : R^m → R denote a C¹ map and let P, Q ∈ R^m. The function

P + t(Q − P), 0 ≤ t ≤ 1 ,

parameterizes the straight line from P to Q. Set

h(t) = H(P + t(Q − P)), 0 ≤ t ≤ 1 .

By the chain rule,

h′(t) = ∇H(P + t(Q − P)) · (Q − P) ,

and by the fundamental theorem of calculus,

h(1) − h(0) = ∫_0^1 h′(t) dt .

Since h(0) = H(P) and h(1) = H(Q), one obtains
H(Q) − H(P) = ∫_0^1 ∇H(P + t(Q − P)) · (Q − P) dt
            = ( ∫_0^1 ∇H(P + t(Q − P)) dt ) · (Q − P)

Here the integral is defined componentwise. If H : R^m → R^s is vector valued, one obtains in the same way:
Theorem 2.2 (Mean value theorem in integral form) Let H : R^m → R^s denote a C¹ function and let P, Q denote two points in R^m. Then the identity

H(Q) − H(P) = ( ∫_0^1 DH(P + t(Q − P)) dt ) (Q − P)

holds. Here DH(x) ∈ R^{s×m} denotes the Jacobian of H at x, and the integral is defined elementwise.
Application: Let
φ : [0, l] × R³ → R

denote a C¹ function and define the operator

Q : C²[0, l] → C[0, l]

by

Q(u)(x) = φ(x, u(x), u′(x), u″(x)), 0 ≤ x ≤ l .

Note that the derivatives of φ(x, α, β, γ) are bounded if (α, β, γ) varies in a bounded set. Therefore, for all u1, u2 ∈ B1:

|Q(u1)(x) − Q(u2)(x)| = |φ(x, u1(x), u1′(x), u1″(x)) − φ(x, u2(x), u2′(x), u2″(x))|
                      ≤ CQ ( |u1(x) − u2(x)| + |u1′(x) − u2′(x)| + |u1″(x) − u2″(x)| )
                      ≤ CQ |u1 − u2|∞,2
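Theorem 2.2 is easy to verify numerically. In the sketch below (our own example map, not from the notes), the t–integral of the Jacobian is approximated by the midpoint rule and compared against H(Q) − H(P):

```python
import numpy as np

def H(p):
    # a sample C^1 map R^2 -> R^2 (our own choice)
    x, y = p
    return np.array([np.sin(x * y), x**2 + np.exp(y)])

def DH(p):
    # its Jacobian, computed by hand
    x, y = p
    return np.array([[y * np.cos(x * y), x * np.cos(x * y)],
                     [2.0 * x,           np.exp(y)        ]])

P = np.array([0.2, -0.5])
Q = np.array([1.1,  0.7])

# \int_0^1 DH(P + t(Q - P)) dt, midpoint rule with many nodes
ts = (np.arange(2000) + 0.5) / 2000
J = sum(DH(P + t * (Q - P)) for t in ts) / len(ts)

lhs = H(Q) - H(P)
rhs = J @ (Q - P)                     # the identity of Theorem 2.2
assert np.allclose(lhs, rhs, atol=1e-6)
```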
2.5 Completeness of (C(Ω), | · |∞)

Theorem 2.3 Let Ω ⊂ R^N denote a compact set and let C(Ω) denote the vector space of all continuous functions u : Ω → R. Then

|u|∞ = max_{x∈Ω} |u(x)|

defines a norm on C(Ω), and the space (C(Ω), | · |∞) is complete.
Proof: To prove completeness, let un ∈ C(Ω) denote a Cauchy sequence. First fix x ∈ Ω. Then the sequence of numbers un(x) is a Cauchy sequence in R. Since (R, | · |) is complete, there exists u(x) ∈ R so that

un(x) → u(x) as n → ∞ .

This defines a function u : Ω → R. We have to show that u ∈ C(Ω) and that

|un − u|∞ → 0 as n → ∞ .

a) Let x ∈ Ω and let ε > 0 be given. There exists N so that

|un − um|∞ ≤ ε/4 for n, m ≥ N .

Since uN ∈ C(Ω) there exists δ > 0 so that

|uN(x) − uN(y)| ≤ ε/4 for all y ∈ Ω with |x − y| ≤ δ .

Therefore, for all y ∈ Ω with |x − y| ≤ δ:

|u(x) − u(y)| ≤ |u(x) − uN(x)| + |uN(x) − uN(y)| + |uN(y) − u(y)|
             ≤ |u(x) − uN(x)| + ε/4 + |uN(y) − u(y)|

Also, for all ξ ∈ Ω and all n ≥ N:

|un(ξ) − uN(ξ)| ≤ |un − uN|∞ ≤ ε/4 .

For n → ∞ we obtain that

|u(ξ) − uN(ξ)| ≤ ε/4 .

Therefore, for all y ∈ Ω with |x − y| ≤ δ:

|u(x) − u(y)| ≤ |u(x) − uN(x)| + ε/4 + |uN(y) − u(y)| ≤ ε .

This proves that u ∈ C(Ω).
b) We have that

|un(x) − um(x)| ≤ ε/4

for all x ∈ Ω and all n, m ≥ N(ε). In this estimate we let n → ∞ to obtain that

|u(x) − um(x)| ≤ ε/4 for all x ∈ Ω if m ≥ N(ε) .

Thus,

|u − um|∞ ≤ ε if m ≥ N(ε) .

This proves that um → u w.r.t. | · |∞. The space (C(Ω), | · |∞) is complete.
2.6 Completeness of (C²[0, l], | · |∞,2)

Let un ∈ C²[0, l] denote a Cauchy sequence w.r.t. the norm

|u|∞,2 = |u|∞ + |u′|∞ + |u″|∞ .

It follows that the three sequences

un, un′, un″

are Cauchy sequences in C[0, l] w.r.t. | · |∞. Therefore, there exist limits u, v, w ∈ C[0, l] with

|un − u|∞ → 0, |un′ − v|∞ → 0, |un″ − w|∞ → 0 .  (2.4)

In the equation

un(x) = un(0) + ∫_0^x un′(s) ds for 0 ≤ x ≤ l

we let n → ∞ and obtain that

u(x) = u(0) + ∫_0^x v(s) ds for 0 ≤ x ≤ l .

This implies that u ∈ C¹[0, l] and u′ = v. In the same way, we obtain that v ∈ C¹[0, l] and v′ = w. Therefore, u ∈ C²[0, l] and u″ = w. Then (2.4) yields that

|un − u|∞,2 → 0 as n → ∞ .

We have proved that the normed space (C²[0, l], | · |∞,2) is complete.
3 Fredholm’s Alternative and Green’s Function
3.1 Linear Second–Order BVPs
Let a0, a1, a2 ∈ C[a, b] and assume that

a0(x) ≠ 0 for all x ∈ [a, b] .

Define the differential operator

Lu = a0 u″ + a1 u′ + a2 u for u ∈ C²[a, b] .

Also, define the boundary operator

Ru =
    R1u = α1 u(a) + α2 u′(a)
    R2u = β1 u(b) + β2 u′(b)

where (α1, α2) ≠ (0, 0) ≠ (β1, β2).
For given f ∈ C [a, b] consider the BVP
Lu = f, Ru = 0 . (3.1)
For simplicity, we assume all functions to be real.
Auxiliary Results on Initial–Value Problems: We use the same notation as above.
Lemma 3.1 1) Every IVP

Lu(x) = f(x), a ≤ x ≤ b, u(a) = α, u′(a) = β ,

has a unique solution u ∈ C²[a, b].

2) The homogeneous equation

Lu = 0 in a ≤ x ≤ b

has two linearly independent solutions, u1(x) and u2(x).

3) Let u1(x), u2(x) denote two solutions of Lu = 0 and let

W(x) = det
    [ u1(x)   u2(x)  ]
    [ u1′(x)  u2′(x) ]

denote their Wronskian. If u1(x) and u2(x) are linearly independent functions, then W(x) ≠ 0 for all a ≤ x ≤ b. If u1(x) and u2(x) are linearly dependent, then W(x) ≡ 0.
Proof of 3): Let u1, u2 be linearly independent and suppose that W(a) = 0. Then there exists a vector (α, β) ≠ (0, 0) with

[ u1(a)   u2(a)  ] [ α ]   [ 0 ]
[ u1′(a)  u2′(a) ] [ β ] = [ 0 ] .

The function

u(x) = α u1(x) + β u2(x)

satisfies

Lu = 0, u(a) = u′(a) = 0 .

By uniqueness for the IVP it follows that u ≡ 0, i.e., the functions u1 and u2 are linearly dependent, a contradiction.

We will prove Fredholm’s Alternative for the BVP Lu = f, Ru = 0.
Essentially, the alternative follows from the above results for IVPs together with Fredholm’s alternative for 2 × 2 linear matrix systems.

Theorem 3.1 (Fredholm’s Alternative) The BVP (3.1) has a unique solution u ∈ C²[a, b] if and only if the homogeneous problem

Lu = 0, Ru = 0  (3.2)

has only the trivial solution.
Proof: Assume that the homogeneous problem has only the trivial solution. We must show that the inhomogeneous problem is solvable.

Let usp(x) denote the solution of the initial value problem

Lu = f, u(a) = u′(a) = 0 ,

and let u1, u2 denote two linearly independent solutions of the homogeneous equation Lu = 0. Every function of the form

u(x) = usp(x) + c1 u1(x) + c2 u2(x)

satisfies Lu = f. Imposing the boundary conditions on u(x) leads to two linear equations for c1 and c2:

c1 R1u1 + c2 R1u2 = −R1usp
c1 R2u1 + c2 R2u2 = −R2usp

If the corresponding homogeneous system has a nontrivial solution (c∗1, c∗2), then the homogeneous BVP has the nontrivial solution

c∗1 u1(x) + c∗2 u2(x) ,

which contradicts our assumption. Therefore, the above system for c1 and c2 is uniquely solvable, leading to a solution of the inhomogeneous BVP.
3.2 The Green’s Function
See Stakgold I, p. 66.
Assume unique solvability of the BVP (3.1). We want to write the solution in the form

u(x) = ∫_a^b g(x, y) f(y) dy ,

where g(x, y) is the Green’s function corresponding to (L, R).

First fix a < y < b. We want to construct the function

g(x) = g(x, y)

satisfying the following conditions:

1) (Lg)(x) = 0 for a ≤ x < y and for y < x ≤ b.
2) g satisfies the boundary conditions, i.e., Rg = 0.
3) g ∈ C[a, b].
4) g′(x) satisfies a jump condition at x = y so that

(Lg)(x) = δ(x − y) .

Let u1(x) and u2(x) denote two linearly independent solutions of the homogeneous equation Lu = 0. Then g(x) has the form

g(x) =
    A u1(x) + B u2(x) for x < y
    C u1(x) + D u2(x) for y < x
Here the constants A, B, C, D will generally depend on y.
Lemma 3.2 Let u1, u2 denote two linearly independent solutions of Lu = 0 and consider the boundary operator

R1u = α1 u(a) + α2 u′(a) with (α1, α2) ≠ (0, 0) .

Then

R1u1 ≠ 0 or R1u2 ≠ 0 .

Proof: If R1u1 = R1u2 = 0 then

[ u1(a)  u1′(a) ] [ α1 ]   [ 0 ]
[ u2(a)  u2′(a) ] [ α2 ] = [ 0 ] .

However, W(a) ≠ 0, a contradiction.

Assume first that R1u1 ≠ 0 and set

uh1(x) = −(R1u2/R1u1) u1(x) + u2(x) .

Then uh1 is a nontrivial solution of Lu = 0 with

R1uh1 = 0 .
21
-
8/19/2019 Intro to method of applied math
22/187
Similarly, if R1u2 = 0, consider
uh1
(x) = u1(x)−
R1u1
R1u2u2(x) .
In both cases, we obtain a nontrivial solution uh1(x) of
Lu = 0, R1u = 0 .
Therefore, enforcing the boundary conditions on g (x) leads to
g(x) =
A1uh1(x) for x < yA2uh2(x) for y < x
Here uh1 and uh2 are two nontrivial solutions of Lu = 0 with
R1uh1 = 0, R2uh2 = 0 .The solutions uh1 and uh2 are linearly independent. Otherwise, there would bea nontrivial solution u of Lu = 0 satisfying R1u = R2u = 0.
We note that

g′(x) =
    A1 uh1′(x) for x < y
    A2 uh2′(x) for y < x

The condition g ∈ C[a, b] leads to the requirement

A1 uh1(y) − A2 uh2(y) = 0 .

Integrate the condition

(Lg)(x) = δ(x − y) for a ≤ x ≤ b

over the interval

y − ε ≤ x ≤ y + ε .

We have that

∫_{y−ε}^{y+ε} δ(x − y) dx = 1 for all ε > 0 .

Also,

∫_{y−ε}^{y+ε} g″(x) dx = g′(y + ε) − g′(y − ε) .

This leads to the jump condition

A2 uh2′(y) − A1 uh1′(y) = 1/a0(y) .
The system for the coefficients A1, A2 becomes

[ uh1(y)    −uh2(y)  ] [ A1 ]   [    0     ]
[ −uh1′(y)   uh2′(y) ] [ A2 ] = [ 1/a0(y) ]

The determinant of the matrix is

det(y) = uh1(y) uh2′(y) − uh2(y) uh1′(y) ,

which is the Wronskian of the functions uh1 and uh2. Since det(y) ≠ 0, one obtains a unique solution for A1 and A2. Solving for A1 and A2 we obtain

A1 = A1(y) = uh2(y)/(a0(y) det(y)), A2 = A2(y) = uh1(y)/(a0(y) det(y)) .

The Green’s function is

g(x, y) = (1/(a0(y) det(y))) ·
    uh1(x) uh2(y) for a ≤ x ≤ y ≤ b
    uh2(x) uh1(y) for a ≤ y ≤ x ≤ b .
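To see the formula in action, one can take a concrete operator for which everything is explicit. The sketch below is our own worked example, not from the notes: Lu = u″ + u on [0, 1] with Dirichlet conditions R1u = u(0), R2u = u(1). Then uh1(x) = sin x, uh2(x) = sin(x − 1), a0 ≡ 1, and det(y) = sin 1 is constant. For the assumed right–hand side f ≡ 1 the BVP has the exact solution u(x) = 1 − cos x + ((cos 1 − 1)/sin 1) sin x:

```python
import numpy as np

# L u = u'' + u on [0, 1] with Dirichlet BCs; the homogeneous problem has only
# u = 0 (c*sin(x) with sin(1) != 0 forces c = 0), so a Green's function exists.
a, b = 0.0, 1.0
uh1 = np.sin                     # L uh1 = 0 and uh1(a) = 0
uh2 = lambda x: np.sin(x - b)    # L uh2 = 0 and uh2(b) = 0
det = np.sin(b - a)              # Wronskian uh1*uh2' - uh2*uh1' = sin(1), constant

def g(x, y):
    # the two-branch formula from the construction, with a0 = 1
    return np.where(x <= y, uh1(x) * uh2(y), uh2(x) * uh1(y)) / det

y = np.linspace(a, b, 4001)
h = y[1] - y[0]
f = np.ones_like(y)              # assumed sample right-hand side

def u_at(x):
    vals = g(x, y) * f           # trapezoid rule for \int_a^b g(x,y) f(y) dy
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

u_exact = lambda x: 1 - np.cos(x) + (np.cos(1) - 1) / np.sin(1) * np.sin(x)
for x in (0.25, 0.5, 0.9):
    assert abs(u_at(x) - u_exact(x)) < 1e-6
```

One can check by hand that the jump of g_x across x = y is [sin y cos(y − 1) − cos y sin(y − 1)]/sin 1 = 1 = 1/a0(y), exactly the jump condition derived above.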
3.3 Estimates for the Integral Operator G
Consider the BVP Lu = f, Ru = 0 as specified above. Assume that the homogeneous problem

Lu = 0, Ru = 0

only has the trivial solution u = 0. Thus, we can construct a Green’s function

g ∈ C( [a, b] × [a, b] )

so that the integral operator G defined by

(Gf)(x) = ∫_a^b g(x, y) f(y) dy

is the solution operator for the BVP. Clearly, for all f ∈ C[a, b],

|Gf|∞ ≤ M |f|∞

with

M = max_x ∫_a^b |g(x, y)| dy .
Theorem 3.2 There exists a constant C so that
|Gf |∞,2 ≤ C |f |∞ for all f ∈ C [a, b] .
Proof: First assume that
Lu = (pu′)′ + qu
with p ∈ C 1[a, b], q ∈ C [a, b] and
|p(x)| ≥ p0 > 0 for a ≤ x ≤ b .

Also, assume that

R1u = α1 u(a) + α2 u′(a) and α2 ≠ 0 .

In the following, M1, M2, etc. denote constants independent of f. If u = Gf then R1u = 0, thus

|u′(a)| ≤ |α1/α2| |u(a)| ≤ M1 |f|∞ .

From the equation

(pu′)′ = f − qu

we obtain that

(pu′)(x) − (pu′)(a) = ∫_a^x (f − qu)(s) ds .

Therefore,

|u′|∞ ≤ M2 |f|∞ .

Also,

p u″ + p′ u′ + qu = f ,

and one obtains the bound

|u″|∞ ≤ M3 |f|∞ .

This proves the estimate of the theorem if Lu = (pu′)′ + qu and α2 ≠ 0.
If α2 = 0 but β2 ≠ 0, we can argue as above and obtain the estimate. If α2 = β2 = 0, then

u(a) = u(b) = 0 ,

thus u′(ξ) = 0 for some ξ. The estimate follows as above.

Now consider the equation

Lu = a0 u″ + a1 u′ + a2 u = f ,

where aj ∈ C and a0(x) ≠ 0 for all a ≤ x ≤ b. We multiply the equation by a function α(x), which will be determined below. We have

α a0 u″ + α a1 u′ + α a2 u = α f  (3.3)

and want to choose α so that

α a0 u″ + α a1 u′ = (pu′)′ = p u″ + p′ u′ .
We obtain the conditions

p = α a0, p′ = α a1 ,

thus

(ln p)′ = p′/p = a1/a0 .

Set

p(x) = exp( ∫_a^x (a1/a0)(s) ds )

and set

α(x) = p(x)/a0(x) .

Then the equation (3.3) takes the form

(pu′)′ + α a2 u = α f .

Our previous estimate applies, and we obtain

|u|∞,2 ≤ M3 |α|∞ |f|∞ = M4 |f|∞ .

This proves the theorem.
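The reduction to the form (pu′)′ can be checked numerically: with p = exp(∫ a1/a0) and α = p/a0 one should have α(a0 u″ + a1 u′) = (pu′)′ for any smooth u. The sketch below (with assumed sample coefficients a0 = 1 + x², a1 = sin x and test function u = cos 2x, all our own choices) compares the left side against a centered difference of pu′:

```python
import numpy as np

a0 = lambda x: 1.0 + x**2
a1 = lambda x: np.sin(x)

x = np.linspace(0.0, 1.0, 10_001)
h = x[1] - x[0]

# p(x) = exp(\int_0^x (a1/a0)(s) ds) via a cumulative trapezoid rule
ratio = a1(x) / a0(x)
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (ratio[1:] + ratio[:-1]) * h)))
p = np.exp(cumint)
alpha = p / a0(x)

u = np.cos(2.0 * x)              # any smooth test function
up = -2.0 * np.sin(2.0 * x)      # u'
upp = -4.0 * np.cos(2.0 * x)     # u''

lhs = alpha * (a0(x) * upp + a1(x) * up)
pup = p * up
rhs = (pup[2:] - pup[:-2]) / (2.0 * h)   # centered difference for (p u')'
assert np.max(np.abs(lhs[1:-1] - rhs)) < 1e-5
```

The identity holds because p′ = p (a1/a0) = α a1 by construction, so (pu′)′ = pu″ + p′u′ reproduces α a0 u″ + α a1 u′ term by term.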
3.4 The Formal Adjoint
Assume that

a0 ∈ C², a1 ∈ C¹, a2 ∈ C .

The formal adjoint of L is

L∗v = (a0 v)″ − (a1 v)′ + a2 v
    = a0 v″ + (2a0′ − a1) v′ + (a0″ − a1′ + a2) v

Note that L = L∗ if and only if a0′ = a1. If this holds then L is called formally self–adjoint and can be written as

Lu = (a0 u′)′ + a2 u .

Define the inner product for (real valued) functions u, v ∈ C[a, b] by

(u, v) = ∫_a^b u(x) v(x) dx .

For functions u, v with compact support in I = [a, b] we have that

(Lu, v) = (u, L∗v) .
We now assume that u, v ∈ C²(I). Then one obtains through integration by parts

(Lu, v) − (u, L∗v) = [ J(u, v) ]_a^b

with

J(u, v) = a0 (u′v − uv′) + (a1 − a0′) uv .
We now want to impose boundary conditions on u and v so that the boundary terms vanish. Consider

Ja = J(u, v)(a) ,

for definiteness. Suppose we have

0 = Ra u = α1 u(a) + α2 u′(a) .

Case 1: α2 = 0. Then we have u(a) = 0. If we require that v(a) = 0 then Ja = 0.

Case 2: α2 ≠ 0. The boundary condition Ra u = 0 yields that

u′(a) = −(α1/α2) u(a) .

One obtains that

Ja = u(a) ( α∗1 v(a) + α∗2 v′(a) ) =: u(a) R∗a v

with

α∗1 = −a0(a) α1/α2 + (a1 − a0′)(a), α∗2 = −a0(a) .
Thus, one can obtain a boundary operator R∗ so that the following holds: If u, v ∈ C² and Ru = R∗v = 0, then

(Lu, v) = (u, L∗v) .

One calls (L∗, R∗) the adjoint of (L, R).

Note: If L = L∗ then a0′ = a1 and one can choose R∗ = R.

Theorem 3.3 a) Assume that

Lu = 0, Ru = 0 implies u = 0 .

Then

L∗v = 0, R∗v = 0 implies v = 0 .

b) Let g(x, y) denote the Green’s function for
Lu = f1, Ru = 0

and let h(x, y) denote the Green’s function for

L∗v = f2, R∗v = 0 .

Then we have

g(x, y) = h(y, x) .
Proof: a) Let f ∈ C be arbitrary and let Lu = f, Ru = 0. Assume that

L∗v = 0, R∗v = 0 .

Then we have

(f, v) = (Lu, v) = (u, L∗v) = 0 .

Since v is orthogonal to all f ∈ C it follows that v = 0.

b) Let
u(x) = ∫_a^b g(x, y) f1(y) dy

and

v(y) = ∫_a^b h(y, x) f2(x) dx .
Since

(Lu, v) = (u, L∗v)

we have that

(f1, v) = (u, f2) .

This yields that

∫_a^b f1(y) ∫_a^b h(y, x) f2(x) dx dy = ∫_a^b f2(x) ∫_a^b g(x, y) f1(y) dy dx .

Therefore,

∫_a^b ∫_a^b ( h(y, x) − g(x, y) ) f1(y) f2(x) dx dy = 0 .

Since f1 and f2 are arbitrary continuous functions, the claim follows.
4 Introduction to Distribution Theory
The English theoretical physicist Paul Dirac (1902–1984) introduced the δ–function. The French mathematician Laurent Schwartz (1915–2002) pioneered distribution theory. He received the Fields Medal in 1950.
4.1 Motivation
Intuitively, we may think of the delta–function as a function from R to R which has the following two properties:

1) δ(x) = 0 for x ≠ 0;
2) ∫_{−∞}^∞ δ(x) dx = 1.

However, such a function does not exist. One can try to replace δ(x) by a sequence of functions sn(x) defined for x ∈ R.
Example 1:

sn(x) = (1/π) n/(1 + n²x²), n = 1, 2, . . .

Example 2:

sn(x) = (n/√π) e^{−n²x²}, n = 1, 2, . . .
For both sequences, the following holds:

lim_{n→∞} sn(x) = 0 for x ≠ 0, lim_{n→∞} sn(0) = ∞ ,

and

∫_{−∞}^∞ sn(x) dx = 1 for all n .
Example 3: Set

sn(x) = 0 for |x| > 1/n, sn(x) = 2n for 1/(2n) < |x| ≤ 1/n, sn(x) = −n for |x| ≤ 1/(2n) .

In this case,

lim_{n→∞} sn(x) = 0 for x ≠ 0, lim_{n→∞} sn(0) = −∞ ,

and

∫_{−∞}^∞ sn(x) dx = 1 for all n .
For all three examples, the following is easy to check: If φ ∈ C0∞(R) then
∫_{−∞}^∞ sn(x)φ(x) dx → φ(0) as n → ∞ .
Proof for Example 1: First note that

∫_{−∞}^∞ sn(x) dx = (1/π) ∫_{−∞}^∞ n dx/(1 + n²x²)   (substitute nx = y, n dx = dy)
= (1/π) ∫_{−∞}^∞ dy/(1 + y²)
= (1/π) arctan y |_{−∞}^∞
= 1 .
For any φ ∈ C0∞ we have

∫_{−∞}^∞ sn(x)φ(x) dx = ∫_{−∞}^∞ sn(x)( φ(x) − φ(0) ) dx + φ(0) .
Denote the integral on the right–hand side by Jn. We must show that Jn → 0 as n → ∞.

Let ε > 0 be given. There exists δ > 0 with

|φ(x) − φ(0)| ≤ ε/2 for |x| ≤ δ .

Therefore,
|Jn| ≤ ∫_{−δ}^δ · · · + ∫_{|x|≥δ} · · · ≤ ε/2 + 4|φ|∞ (1/π) ∫_δ^∞ n dx/(1 + n²x²) .

The last integral equals

J(n, δ) = ∫_{nδ}^∞ dy/(1 + y²)

and it is easy to see that J(n, δ) → 0 as n → ∞.
Proof for Example 3: Proceeding as for Example 1, one obtains that

|Jn| ≤ ε/2 + 4|φ|∞ ∫_δ^∞ |sn(x)| dx .

If n is large enough, then sn(x) = 0 for x ≥ δ and, therefore, |Jn| ≤ ε/2 for n ≥ N(ε).
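These convergence claims are easy to test by quadrature. The sketch below is a hypothetical check for Example 1: the Gaussian φ(x) = e^{−x²} stands in for a compactly supported test function (it is not in C0∞, but its rapid decay makes no numerical difference).

```python
import numpy as np

# Quadrature check that <s_n, phi> -> phi(0) = 1 for Example 1,
# s_n(x) = (1/pi) * n / (1 + n^2 x^2).  phi is a stand-in test function.
phi = lambda x: np.exp(-x**2)

def pairing(n, L=50.0, m=1_000_001):
    x = np.linspace(-L, L, m)
    s = (1.0 / np.pi) * n / (1.0 + (n * x)**2)
    return np.sum(s * phi(x)) * (x[1] - x[0])

vals = [pairing(n) for n in (1, 10, 100)]
print(vals)          # increases toward phi(0) = 1 as n grows
```

The pairings approach φ(0) = 1 from below as n increases, as the proof predicts.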
4.2 Testfunctions and Distributions
The space

D = C0∞(R, C)

consists of all functions φ : R → C which are infinitely often differentiable and which have compact support.

In general, if φ : R → C is any function, then its support is the closure of the set

{x ∈ R : φ(x) ≠ 0} .

Thus, a function φ : R → C has compact support if and only if there exists a finite interval [a, b] so that

φ(x) = 0 if x ∉ [a, b] .

The space D is called the space of test functions. An example of a test
function is

φ(x) = e^{−1/(1−x²)} for |x| < 1, φ(x) = 0 for |x| ≥ 1 .

If x0 ∈ R and ε > 0 the function

ψ(x) = φ( (x − x0)/ε )

is also a test function.

In the space D one introduces a convergence concept for sequences as follows:

Definition 3.1: Let φn, φ ∈ D. Then one says
φn → φ in D

if the following two conditions hold:

1) There exists a finite interval [a, b] so that φ(x) = φn(x) = 0 for all n and all x ∉ [a, b].
2) For all derivatives D^k = d^k/dx^k:

|D^k(φ − φn)|∞ → 0 as n → ∞, k = 0, 1, 2, . . .

This convergence concept is very restrictive: For

φn → φ in D

to hold, all derivatives of φn must converge uniformly to the corresponding derivatives of φ and, outside some finite interval, the functions φn(x) and φ(x) all agree with each other.
A functional t on the vector space D is simply a map from D to the field of scalars C:
t : D → C, φ ↦ t(φ) .

A functional t : D → C is called linear if

t(aφ1 + bφ2) = a t(φ1) + b t(φ2) for all a, b ∈ C and φ1, φ2 ∈ D .

If t : D → C is a linear functional, we also write

t(φ) = ⟨t, φ⟩ for φ ∈ D .
Definition 3.2: A linear functional t : D → C is called continuous if

φn → φ in D implies ⟨t, φn⟩ → ⟨t, φ⟩ in C .

Since the condition for convergence φn → φ in D is very strong, the condition for continuity of a linear functional t : D → C is very mild. Typically, if you write down a linear functional t : D → C, it will be continuous.

Definition 3.3: The linear space of all continuous linear functionals t : D → C is called the space of distributions (in one space dimension). It is denoted by D′. This space is also called the dual space of D. If tn, t ∈ D′ then convergence

tn → t in D′

means, by definition, that

⟨tn, φ⟩ → ⟨t, φ⟩ in C for all φ ∈ D .
4.3 Examples of Regular and Singular Distributions
If f ∈ L¹loc = L¹loc(R, C) then the assignment

φ ↦ ⟨f, φ⟩ = ∫_{−∞}^∞ f(x)φ(x) dx, φ ∈ D ,

defines a distribution, which is identified with f. One can prove: If f, g ∈ L¹loc then f and g determine the same distribution if and only if f(x) = g(x) almost everywhere.

Continuity of the linear functional

φ ↦ ∫_{−∞}^∞ f(x)φ(x) dx

on D follows from Lebesgue’s Dominated Convergence Theorem (LDCT). The distributions of the form

φ ↦ ∫_{−∞}^∞ f(x)φ(x) dx
with f ∈ L¹loc are called regular distributions. All other distributions are called singular.

For any fixed a ∈ R the assignment

φ ↦ ⟨δa, φ⟩ = φ(a)

is a singular distribution. One also writes

⟨δa, φ⟩ = φ(a) = ∫_{−∞}^∞ δ(x − a)φ(x) dx
although the integral is meaningless as a Riemann or Lebesgue integral.

Example: Let

H(x) = 1 for x > 0, H(x) = 0 for x < 0

denote the Heaviside function. Then H ∈ L¹loc defines a regular distribution.

4.4 Differentiation of Distributions

For any t ∈ D′ one defines the derivative Dt ∈ D′ by

⟨Dt, φ⟩ = −⟨t, Dφ⟩ for all φ ∈ D .

For a C¹ function f this agrees with the classical derivative, by integration by parts. For the Heaviside function one obtains

⟨DH, φ⟩ = −⟨H, φ′⟩ = −∫_0^∞ φ′(x) dx = φ(0) = ⟨δ0, φ⟩ ,

i.e., DH = δ0.
The operator D on distributions is a continuous operator in the following sense:

Lemma 4.1 Let tn, t ∈ D′. If tn → t in D′ then Dtn → Dt in D′.

Proof: For all φ ∈ D:

⟨Dtn, φ⟩ = −⟨tn, Dφ⟩ → −⟨t, Dφ⟩ = ⟨Dt, φ⟩ .
4.5 The Simplest Differential Equations for Distributions
Example 1: Find all t ∈ D′ with

Dt = 0 .

a) Let c ∈ C denote any constant. We have for all φ ∈ D:

⟨Dc, φ⟩ = −⟨c, Dφ⟩ = −c ∫_{−∞}^∞ φ′(x) dx = 0 .

This shows that the distribution t = c satisfies Dc = 0.

b) We will show that the equation Dt = 0 has no other solutions. Thus, let t ∈ D′ and assume that Dt = 0. Fix a test function φ0 with

∫_{−∞}^∞ φ0(x) dx = 1

and set

⟨t, φ0⟩ =: c .

If φ ∈ D is arbitrary, then we set

J = ∫_{−∞}^∞ φ(x) dx

and obtain the decomposition

φ(x) = J φ0(x) + ( φ(x) − J φ0(x) ) =: J φ0(x) + ψ(x)

with

∫_{−∞}^∞ ψ(x) dx = 0 .
Note that

w(x) := ∫_{−∞}^x ψ(s) ds

defines a test function w with Dw = ψ. Therefore,

⟨t, ψ⟩ = ⟨t, Dw⟩ = −⟨Dt, w⟩ = 0 .

The decomposition

φ = J φ0 + ψ

implies

⟨t, φ⟩ = J ⟨t, φ0⟩ = cJ = c ∫_{−∞}^∞ φ(x) dx = ⟨c, φ⟩ .
This proves that the distribution t equals the constant c = ⟨t, φ0⟩.

Example 2: Find all t ∈ D′ with

Dt = δ0 .

We know that DH = δ0 and thus

D(H + c) = δ0

for any constant c. Thus, any distribution t of the form t = H + c satisfies Dt = δ0. Conversely, if Dt = δ0 then D(t − H) = 0, thus t = H + c by Example 1. This shows that t = H + c is the general solution of the differential equation Dt = δ0.
Example 3: Let f ∈ C, i.e., f : R → C is a continuous function. Find all solutions t ∈ D′ of

Dt = f .

Set

F(x) = ∫_0^x f(s) ds ,

thus F ∈ C¹(R, C). For all φ ∈ D holds:
⟨DF, φ⟩ = −⟨F, φ′⟩ = −∫_{−∞}^∞ F(x)φ′(x) dx = ∫_{−∞}^∞ f(x)φ(x) dx = ⟨f, φ⟩ .

This shows that t = F satisfies Dt = f. All solutions of the equation Dt = f are given by

t = c + ∫_0^x f(s) ds .
Example 4: Find all t ∈ D′ with

D²t = 0 .

a) If c1 and c2 are constants then let t = c1x + c2. We have for all φ ∈ D:

⟨D²t, φ⟩ = ⟨t, φ″⟩ = ∫_{−∞}^∞ (c1x + c2)φ″(x) dx = 0 .

Thus, the distribution t = c1x + c2 satisfies D²t = 0.

b) Show that there are no other solutions. If D²t = 0 then D(Dt) = 0. By Example 1 we obtain that Dt = c1. Then Example 3 yields that

t = c1x + c2 .
Example 5: Find all t ∈ D′ with

D²t = δ0 .

We know from Example 2 that

Dt = c + H .

Set

H1(x) = ∫_0^x H(s) ds .

We claim that DH1 = H.
Proof:

⟨DH1, φ⟩ = −⟨H1, φ′⟩ = −∫_0^∞ xφ′(x) dx = −xφ(x)|_{x=0}^∞ + ∫_0^∞ φ(x) dx = ⟨H, φ⟩ .

It follows that all solutions t of the equation D²t = δ0 are given by

t = c1x + c2 + H1(x) .
Continuity of the Differentiation Operator in D′

Let tn, t ∈ D′ and let

tn → t in D′ .

This means that

⟨tn, φ⟩ → ⟨t, φ⟩ in C for all φ ∈ D .

One obtains that

⟨Dtn, φ⟩ = −⟨tn, φ′⟩ → −⟨t, φ′⟩ = ⟨Dt, φ⟩ in C for all φ ∈ D .

Thus,

Dtn → Dt in D′ .

The operator D is continuous in D′.
Example: Consider the sequence of functions

sn(x) = (1/n) sin(e^n x) .

Each sn defines the distribution

⟨sn, φ⟩ = (1/n) ∫_R sin(e^n x)φ(x) dx .

If the interval [a, b] contains the support of φ then

|⟨sn, φ⟩| ≤ ((b − a)/n) |φ|∞ ,

thus sn → 0 in D′. The derivative of sn(x) is

sn′(x) = (e^n/n) cos(e^n x) with |sn′|∞ → ∞ as n → ∞ .

Nevertheless, sn′ = Dsn → 0 in the sense of distributions, by continuity of D.
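A quadrature check of this example can be sketched as follows. The weight function here (a standard bump times x + 2, so the pairings are not zero by symmetry) is an illustrative choice.

```python
import numpy as np

# <s_n, phi> -> 0 for s_n(x) = sin(e^n x)/n although |s_n'|_inf = e^n/n
# blows up.  phi is a C-infinity bump times (x + 2), supported in [-1, 1].
x = np.linspace(-1.0, 1.0, 2_000_001)
h = x[1] - x[0]

ph = np.zeros_like(x)
inside = np.abs(x) < 1
ph[inside] = np.exp(-1.0 / (1.0 - x[inside]**2)) * (x[inside] + 2.0)

pairings = []
for n in (1, 3, 5, 7):
    p = np.sum(np.sin(np.exp(n) * x) * ph) * h / n
    pairings.append(p)
    print(n, p, np.exp(n) / n)   # pairing shrinks, sup of derivative grows
```

The pairings decay rapidly while e^n/n grows, illustrating that uniform and distributional convergence are very different notions.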
4.6 A Simple Equation Involving Distributions
Multiplication of a distribution by a C∞–function: Let a ∈ C∞ and let t ∈ D′. One defines the product at ∈ D′ by

⟨at, φ⟩ = ⟨t, aφ⟩ for all φ ∈ D .

Example: Determine all t ∈ D′ with

xt = 0 .

If t = cδ0, where c ∈ C is an arbitrary constant, then

⟨xt, φ⟩ = c⟨δ0, xφ⟩ = 0

for all φ ∈ D. Thus the distribution t = cδ0 solves the equation xt = 0.
We want to show that the equation xt = 0 has only the solutions t = cδ0. Assume xt = 0. Fix a function φ0 ∈ D with φ0(0) = 1 and set

⟨t, φ0⟩ = c .

For any φ ∈ D write

φ(x) = φ(0)φ0(x) + ( φ(x) − φ(0)φ0(x) ) =: φ(0)φ0(x) + ψ(x) .

Here ψ ∈ D and ψ(0) = 0. We can write

ψ(x) = ∫_0^x ψ′(t) dt   (substitute t = xs, dt = x ds)
= x ∫_0^1 ψ′(xs) ds
= x w(x)

with w ∈ D. This yields the decomposition

φ(x) = φ(0)φ0(x) + ψ(x) = φ(0)φ0(x) + x w(x) .
We have

⟨t, ψ⟩ = ⟨t, xw⟩ = ⟨xt, w⟩ = 0 .

Therefore,

⟨t, φ⟩ = ⟨t, φ(0)φ0⟩ = φ(0)c = ⟨cδ0, φ⟩ .

We have shown that

t = cδ0 .
4.7 The Formal Adjoint
A General Linear Differential Operator

Let a0, a1, . . . , an ∈ C∞(R, C) and set

Lu = a0 D^n u + a1 D^{n−1} u + . . . + a_{n−1} Du + a_n u for u ∈ D′ .

For t ∈ D′ and φ ∈ D we have:

⟨Lt, φ⟩ = Σ_{k=0}^n ⟨a_{n−k} D^k t, φ⟩
= Σ_{k=0}^n ⟨D^k t, a_{n−k} φ⟩
= Σ_{k=0}^n (−1)^k ⟨t, D^k(a_{n−k} φ)⟩
= ⟨t, L∗φ⟩

with

L∗φ = Σ_{k=0}^n (−1)^k D^k(a_{n−k} φ) .

The operator L∗ is called the formal adjoint of L.
Solutions of Lt = δ0

We now assume that a0(x) ≠ 0 for all x ∈ R. We expect that we can determine solutions t ∈ D′ of the equation Lt = δ0 in the following way:

Let g denote a piecewise smooth function of the form

g(x) = u(x) for x < 0, g(x) = v(x) for x > 0 (4.1)

where

u ∈ C∞(−∞, 0] and (Lu)(x) = 0 for x ≤ 0

and

v ∈ C∞[0, ∞) and (Lv)(x) = 0 for x ≥ 0 .

Furthermore, assume that g ∈ C^{n−2}(R), i.e.,

D^k u(0) = D^k v(0) for k = 0, . . . , n − 2 . (4.2)

Our vague considerations also suggest that we should consider
∫_{−ε}^ε a0 D^n g dx + ∫_{−ε}^ε a1 D^{n−1} g dx + . . . + ∫_{−ε}^ε a_n g dx = ∫_{−ε}^ε δ0(x) dx = 1 .
This formally leads to the requirement

D^{n−1}v(0) − D^{n−1}u(0) = 1/a0(0) . (4.3)

Theorem 4.1 Let g : R → C denote a function of the form (4.1) satisfying (4.2) and (4.3). Then g defines a regular distribution and

Lg = δ0 .
Proof: We must show that

⟨Lg, φ⟩ = φ(0) for all φ ∈ D .

Here

⟨Lg, φ⟩ = ⟨g, L∗φ⟩
= Σ_{k=0}^n (−1)^k ∫_{−∞}^∞ g(x) D^k(a_{n−k}φ)(x) dx
= Σ_{k=0}^n (−1)^k ∫_{−∞}^0 u(x) D^k(a_{n−k}φ)(x) dx + Σ_{k=0}^n (−1)^k ∫_0^∞ v(x) D^k(a_{n−k}φ)(x) dx
=: U + V .

We now use integration by parts on smooth functions and must show that U + V = φ(0).
Lemma 4.2 Let u ∈ C∞(−∞, 0] and let p ∈ D. Then we have

∫_{−∞}^0 u(x) D^k p(x) dx = (−1)^k ∫_{−∞}^0 (D^k u(x)) p(x) dx + BT_k^{left}

with

BT_k^{left} = Σ_{j=0}^{k−1} (−1)^j (D^j u D^{k−1−j} p)(0) .

Similarly, if v ∈ C∞[0, ∞) then

∫_0^∞ v(x) D^k p(x) dx = (−1)^k ∫_0^∞ (D^k v(x)) p(x) dx + BT_k^{right}

with

BT_k^{right} = −Σ_{j=0}^{k−1} (−1)^j (D^j v D^{k−1−j} p)(0) .
Proof: Through integration by parts:

∫_{−∞}^0 u D^k p dx = (u D^{k−1} p)(0) − ∫_{−∞}^0 Du D^{k−1} p dx
= (u D^{k−1} p)(0) − (Du D^{k−2} p)(0) + ∫_{−∞}^0 D²u D^{k−2} p dx
= . . .
= (−1)^k ∫_{−∞}^0 (D^k u(x)) p(x) dx + BT_k^{left} .
We continue the proof of the theorem. Using the fact that u and v solve the homogeneous equations, Lu = 0 and Lv = 0, one obtains that

⟨g, L∗φ⟩ = Σ_{k=0}^n (−1)^k ( BT_k^{left} + BT_k^{right} )
= Σ_{k=0}^n (−1)^k Σ_{j=0}^{k−1} (−1)^j [ (D^j u) D^{k−1−j}(a_{n−k}φ)(0) − (D^j v) D^{k−1−j}(a_{n−k}φ)(0) ] .

Using the equations (4.2), it follows that all terms cancel except the terms for k = n and j = n − 1. One then obtains that

⟨g, L∗φ⟩ = (−1)^n(−1)^{n−1} [ ((D^{n−1}u) a0 φ)(0) − ((D^{n−1}v) a0 φ)(0) ]
= −(a0φ)(0) (D^{n−1}u − D^{n−1}v)(0)
= φ(0) .

The last equation follows from (4.3).
4.8 Further Examples
Example: Consider the function

f(x) = ln x for x > 0, f(x) = 0 for x < 0 .

Then f ∈ L¹loc defines a regular distribution. What is Df? We have for all φ ∈ D:
⟨Df, φ⟩ = −⟨f, φ′⟩
= −∫_0^∞ (ln x)φ′(x) dx
= −lim_{ε→0+} ∫_ε^∞ (ln x)φ′(x) dx
= −lim_{ε→0+} ( −(ln ε)φ(ε) − ∫_ε^∞ φ(x)/x dx )
= lim_{ε→0+} ( (ln ε)φ(ε) + ∫_ε^∞ φ(x)/x dx ) .
Note that the existence of the limit is guaranteed since the distribution Df is well–defined. One cannot expect, however, that the two limits

lim_{ε→0+} (ln ε)φ(ε) and lim_{ε→0+} ∫_ε^∞ φ(x)/x dx

exist separately. (The two limits will only exist if φ(0) = 0.)
Example: Consider the function

f(x) = ln |x| for x ≠ 0

with pointwise derivative

f′(x) = 1/x for x ≠ 0 .

Note that

f ∈ L¹loc, but f′ ∉ L¹loc .

What is Df?
We have for all φ ∈ D:

⟨Df, φ⟩ = −⟨f, φ′⟩
= −∫_{−∞}^∞ (ln |x|)φ′(x) dx
= −lim_{ε→0+} ( ∫_{−∞}^{−ε} . . . + ∫_ε^∞ . . . ) .

Here

∫_{−∞}^{−ε} ln(−x)φ′(x) dx = ln(ε)φ(−ε) − ∫_{−∞}^{−ε} φ(x)/x dx

and

∫_ε^∞ ln(x)φ′(x) dx = −ln(ε)φ(ε) − ∫_ε^∞ φ(x)/x dx .
Noting that
lim_{ε→0+} ln(ε)( φ(ε) − φ(−ε) ) = 0

one obtains that

⟨Df, φ⟩ = lim_{ε→0+} ( ∫_{−∞}^{−ε} φ(x)/x dx + ∫_ε^∞ φ(x)/x dx ) = p.v. ∫_{−∞}^∞ φ(x)/x dx .

Here p.v. stands for Cauchy principal value. We have obtained that

⟨Df, φ⟩ = p.v. ∫_{−∞}^∞ f′(x)φ(x) dx

for the function

f(x) = ln |x| .
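The principal value can be observed numerically. In the sketch below (a Gaussian centered at 0.5 stands in for a test function; since φ(0) ≠ 0 the symmetric excision really matters) the pairing ⟨Df, φ⟩ = −∫ ln|x| φ′(x) dx is compared with p.v. ∫ φ(x)/x dx.

```python
import numpy as np

# Compare -int ln|x| phi'(x) dx with p.v. int phi(x)/x dx for a smooth,
# rapidly decaying phi with phi(0) != 0 (illustrative choice).
phi  = lambda x: np.exp(-(x - 0.5)**2)
dphi = lambda x: -2.0 * (x - 0.5) * phi(x)

eps, L, m = 1e-6, 30.0, 2_000_001
xr = np.linspace(eps, L, m)           # right half-line; mirror for x < 0
h = xr[1] - xr[0]

# <D ln|x|, phi> = -int_0^inf ln(x) (phi'(x) + phi'(-x)) dx
lhs = -np.sum(np.log(xr) * (dphi(xr) + dphi(-xr))) * h
# p.v. int phi/x dx = int_0^inf (phi(x) - phi(-x))/x dx (symmetric cutoff)
rhs = np.sum((phi(xr) - phi(-xr)) / xr) * h
print(lhs, rhs)                       # the two values agree
```

Both quadratures converge to the same number, while either one-sided piece alone would diverge logarithmically as ε → 0.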
4.9 The Green’s Function Revisited
Let a0, a1, a2 ∈ C∞(R) and assume that a0(x) ≠ 0 for all x ∈ R. Let

Lu = a0u″ + a1u′ + a2u, u ∈ C²(R) .

For simplicity, we ignore boundary conditions and consider the equation

Lu = f ,

where f ∈ C(R) is a given function with compact support.

The Relation Lg(x) = δ(x − y). When constructing the Green’s function g(x, y) we fix y ∈ R and construct a function g(x) = g(x, y) with

g ∈ C(R), g ∈ C∞(−∞, y], g ∈ C∞[y, ∞)

and

Lg(x) = δ(x − y) .

For simplicity, let y = 0. We obtain the conditions

Lg(x) = 0 for −∞ < x ≤ 0, Lg(x) = 0 for 0 ≤ x < ∞

with one–sided derivatives at x = 0. Also, we obtain the jump condition

g′(0+) − g′(0−) = 1/a0(0) .
Theorem 4.2 Assume that g(x) satisfies the above conditions. Then we have

Lg = δ0 in D′ .

Proof: For all φ ∈ D we have

⟨Lg, φ⟩ = ⟨g, L∗φ⟩ = ∫_{−∞}^∞ g(x)(L∗φ)(x) dx = ∫_{−∞}^0 . . . + ∫_0^∞ . . .
Here, through integration by parts,

∫_{−∞}^0 g(a0φ)″ dx = ( g(a0φ)′ − g′a0φ )(0−) + ∫_{−∞}^0 g″a0φ dx

and, similarly,

∫_0^∞ g(a0φ)″ dx = ( −g(a0φ)′ + g′a0φ )(0+) + ∫_0^∞ g″a0φ dx .

It follows that

∫_{−∞}^∞ g(a0φ)″ dx = (a0φ)(0)( g′(0+) − g′(0−) ) + ∫_{−∞}^0 g″a0φ dx + ∫_0^∞ g″a0φ dx .

Also,

−∫_{−∞}^∞ g(a1φ)′ dx = ∫_{−∞}^∞ g′a1φ dx .
One obtains that

⟨Lg, φ⟩ = ⟨g, L∗φ⟩ = (a0φ)(0)( g′(0+) − g′(0−) ) = φ(0) ,

since the remaining integrals combine to ∫ (Lg)φ dx, which vanishes because Lg = 0 away from x = 0. This proves that Lg = δ0 in the sense of distributions.

Solution of the Inhomogeneous Equation Lu = f. We use the same notation as above. Assume that g(x, y) has been constructed by the process described above. We assume:

1) g ∈ C(R × R).
2) Let

T_right = {(x, y) ∈ R² : x ≥ y} and T_left = {(x, y) ∈ R² : x ≤ y} .
Then

g ∈ C∞(T_right) and g ∈ C∞(T_left)

with one–sided derivatives along the diagonal where x = y.
3) For every fixed y:

Lx g(x, y) = 0 for x ≤ y, Lx g(x, y) = 0 for x ≥ y

with one–sided derivatives at x = y.
4) For every fixed y the jump condition

D1g(y+, y) − D1g(y−, y) = 1/a0(y) (4.4)

holds.
Theorem 4.3 Assume that g(x, y) satisfies the above conditions and let f ∈ C(R) have compact support. Set

u(x) = ∫_{−∞}^∞ g(x, y)f(y) dy .

Then u ∈ C² and

Lu(x) = f(x) for x ∈ R .
Proof: We have

u(x) = ∫_{−∞}^x g(x, y)f(y) dy + ∫_x^∞ g(x, y)f(y) dy .

Therefore,

u′(x) = g(x, x)f(x) + ∫_{−∞}^x D1g(x, y)f(y) dy − g(x, x)f(x) + ∫_x^∞ D1g(x, y)f(y) dy
= ∫_{−∞}^x D1g(x, y)f(y) dy + ∫_x^∞ D1g(x, y)f(y) dy

and

u″(x) = D1g(x, x−)f(x) + ∫_{−∞}^x D1²g(x, y)f(y) dy − D1g(x, x+)f(x) + ∫_x^∞ D1²g(x, y)f(y) dy .

It follows, using condition 3) to remove the integral terms, that

Lu(x) = a0(x)( D1g(x, x−) − D1g(x, x+) ) f(x) .
The jump condition (4.4) yields
D1g(x+, x) − D1g(x−, x) = 1/a0(x) .

By assumption, the function D1g(x, y) has a smooth extension from T_right to the diagonal. Therefore,

D1g(x+, x) = D1g(x, x−) .

Similarly,

D1g(x−, x) = D1g(x, x+) .

One obtains that

Lu(x) = f(x) .
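For the simplest operator L = D² (a0 = 1, a1 = a2 = 0) the construction can be carried out explicitly: g(x, y) = x − y for x ≥ y and g(x, y) = 0 for x ≤ y is continuous, satisfies Lx g = 0 off the diagonal, and D1g jumps by 1 = 1/a0 across x = y. The sketch below builds u from this g by quadrature and compares it with a closed form (f is a Gaussian, for which the antiderivatives are known; all numerical choices are illustrative).

```python
import numpy as np
from math import erf, pi, sqrt

# Green's function for L = D^2 on the line: g(x, y) = max(x - y, 0).
# u(x) = int g(x, y) f(y) dy should solve u'' = f.  For f(y) = exp(-y^2)
# one has u(x) = x (sqrt(pi)/2)(1 + erf(x)) + exp(-x^2)/2 in closed form.
f = lambda y: np.exp(-y**2)

x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]
u = np.array([np.sum(np.maximum(xi - x, 0.0) * f(x)) * h for xi in x])

u_exact = np.array([xi * (sqrt(pi) / 2) * (1 + erf(xi)) + 0.5 * np.exp(-xi**2)
                    for xi in x])
err = np.max(np.abs(u - u_exact))
print(err)    # small: the Green's representation reproduces the solution
```

The jump of D1g by exactly 1/a0 across the diagonal is what makes the second derivative of the integral produce f(x).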
5 Three Types of Linear Problems
5.1 Overview
Let U denote a normed space over C. Let D(A) denote a subspace of U and let A : D(A) → U denote a linear operator.
5.1.1 Inhomogeneous Problems
Consider the equation
Au = f
where f ∈ U is given. Does Fredholm’s Alternative hold? If G = A^{−1} exists, is the operator G : U → U bounded? In which norms? Is G : U → U compact?

Often, the following holds: If the equation Au = f comes from a BVP in a finite region, then Fredholm’s Alternative holds. If Au = 0 implies u = 0, then G = A^{−1} : U → U is compact. For BVPs on infinite regions the situation is often more difficult.
5.1.2 Spectral Problems
Consider the equation
Au = λu .
Which λ ∈ C are eigenvalues? If λ is not an eigenvalue, how does (λI − A)^{−1} behave?

Assume that Fredholm’s Alternative holds for the equation Au = f and that Au = 0 implies u = 0. Also, assume that G = A^{−1} : U → U is compact. For λ ≠ 0 the equation Au = λu is equivalent to

Gu = (1/λ) u .

Therefore, we will study spectral theory for compact operators G. If s ∈ C is not an eigenvalue of A then

sI − A = (sG − I)A and (sI − A)^{−1} = G(sG − I)^{−1} .

One therefore studies (sG − I)^{−1} for compact operators G and obtains information about the resolvent (sI − A)^{−1}.

The best results are obtained if G is compact and G = G∗.
5.1.3 Problems of Evolution; Semi–Group Theory
Consider the IVP

u′(t) = Au(t), u(0) = u0 .

Formally, the solution is

u(t) = e^{At}u0 ,

but the exponential series

e^{At} = Σ_{j=0}^∞ (1/j!)(At)^j

typically does not converge if A is unbounded. If

ũ(s) = ∫_0^∞ u(t)e^{−st} dt

denotes the Laplace transform of u(t) then, formally,

sũ(s) − u0 = Aũ(s), ũ(s) = (sI − A)^{−1}u0, u(t) = (1/2πi) ∫_Γ (sI − A)^{−1}e^{st} ds .

Note that estimates of the resolvent (sI − A)^{−1} become important. This relates to spectral theory.
5.2 BVPs as Operator Equations
Boundary value problems for ODEs and PDEs can often be cast in the following abstract setting:

Let U denote a Banach space, let D(A) denote a linear subspace of U and let

A : D(A) → U (5.1)

denote a linear operator. For a given f ∈ U consider the equation

Au = f .
Typical questions are: Does Fredholm’s Alternative hold? What are all solutions of the equation Au = 0? If there is a unique solution u = A^{−1}f for every f ∈ U, how can one represent u? Can one estimate norms of u in terms of f?

Examples of ordinary BVPs: Let I = [a, b], let a0, a1, a2 ∈ C(I) and let

Lu = a0u″ + a1u′ + a2u .
Also, consider boundary operators

Rau = α1u(a) + α2u′(a), Rbu = β1u(b) + β2u′(b)

where

(α1, α2) ≠ (0, 0) and (β1, β2) ≠ (0, 0) .

We also set

Ru = (Rau, Rbu)ᵀ .

If f ∈ C(I) is a given function, the BVP reads

Lu = f, Ru = 0 .

To put this into the abstract setting (5.1) one has to specify the Banach space U and the subspace D(A). If one wants to work with classical functions, one can set

U = C(I) and D(A) = {u ∈ C²(I) : Ru = 0} .

A norm on U is

|f|∞ = max_{x∈I} |f(x)| for f ∈ U
and (U, |·|∞) is a Banach space. The operator A then is

Au = Lu for u ∈ D(A) .

A disadvantage of this setting is that the space C(I) with maximum norm is not a Hilbert space. One can also set

U = L²(a, b)

with L²–inner product

(f, g) = ∫_a^b f(x)g(x) dx .

Then consider the Sobolev space

H²(a, b) = {u : u, Du, D²u ∈ L²(a, b)}

where u, Du, D²u are understood in the sense of distributions, i.e., as elements of the dual of the space of test functions C0∞(a, b).

One can prove that

H²(a, b) ⊂ C¹[a, b] .
Therefore, it makes sense to apply the boundary operator R to an element u ∈ H²(a, b). Define

D(A) = {u ∈ H²(a, b) : Ru = 0} .

The BVP

Lu = f, Ru = 0 ,

becomes the operator equation

Au = f for f ∈ L²(a, b), u ∈ D(A) .
5.3 Inhomogeneous Problems
Let (U, ‖·‖) denote a Banach space; let D(A) ⊂ U and let A : D(A) → U denote a linear operator. Consider the inhomogeneous problem

Au = f

where f ∈ U is given and u ∈ D(A) is unknown.

Questions: 1) Under what assumptions does Fredholm’s alternative hold? 2) Suppose that the inverse operator A^{−1} : U → D(A) exists. Can one bound the solution u of the equation Au = f in terms of f? In which norms? 3) Under what assumptions is the operator A^{−1} compact?

If A^{−1} is compact and symmetric, then one can derive expansion theorems for the solution u in terms of the eigenfunctions of A.
5.4 Spectral Problems
Let (U, ‖·‖) denote a complex Banach space; let D(A) ⊂ U and let A : D(A) → U denote a linear operator. In spectral theory one studies the equation

Au = su for s ∈ C .
5.5 Problems of Evolution: Semigroup Theory
6 Linear Operators: Riesz Index, Projectors, Resolvent
6.1 Nullspaces and Ranges
Let U denote a vector space and let L : U → U denote a linear operator. For every j = 0, 1, 2, . . . we set

N_j = ker(L^j), R_j = range(L^j) .

It is clear that

{0} = N_0 ⊂ N_1 ⊂ N_2 ⊂ . . . ⊂ U

and

U = R_0 ⊃ R_1 ⊃ R_2 ⊃ . . . ⊃ {0} .
Example 1: Let U = C³ and

L = [ 0 1 0
      0 0 1
      0 0 0 ] ,   L² = [ 0 0 1
                         0 0 0
                         0 0 0 ] ,   L³ = 0 .

We have

N_1 = { (a, 0, 0)ᵀ : a ∈ C }, N_2 = { (a, b, 0)ᵀ : a, b ∈ C }, N_3 = C³ .

Therefore,

{0} = N_0 ⊂ N_1 ⊂ N_2 ⊂ N_3 = N_4 = C³ ,

where the inclusions are strict. For the ranges we have:

R_1 = { (a, b, 0)ᵀ : a, b ∈ C }, R_2 = { (a, 0, 0)ᵀ : a ∈ C }, R_3 = {0}

and

C³ = R_0 ⊃ R_1 ⊃ R_2 ⊃ R_3 = R_4 = {0} ,

where the inclusions are strict.

We note that the smallest index j with N_j = N_{j+1} is j = 3 and the smallest index k with R_k = R_{k+1} is k = 3.
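The chains of nullspaces and ranges in Example 1 can be read off numerically from ranks of matrix powers, using rank–nullity:

```python
import numpy as np

# For the 3x3 nilpotent shift matrix, dim N_j and dim R_j follow from
# the rank of L^j: dim R_j = rank(L^j), dim N_j = 3 - rank(L^j).
L = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

dims_N, dims_R = [], []
P = np.eye(3)
for j in range(5):                      # j = 0, 1, 2, 3, 4
    r = np.linalg.matrix_rank(P)        # dim R_j
    dims_R.append(r)
    dims_N.append(3 - r)                # rank-nullity: dim N_j
    P = P @ L

print(dims_N)   # [0, 1, 2, 3, 3]: strict growth until N_3 = C^3
print(dims_R)   # [3, 2, 1, 0, 0]
```

Both chains stabilize at the same index r = 3, in accordance with Theorem 6.1 below.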
Example 2: Let U = C ∞(R) and let L denote the derivative operator,
Lu = u′ for u ∈ U .

Then the space N_j consists of all polynomials of degree ≤ j − 1 and, therefore, the inclusion

N_j ⊂ N_{j+1}

is strict for all j. For the ranges we have R_j = U for all j, thus

U = R_0 = R_1 = R_2 = . . .

There never is a strict inclusion.
Example 3: Let U = C∞(R) and let L denote the integration operator,

(Lu)(x) = ∫_0^x u(s) ds for u ∈ U .

If f = Lu then f′ = u. Therefore, if Lu = 0 then u = 0. This shows that N_1 = {0}. In fact, if f = L^j u then D^j f = u. Therefore, if L^j u = 0 then u = 0. Thus,

{0} = N_0 = N_1 = N_2 = . . .

All null–spaces are trivial.

Let f = Lu ∈ R_1. Clearly, f(0) = 0. Conversely, assume that f ∈ U and f(0) = 0. Then

f(x) = ∫_0^x f′(s) ds ,

thus f = Lf′ ∈ R_1. We have shown that

R_1 = {f : f ∈ U, f(0) = 0} .

It is easy to generalize:

R_j = {f : f ∈ U, f(0) = Df(0) = . . . = D^{j−1}f(0) = 0} .

Therefore,

U = R_0 ⊃ R_1 ⊃ R_2 ⊃ . . .

where all inclusions are strict.
Examples 2 and 3 may suggest that nothing can be said, in general, about a number j with

N_j = N_{j+1}

and a number k with
R_k = R_{k+1} .

Nevertheless, the remarkable Theorem 6.1 (see below) holds. The following result is not surprising.

Lemma 6.1 a) If N_j = N_{j+1} then N_{j+1} = N_{j+2}.
b) If R_j = R_{j+1} then R_{j+1} = R_{j+2}.

Proof: a) The inclusion N_{j+1} ⊂ N_{j+2} always holds. Let u ∈ N_{j+2}, thus L^{j+1}(Lu) = 0. It then follows that

Lu ∈ N_{j+1} = N_j ,

thus L^{j+1}u = 0, thus u ∈ N_{j+1}. We have shown that u ∈ N_{j+2} implies u ∈ N_{j+1}, i.e., N_{j+2} ⊂ N_{j+1}.

b) The inclusion R_{j+2} ⊂ R_{j+1} always holds. Let u ∈ R_{j+1}. There exists v ∈ U with u = L^{j+1}v. We have that

L^j v ∈ R_j = R_{j+1} ,

thus L^j v = L^{j+1}w for some w ∈ U. This yields that

u = L^{j+1}v = L^{j+2}w ∈ R_{j+2} .

If there exists a finite j with N_j = N_{j+1} we set

r = min{ j ≥ 0 : N_j = N_{j+1} } . (6.1)

Similarly, if there exists a finite k with R_k = R_{k+1} we set

s = min{ k ≥ 0 : R_k = R_{k+1} } . (6.2)
The next result, which only uses linearity, is remarkable.
Theorem 6.1 Assume there exist j and k with

N_j = N_{j+1} and R_k = R_{k+1}

and define the indices r and s by (6.1) and (6.2). Then we have

r = s .

Furthermore,

U = N_r ⊕ R_r

and the spaces N_r and R_r are invariant under L:

L(N_r) ⊂ N_r, L(R_r) = R_r .

The restriction of L to R_r is a bijection from R_r onto R_r.
Proof: a) First suppose that r > s. We then have

N_{r−1} ⊂ N_r = N_{r+1} and N_{r−1} ≠ N_r .

We also have

R_{r−1} = R_r since r > s .

Let u ∈ N_r be arbitrary. We have

L^{r−1}u ∈ R_{r−1} = R_r ,

thus

L^{r−1}u = L^r v

for some v ∈ U. Since u ∈ N_r we have

0 = L^r u = L^{r+1}v ,

thus

v ∈ N_{r+1} = N_r .

This yields that

0 = L^r v = L^{r−1}u ,

which implies that u ∈ N_{r−1}. Thus, we obtain that N_r ⊂ N_{r−1}. Together with the inclusion N_{r−1} ⊂ N_r we obtain that N_{r−1} = N_r. This contradiction proves that r > s is not possible.
b) Second, suppose that s > r. We then have

R_{s−1} ⊃ R_s = R_{s+1} and R_{s−1} ≠ R_s .

We also have

N_s = N_{s−1} since s > r .

Let v ∈ R_{s−1} be arbitrary, thus v = L^{s−1}u for some u ∈ U. We have

Lv = L^s u ∈ R_s = R_{s+1} ,

thus

Lv = L^{s+1}w

for some w ∈ U. From

L^s u = Lv = L^{s+1}w

we conclude that
L^s(u − Lw) = 0 ,

thus

u − Lw ∈ N_s = N_{s−1} .

Therefore,

0 = L^{s−1}(u − Lw), L^{s−1}u = L^s w ,

and

v = L^{s−1}u = L^s w ∈ R_s .

We have shown that v ∈ R_{s−1} implies v ∈ R_s, thus the inclusion

R_s ⊂ R_{s−1}

cannot be strict. This contradiction proves that s > r is not possible.

For the remaining parts of the proof we will assume r ≥ 1 since the claims are trivial for r = 0. Just note that

N_0 = {0} and R_0 = U .

c) We claim that

N_r ∩ R_r = {0} . (6.3)

To show this, let v ∈ N_r ∩ R_r, thus

L^r v = 0 and v = L^r u

for some u ∈ U. One obtains that

L^{2r}u = 0, thus u ∈ N_{2r} = N_r ,

thus v = L^r u = 0.
d) To prove the space decomposition U = N_r ⊕ R_r, we must show that for every u ∈ U there exists a unique v ∈ N_r and a unique w ∈ R_r with

u = v + w .

The uniqueness of v and w follows from (6.3), and it remains to show existence. To this end, let u ∈ U be arbitrary. We then have

L^r u ∈ R_r = R_{2r} ,

thus

L^r u = L^{2r}q

for some q ∈ U. We then have
u = (u − L^r q) + L^r q

with L^r q ∈ R_r. Also,

L^r(u − L^r q) = L^r u − L^{2r}q = 0 ,

thus u − L^r q ∈ N_r.

e) Claim: L(N_r) ⊂ N_r. To show this, let u ∈ L(N_r) be arbitrary. Then u = Lv for some v ∈ N_r. This yields that L^r u = L(L^r v) = L0 = 0, thus u ∈ N_r.

f) Claim: L(R_r) ⊂ R_r. To show this, let u ∈ L(R_r) be arbitrary. Then u = Lv for some v ∈ R_r; thus v = L^r w for some w ∈ U. This implies that u = L^r(Lw), thus u ∈ R_r.

g) Claim: R_r ⊂ L(R_r). To show this, let u ∈ R_r = R_{r+1} be arbitrary. There exists v ∈ U with

u = L^{r+1}v = L(L^r v) = Lw with w = L^r v ∈ R_r .

This shows that u ∈ L(R_r).

h) Claim: The operator L is one–to–one on R_r. To show this, let u ∈ R_r and assume that Lu = 0. Then we have u ∈ N_1 ⊂ N_r and (6.3) implies that u = 0.

This completes the proof of the theorem.

Definition: If L : U → U is a linear operator as in Theorem 6.1 then the number r ∈ {0, 1, 2, . . .} is called the Riesz index of L.

Terminology: For any u ∈ U there exists a unique v ∈ N_r and a unique w ∈ R_r with

u = v + w .

The mappings

P : U → U, u ↦ v

and

Q : U → U, u ↦ w

are linear and are projectors, i.e., P² = P and Q² = Q. The projector P is called the projector onto N_r along R_r; the projector Q is called the projector onto R_r along N_r. Clearly,

P + Q = I .
6.2 The Neumann Series
Theorem 6.2 Let U denote a Banach space and let A : U → U denote a bounded linear operator with ‖A‖ =: q < 1. Then L = I − A : U → U is one–to–one and onto, the inverse L^{−1} : U → U is bounded with ‖L^{−1}‖ ≤ 1/(1 − q), and

L^{−1} = Σ_{j=0}^∞ A^j ,

where the series (the Neumann series) converges in operator norm.

Proof: a) If u − Au = 0 then ‖u‖ = ‖Au‖ ≤ q‖u‖, thus u = 0; this shows that L is one–to–one.

b) Let f ∈ U be given. Since ‖A^j‖ ≤ q^j, the series u := Σ_{j=0}^∞ A^j f converges absolutely in the Banach space U, and Au = Σ_{j=1}^∞ A^j f, thus

u − Au = f .

c) We have shown that L = I − A : U → U is one–to–one and onto. Therefore, the inverse operator L^{−1} : U → U exists. We claim that L^{−1} is a bounded operator. For every f ∈ U we have

‖L^{−1}f‖ = ‖Σ_{j=0}^∞ A^j f‖ ≤ (1/(1 − q)) ‖f‖ .

This proves that L^{−1} is bounded and

‖L^{−1}‖ ≤ 1/(1 − q) .
d) Define the operators

S_n = Σ_{j=0}^n A^j for n = 1, 2, . . .

Then we have for every f ∈ U:

‖L^{−1}f − S_n f‖ = ‖Σ_{j=n+1}^∞ A^j f‖ ≤ (q^{n+1}/(1 − q)) ‖f‖ .

This proves that

‖L^{−1} − S_n‖ ≤ q^{n+1}/(1 − q) .

Thus, the operators S_n converge to L^{−1} in operator norm.

Remark: We have shown that

Σ_{j=0}^n A^j → (I − A)^{−1} as n → ∞ ,

where the convergence holds in operator norm. One often writes

(I − A)^{−1} = Σ_{j=0}^∞ A^j

although the meaning of convergence in the above series may not be clear. A precise result is

‖Σ_{j=0}^n A^j − (I − A)^{−1}‖ → 0 as n → ∞ .
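A sketch of the theorem for a random matrix A scaled so that ‖A‖₂ = q = 1/2 (the matrix is arbitrary; any bounded operator with ‖A‖ < 1 would do). The partial sums S_n are compared with the exact inverse, and the error bound q^{n+1}/(1 − q) from part d) is verified at every step.

```python
import numpy as np

# Truncated Neumann series S_n = sum_{j<=n} A^j versus inv(I - A),
# with the bound ||inv - S_n|| <= q^{n+1}/(1 - q) checked numerically.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A *= 0.5 / np.linalg.norm(A, 2)         # scale so that ||A||_2 = q = 0.5
q = 0.5

exact = np.linalg.inv(np.eye(5) - A)
S, term = np.zeros((5, 5)), np.eye(5)
errs = []
for n in range(30):
    S = S + term                        # S_n = I + A + ... + A^n
    term = term @ A
    errs.append(np.linalg.norm(exact - S, 2))
    assert errs[-1] <= q**(n + 1) / (1 - q) + 1e-12

print(errs[0], errs[-1])                # geometric decay of the error
```

The errors decay geometrically with ratio at most q, exactly as the operator-norm estimate predicts.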
6.3 Spectral Theory of Matrices
6.3.1 Eigenvalues and Resolvent
Let A ∈ C^{n×n}. The characteristic polynomial of A is

p_A(λ) = det(A − λI) = (λ1 − λ)^{α1} · · · (λq − λ)^{αq}

where λ1, . . . , λq are the distinct eigenvalues of A and αj is the algebraic multiplicity of λj. The set

σ(A) = {λ1, . . . , λq}

of eigenvalues of A is called the spectrum of A. We have

Σ_j αj = n .

The set

ρ(A) = C \ σ(A)

is called the resolvent set of A. The matrix valued function

R(z) = (zI − A)^{−1}, z ∈ ρ(A) ,

is called the resolvent of A. The resolvent

R : ρ(A) → C^{n×n}

is a meromorphic function of z in every entry rij(z). In fact, by determinant rules for the inverse of a matrix, each function rij(z) is a rational function of z.
Let z0 ∈ ρ(A) be fixed and let z ∈ C denote a point near z0 with

|z − z0| ‖R(z0)‖ < 1 .

Writing zI − A = (z0I − A)( I + (z − z0)R(z0) ) and applying the Neumann series, one obtains that z ∈ ρ(A) and

R(z) = Σ_{j=0}^∞ (−1)^j (z − z0)^j R(z0)^{j+1} .

In particular, ρ(A) is open and R(z) depends analytically on z ∈ ρ(A).
Remarks on Laplace Transform and Resolvent: Let A ∈ C^{n×n}, u0 ∈ C^n. The IVP

u′(t) = Au(t), u(0) = u0 , (6.4)

has the solution

u(t) = e^{At}u0

where

e^{At} = Σ_{j=0}^∞ (t^j/j!) A^j .

If A is an unbounded operator, then one cannot use the above power series to define the solution operator e^{At}. If

û(s) = ∫_0^∞ u(t)e^{−st} dt

denotes the Laplace transform of u(t) then the IVP (6.4) transforms to

sû(s) − u0 = Aû(s) ,

which yields that

û(s) = R(s)u0 with R(s) = (sI − A)^{−1} for s ∈ C \ σ(A) .

This indicates that the solution operator e^{At} is the inverse Laplace transform of the resolvent R(s). Even if A is unbounded (e.g., a differential operator) one can often use the resolvent of A to study the IVP (6.4).
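For bounded A the exponential series does converge, and it can be summed directly. A minimal sketch with the rotation generator A = [[0, 1], [−1, 0]], whose exponential is known in closed form:

```python
import numpy as np

# Partial sums of e^{At} = sum_j (t^j / j!) A^j for the rotation
# generator A; the exact exponential is the rotation matrix
# [[cos t, sin t], [-sin t, cos t]].
A = np.array([[0., 1.], [-1., 0.]])
t = 1.3

E, term = np.zeros((2, 2)), np.eye(2)
for j in range(1, 40):
    E = E + term                        # adds the (j-1)-st series term
    term = term @ (t * A) / j

exact = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
err = np.max(np.abs(E - exact))
print(err)                              # essentially machine precision
```

Solving the ODE system x′ = y, y′ = −x confirms the closed form; for unbounded A this series approach fails and the resolvent route is needed.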
6.3.2 Jordan Normal Form and Riesz Index
Let A ∈ C^{n×n}. Using Schur’s Theorem and the Blocking Lemma, we know that there exists a nonsingular matrix T ∈ C^{n×n} so that

T^{−1}AT = diag(B1, . . . , Bq)

where Bj has dimension αj × αj, Bj is upper triangular, and the diagonal elements of Bj are all λj. Thus,

Bj = λjI + Uj

where Uj is strictly upper triangular and, therefore, nilpotent. In fact, one can transform to Jordan normal form. The matrices
J1 = (0) ,   J2 = [ 0 1
                    0 0 ] ,   J3 = [ 0 1 0
                                     0 0 1
                                     0 0 0 ] , etc.

are called elementary Jordan matrices. It is easy to see that

Jj^{j−1} ≠ 0, Jj^j = 0 .

Thus, the Riesz index of Jj equals j.

A block diagonal matrix J is called a Jordan matrix (to the eigenvalue zero) if each diagonal block of J is an elementary Jordan matrix. If J ∈ C^{α×α} is a Jordan matrix and the dimension of the largest elementary Jordan block of J is r × r, then

J^{r−1} ≠ 0, J^r = 0 .

Therefore, the Riesz index of J equals r, the dimension of the largest Jordan block of J.
6.3.3 Bases for N_r and R_r

Let A ∈ C^{n×n} and let λ1 denote an eigenvalue of A. Let α denote the algebraic multiplicity of λ1 and let β = n − α. There is a transformation matrix T so that

T^{−1}(A − λ1I)T = [ U 0
                     0 B ] =: Ã . (6.5)

Here U has dimension α × α and

U^{r−1} ≠ 0, U^r = 0 .

The matrix B of dimension β × β is nonsingular. We have

Ã^r = [ 0 0
        0 B^r ] .

If we set

Ñ_j = ker(Ã^j), R̃_j = range(Ã^j)

then we have

Ñ_r = { (ξ_I, 0)ᵀ : ξ_I ∈ C^α } and R̃_r = { (0, ξ_II)ᵀ : ξ_II ∈ C^β } .

Split the transformation matrix T as
T = (T_I, T_II)

where T_I contains the first α columns of T. Let

N_r = ker((A − λ1I)^r), R_r = range((A − λ1I)^r) .

We have

x ∈ N_r ⇔ (A − λ1I)^r x = 0
⇔ Ã^r T^{−1}x = 0
⇔ T^{−1}x = (ξ_I, 0)ᵀ for some ξ_I ∈ C^α
⇔ x = T_I ξ_I for some ξ_I ∈ C^α .

Similarly,

x ∈ R_r ⇔ x = T_II ξ_II for some ξ_II ∈ C^β .

If x = Tξ then the decomposition

x = T_I ξ_I + T_II ξ_II with T_I ξ_I ∈ N_r, T_II ξ_II ∈ R_r

corresponds to the decomposition

C^n = N_r ⊕ R_r .

The first α columns of T form a basis for N_r, the last β = n − α columns of T form a basis for R_r.
6.3.4 Projectors
In this section, let T denote a transformation matrix with (6.5) where λ1 is an eigenvalue of A. Let r denote the Riesz index of L = A − λ1I and let

N_r = ker(L^r), R_r = range(L^r) .

Let x ∈ C^n and let ξ = T^{−1}x. If P denotes the projector onto N_r along R_r then we have

Px = T_I ξ_I = T (ξ_I, 0)ᵀ = T [ I 0
                                 0 0 ] ξ = T [ I 0
                                               0 0 ] T^{−1}x .

Therefore,

P = T [ I_α 0
        0   0 ] T^{−1} .
Similarly, if Q = I − P denotes the projector onto R_r along N_r, then

Q = T [ 0 0
        0 I_β ] T^{−1} .
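The projector formula can be checked on a small concrete example. The matrix below, with eigenvalue λ1 = 2 of algebraic multiplicity α = 2 (one Jordan block) and a second eigenvalue 5, and the transformation T are made up for illustration.

```python
import numpy as np

# P = T diag(I_alpha, 0) T^{-1} for an A with eigenvalue 2 (alpha = 2).
# Check P^2 = P, P Q = 0 and that range(P) lies in N_r = ker((A-2I)^r).
Atilde = np.array([[2., 1., 0.],
                   [0., 2., 0.],
                   [0., 0., 5.]])       # Jordan block for 2, then block (5)
T = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])            # any invertible T (det = 2)
A = T @ Atilde @ np.linalg.inv(T)

alpha = 2
D = np.diag([1.] * alpha + [0.])        # diag(I_alpha, 0)
P = T @ D @ np.linalg.inv(T)
Q = np.eye(3) - P

assert np.allclose(P @ P, P)            # projector
assert np.allclose(P @ Q, np.zeros((3, 3)))
L2 = np.linalg.matrix_power(A - 2 * np.eye(3), 2)   # Riesz index r = 2
assert np.allclose(L2 @ P, np.zeros((3, 3)))        # range(P) = N_r
print("projector checks passed")
```

Here r = 2 because the Jordan block for λ1 = 2 has size 2, matching the discussion of the Riesz index above.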