Hilbert space description
Post on 07-Aug-2018
8/21/2019 Hilbert spae discription
http://slidepdf.com/reader/full/hilbert-spae-discription 1/28
Lecture Notes on
Hilbert Spaces
Following the lectures of Prof. Dominic Joyce
Written by Jakub Zavodny
21.3.2010, v1.1
Part B Course in Mathematics, University of Oxford, Hilary Term 2008
Contents
1 Hilbert Spaces
2 Subspaces and Orthogonality
3 Riesz Representation
4 Complete Orthonormal Sets
5 Gram-Schmidt Process
6 Dense Subspaces of L2 Spaces
7 Legendre Polynomials
8 Classical Fourier Series
9 Cesaro Sums and Fejer's Theorem
10 Linear Operators on Hilbert Spaces
11 Spectral Theory
12 The Fourier Transform
1 Hilbert Spaces
Definition 1.1. A real inner product on a real vector space $V$ is a function $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$ such that (for all $u, v, w \in V$ and $\alpha, \beta \in \mathbb{R}$)
$$\langle \alpha u + \beta v, w\rangle = \alpha\langle u, w\rangle + \beta\langle v, w\rangle, \tag{RIP1}$$
$$\langle u, v\rangle = \langle v, u\rangle, \tag{RIP2}$$
$$\langle u, u\rangle \ge 0 \text{ with equality iff } u = 0, \tag{RIP3}$$
i.e. it is bilinear, symmetric and positive definite. The vector space $V$ together with this inner product is called a real inner product space.
Definition 1.2. A complex inner product on a complex vector space $V$ is a function $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{C}$ such that (for all $u, v, w \in V$ and $\alpha, \beta \in \mathbb{C}$)
$$\langle \alpha u + \beta v, w\rangle = \alpha\langle u, w\rangle + \beta\langle v, w\rangle, \tag{CIP1}$$
$$\langle u, v\rangle = \overline{\langle v, u\rangle}, \tag{CIP2}$$
$$\langle u, u\rangle \in \mathbb{R}_0^+ \text{ with } \langle u, u\rangle = 0 \text{ iff } u = 0, \tag{CIP3}$$
i.e. it is linear in the first variable, conjugate symmetric, and positive definite. The vector space $V$ together with this inner product is called a complex inner product space.
Definition 1.3. For any vector $v$ in a real or complex inner product space, we define its norm by $\|v\| = \langle v, v\rangle^{1/2}$. This norm satisfies the standard norm axioms.
Proposition 1.4 (Parallelogram Rule). In any inner product space $(V, \langle\cdot,\cdot\rangle)$ with the induced norm $\|\cdot\|$, the identity
$$\|u + v\|^2 + \|u - v\|^2 = 2(\|u\|^2 + \|v\|^2) \tag{1}$$
holds for any $u, v \in V$. Moreover, if $V$ is real, then
$$\langle u, v\rangle = \tfrac{1}{4}\left(\|u + v\|^2 - \|u - v\|^2\right), \tag{2}$$
and if $V$ is complex, then
$$\langle u, v\rangle = \tfrac{1}{4}\left(\|u + v\|^2 - \|u - v\|^2\right) + \tfrac{i}{4}\left(\|u + iv\|^2 - \|u - iv\|^2\right). \tag{3}$$
Proof. Write the norms as inner products and expand.
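The identities above are easy to verify numerically. The following sketch (using NumPy; the helper names `inner` and `norm` are ours, not from the notes) checks the parallelogram rule (1) and the complex polarisation identity (3) on random vectors in $\mathbb{C}^4$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard complex inner product on C^n, linear in the FIRST argument,
# matching the convention of Definition 1.2: <u, v> = sum_i u_i * conj(v_i).
def inner(u, v):
    return np.vdot(v, u)          # np.vdot conjugates its first argument

def norm(x):
    return np.sqrt(inner(x, x).real)

u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

# Parallelogram rule (1)
assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                  2 * (norm(u)**2 + norm(v)**2))

# Complex polarisation identity (3): the inner product is recovered
# from the norm alone.
recovered = (norm(u + v)**2 - norm(u - v)**2) / 4 \
    + 1j * (norm(u + 1j*v)**2 - norm(u - 1j*v)**2) / 4
assert np.isclose(recovered, inner(u, v))
```

This is exactly the observation behind Proposition 1.5: the norm alone determines the inner product.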
Proposition 1.5. Let $(V, \|\cdot\|)$ be a real or complex normed vector space such that equation (1) above holds for all $u, v \in V$. Then, if we define $\langle\cdot,\cdot\rangle : V^2 \to \mathbb{R}$ (or $\mathbb{C}$) using (2) (or (3)), the space $(V, \langle\cdot,\cdot\rangle)$ will be a real (or complex) inner product space.
Proof. In the real case, denoting the function defined by (2) as $\langle\cdot,\cdot\rangle_{\mathbb{R}}$:
$$\begin{aligned}
\langle u, w\rangle_{\mathbb{R}} + \langle v, w\rangle_{\mathbb{R}}
&= \tfrac{1}{4}\left(\|u + w\|^2 + \|v + w\|^2 - \|u - w\|^2 - \|v - w\|^2\right) \\
&= \tfrac{1}{2}\left(2\left(\left\|\tfrac{u+w}{2}\right\|^2 + \left\|\tfrac{v+w}{2}\right\|^2\right) - 2\left(\left\|\tfrac{u-w}{2}\right\|^2 + \left\|\tfrac{v-w}{2}\right\|^2\right)\right) \\
&= \tfrac{1}{2}\left(\left(\left\|\tfrac{u-v}{2}\right\|^2 + \left\|\tfrac{u+v}{2} + w\right\|^2\right) - \left(\left\|\tfrac{u-v}{2}\right\|^2 + \left\|\tfrac{u+v}{2} - w\right\|^2\right)\right) \\
&= 2\cdot\tfrac{1}{4}\left(\left\|\tfrac{u+v}{2} + w\right\|^2 - \left\|\tfrac{u+v}{2} - w\right\|^2\right) = 2\left\langle \tfrac{u+v}{2}, w\right\rangle_{\mathbb{R}},
\end{aligned}$$
using the property (1) to get the third line. Putting $v = 0$ we get $\langle u, w\rangle_{\mathbb{R}} = 2\langle \tfrac{u}{2}, w\rangle_{\mathbb{R}}$, and using this back in the above result we get that $\langle u, w\rangle_{\mathbb{R}} + \langle v, w\rangle_{\mathbb{R}} = \langle u + v, w\rangle_{\mathbb{R}}$ for any $u, v, w \in V$. By induction on $n$ we can prove that $\langle nu, v\rangle_{\mathbb{R}} = n\langle u, v\rangle_{\mathbb{R}}$ for all $n \in \mathbb{N}$, and then extend the result to all $q \in \mathbb{Q}$. By the continuity of the norm, we get that in fact $\langle \alpha u, v\rangle_{\mathbb{R}} = \alpha\langle u, v\rangle_{\mathbb{R}}$ for all $\alpha \in \mathbb{R}$. Together with $\langle u, w\rangle_{\mathbb{R}} + \langle v, w\rangle_{\mathbb{R}} = \langle u + v, w\rangle_{\mathbb{R}}$ this yields the linearity of $\langle\cdot,\cdot\rangle$. Symmetry and positivity follow easily from the properties of a norm.
The complex case can be handled by showing that when $\langle\cdot,\cdot\rangle_{\mathbb{R}}$ is a real inner product on a complex vector space such that $\langle u, v\rangle_{\mathbb{R}} = \langle iu, iv\rangle_{\mathbb{R}}$ for any $u, v \in V$, then $\langle u, v\rangle = \langle u, v\rangle_{\mathbb{R}} + i\langle u, iv\rangle_{\mathbb{R}}$ is a complex inner product.
Note. This shows that an inner product space can be viewed as a special case of a normed vector space.
Definition 1.6. An inner product space $(V, \langle\cdot,\cdot\rangle)$ is called a Hilbert space if the norm $\|\cdot\|$ induced by the inner product is complete, that is, if any Cauchy sequence in $V$ converges to a point in $V$. Equivalently, $(V, \langle\cdot,\cdot\rangle)$ is a Hilbert space if $(V, \|\cdot\|)$ is a Banach space.
Example. $\mathbb{C}^n$ with the standard inner product is a Hilbert space.
Example. The space $\ell^2 = \{(x_n)_{n\in\mathbb{N}} : \sum_n |x_n|^2 < \infty\}$ is an inner product space with the inner product $\langle x, y\rangle = \sum_{n=1}^{\infty} x_n \overline{y_n}$. It is also a Hilbert space. Let $(x^{(k)})$ be a Cauchy sequence in $\ell^2$, so that $\|x^{(j)} - x^{(k)}\|^2 = \sum_n |x^{(j)}_n - x^{(k)}_n|^2 \to 0$ as $j, k \to \infty$. Then also $|x^{(j)}_n - x^{(k)}_n| \to 0$ for each individual $n$, and by the completeness of $\mathbb{C}$, $x^{(k)}_n \to x_n$ for some $x_n$.
The Cauchy sequence $(x^{(k)})$ must be bounded (in norm) by some $M$, so that $\sum_{n=1}^{N} |x^{(k)}_n|^2 \le M^2$ for all $N$ and $k$. But then also $\sum_{n=1}^{N} |x_n|^2 \le M^2$ for all $N$, and hence $\|x\|^2 = \sum_{n=1}^{\infty} |x_n|^2 \le M^2$. Therefore $x \in \ell^2$.
Finally, for any $\varepsilon > 0$ there exists $K$ such that $\sum_n |x^{(j)}_n - x^{(k)}_n|^2 < \varepsilon$ for all $j, k > K$. Letting $k \to \infty$ we also get that $\sum_n |x^{(j)}_n - x_n|^2 \le \varepsilon$ for all $j > K$, which means that $x^{(j)} \to x$ as $j \to \infty$. This shows that any Cauchy sequence in $\ell^2$ has a limit, and therefore $\ell^2$ is a Hilbert space.
Example. The space $L^2(\mathbb{R})$ (or $L^2[a, b]$) of square-integrable functions on $\mathbb{R}$ (or on an interval) is a Hilbert space. For more details see the Banach Spaces course or the Part A Integration course.
2 Subspaces and Orthogonality
Note. If $(V, \langle\cdot,\cdot\rangle)$ is an inner product space and $W$ is a vector subspace of $V$, then $(W, \langle\cdot,\cdot\rangle|_{W\times W})$ is an inner product space.
Lemma 2.1. In a Hilbert space V , a subspace W ≤ V is a Hilbert space iff it is closed in V .
Definition 2.2. Let $V$ be an inner product space, $u, v \in V$ and $U, W \le V$. We say that $u$ and $v$ are orthogonal and write $u \perp v$ if $\langle u, v\rangle = 0$. We say that $U \perp v$ if $u \perp v$ for all $u \in U$. We say that $U \perp W$ if $u \perp W$ for all $u \in U$.
Theorem 2.3. Let $V$ be an inner product space, $W$ be a subspace of $V$, $v \in V$ and $w_0 \in W$. Then $\|v - w_0\| = \inf_{w\in W} \|v - w\|$ if and only if $(v - w_0) \perp W$.
Proof. Suppose that $w_0$ is such that $\|v - w_0\| = \inf_{w\in W} \|v - w\|$. To show that $\langle w, v - w_0\rangle = 0$ for any $w \in W$, take a scalar $t$ and note that since $w_0 - tw \in W$, $\|v - w_0 + tw\|^2 \ge \|v - w_0\|^2$ for all $t \in \mathbb{R}$ (or $\mathbb{C}$). Therefore
$$\langle v - w_0, tw\rangle + \langle tw, v - w_0\rangle + |t|^2\|w\|^2 = \|v - w_0 + tw\|^2 - \|v - w_0\|^2 \ge 0$$
for all $t$. If $V$ is a real inner product space, this immediately shows that $\langle v - w_0, w\rangle = \langle w, v - w_0\rangle = 0$. In the complex case we can rewrite the above inequality as $2\operatorname{Re}(t\langle w, v - w_0\rangle) + |t|^2\|w\|^2 \ge 0$. Taking first $t \in \mathbb{R}$ and then $t \in i\mathbb{R}$ we get $\operatorname{Re}\langle w, v - w_0\rangle = 0$ and then $\operatorname{Im}\langle w, v - w_0\rangle = 0$. Therefore, in any case, $\langle w, v - w_0\rangle = 0$ for all $w \in W$.
Conversely, suppose that $w_0 \in W$ is such that $(v - w_0) \perp W$. Then for any $w \in W$, $\langle v - w_0, w_0 - w\rangle = 0$ and hence
$$\begin{aligned}
\|v - w\|^2 &= \langle v - w, v - w\rangle = \langle v - w_0 + w_0 - w,\; v - w_0 + w_0 - w\rangle \\
&= \|v - w_0\|^2 + \langle v - w_0, w_0 - w\rangle + \langle w_0 - w, v - w_0\rangle + \|w_0 - w\|^2 \\
&= \|v - w_0\|^2 + \|w_0 - w\|^2 \ge \|v - w_0\|^2.
\end{aligned}$$
Then also $\|v - w\| \ge \|v - w_0\|$, which together with $w_0 \in W$ implies that $\|v - w_0\| = \inf_{w\in W} \|v - w\|$.
Theorem 2.4. Let $V$ be a Hilbert space and let $W$ be a closed subspace of $V$. Then for any $v \in V$ there exists a unique $w_0 \in W$ such that $\|v - w_0\| = \inf_{w\in W} \|v - w\|$.
Proof. Denote $\inf_{w\in W} \|v - w\| = d$. By the definition of the infimum, there exists a sequence $(w_n)$ in $W$ such that $\|v - w_n\| \to d$ as $n \to \infty$. For any $n$ and $m$, we can use the parallelogram rule to get that $\|w_n - w_m\|^2 + \|w_n + w_m - 2v\|^2 = 2\|w_n - v\|^2 + 2\|w_m - v\|^2$, and hence
$$\|w_n - w_m\|^2 = 2\|w_n - v\|^2 + 2\|w_m - v\|^2 - 4\left\|\tfrac{w_n + w_m}{2} - v\right\|^2.$$
Since $\tfrac{w_n + w_m}{2} \in W$, $\left\|\tfrac{w_n + w_m}{2} - v\right\| \ge d$ and hence
$$0 \le \|w_n - w_m\|^2 \le 2\|w_n - v\|^2 + 2\|w_m - v\|^2 - 4d^2,$$
which converges to zero as $n, m \to \infty$ by the definition of $(w_n)$. The sequence $(w_n)$ is therefore Cauchy, and by the completeness of $V$, it converges to some $w_0 \in V$. Since $W$ is closed, $w_0 \in W$. By the continuity of the norm, $\|v - w_0\| = \lim_{n\to\infty} \|v - w_n\| = d$.
If $\|v - w_0'\| = d$ for some other $w_0' \in W$, we can use the parallelogram rule again to show that
$$\left\|\tfrac{w_0 + w_0'}{2} - v\right\|^2 = \tfrac{1}{2}\|w_0 - v\|^2 + \tfrac{1}{2}\|w_0' - v\|^2 - \left\|\tfrac{w_0 - w_0'}{2}\right\|^2,$$
which is less than $d^2$ (a contradiction) unless $w_0 = w_0'$.
Corollary 2.5. For any Hilbert space V , a closed subspace W ≤ V , and an element v ∈ V , there exists a unique w0 ∈ W such that (v − w0) ⊥ W .
Definition 2.6. Let $V$ be an inner product space. The orthogonal complement $W^\perp$ of a subspace $W \le V$ is defined as the set $\{v \in V : v \perp W\}$.
Lemma 2.7. For any subspace W ≤ V , W ⊥ is a closed subspace of V .
Proof. By the linearity of the inner product $W^\perp$ is a subspace of $V$, and by the continuity of the inner product it is closed in $V$.
Theorem 2.8 (Projection Theorem). Let $V$ be a Hilbert space and $W \le V$ be a closed subspace. Then
$$V = W \oplus W^\perp.$$
Proof. By Corollary (2.5), for any $v \in V$ we can find $w \in W$ such that $(v - w) \in W^\perp$, so that $v = w + (v - w) \in W + W^\perp$. Moreover, if $v \in W \cap W^\perp$, then $\langle v, v\rangle = 0$ and hence $v = 0$. Therefore $V = W \oplus W^\perp$.
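In finite dimensions the projection theorem is ordinary least squares. A small NumPy sketch of ours (not from the notes), with $W$ the column span of a random matrix `A`:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))      # columns of A span a closed subspace W of R^6
v = rng.normal(size=6)

# Least squares finds the w in W minimising ||v - w||, i.e. the projection.
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
w = A @ coef

assert np.allclose(A.T @ (v - w), 0)    # v - w is orthogonal to W
# the decomposition v = w + (v - w) realises V = W ⊕ W⊥;
# moving within W away from w only increases the distance to v
d = np.linalg.norm(v - w)
assert all(np.linalg.norm(v - (w + 0.1 * A[:, j])) > d for j in range(3))
```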
Proposition 2.9. Let $V$ be a Hilbert space and $W \le V$ be any subspace. Then $(W^\perp)^\perp = \overline{W}$.
Proof. If $v \in W^\perp$, then $\langle v, w\rangle = 0$ for all $w \in W$, so by the continuity of the inner product also $\langle v, w\rangle = 0$ for all $w \in \overline{W}$. Therefore $v \in \overline{W}^\perp$. Since $W \subseteq \overline{W}$, trivially $\overline{W}^\perp \subseteq W^\perp$, so with the above we get that $W^\perp = \overline{W}^\perp$.
However, since $\overline{W}$ is a closed subspace of $V$, by the projection theorem we have $V = \overline{W} \oplus \overline{W}^\perp$. Now for any $v \in V$, if $v \in \overline{W}$ then clearly $v \perp \overline{W}^\perp$. If $v \notin \overline{W}$, then $v = w + w'$ for some $w \in \overline{W}$ and some nonzero $w' \in \overline{W}^\perp$, and hence $v \not\perp \overline{W}^\perp$. Therefore $v \in \overline{W}$ iff $v \perp \overline{W}^\perp$, and hence $\overline{W} = (\overline{W}^\perp)^\perp$. Since $W^\perp = \overline{W}^\perp$, we also have $\overline{W} = (W^\perp)^\perp$.
3 Riesz Representation
In this section, let $V$ be a Hilbert space and $V'$ its dual space (of bounded linear functionals on $V$). For any $v \in V$, define the map $f_v : V \to \mathbb{C}$ (or $\mathbb{R}$) by $f_v(u) = \langle u, v\rangle$.
Proposition 3.1. For any $v \in V$, $f_v \in V'$ and $\|f_v\|_{V'} = \|v\|_V$. Moreover, the mapping $\iota_V : V \to V'$ given by $v \mapsto f_v$ is an antilinear isometric injection.
Proof. The mapping $f_v$ is clearly linear, and $|f_v(u)| = |\langle u, v\rangle| \le \|u\|\,\|v\|$ by the Cauchy-Schwarz inequality, so it is also bounded with $\|f_v\| \le \|v\|$. Since $|f_v(v)| = \|v\|^2$, we actually have $\|f_v\| = \|v\|$.
The mapping $\iota_V$ is clearly antilinear (conjugate linear), and by the above it is isometric. If $f_v = f_w$, then $\langle u, v\rangle = \langle u, w\rangle$ and hence $\langle u, v - w\rangle = 0$ for all $u \in V$; in particular, $\|v - w\|^2 = \langle v - w, v - w\rangle = 0$. Therefore $v = w$, which shows that $\iota_V$ is injective.
Theorem 3.2 (Riesz Representation Theorem). For any $f \in V'$ there exists a unique $v \in V$ such that $f = f_v$. (In other words, the mapping $\iota_V$ defined above is surjective.)
Proof. Take any $f \in V'$ and let $W = \ker f$. Since $f$ is continuous, $W$ is closed, and hence $V = W \oplus W^\perp$ by the projection theorem. If $f = 0$ then clearly $f = f_0$, so suppose that $f \neq 0$. Then there exists some nonzero $v' \in W^\perp$, and since then $v' \notin W$, we get $f(v') \neq 0$.
Now for any $u \in V$,
$$f\left(u - \tfrac{f(u)}{f(v')}v'\right) = f(u) - \tfrac{f(u)}{f(v')}f(v') = 0,$$
so $u - \tfrac{f(u)}{f(v')}v' \in \ker f = W$. Therefore $v' \perp u - \tfrac{f(u)}{f(v')}v'$, which means that
$$\left\langle u - \tfrac{f(u)}{f(v')}v',\; v'\right\rangle = 0, \qquad
\langle u, v'\rangle = \tfrac{f(u)}{f(v')}\|v'\|^2, \qquad
f(u) = \tfrac{f(v')}{\|v'\|^2}\langle u, v'\rangle = \left\langle u,\; \tfrac{\overline{f(v')}}{\|v'\|^2}v'\right\rangle.$$
The vector $v = \tfrac{\overline{f(v')}}{\|v'\|^2}v'$ therefore satisfies $f(u) = f_v(u)$ for all $u \in V$.
Corollary 3.3. The mapping ιV is an antilinear isometric bijection between the spaces V and V ′.
Note. With the inner product $\langle f_u, f_v\rangle = \overline{\langle u, v\rangle} = \langle v, u\rangle$, the space $V'$ is also a Hilbert space.
Note. Just as $\iota_V : V \to V'$, the map $\iota_{V'} : V' \to V''$ is also an antilinear isometric bijection. Therefore the map $\iota_{V'} \circ \iota_V : V \to V''$ is a linear isometric bijection. This means that any Hilbert space is reflexive.
4 Complete Orthonormal Sets
Definition 4.1. Let $V$ be an inner product space. A set $S$ of nonzero elements of $V$ is called orthogonal if its elements are pairwise orthogonal. The set $S$ is called orthonormal if in addition $\|s\| = 1$ for all $s \in S$. A sequence $(s_i)_{i\in\mathbb{N}}$ is called orthogonal (orthonormal) if all $s_i$ are distinct and the set $\{s_i\}_{i\in\mathbb{N}}$ is orthogonal (orthonormal).
Proposition 4.2. If $\{e_1, \ldots, e_n\}$ is an orthonormal set and $x_1, \ldots, x_n$ are scalars, then $\left\|\sum_{i=1}^n x_i e_i\right\|^2 = \sum_{i=1}^n |x_i|^2$.
Proof. By induction on $n$. The case $n = 1$ is trivial, and since for any $k$, $x_{k+1}e_{k+1}$ is orthogonal to $\sum_{i=1}^k x_i e_i$, we get that
$$\left\|\sum_{i=1}^{k+1} x_i e_i\right\|^2 = \left\langle \sum_{i=1}^{k} x_i e_i + x_{k+1}e_{k+1},\; \sum_{i=1}^{k} x_i e_i + x_{k+1}e_{k+1}\right\rangle = \left\|\sum_{i=1}^{k} x_i e_i\right\|^2 + \|x_{k+1}e_{k+1}\|^2 = \sum_{i=1}^{k} |x_i|^2 + |x_{k+1}|^2.$$
Corollary 4.3. Any orthonormal set is linearly independent.
Proof. If $\sum x_i e_i = 0$ for some finite linear combination of $e_i$s from an orthonormal set $E$, then $\sum |x_i|^2 = 0$ and hence $x_i = 0$ for all $i$.
Theorem 4.4 (Bessel's Theorem). If $\{e_1, \ldots, e_n\}$ is an orthonormal set in an inner product space $V$, then for all $v \in V$,
$$\left\|v - \sum_{i=1}^n \langle v, e_i\rangle e_i\right\|^2 = \|v\|^2 - \sum_{i=1}^n |\langle v, e_i\rangle|^2.$$
Proof. Use $\|u\|^2 = \langle u, u\rangle$ and expand.
Corollary 4.5 (Bessel's Inequality). If $(e_i)$ is an orthonormal sequence in an inner product space $V$, then for all $v \in V$,
$$\sum_{i=1}^{\infty} |\langle v, e_i\rangle|^2 \le \|v\|^2.$$
Proof. From Bessel's theorem we get that $\sum_{i=1}^n |\langle v, e_i\rangle|^2 \le \|v\|^2$ for any $n \in \mathbb{N}$. Therefore the series $\sum_{i=1}^{\infty} |\langle v, e_i\rangle|^2$ is convergent and the sum is less than or equal to $\|v\|^2$.
Corollary 4.6. If $(e_i)$ is an orthonormal sequence in a Hilbert space $V$, then for all $v \in V$ the series $\sum_{i=1}^{\infty} \langle v, e_i\rangle e_i$ converges.
Proof. Denoting the partial sum $s_n = \sum_{i=1}^n \langle v, e_i\rangle e_i$, note that $\|s_n - s_m\|^2 = \sum_{i=m+1}^n |\langle v, e_i\rangle|^2$ goes to zero as $n, m \to \infty$ since the series $\sum_{i=1}^{\infty} |\langle v, e_i\rangle|^2$ is convergent. Therefore the sequence of partial sums is Cauchy and hence convergent in the Hilbert space $V$.
Corollary 4.7 (Bessel's Theorem, series version). If $(e_i)$ is an orthonormal sequence in a Hilbert space $V$, then for all $v \in V$,
$$\left\|v - \sum_{i=1}^{\infty} \langle v, e_i\rangle e_i\right\|^2 = \|v\|^2 - \sum_{i=1}^{\infty} |\langle v, e_i\rangle|^2.$$
Proof. Take the limit $n \to \infty$ in Bessel's theorem, noting that both sides converge by the above corollaries.
Definition 4.8. An orthonormal set $S$ in a Hilbert space $V$ is said to be complete if $\overline{\operatorname{lin}}\, S = V$. A complete orthonormal set is also sometimes called an orthonormal basis of $V$, but note that it need not be a basis in the usual sense.
Proposition 4.9. An orthonormal set $S$ in a Hilbert space $V$ is complete iff it is a maximal orthonormal set, i.e. iff $\langle v, s\rangle = 0$ for all $s \in S$ implies $v = 0$.
Proof. Using Proposition (2.9), $S$ is complete iff $V = \overline{\operatorname{lin}}\, S = ((\operatorname{lin} S)^\perp)^\perp$. This is exactly when $(\operatorname{lin} S)^\perp = 0$, which is when $v \perp \operatorname{lin} S$ implies $v = 0$. However, $v \perp \operatorname{lin} S$ is equivalent to $v \perp S$, therefore $S$ is complete iff $v \perp S$ implies $v = 0$.
Theorem 4.10. If $(e_i)$ is a complete orthonormal sequence in a Hilbert space $V$, then for all $v \in V$, $v = \sum_{i=1}^{\infty} \langle v, e_i\rangle e_i$.
Proof. By Corollary (4.6), the series $\sum_{i=1}^{\infty} \langle v, e_i\rangle e_i$ is convergent in $V$, and by the continuity of the inner product,
$$\left\langle \sum_{i=1}^{\infty} \langle v, e_i\rangle e_i,\; e_j\right\rangle = \lim_{n\to\infty}\left\langle \sum_{i=1}^{n} \langle v, e_i\rangle e_i,\; e_j\right\rangle = \langle v, e_j\rangle.$$
Therefore $\left\langle v - \sum_{i=1}^{\infty} \langle v, e_i\rangle e_i,\; e_j\right\rangle = 0$ for all $e_j$, and by the completeness of the sequence $(e_j)$, we must have $v - \sum_{i=1}^{\infty} \langle v, e_i\rangle e_i = 0$.
Corollary 4.11 (Parseval's Formula). If $(e_i)$ is a complete orthonormal sequence in a Hilbert space $V$, then for all $v \in V$,
$$\|v\|^2 = \sum_{i=1}^{\infty} |\langle v, e_i\rangle|^2.$$
Corollary 4.12 (Generalised Parseval's Formula). If $(e_i)$ is a complete orthonormal sequence in a Hilbert space $V$, then for all $u, v \in V$,
$$\langle u, v\rangle = \sum_{i=1}^{\infty} \langle u, e_i\rangle\langle e_i, v\rangle.$$
Proof. For any $n$, we have
$$\left\langle \sum_{i=1}^{n} \langle u, e_i\rangle e_i,\; \sum_{j=1}^{n} \langle v, e_j\rangle e_j\right\rangle = \sum_{i=1}^{n} \langle u, e_i\rangle\langle e_i, v\rangle.$$
Taking the limit $n \to \infty$ and using Theorem (4.10) and the continuity of the inner product, we get the desired result.
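Bessel's inequality, Parseval's formula, and the expansion of Theorem 4.10 can all be checked against any finite orthonormal basis, e.g. one produced by a QR factorisation. A NumPy sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
Q, _ = np.linalg.qr(M)
e = Q.T                           # rows e[0], ..., e[5]: an orthonormal basis

v = rng.normal(size=6) + 1j * rng.normal(size=6)
coeffs = np.array([np.vdot(e[i], v) for i in range(6)])   # <v, e_i>

# Bessel's inequality for the incomplete set {e_0, ..., e_3}
assert np.sum(np.abs(coeffs[:4])**2) <= np.linalg.norm(v)**2 + 1e-12
# Parseval's formula and the expansion v = sum <v, e_i> e_i for the full basis
assert np.isclose(np.linalg.norm(v)**2, np.sum(np.abs(coeffs)**2))
assert np.allclose(v, coeffs @ e)
```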
Theorem 4.13. Let $V$ be a Hilbert space admitting a complete orthonormal sequence $(e_i)$, and define $\varphi : \ell^2 \to V$ by $\varphi(x) = \sum_{i=1}^{\infty} x_i e_i$. Then $\varphi$ is an isometric isomorphism.
Proof. Let $x = (x_i)_{i\in\mathbb{N}}$ be in $\ell^2$. Denoting $s_n = \sum_{i=1}^n x_i e_i$, we get that $\|s_n - s_m\|^2 = \sum_{i=m+1}^n |x_i|^2 \to 0$ as $m, n \to \infty$, so that the sequence of partial sums of $\sum_{i=1}^{\infty} x_i e_i$ is Cauchy. By the completeness of $V$ it is also convergent and hence $\varphi$ is well-defined. Moreover, the generalised Parseval's formula yields
$$\langle \varphi(x), \varphi(y)\rangle_V = \sum_{i=1}^{\infty}\left\langle \sum_{j=1}^{\infty} x_j e_j,\; e_i\right\rangle\left\langle e_i,\; \sum_{k=1}^{\infty} y_k e_k\right\rangle = \sum_{i=1}^{\infty} x_i\overline{y_i} = \langle x, y\rangle_{\ell^2},$$
so $\varphi$ is an isometry, and therefore also an injection. By Theorem (4.10) and Bessel's inequality it is also surjective, and linearity is also easily established. Therefore $\varphi$ is an isomorphism between $V$ and $\ell^2$.
5 Gram-Schmidt Process
Lemma 5.1. If $(f_i)$ is a sequence in a vector space $V$, then it has a (possibly finite) linearly independent subsequence $(f_{i_k})$ such that $\operatorname{lin}\{f_i\} = \operatorname{lin}\{f_{i_k}\}$.
Proof. Defining $i_k = \min\{i : \dim \operatorname{lin}\{f_1, \ldots, f_i\} = k\}$ (leaving $i_k$ undefined if $\dim \operatorname{lin}\{f_i\} < k$), we can easily prove that $\operatorname{lin}\{f_1, \ldots, f_{i_k}\} = \operatorname{lin}\{f_{i_1}, \ldots, f_{i_k}\}$, and hence also $\operatorname{lin}\{f_{i_k}\} = \operatorname{lin}\{f_i\}$.
Theorem 5.2 (Gram-Schmidt Orthogonalisation Process). If $(f_i)$ is a linearly independent sequence in an inner product space $V$, then there exists an orthonormal sequence $(e_i)$ in $V$ such that $\operatorname{lin}\{e_1, \ldots, e_k\} = \operatorname{lin}\{f_1, \ldots, f_k\}$ for all $k \in \mathbb{N}$, and hence also $\operatorname{lin}\{e_i : i \in \mathbb{N}\} = \operatorname{lin}\{f_i : i \in \mathbb{N}\}$.
Proof. Define recursively $g_k = f_k - \sum_{i=1}^{k-1} \langle f_k, e_i\rangle e_i$ and $e_k = g_k/\|g_k\|$. By induction we can prove that for each $k \in \mathbb{N}$, $\{e_1, \ldots, e_k\}$ is orthonormal and $\operatorname{lin}\{e_1, \ldots, e_k\} = \operatorname{lin}\{f_1, \ldots, f_k\}$.
Firstly, $g_k \neq 0$ since $f_k \notin \operatorname{lin}\{e_1, \ldots, e_{k-1}\} = \operatorname{lin}\{f_1, \ldots, f_{k-1}\}$, so $e_k$ is at least well-defined. Also, $\langle e_k, e_i\rangle = \langle g_k, e_i\rangle/\|g_k\| = 0$ for $i < k$ and $\langle e_k, e_k\rangle = \langle g_k, g_k\rangle/\|g_k\|^2 = 1$, so assuming the induction hypothesis that $\{e_1, \ldots, e_{k-1}\}$ is orthonormal, so is $\{e_1, \ldots, e_k\}$. Finally, directly from the definition of $e_k$ we can observe that $f_k \in \operatorname{lin}\{e_1, \ldots, e_k\}$ and $e_k \in \operatorname{lin}\{e_1, \ldots, e_{k-1}, f_k\} = \operatorname{lin}\{f_1, \ldots, f_k\}$, therefore $\operatorname{lin}\{e_1, \ldots, e_k\} = \operatorname{lin}\{f_1, \ldots, f_k\}$. This completes the induction proof.
Since $\operatorname{lin}\{f_i : i \in \mathbb{N}\} = \bigcup_{k\in\mathbb{N}} \operatorname{lin}\{f_1, \ldots, f_k\}$, and similarly for the $e_i$s, the above also implies that $\operatorname{lin}\{e_i : i \in \mathbb{N}\} = \operatorname{lin}\{f_i : i \in \mathbb{N}\}$. Similarly, since $\{e_1, \ldots, e_k\}$ is orthonormal for each $k$, so is the sequence $(e_i)_{i\in\mathbb{N}}$.
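The recursion $g_k = f_k - \sum_{i<k}\langle f_k, e_i\rangle e_i$, $e_k = g_k/\|g_k\|$ translates directly into code. A minimal NumPy sketch of ours (classical, not numerically stabilised, Gram-Schmidt; the function name is illustrative):

```python
import numpy as np

def gram_schmidt(fs, inner):
    """Orthonormalise a linearly independent sequence (Theorem 5.2)."""
    es = []
    for f in fs:
        g = f - sum(inner(f, e) * e for e in es)  # g_k = f_k - sum <f_k,e_i> e_i
        es.append(g / np.sqrt(inner(g, g).real))  # e_k = g_k / ||g_k||
    return es

rng = np.random.default_rng(4)
fs = [rng.normal(size=4) for _ in range(4)]
es = gram_schmidt(fs, lambda u, v: np.vdot(v, u))

gram = np.array([[np.vdot(b, a) for b in es] for a in es])
assert np.allclose(gram, np.eye(4), atol=1e-10)   # (e_i) is orthonormal
# spans agree at every step: f_3 lies in lin{e_1, e_2, e_3}
c = [np.vdot(e, fs[2]) for e in es[:3]]
assert np.allclose(sum(ci * ei for ci, ei in zip(c, es[:3])), fs[2])
```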
Corollary 5.3. If $(f_i)$ is a sequence in an infinite-dimensional Hilbert space $V$ such that $\overline{\operatorname{lin}}\{f_i\} = V$, then $V$ admits a complete orthonormal sequence.
Proof. By Lemma (5.1), we can pick a linearly independent subsequence of $(f_i)$ whose linear span is still dense in $V$, and by the Gram-Schmidt orthogonalisation process we can construct an orthonormal sequence whose span is also dense in $V$. Moreover, if $V$ is infinite-dimensional, the sequence will be infinite.
Corollary 5.4. Any infinite-dimensional separable Hilbert space is isometrically isomorphic to $\ell^2$.
Proof. If $V$ is a separable Hilbert space, then there exists a countable dense subset $\{f_i\}$ of $V$, and hence also $\operatorname{lin}\{f_i\}$ is dense in $V$. By the above corollary, $V$ admits an (infinite) complete orthonormal sequence, and by Theorem (4.13), it is isometrically isomorphic to $\ell^2$.
Theorem 5.5. Assuming the axiom of choice, every Hilbert space admits a complete orthonormal set. Moreover, all the complete orthonormal sets in a Hilbert space have the same cardinality, and any two Hilbert spaces ad-mitting a complete orthonormal set of the same cardinality are isometrically isomorphic.
6 Dense Subspaces of L2 Spaces
Proposition 6.1. The space of all (complex-valued) step functions on $\mathbb{R}$, $L^{\mathrm{step}}(\mathbb{R})$, is dense in $L^2(\mathbb{R})$.
Proof. Let $f \in L^2(\mathbb{R})$ and $\varepsilon > 0$. If we define the functions $f_n$ by $f_n(x) = f(x)$ if $|x| \le n$ and $|f(x)| \le n$, and $f_n(x) = 0$ otherwise, then $f_n(x) \to f(x)$ for all $x$. Moreover, $f_n \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$, since $f_n$ is measurable, bounded, and of compact support. The sequence $|f - f_n|^2$ is therefore a sequence of functions in $L^1(\mathbb{R})$, and it is monotone decreasing in $n$ and converging to zero for all $x$. By the monotone convergence theorem,
$$\int |f - f_n|^2 \to \int \lim_n |f - f_n|^2 = 0,$$
and hence $f_n \to f$ in $L^2(\mathbb{R})$. In particular, there exists some $n \in \mathbb{N}$ such that $\|f - f_n\|_2 < \varepsilon/2$.
Since $f_n$ is also in $L^1(\mathbb{R})$, we can approximate it arbitrarily closely by step functions; in particular, we can choose $\psi \in L^{\mathrm{step}}(\mathbb{R})$ such that $\|f_n - \psi\|_1 = \int |f_n - \psi| < \varepsilon^2/8n$. In addition, we can impose the condition $|\psi| \le n$, since also $|f_n| \le n$. Therefore $|f_n - \psi| \le 2n$, so that $\int |f_n - \psi|^2 \le 2n\int |f_n - \psi| < \varepsilon^2/4$ and hence $\|f_n - \psi\|_2 < \varepsilon/2$.
Combining the two inequalities we get $\|f - \psi\|_2 < \varepsilon$. This can be done for any $f \in L^2(\mathbb{R})$ and any $\varepsilon > 0$, therefore $L^{\mathrm{step}}(\mathbb{R})$ is dense in $L^2(\mathbb{R})$.
In the following, let [a, b] be a bounded interval in R.
Corollary 6.2. For any a < b, Lstep([a, b]) is dense in L2([a, b]).
Proposition 6.3. The space of all step functions on $\mathbb{R}$ whose discontinuities only occur at rational points and whose values are in $\mathbb{Q}(i)$, denoted $L^{\mathrm{step}}_{\mathbb{Q}}(\mathbb{R})$, is dense in $L^2(\mathbb{R})$. The same holds for $L^{\mathrm{step}}_{\mathbb{Q}}([a, b])$ and $L^2([a, b])$.
Proof. For any $f \in L^2$ and $\varepsilon > 0$, we can find a step function $\psi$ such that $\|f - \psi\|_2 < \varepsilon/2$. It is not difficult to show that we can then find a step function $\varphi$ with values in $\mathbb{Q}(i)$ and only rational discontinuity points such that $\|\psi - \varphi\|_2 < \varepsilon/2$. Therefore $\|f - \varphi\|_2 < \varepsilon$.
Proposition 6.4. The space C ([a, b]) of continuous complex-valued functions on [a, b] is dense in L2([a, b]).
Proof. First note that $C([a, b]) \le L^2([a, b])$. Also, we can approximate any step function $\psi$ by a continuous function $c$ such that $\|\psi - c\|_2$ is arbitrarily small. Since $L^{\mathrm{step}}([a, b])$ is dense in $L^2([a, b])$, so is $C([a, b])$.
Theorem 6.5 (Weierstrass Approximation Theorem). For any $f \in C([a, b])$ and $\varepsilon > 0$ we can find a complex-valued polynomial $p \in \mathcal{P}([a, b])$ such that $\|f - p\|_\infty < \varepsilon$.
Corollary 6.6. The space $\mathcal{P}([a, b])$ is dense in $L^2([a, b])$.
Proof. For any $f \in L^2([a, b])$ and $\varepsilon > 0$, we can find $g \in C([a, b])$ such that $\|f - g\|_2 < \varepsilon/2$. We can also find $p \in \mathcal{P}([a, b])$ such that $\|g - p\|_\infty < \varepsilon/(2\sqrt{b - a})$. Then
$$\|g - p\|_2^2 = \int_a^b |g - p|^2 < \varepsilon^2/4,$$
and hence $\|g - p\|_2 < \varepsilon/2$. Therefore $\|f - p\|_2 < \varepsilon$.
Theorem 6.7. There exists a complete orthonormal sequence $(p_i)$ in $L^2([a, b])$ such that $p_k$ is a polynomial of degree $k$. This sequence is unique up to multiplication by unit complex numbers.
Proof. Apply the Gram-Schmidt orthogonalisation process to the sequence $(x^k)_{k\in\mathbb{N}_0}$ to get an orthonormal sequence $(p_k)$ such that $\operatorname{lin}\{p_0, \ldots, p_k\} = \operatorname{lin}\{1, x, \ldots, x^k\}$. Then $\operatorname{lin}\{p_k\}_{k\in\mathbb{N}_0} = \mathcal{P}([a, b])$, and since $\mathcal{P}([a, b])$ is dense in $L^2([a, b])$, $(p_k)$ is a complete orthonormal sequence.
By construction, $p_k$ is a polynomial of degree no greater than $k$, and since all $p_k$ are linearly independent, we can prove by induction that each $p_k$ is a polynomial of degree exactly $k$. Moreover, if we suppose that $(p_k)$ and $(q_k)$ are two such sequences, we can prove by induction on $k$ that each $q_k$ is a scalar multiple of $p_k$. The base case of the induction is trivial. Supposing the induction hypothesis for $k$, write $q_{k+1} = \sum_{j=0}^{k+1} a_j p_j$ (since $q_{k+1} \in \operatorname{lin}\{1, \ldots, x^{k+1}\} = \operatorname{lin}\{p_0, \ldots, p_{k+1}\}$) and observe that $a_j = \langle q_{k+1}, p_j\rangle = 0$ for $j \le k$, since $p_j$ is a scalar multiple of $q_j$ and $q_j$ is orthogonal to $q_{k+1}$. Therefore $q_{k+1} = a_{k+1}p_{k+1}$, which completes the induction step.
Finally, just note that since $\|q_k\| = \|p_k\| = 1$ for all $k$, we must have $|a_{k+1}| = 1$.
7 Legendre Polynomials
Definition 7.1. We define the $n$th Legendre polynomial by
$$P_n(t) = \frac{1}{2^n n!}\left(\frac{d}{dt}\right)^n (t^2 - 1)^n.$$
The normalisation $\frac{1}{2^n n!}$ is chosen so that $P_n(1) = 1$.
Proposition 7.2. In the Hilbert space $L^2([-1, 1])$,
$$\langle P_n, P_m\rangle = \begin{cases} \dfrac{2}{2n+1} & \text{if } n = m, \\[4pt] 0 & \text{if } n \neq m. \end{cases}$$
Proof. Suppose that $m \le n$. Since $\left(\frac{d}{dt}\right)^k (t^2 - 1)^n = 0$ at $t = \pm 1$ if $k < n$, by $n$-times repeated integration by parts we get that
$$\begin{aligned}
\langle P_n, P_m\rangle &= \frac{1}{2^{n+m} n!\, m!}\int_{-1}^{1}\left(\frac{d}{dt}\right)^n (t^2 - 1)^n \left(\frac{d}{dt}\right)^m (t^2 - 1)^m \, dt \\
&= \frac{(-1)^n}{2^{n+m} n!\, m!}\int_{-1}^{1} (t^2 - 1)^n \left(\frac{d}{dt}\right)^{m+n} (t^2 - 1)^m \, dt.
\end{aligned}$$
Now if $m < n$, then $\left(\frac{d}{dt}\right)^{m+n}(t^2 - 1)^m = 0$, so $\langle P_n, P_m\rangle = 0$. If $m = n$, then $\left(\frac{d}{dt}\right)^{m+n}(t^2 - 1)^m = (2n)!$, and integrating $(t^2 - 1)^n = (t - 1)^n(t + 1)^n$ by parts $n$ times we get
$$\langle P_n, P_n\rangle = \frac{(-1)^n (2n)!}{2^{2n}(n!)^2}\int_{-1}^{1} (t^2 - 1)^n \, dt = \frac{(2n)!}{2^{2n}(n!)^2}\int_{-1}^{1} (t + 1)^{2n}\frac{(n!)^2}{(2n)!}\, dt = \frac{2}{2n+1}.$$
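Proposition 7.2 can be confirmed with Gauss-Legendre quadrature, which integrates these polynomial products exactly (up to rounding). A sketch using NumPy's `numpy.polynomial.legendre` module (the helper names `P` and `ip` are ours):

```python
import numpy as np
from numpy.polynomial import legendre

x, wts = legendre.leggauss(20)   # exact for polynomials of degree <= 39

def P(n, t):
    return legendre.legval(t, [0] * n + [1])   # the nth Legendre polynomial

def ip(n, m):                                  # <P_n, P_m> on L^2([-1, 1])
    return np.sum(wts * P(n, x) * P(m, x))

assert np.isclose(ip(3, 5), 0, atol=1e-12)     # n != m: orthogonal
assert np.isclose(ip(4, 4), 2 / (2 * 4 + 1))   # n == m: 2/(2n+1)
assert np.isclose(P(6, 1.0), 1.0)              # normalisation P_n(1) = 1
```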
Definition 7.3. The Legendre functions are defined as
$$p_n(t) = P_n(t)\Big/\sqrt{\tfrac{2}{2n+1}} = \sqrt{\tfrac{2n+1}{2}}\, P_n(t).$$
Corollary 7.4. The Legendre functions form a complete orthonormal se-quence in L2([−1, 1]).
Proof. The above proposition shows that $(p_n)$ is orthonormal. Since $p_k(t)$ is a polynomial of degree $k$ in $t$, $\operatorname{lin}\{p_k\} = \mathcal{P}([-1, 1])$, which is dense in $L^2([-1, 1])$ by Corollary (6.6).
Lemma 7.5. For any $n$,
$$P_n(t) = \sum_{j=0}^{\lfloor n/2\rfloor} \frac{(-1)^j (2n - 2j)!\, t^{n-2j}}{2^n\, j!\,(n-j)!\,(n-2j)!}.$$
Proof. Just expand the definition using the binomial theorem:
$$\frac{1}{2^n n!}(t^2 - 1)^n = \sum_{j=0}^{n} \frac{(-1)^j t^{2n-2j}}{2^n\, j!\,(n-j)!},$$
and hence, differentiating $n$ times term by term,
$$P_n(t) = \sum_{\substack{j=0 \\ n-2j\ge 0}}^{n} \frac{(-1)^j t^{n-2j}(2n-2j)!}{2^n\, j!\,(n-j)!\,(n-2j)!} = \sum_{j=0}^{\lfloor n/2\rfloor} \frac{(-1)^j (2n-2j)!\, t^{n-2j}}{2^n\, j!\,(n-j)!\,(n-2j)!}.$$
Proposition 7.6 (Generating Function for Legendre Polynomials).
$$\sum_{n=0}^{\infty} P_n(t)z^n = (1 - 2zt + z^2)^{-1/2}.$$
Proof. By the binomial theorem, $(1 - q)^{-1/2} = \sum_{m=0}^{\infty} \frac{(2m)!}{(2^m m!)^2}\, q^m$. Putting $q = 2zt - z^2$ and using the substitution $n = m + j$, we get
$$\begin{aligned}
(1 - 2zt + z^2)^{-1/2} &= \sum_{m=0}^{\infty} \frac{(2m)!}{(2^m m!)^2}(2zt - z^2)^m \\
&= \sum_{m=0}^{\infty}\sum_{j=0}^{m} \frac{(2m)!}{(2^m m!)^2}(-1)^j z^{m+j}\, 2^{m-j} t^{m-j}\,\frac{m!}{j!\,(m-j)!} \\
&= \sum_{n=0}^{\infty} z^n \sum_{j=0}^{\lfloor n/2\rfloor} (-1)^j \frac{(2n-2j)!}{2^{2n-2j}(n-j)!}\, 2^{n-2j} t^{n-2j}\,\frac{1}{j!\,(n-2j)!} \\
&= \sum_{n=0}^{\infty} P_n(t)z^n.
\end{aligned}$$
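The generating function identity is also easy to test numerically: for $|z| < 1$ the partial sums converge geometrically to the closed form. A short sketch of ours using NumPy:

```python
import numpy as np
from numpy.polynomial.legendre import legval

t, z = 0.3, 0.4
# sum_{n <= N} P_n(t) z^n against the closed form (1 - 2zt + z^2)^{-1/2}
partial = sum(legval(t, [0] * n + [1]) * z**n for n in range(40))
closed = (1 - 2 * z * t + z**2) ** -0.5
assert np.isclose(partial, closed, atol=1e-10)
```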
Proposition 7.7. The Legendre polynomials satisfy the differential equation
$$(1 - t^2)P_n''(t) - 2tP_n'(t) + n(n+1)P_n(t) = 0.$$
Proof. Check directly from the definition of $P_n$, or check that the generating function $F(z, t) = (1 - 2zt + z^2)^{-1/2} = \sum_{n=0}^{\infty} P_n(t)z^n$ satisfies the partial differential equation
$$\left[(1 - t^2)\frac{\partial^2}{\partial t^2} - 2t\frac{\partial}{\partial t} + z^2\frac{\partial^2}{\partial z^2} + 2z\frac{\partial}{\partial z}\right]F(z, t) = 0$$
and compare the $z^n$ coefficients.
We can construct similar sequences of other orthogonal polynomials by considering different function spaces, usually the weighted spaces $L^2_w(I)$ for some interval $I$ and some weight function $w$. The sequences are again obtained by the Gram-Schmidt process from the sequence $(x^k)_{k\in\mathbb{N}_0}$. (Of course, all $x^k$ must lie in the space $L^2_w(I)$.) They again form complete orthogonal sequences in the respective function Hilbert spaces, but we do not prove this here; we just give some examples.
Definition 7.8. The Laguerre polynomials $L_n$ and the Laguerre functions $\varphi_n$ are defined as
$$L_n(t) = \frac{e^t}{n!}\left(\frac{d}{dt}\right)^n\left(t^n e^{-t}\right) \quad\text{and}\quad \varphi_n(t) = L_n(t)e^{-t/2}.$$
Proposition 7.9. The Laguerre polynomials form an orthogonal sequence in $L^2_w([0, \infty))$ with $w(t) = e^{-t}$, and the Laguerre functions form a complete orthonormal sequence in $L^2([0, \infty))$. Moreover, the Laguerre polynomials satisfy the identities
$$tL_n''(t) + (1 - t)L_n'(t) + nL_n(t) = 0 \quad\text{and}\quad \sum_{n=0}^{\infty} L_n(t)z^n = \frac{e^{-tz/(1-z)}}{1 - z}.$$
Definition 7.10. The Hermite polynomials $H_n$ and the Hermite functions $\psi_n$ are defined as
$$H_n(t) = (-1)^n e^{t^2}\left(\frac{d}{dt}\right)^n e^{-t^2} \quad\text{and}\quad \psi_n(t) = \frac{H_n(t)e^{-t^2/2}}{(2^n n!)^{1/2}\pi^{1/4}}.$$
Proposition 7.11. The Hermite polynomials form an orthogonal sequence in $L^2_w(\mathbb{R})$ with $w(t) = e^{-t^2}$, and the Hermite functions form a complete orthonormal sequence in $L^2(\mathbb{R})$. Moreover, the Hermite polynomials satisfy the identities
$$H_n''(t) - 2tH_n'(t) + 2nH_n(t) = 0 \quad\text{and}\quad \sum_{n=0}^{\infty} \frac{H_n(t)z^n}{n!} = e^{2tz - z^2}.$$
8 Classical Fourier Series
In the following, we will write L1(−π, π] instead of L1((−π, π]), etc.
Proposition 8.1. L2(−π, π] ⊆ L1(−π, π].
Proof. For any $f \in L^2(-\pi, \pi]$, $f$ is measurable and $|f|^2$ is integrable. Since $|f| \le 1 + |f|^2$, $|f| \in L^1(-\pi, \pi]$ and hence also $f \in L^1(-\pi, \pi]$.
Proposition 8.2. For any $f \in L^1(-\pi, \pi]$ and any $n \in \mathbb{N}$, $f(x)\sin(nx)$ and $f(x)\cos(nx)$ are also in $L^1(-\pi, \pi]$.
Proof. For any $f \in L^1(-\pi, \pi]$, $f$ is measurable and so are $\sin(nx)$ and $\cos(nx)$. Therefore $|f(x)\sin(nx)| \le |f(x)|$ and $|f(x)\cos(nx)| \le |f(x)|$ are integrable, and so are $f(x)\sin(nx)$ and $f(x)\cos(nx)$.
Definition 8.3. For any $f \in L^1(-\pi, \pi]$, we define the classical Fourier series of $f$, denoted $(S^f_n)_{n\in\mathbb{N}}$, by
$$S^f_n(x) = \frac{1}{\sqrt{2\pi}}A_0 + \frac{1}{\sqrt{\pi}}\sum_{j=1}^{n}\left(A_j\cos(jx) + B_j\sin(jx)\right),$$
where the Fourier coefficients $A_j$ and $B_j$ are defined as
$$A_0 = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} f(x)\, dx, \qquad A_j = \frac{1}{\sqrt{\pi}}\int_{-\pi}^{\pi} f(x)\cos(jx)\, dx, \qquad B_j = \frac{1}{\sqrt{\pi}}\int_{-\pi}^{\pi} f(x)\sin(jx)\, dx.$$
Alternatively, we can write
$$S^f_n(x) = \frac{1}{\sqrt{2\pi}}\sum_{k=-n}^{n} C_k e^{ikx}, \qquad\text{where}\qquad C_k = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} f(x)e^{-ikx}\, dx.$$
Note. If we define the functions
$$u_0 = \frac{1}{\sqrt{2\pi}}, \qquad u_j(x) = \frac{1}{\sqrt{\pi}}\cos(jx), \qquad v_j(x) = \frac{1}{\sqrt{\pi}}\sin(jx),$$
then for any $f \in L^2(-\pi, \pi]$ we will have
$$S^f_n = \langle f, u_0\rangle u_0 + \sum_{j=1}^{n}\left(\langle f, u_j\rangle u_j + \langle f, v_j\rangle v_j\right).$$
Similarly, if we define $e_k(x) = \frac{1}{\sqrt{2\pi}}e^{ikx}$, then
$$S^f_n = \sum_{k=-n}^{n}\langle f, e_k\rangle e_k.$$
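The complex form is convenient numerically. The sketch below (our own illustration; Riemann sums on a uniform grid) computes the coefficients $C_k$ for $f(x) = x$, checks that the $L^2$ error of the partial sums decreases with $n$, and checks Bessel's inequality:

```python
import numpy as np

N = 4096
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = x.copy()                                   # f(x) = x on (-pi, pi]

def C(k):
    # C_k = (2 pi)^{-1/2} * integral of f(x) e^{-ikx} dx  (Riemann sum)
    return np.sum(f * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)

def S(n):
    # partial sum S_n^f(x) = (2 pi)^{-1/2} sum_{|k| <= n} C_k e^{ikx}
    return sum(C(k) * np.exp(1j * k * x)
               for k in range(-n, n + 1)) / np.sqrt(2 * np.pi)

errs = [np.sqrt(np.sum(np.abs(f - S(n))**2) * dx) for n in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]             # L^2 error decreases in n

# Bessel's inequality: sum |C_k|^2 <= ||f||_2^2
assert sum(abs(C(k))**2 for k in range(-64, 65)) <= np.sum(np.abs(f)**2) * dx
```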
Proposition 8.4. The sequences $\{u_j\}_{j\in\mathbb{N}_0} \cup \{v_j\}_{j\in\mathbb{N}}$ and $\{e_k\}_{k\in\mathbb{Z}}$ as defined above are orthonormal in $L^2(-\pi, \pi]$.
Proof. Check directly using integration by parts.
Corollary 8.5. For any $f \in L^2(-\pi, \pi]$, the classical Fourier series $(S^f_n)$ of $f$ converges in $L^2(-\pi, \pi]$.
Proof. Immediately from Corollary (4.6) and the above proposition.
Lemma 8.6 (Riemann-Lebesgue Lemma). For any $f \in L^1(-\pi, \pi]$, the sequences $\left(\int f(t)\cos(jt)\, dt\right)_{j\in\mathbb{N}}$ and $\left(\int f(t)\sin(jt)\, dt\right)_{j\in\mathbb{N}}$ converge to zero.
Proof. Take any $\varepsilon > 0$. Since $L^{\mathrm{step}}(-\pi, \pi]$ is dense in $L^1(-\pi, \pi]$, we can find a step function $\psi = \sum_k c_k\chi_{[a_k, b_k)}$ such that $\int |f - \psi| < \varepsilon/2$. Note that
$$\left|\int \psi(t)\cos(jt)\, dt\right| = \left|\sum_k c_k\,\frac{1}{j}\left(\sin(jb_k) - \sin(ja_k)\right)\right| \le \sum_k\left|\frac{2c_k}{j}\right| \to 0,$$
so that there exists some $N$ such that $\left|\int \psi(t)\cos(jt)\, dt\right| < \varepsilon/2$ for all $j \ge N$. Then, for all such $j$,
$$\left|\int f(t)\cos(jt)\, dt\right| \le \int |f(t) - \psi(t)|\,|\cos(jt)|\, dt + \left|\int \psi(t)\cos(jt)\, dt\right| < \varepsilon/2 + \varepsilon/2 = \varepsilon.$$
Therefore the integral $\int f(t)\cos(jt)\, dt$ converges to zero. A similar proof works for $\int f(t)\sin(jt)\, dt$.
Corollary 8.7. For any $f \in L^1(-\pi, \pi]$, the Fourier coefficients $A_j$, $B_j$, $C_k$ converge to zero as $j \to \infty$ or $k \to \pm\infty$.
Note. For f ∈ L2(−π, π], this follows directly from Bessel’s inequality.
Definition 8.8. The Dirichlet kernels $D_n$ are defined as
$$D_n(t) = \frac{1}{2} + \sum_{j=1}^{n}\cos(jt).$$
Note. $D_n$ is an even $2\pi$-periodic function with $\int_0^{\pi} D_n(t)\, dt = \pi/2$.
Proposition 8.9. For any $f \in L^1(-\pi, \pi]$, we have
$$S^f_n(t) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)D_n(s - t)\, ds.$$
Proof. Just use the definition of $S^f_n$ and merge the integrals defining the Fourier coefficients:
$$\begin{aligned}
S^f_n(t) &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\left(\frac{1}{2} + \sum_{j=1}^{n}\cos(js)\cos(jt) + \sin(js)\sin(jt)\right) ds \\
&= \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\left(\frac{1}{2} + \sum_{j=1}^{n}\cos(j(t - s))\right) ds = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)D_n(s - t)\, ds.
\end{aligned}$$
Lemma 8.10. For $t \notin 2\pi\mathbb{Z}$,
$$D_n(t) = \frac{\sin\left(\left(n + \frac{1}{2}\right)t\right)}{2\sin\left(\frac{t}{2}\right)}.$$
Proof. The identity can be obtained by summing a geometric series:
$$D_n(t) = \frac{1}{2}\sum_{k=-n}^{n} e^{ikt} = \frac{e^{-int}}{2}\sum_{k=0}^{2n} e^{ikt} = \frac{1}{2}\cdot\frac{e^{-int} - e^{i(n+1)t}}{1 - e^{it}} = \frac{\sin\left(\left(n + \frac{1}{2}\right)t\right)}{2\sin\left(\frac{t}{2}\right)}.$$
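Both the defining sum and the closed form of Lemma 8.10 are simple to compare numerically (a sketch of ours using NumPy):

```python
import numpy as np

n = 7
t = np.linspace(0.1, np.pi, 200)               # avoid t in 2*pi*Z
lhs = 0.5 + sum(np.cos(j * t) for j in range(1, n + 1))        # D_n(t)
rhs = np.sin((n + 0.5) * t) / (2 * np.sin(t / 2))
assert np.allclose(lhs, rhs)

# and the normalisation: the integral of D_n over (0, pi) is pi/2
# (midpoint rule; only the constant term 1/2 survives the integration)
s = (np.arange(100000) + 0.5) * np.pi / 100000
Dn = 0.5 + sum(np.cos(j * s) for j in range(1, n + 1))
assert np.isclose(np.sum(Dn) * np.pi / 100000, np.pi / 2, atol=1e-6)
```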
Proposition 8.11. Let $f \in L^1(-\pi, \pi]$ and extend $f$ to a $2\pi$-periodic function on $\mathbb{R}$. Then $S^f_n(t) \to S^f(t)$ for some $S^f(t)$ iff the sequence
$$\int_0^{\pi}\left[f(t + s) + f(t - s) - 2S^f(t)\right]D_n(s)\, ds$$
converges to zero as $n \to \infty$.
Proof. By the periodicity of $f$ and $D_n$, and by the evenness of $D_n$, we get
$$S^f_n(t) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)D_n(s - t)\, ds = \frac{1}{\pi}\int_{-\pi+t}^{\pi+t} f(s)D_n(s - t)\, ds = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s + t)D_n(s)\, ds = \frac{1}{\pi}\int_0^{\pi}\left(f(t + s) + f(t - s)\right)D_n(s)\, ds.$$
Also, trivially,
$$S^f(t) = S^f(t)\,\frac{2}{\pi}\int_0^{\pi} D_n(s)\, ds = \frac{1}{\pi}\int_0^{\pi} 2S^f(t)D_n(s)\, ds,$$
therefore
$$S^f_n(t) - S^f(t) = \frac{1}{\pi}\int_0^{\pi}\left(f(t + s) + f(t - s) - 2S^f(t)\right)D_n(s)\, ds.$$
This completes the proof.
Proposition 8.12. Let $f \in L^1(-\pi, \pi]$ and extend $f$ to a $2\pi$-periodic function on $\mathbb{R}$. Then $S^f_n(t) \to S^f(t)$ iff the sequence
$$\int_0^{\delta}\left[f(t + s) + f(t - s) - 2S^f(t)\right]D_n(s)\, ds$$
converges to zero as $n \to \infty$ for some $\delta \in (0, \pi]$.
Proof. Since $\sin(s/2) \ge \sin(\delta/2)$ for $s \in [\delta, \pi]$, $1/\sin(s/2)$ is bounded on $[\delta, \pi]$ and hence the function $\frac{f(t+s) + f(t-s) - 2S^f(t)}{2\sin(s/2)}$ is integrable on $[\delta, \pi]$. By the Riemann-Lebesgue lemma (a slight modification of it),
$$\int_{\delta}^{\pi}\frac{f(t + s) + f(t - s) - 2S^f(t)}{2\sin(s/2)}\,\sin\left(\left(n + \tfrac{1}{2}\right)s\right) ds \to 0$$
as $n \to \infty$. By Lemma (8.10), this is the same as
$$\int_{\delta}^{\pi}\left[f(t + s) + f(t - s) - 2S^f(t)\right]D_n(s)\, ds \to 0,$$
which combined with the above proposition yields the result.
Corollary 8.13. Let $\psi \in L^{\mathrm{step}}(-\pi, \pi]$ and extend it to a $2\pi$-periodic function on $\mathbb{R}$. Then $S^{\psi}_n(t)$ converges to $\tfrac{1}{2}(\psi(t+) + \psi(t-))$ for all $t \in \mathbb{R}$.
Proof. Since $\psi$ is a step function, for any $t$ we can find a $\delta$ such that $\psi(t + s) = \psi(t+)$ and $\psi(t - s) = \psi(t-)$ for all $s \in (0, \delta)$. Therefore
$$\int_0^{\delta}\left[\psi(t + s) + \psi(t - s) - 2\cdot\tfrac{1}{2}(\psi(t+) + \psi(t-))\right]D_n(s)\, ds = 0$$
for all $n$, and the result follows by the above proposition.
Theorem 8.14 (Dini's Test). Let $f \in L^1(-\pi, \pi]$ and extend it to a $2\pi$-periodic function on $\mathbb{R}$. If there exists some $\delta \in (0, \pi]$ such that the function
$$s \mapsto \frac{f(t + s) + f(t - s) - 2S^f(t)}{s}$$
is integrable on $(0, \delta)$, then $S^f_n(t) \to S^f(t)$ as $n \to \infty$.
Proof. The function $s \mapsto \frac{s}{2\sin(s/2)}$ is continuous and bounded (by $\pi/2$) on the interval $(0, \pi]$. Therefore, if $s \mapsto \frac{f(t+s) + f(t-s) - 2S^f(t)}{s}$ is integrable on $(0, \delta]$, so is $s \mapsto \frac{f(t+s) + f(t-s) - 2S^f(t)}{2\sin(s/2)}$. By the Riemann-Lebesgue lemma,
$$\int_0^{\delta} \frac{f(t+s) + f(t-s) - 2S^f(t)}{2\sin(s/2)} \sin\bigl((n+\tfrac{1}{2})s\bigr)\,ds \to 0$$
as $n \to \infty$. The result follows by Proposition (8.12) (and Lemma (8.10)).
Corollary 8.15. Let $f \in L^1(-\pi, \pi]$ and extend it to a $2\pi$-periodic function on $\mathbb{R}$. If $f$ is differentiable at some $t \in \mathbb{R}$, then $S_n^f(t) \to f(t)$ as $n \to \infty$.

Proof. If $f$ is differentiable at $t$, then
$$\frac{f(t+s) + f(t-s) - 2f(t)}{s} = \frac{f(t+s) - f(t)}{s} - \frac{f(t-s) - f(t)}{-s} \to f'(t) - f'(t) = 0$$
as $s \to 0^+$. The quotient is therefore bounded on some interval $(0, \delta)$, and being measurable (as $f \in L^1$), it is integrable there. By Dini's test, $S_n^f(t) \to f(t)$.
Theorem 8.16. If $f \in L^2(-\pi, \pi]$, then $S_n^f \to f$ in $L^2(-\pi, \pi]$.

Proof. Let $\varepsilon > 0$ and choose a step function $\psi \in L_{\mathrm{step}}(-\pi, \pi]$ such that $\|f - \psi\|_2 < \varepsilon/3$.

Since $\psi$ is also in $L^2(-\pi, \pi]$, its Fourier series $S_n^\psi$ converges to some $S^\psi \in L^2(-\pi, \pi]$ by Corollary (8.5). Using Fatou's lemma (or a similar result from integration), passing to a subsequence if necessary, we have $S_n^\psi(t) \to S^\psi(t)$ for almost all $t \in (-\pi, \pi]$. However, by Corollary (8.13), $S_n^\psi(t) \to \psi(t)$ for almost all $t$, therefore $S^\psi = \psi$ in $L^2(-\pi, \pi]$. Since $S_n^\psi \to S^\psi$ in $L^2(-\pi, \pi]$, we can infer that $\|\psi - S_n^\psi\|_2 = \|S^\psi - S_n^\psi\|_2 < \varepsilon/3$ for all sufficiently large $n$.

Finally, $\|S_n^\psi - S_n^f\|_2 = \|S_n^{\psi - f}\|_2 \le \|\psi - f\|_2$ by Bessel's inequality, and hence
$$\|f - S_n^f\|_2 \le \|f - \psi\|_2 + \|\psi - S_n^\psi\|_2 + \|S_n^\psi - S_n^f\|_2 < \varepsilon$$
for all sufficiently large $n$. Therefore $S_n^f \to f$ in $L^2(-\pi, \pi]$.
Corollary 8.17. The orthonormal sequences $\{u_j\}_{j \in \mathbb{N}_0} \cup \{v_j\}_{j \in \mathbb{N}}$ and $\{e_k\}_{k \in \mathbb{Z}}$ are complete in $L^2(-\pi, \pi]$.
9 Cesàro Sums and Fejér's Theorem
Definition 9.1. For $f \in L^1(-\pi, \pi]$, define the $n$th Cesàro sum of its Fourier series as
$$\sigma_n^f = \frac{1}{n+1} \sum_{k=0}^{n} S_k^f.$$
Also define the Fejér kernels as
$$F_n(t) = \frac{1}{n+1} \sum_{k=0}^{n} D_k(t).$$
Note. Since $S_n^f(t) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s) D_n(s-t)\,ds$, we get that
$$\sigma_n^f(t) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s) F_n(s-t)\,ds.$$

Note. By the same properties of $D_n$, each $F_n$ is an even $2\pi$-periodic function with $\int_0^{\pi} F_n(t)\,dt = \pi/2$.
Lemma 9.2. If $S_n^f(t) \to S^f(t)$ as $n \to \infty$, then $\sigma_n^f(t) \to S^f(t)$.

Proof. Let $\varepsilon > 0$ and let $N$ be such that $|S_n^f(t) - S^f(t)| < \varepsilon/2$ for all $n \ge N$. Then, for such $n$,
$$|\sigma_n^f(t) - S^f(t)| \le \frac{1}{n+1} \sum_{k=0}^{n} |S_k^f(t) - S^f(t)| \le \frac{1}{n+1} \sum_{k=0}^{N} |S_k^f(t) - S^f(t)| + \frac{n-N}{n+1} \cdot \frac{\varepsilon}{2},$$
where $\frac{1}{n+1} \sum_{k=0}^{N} |S_k^f(t) - S^f(t)| < \varepsilon/2$ for sufficiently large $n$. But then also $|\sigma_n^f(t) - S^f(t)| < \varepsilon$, which shows that $\sigma_n^f(t) \to S^f(t)$.
Lemma 9.3. For $t \notin 2\pi\mathbb{Z}$,
$$F_n(t) = \frac{1 - \cos((n+1)t)}{4(n+1)\sin^2(\frac{t}{2})} = \frac{\sin^2\bigl(\frac{1}{2}(n+1)t\bigr)}{2(n+1)\sin^2(\frac{t}{2})}.$$
Proof. Use Lemma (8.10) and then sum the geometric series:
$$F_n(t) = \frac{1}{n+1} \sum_{k=0}^{n} D_k(t) = \frac{1}{n+1} \sum_{k=0}^{n} \frac{\sin\bigl((k+\frac{1}{2})t\bigr)}{2\sin(\frac{t}{2})} = \frac{1}{2(n+1)\sin(\frac{t}{2})} \cdot \frac{1}{2i} \sum_{k=0}^{n} \Bigl(e^{i(k+\frac{1}{2})t} - e^{-i(k+\frac{1}{2})t}\Bigr)$$
$$= \frac{1}{2(n+1)\sin(\frac{t}{2})} \cdot \frac{1}{2i} \biggl(e^{it/2}\,\frac{1 - e^{i(n+1)t}}{1 - e^{it}} - e^{-it/2}\,\frac{1 - e^{-i(n+1)t}}{1 - e^{-it}}\biggr)$$
$$= \frac{1}{2(n+1)\sin(\frac{t}{2})} \cdot \frac{2 - e^{i(n+1)t} - e^{-i(n+1)t}}{4\sin(\frac{t}{2})} = \frac{1 - \cos((n+1)t)}{4(n+1)\sin^2(\frac{t}{2})}.$$
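The closed form can likewise be verified numerically by averaging the Dirichlet kernels directly (a sketch with our own helper names):

```python
import numpy as np

def D(n, t):
    # Dirichlet kernel in closed form (Lemma 8.10)
    return np.sin((n + 0.5) * t) / (2 * np.sin(t / 2))

def fejer_avg(n, t):
    # F_n(t) defined as the average (1/(n+1)) * sum_{k=0}^{n} D_k(t)
    return sum(D(k, t) for k in range(n + 1)) / (n + 1)

def fejer_closed(n, t):
    # Closed form from Lemma 9.3
    return (1 - np.cos((n + 1) * t)) / (4 * (n + 1) * np.sin(t / 2) ** 2)

ts = np.linspace(0.2, 3.0, 9)
err = max(abs(fejer_avg(6, t) - fejer_closed(6, t)) for t in ts)
nonneg = all(fejer_closed(6, t) >= 0 for t in ts)
```

Note that, unlike $D_n$, the kernel $F_n$ is non-negative; this positivity is what makes Fejér's theorem work.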
Proposition 9.4. Let $f \in L^1(-\pi, \pi]$ and extend $f$ to a $2\pi$-periodic function on $\mathbb{R}$. Then $\sigma_n^f(t) \to S^f(t)$ iff the sequence
$$\int_0^{\pi} \bigl[f(t+s) + f(t-s) - 2S^f(t)\bigr] F_n(s)\,ds$$
converges to zero as $n \to \infty$.
Proof. Same as the proof of Proposition (8.11), just with $\sigma_n^f$ and $F_n$ instead of $S_n^f$ and $D_n$.
Theorem 9.5 (Fejér's Theorem). Let $f \in L^1(-\pi, \pi]$ and extend it to a $2\pi$-periodic function. If $f(t^+)$ and $f(t^-)$ exist, then $\sigma_n^f(t) \to \frac{1}{2}(f(t^+) + f(t^-))$.
Proof. Take $S^f(t) = \frac{1}{2}(f(t^+) + f(t^-))$ in the above proposition, choose some $\delta \in (0, \pi)$ such that $|f(t+s) - f(t^+)| < \varepsilon/4$ and $|f(t-s) - f(t^-)| < \varepsilon/4$ for all $s \in (0, \delta)$, and split the integral
$$I_n = \int_0^{\pi} \bigl[f(t+s) + f(t-s) - 2S^f(t)\bigr] F_n(s)\,ds$$
into one on $(0, \delta)$ and one on $(\delta, \pi)$. We get that
$$|I_n| \le \int_0^{\delta} \bigl(|f(t+s) - f(t^+)| + |f(t-s) - f(t^-)|\bigr) F_n(s)\,ds + \int_{\delta}^{\pi} \frac{|f(t+s) + f(t-s) - f(t^+) - f(t^-)|}{2(n+1)\sin^2(s/2)}\,ds$$
$$\le \int_0^{\delta} \frac{\varepsilon}{2} F_n(s)\,ds + \bigl(|f(t^+) + f(t^-)|(\pi - \delta) + 2\|f\|_1\bigr) \frac{1}{2(n+1)\sin^2(\delta/2)} \le \frac{\pi}{4}\varepsilon + \frac{C}{n+1},$$
where $C$ is a constant independent of $n$ (here we used $\int_0^{\delta} F_n \le \pi/2$ and the bound $F_n(s) \le 1/(2(n+1)\sin^2(\delta/2))$ on $(\delta, \pi)$ from Lemma (9.3)). This is less than $\varepsilon$ for large enough $n$, so we can infer that $I_n \to 0$. By the above proposition, $\sigma_n^f(t) \to S^f(t) = \frac{1}{2}(f(t^+) + f(t^-))$.
Corollary 9.6. If $f(t^+)$ and $f(t^-)$ exist and $S_n^f(t)$ converges to some $S^f(t)$, then $S^f(t) = \frac{1}{2}(f(t^+) + f(t^-))$.
Corollary 9.7. If $f \in C[-\pi, \pi]$ is such that $f(-\pi) = f(\pi)$, then $\sigma_n^f \to f$ uniformly on $[-\pi, \pi]$.

Proof. Firstly, it is clear by Fejér's theorem that $\sigma_n^f \to f$ pointwise on $[-\pi, \pi]$. Extend $f$ to a $2\pi$-periodic function (still continuous, since $f(-\pi) = f(\pi)$) and observe that it is uniformly continuous. Therefore, in the proof of Fejér's theorem, the same $\delta$ works for all $t$. Also, since $f$ is bounded, the same $C$ works for all $t$ (with $\varepsilon$ fixed). Therefore $|\sigma_n^f(t) - f(t)| \le \frac{\pi}{4}\varepsilon + \frac{C}{n+1}$ for all $t \in [-\pi, \pi]$, and hence $\sigma_n^f \to f$ uniformly on $[-\pi, \pi]$.
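The uniform convergence of the Cesàro means can be observed numerically for the continuous function $f(t) = |t|$, which satisfies $f(-\pi) = f(\pi)$ and whose Fourier series is the standard expansion $\pi/2 - \frac{4}{\pi}\sum_{k\ \mathrm{odd}} \cos(kt)/k^2$ (a well-known computation, taken here as an assumption of the sketch):

```python
import numpy as np

ts = np.linspace(-np.pi, np.pi, 401)
f = np.abs(ts)

def S(n):
    # n-th Fourier partial sum of |t|: pi/2 - (4/pi) * sum over odd k <= n of cos(kt)/k^2
    out = np.full_like(ts, np.pi / 2)
    for k in range(1, n + 1, 2):
        out -= (4 / np.pi) * np.cos(k * ts) / k**2
    return out

def sigma(n):
    # Cesaro mean: average of S_0, ..., S_n
    return sum(S(k) for k in range(n + 1)) / (n + 1)

err20 = np.max(np.abs(sigma(20) - f))     # sup-norm error of sigma_20
err200 = np.max(np.abs(sigma(200) - f))   # sup-norm error of sigma_200
```

The sup-norm error shrinks as $n$ grows, as Corollary 9.7 predicts (though for this particular $f$ the partial sums themselves already converge uniformly).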
10 Linear Operators on Hilbert Spaces
Lemma 10.1. If $V$ and $W$ are Hilbert spaces and $T: V \to W$ is a bounded linear operator, then $\|T\| = \sup\{|\langle Tv, w\rangle_W| : v \in V,\ w \in W,\ \|v\|_V \le 1,\ \|w\|_W \le 1\}$.

Proof. For any $v \in V$ and $w \in W$ such that $\|v\|_V \le 1$ and $\|w\|_W \le 1$, the Cauchy-Schwarz inequality gives $|\langle Tv, w\rangle_W| \le \|Tv\|_W \|w\|_W \le \|T\| \|v\|_V \|w\|_W \le \|T\|$. Conversely,
$$\|T\| = \sup\{\|Tv\|_W : \|v\|_V = 1\} = \sup\{\langle Tv, Tv/\|Tv\|_W\rangle_W : \|v\|_V = 1,\ Tv \ne 0\} \le \sup\{|\langle Tv, w\rangle_W| : \|v\|_V \le 1,\ \|w\|_W \le 1\}.$$
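In finite dimensions the lemma can be illustrated with a random matrix, where $\|T\|$ is the largest singular value; a numerical sketch (not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))          # a bounded operator from R^6 to R^4
op_norm = np.linalg.norm(A, 2)           # ||A||, the largest singular value

# |<Av, w>| <= ||A|| over random unit vectors v, w ...
best = 0.0
for _ in range(2000):
    v = rng.standard_normal(6); v /= np.linalg.norm(v)
    w = rng.standard_normal(4); w /= np.linalg.norm(w)
    best = max(best, abs(w @ (A @ v)))

# ... and the supremum is attained at the top singular vector pair
U, s, Vt = np.linalg.svd(A)
attained = abs(U[:, 0] @ (A @ Vt[0]))
```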
Theorem 10.2. If $V$ and $W$ are Hilbert spaces and $T \in B(V, W)$, then there exists a unique $T^* \in B(W, V)$ such that $\langle Tv, w\rangle_W = \langle v, T^*w\rangle_V$ for all $v \in V$, $w \in W$. Moreover, $\|T^*\| = \|T\|$.

Proof. For any $w \in W$, the map $f: v \mapsto \langle Tv, w\rangle_W$ is a linear functional. Moreover, $|f(v)| = |\langle Tv, w\rangle_W| \le \|Tv\| \|w\| \le \|T\| \|v\| \|w\|$, so $f$ is bounded with $\|f\| \le \|T\| \|w\|$. By the Riesz representation theorem, there exists a unique $T^*w \in V$ such that $f(v) = \langle v, T^*w\rangle_V$ for all $v$. This defines a unique map $T^*: W \to V$ satisfying the condition $\langle Tv, w\rangle_W = \langle v, T^*w\rangle_V$.

To show that $T^*$ is linear, note that $w \mapsto f$ is antilinear and so is $f \mapsto T^*w$, so their composition is linear. Or just note that
$$\langle v, T^*(\alpha_1 w_1 + \alpha_2 w_2)\rangle = \langle Tv, \alpha_1 w_1 + \alpha_2 w_2\rangle = \bar{\alpha}_1 \langle Tv, w_1\rangle + \bar{\alpha}_2 \langle Tv, w_2\rangle = \bar{\alpha}_1 \langle v, T^*w_1\rangle + \bar{\alpha}_2 \langle v, T^*w_2\rangle = \langle v, \alpha_1 T^*w_1 + \alpha_2 T^*w_2\rangle$$
and argue by the uniqueness of $T^*(\alpha_1 w_1 + \alpha_2 w_2)$.

Since $\|T^*w\| = \|f\|$ (by the Riesz representation), and we have shown that $\|f\| \le \|T\| \|w\|$, it follows that $T^*$ is bounded with $\|T^*\| \le \|T\|$. In fact,
$$\|T^*\| = \sup\{|\langle T^*w, v\rangle_V|\} = \sup\{|\langle v, T^*w\rangle_V|\} = \sup\{|\langle Tv, w\rangle_W|\} = \|T\|,$$
where the suprema are taken over $v \in V$ and $w \in W$ with $\|v\|, \|w\| \le 1$, just as in the above lemma.
Definition 10.3. The operator $T^*$ is called the Hilbert adjoint of $T$.
Recall that we have defined the antilinear isometric isomorphism $\iota_V: V \to V'$ by $\iota_V(v) = f_v$, where $f_v(u) = \langle u, v\rangle$, and similarly $\iota_W: W \to W'$. Also recall that in the Banach Spaces course we have defined an adjoint (dual) operator $T' \in B(W', V')$ for any $T \in B(V, W)$ by $T'(w')(v) = w'(Tv)$.
Proposition 10.4. The relationship between $T'$ and $T^*$ is $\iota_V \circ T^* = T' \circ \iota_W$, i.e. the square formed by the maps $T^*: W \to V$, $T': W' \to V'$, $\iota_V$ and $\iota_W$ commutes.

Proof. For any $w \in W$ and for any $v \in V$,
$$(\iota_V \circ T^*)(w)(v) = \iota_V(T^*w)(v) = \langle v, T^*w\rangle_V = \langle Tv, w\rangle_W = \iota_W(w)(Tv) = (T' \circ \iota_W)(w)(v).$$
Example. If $V = \mathbb{C}^n$, $W = \mathbb{C}^m$ and $T: V \to W$ is represented by an $m \times n$ matrix $A$ with respect to some orthonormal bases, then $T'$ is represented by $A^T$ with respect to the dual bases, and $T^*$ is represented by the conjugate transpose $\overline{A}^T$.
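This matrix description can be checked directly with the inner product $\langle x, y\rangle = \sum_i x_i \overline{y_i}$, linear in the first slot as in these notes (a small sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def inner(x, y):
    # <x, y> = sum_i x_i * conj(y_i): linear in x, conjugate-linear in y
    return np.sum(x * np.conj(y))

Astar = A.conj().T            # Hilbert adjoint = conjugate transpose of A
lhs = inner(A @ v, w)         # <Tv, w>_W
rhs = inner(v, Astar @ w)     # <v, T*w>_V
```

The two inner products agree up to rounding, which is exactly the defining property of the adjoint.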
Proposition 10.5. Let $V$ be a Hilbert space, $T, T_1, T_2 \in B(V)$, and $\alpha_1, \alpha_2$ be scalars. Then

(a) $(\alpha_1 T_1 + \alpha_2 T_2)^* = \bar{\alpha}_1 T_1^* + \bar{\alpha}_2 T_2^*$,

(b) $(T_1 T_2)^* = T_2^* T_1^*$,

(c) $T^{**} = T$,

(d) $I^* = I$, $0^* = 0$, and

(e) $\|T^* T\| = \|T\|^2$.
Proof. Let $u, v$ be arbitrary elements of $V$.

(a) $\langle u, (\alpha_1 T_1 + \alpha_2 T_2)^* v\rangle = \langle (\alpha_1 T_1 + \alpha_2 T_2)u, v\rangle = \alpha_1 \langle T_1 u, v\rangle + \alpha_2 \langle T_2 u, v\rangle = \alpha_1 \langle u, T_1^* v\rangle + \alpha_2 \langle u, T_2^* v\rangle = \langle u, (\bar{\alpha}_1 T_1^* + \bar{\alpha}_2 T_2^*)v\rangle$; the result then follows by the uniqueness of the adjoint (and similarly for the following identities).

(b) $\langle u, (T_1 T_2)^* v\rangle = \langle T_1 T_2 u, v\rangle = \langle T_2 u, T_1^* v\rangle = \langle u, T_2^* T_1^* v\rangle$.

(c) $\langle u, T^{**} v\rangle = \langle T^* u, v\rangle = \overline{\langle v, T^* u\rangle} = \overline{\langle Tv, u\rangle} = \langle u, Tv\rangle$.

(d) $\langle u, I^* v\rangle = \langle Iu, v\rangle = \langle u, Iv\rangle$ and $\langle u, 0^* v\rangle = \langle 0u, v\rangle = \langle u, 0v\rangle$.
(e) For any $v \in V$,
$$\|Tv\|^2 = \langle Tv, Tv\rangle = \langle v, T^* T v\rangle \le \|v\| \|T^* T v\| \le \|T^* T\| \|v\|^2,$$
therefore $\|T\|^2 \le \|T^* T\|$. But clearly $\|T^* T\| \le \|T^*\| \|T\| = \|T\|^2$, which establishes the equality.
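Property (e) is easy to confirm numerically for matrices, where the operator norm is the largest singular value (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
nT = np.linalg.norm(T, 2)                  # operator (spectral) norm ||T||
nTT = np.linalg.norm(T.conj().T @ T, 2)    # ||T* T||
```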
Definition 10.6. Let $V$ be a Hilbert space and $T \in B(V)$. If $T = T^*$, then $T$ is said to be self-adjoint. If $T = -T^*$, $T$ is said to be anti-self-adjoint. If $T T^* = T^* T = I$, then $T$ is said to be orthogonal in the real case and unitary in the complex case.
Example. If $V = \mathbb{R}^n$, the matrix of a self-adjoint operator is symmetric, and the matrix of an anti-self-adjoint operator is anti-symmetric. If $V = \mathbb{C}^n$, the matrix of a self-adjoint operator satisfies $A = \overline{A}^T$ and is called Hermitian, and the matrix of an anti-self-adjoint operator is called anti-Hermitian.
Proposition 10.7. Let $V$ be a real (or complex) Hilbert space and $T \in B(V)$. Then $T$ is orthogonal (or unitary) iff it is an isometric isomorphism.

Proof. First suppose that $T$ is orthogonal (or unitary), i.e. that $T^* T = T T^* = I$. Then immediately $T$ is invertible with $T^{-1} = T^*$, so $T$ is a bijection (isomorphism). Also, $\|Tv\|^2 = \langle Tv, Tv\rangle = \langle v, T^* T v\rangle = \langle v, v\rangle = \|v\|^2$, so $T$ preserves the norm. By polarisation (by the identities for the inner product in terms of the norm from Proposition (1.4)), $T$ also preserves the inner product, i.e. it is an isometry.

Conversely, if $T$ is an isometry, i.e. for all $u, v \in V$ we have $\langle u, v\rangle = \langle Tu, Tv\rangle = \langle u, T^* T v\rangle$, it follows that $T^* T = I$. And if $T$ is in addition an isomorphism, it follows that $T^*$ must be its inverse $T^{-1}$, and hence also $T T^* = I$, so that $T$ is orthogonal (or unitary, if $V$ is complex).
Definition 10.8. Let $V$ be a Hilbert space and $T \in B(V)$. We define the exponential $\exp(T): V \to V$ by
$$\exp(T)(v) = \sum_{k=0}^{\infty} \frac{1}{k!} T^k v.$$
Proposition 10.9. Let $V$ be a Hilbert space. For any $T \in B(V)$, $\exp(T) \in B(V)$ and $\exp(T^*) = \exp(T)^*$. Moreover, for any $S, T \in B(V)$ such that $ST = TS$, $\exp(S)\exp(T) = \exp(S + T)$.
Corollary 10.10. Let $V$ be a Hilbert space. If $T \in B(V)$ is anti-self-adjoint, then $\exp(T)$ is orthogonal (or unitary).

Proof. If $T$ is anti-self-adjoint, then $T^* + T = 0$, and therefore
$$\exp(T)^* \exp(T) = \exp(T^*)\exp(T) = \exp(T^* + T) = \exp(0) = I,$$
and similarly $\exp(T)\exp(T)^* = I$. Therefore $\exp(T)$ is orthogonal (unitary).
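The corollary can be tested on matrices with a truncated exponential series (our own helper; 60 terms are more than enough for machine precision at this matrix size):

```python
import numpy as np

def exp_series(T, terms=60):
    # Truncated series sum_k T^k / k! from Definition 10.8
    out = np.eye(T.shape[0], dtype=complex)
    term = np.eye(T.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ T / k
        out = out + term
    return out

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = B - B.conj().T                       # anti-self-adjoint: T* = -T
U = exp_series(T)
dev = np.linalg.norm(U.conj().T @ U - np.eye(4))   # vanishes iff U is unitary
```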
11 Spectral Theory
Definition 11.1. If $V$ is a Banach space and $T \in B(V)$, the spectrum of $T$ is defined as
$$\sigma(T) = \{\lambda \in \mathbb{C} : (\lambda I - T) \text{ is not invertible in } B(V)\}.$$

Note. Recall that $\sigma(T)$ is a closed subset of $\mathbb{C}$ contained in $\{z : |z| \le \|T\|\}$.
Lemma 11.2. If $V$ is a Hilbert space and $T \in B(V)$, then $\sigma(T^*) = \overline{\sigma(T)}$.

Proof. If $\lambda \notin \sigma(T)$, then $(\lambda I - T)$ is invertible in $B(V)$, so there exists some $S \in B(V)$ such that $S(\lambda I - T) = (\lambda I - T)S = I$. Taking adjoints, we get that $(\bar{\lambda} I - T^*)S^* = S^*(\bar{\lambda} I - T^*) = I$, so $\bar{\lambda} \notin \sigma(T^*)$. Therefore $\sigma(T^*) \subseteq \overline{\sigma(T)}$. Applying the same argument to $T^*$ in place of $T$, we get $\overline{\sigma(T)} \subseteq \sigma(T^*)$, therefore $\sigma(T^*) = \overline{\sigma(T)}$.
Note. In the above the overline denotes complex conjugation, not closure.
Lemma 11.3. Let $V$ be a Banach space and $T \in B(V)$ be such that for some $\alpha > 0$, $\|Tv\| \ge \alpha \|v\|$ for all $v \in V$. Then $T$ is injective and $T(V)$ is closed in $V$.

Proof. Firstly, if $Tv = 0$ then $\alpha \|v\| \le 0$ and hence $v = 0$, which means that $T$ is injective. To show that $T(V)$ is closed, suppose that $(Tv_n)$ is a sequence in $T(V)$ convergent in $V$. It must also be Cauchy, so that $\|Tv_j - Tv_k\| \to 0$ as $j, k \to \infty$, but then also $\|v_j - v_k\| \le \frac{1}{\alpha} \|Tv_j - Tv_k\| \to 0$ as $j, k \to \infty$. Therefore $(v_n)$ is a Cauchy sequence in $V$, and hence it converges to some $v$. Since $T \in B(V)$, we get that $Tv_n \to Tv \in T(V)$.
Theorem 11.4. Let $V$ be a Hilbert space and $T \in B(V)$. If $T$ is self-adjoint, then $\sigma(T) \subseteq \mathbb{R}$; if $T$ is anti-self-adjoint, then $\sigma(T) \subseteq i\mathbb{R}$.

Proof. Let $T$ be self-adjoint, write $\lambda = \alpha + i\beta$ with $\alpha, \beta \in \mathbb{R}$, and let $v \in V$. Then
$$\|(\lambda I - T)v\|^2 = \langle (\alpha + i\beta)v - Tv, (\alpha + i\beta)v - Tv\rangle = \|\alpha v - Tv\|^2 + \langle i\beta v, \alpha v - Tv\rangle + \langle \alpha v - Tv, i\beta v\rangle + \|i\beta v\|^2$$
$$= \|\alpha v - Tv\|^2 + i\beta\bigl(\langle v, \alpha v - Tv\rangle - \langle \alpha v - Tv, v\rangle\bigr) + \beta^2 \|v\|^2 \ge \beta^2 \|v\|^2,$$
since $\langle v, \alpha v - Tv\rangle - \langle \alpha v - Tv, v\rangle = 0$ as $(\alpha I - T)$ is self-adjoint. If $\beta \ne 0$, the above lemma applies and we can deduce that $(\lambda I - T)$ is injective and $(\lambda I - T)(V)$ is closed in $V$.

Now if some $v \in ((\lambda I - T)(V))^\perp$, then $\langle (\lambda I - T)u, v\rangle = 0$ for all $u$, so also $\langle u, (\bar{\lambda} I - T)v\rangle = 0$ for all $u$, and hence $(\bar{\lambda} I - T)v = 0$. However, the above argument applies equally to $\bar{\lambda}$ (which also has non-zero imaginary part), showing that $(\bar{\lambda} I - T)$ is injective, so we must have $v = 0$. This shows that $((\lambda I - T)(V))^\perp = \{0\}$, and since $(\lambda I - T)(V)$ is closed, $(\lambda I - T)(V) = V$. Therefore $(\lambda I - T)$ is surjective and by the above also bijective. We have shown above that
$$\|v\| \ge |\beta| \|(\lambda I - T)^{-1} v\|$$
for all $v$, therefore $(\lambda I - T)^{-1} \in B(V)$ and so $(\lambda I - T)$ is invertible in $B(V)$. In other words, if $\beta \ne 0$, then $\lambda \notin \sigma(T)$.

For $T$ anti-self-adjoint the proof is similar; just expand as
$$\|(\lambda I - T)v\|^2 = \|i\beta v - Tv\|^2 + \langle \alpha v, i\beta v - Tv\rangle + \langle i\beta v - Tv, \alpha v\rangle + \|\alpha v\|^2 \ge \alpha^2 \|v\|^2.$$
Proposition 11.5. Let $V$ be a Hilbert space and $T \in B(V)$ be self-adjoint. If we define
$$m = \inf\{\langle Tv, v\rangle : \|v\| = 1\} \quad \text{and} \quad M = \sup\{\langle Tv, v\rangle : \|v\| = 1\},$$
then $\sigma(T) \subseteq [m, M]$.

Proof. First note that $m$ and $M$ are well-defined since $\langle Tv, v\rangle \in \mathbb{R}$ for all $v$ if $T$ is self-adjoint. Also, by the above theorem we have $\sigma(T) \subseteq \mathbb{R}$.

Let $\lambda \in \mathbb{R}$, $\lambda > M$, and $v \in V \setminus \{0\}$. Then
$$\langle (\lambda I - T)v, v\rangle = \lambda \|v\|^2 - \langle Tv, v\rangle \ge (\lambda - M) \|v\|^2.$$
Using the Cauchy-Schwarz inequality, also $\|(\lambda I - T)v\| \|v\| \ge (\lambda - M) \|v\|^2$, and hence $\|(\lambda I - T)v\| \ge (\lambda - M) \|v\|$. By the same reasoning as in the proof of the above theorem, we can deduce that $(\lambda I - T)$ is injective and $(\lambda I - T)(V)$ is closed in $V$ (by Lemma (11.3)), that $((\lambda I - T)(V))^\perp = \ker(\lambda I - T)^* = \ker(\lambda I - T) = \{0\}$, and hence that $(\lambda I - T)$ is bijective. Moreover, by the inequality $\|(\lambda I - T)v\| \ge (\lambda - M) \|v\|$, its inverse is bounded, so it is invertible in $B(V)$ and hence $\lambda \notin \sigma(T)$.

Similarly we can show that if $\lambda < m$ then $\lambda \notin \sigma(T)$, which together with $\sigma(T) \subseteq \mathbb{R}$ implies that $\sigma(T) \subseteq [m, M]$.
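For matrices this is the familiar statement that a Hermitian matrix has real eigenvalues lying between the extreme Rayleigh quotients; a numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
T = (B + B.conj().T) / 2                 # self-adjoint (Hermitian) matrix
eigs = np.linalg.eigvals(T)
m, M = eigs.real.min(), eigs.real.max()  # for matrices, m and M are the extreme eigenvalues

# Rayleigh quotients <Tv, v> for random unit vectors: real, and inside [m, M]
quots = []
for _ in range(500):
    v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    v /= np.linalg.norm(v)
    quots.append(np.vdot(v, T @ v))      # = <Tv, v> with <x, y> = sum x_i conj(y_i)
quots = np.array(quots)
```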
Definition 11.6. Let $V$ be a Hilbert space. A self-adjoint operator $T \in B(V)$ is said to be positive if $\langle Tv, v\rangle \ge 0$ for all $v \in V$.
Proposition 11.7. Let $V$ be a Hilbert space. For any $T \in B(V)$, $T^* T$ is a self-adjoint positive operator.

Proof. It is self-adjoint since $(T^* T)^* = T^* T^{**} = T^* T$, and positive since $\langle T^* T v, v\rangle = \langle Tv, Tv\rangle \ge 0$ for all $v \in V$.
Example. Let $V = L^2$ and let $k \in L^2(\mathbb{R}^2)$ be such that $k(x, y) = \overline{k(y, x)}$ for all $x, y$. Then, by Fubini's theorem, $y \mapsto k(x, y)$ is in $L^2$ for almost all $x$, and hence
$$Tf: x \mapsto \int k(x, y) f(y)\,dy$$
is well-defined. Moreover, it can be shown that $Tf \in L^2$ and $T \in B(L^2)$ with $\|T\| \le \|k\|_{L^2(\mathbb{R}^2)}$. Finally, $T$ is self-adjoint since
$$\langle Tf, g\rangle = \iint k(x, y) f(y) \overline{g(x)}\,dy\,dx = \overline{\iint k(y, x) g(x) \overline{f(y)}\,dx\,dy} = \overline{\langle Tg, f\rangle} = \langle f, Tg\rangle.$$
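Discretising such a kernel operator on a grid turns it into a Hermitian matrix, so the self-adjointness becomes visible numerically. A sketch, using an illustrative kernel of our own choosing:

```python
import numpy as np

# Discretise Tf(x) = ∫ k(x, y) f(y) dy on [-1, 1]: with grid step h, T becomes
# the matrix h * K where K[i, j] = k(x_i, x_j).
xs = np.linspace(-1, 1, 200)
h = xs[1] - xs[0]
K = np.exp(-np.abs(xs[:, None] - xs[None, :]))   # real symmetric kernel e^{-|x - y|}
T = h * K
herm_dev = np.linalg.norm(T - T.conj().T)        # self-adjointness of the discretisation
eigs = np.linalg.eigvals(T)                      # should be (numerically) real
```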
12 The Fourier Transform
Definition 12.1. Let $f \in L^1 \cap L^2$ be a (complex-valued) function. The Fourier transform of $f$ is the function $\hat{f}$ given by
$$\hat{f}(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{-ixy}\,dx.$$
Proposition 12.2. For any $a < b$ in $\mathbb{R}$,
$$\widehat{\chi_{[a,b]}}(y) = \frac{i}{\sqrt{2\pi}} \cdot \frac{e^{-iby} - e^{-iay}}{y}.$$
Moreover, $\widehat{\chi_{[a,b]}} \in L^2(\mathbb{R})$ with $\|\widehat{\chi_{[a,b]}}\|_2 = \sqrt{b-a} = \|\chi_{[a,b]}\|_2$. Also, if $(a, b)$ and $(c, d)$ are disjoint intervals, then $\langle \widehat{\chi_{[a,b]}}, \widehat{\chi_{[c,d]}}\rangle_2 = \langle \chi_{[a,b]}, \chi_{[c,d]}\rangle_2 = 0$.

Proof. By direct calculation.
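Both the closed form and the norm identity can be checked numerically; in this sketch the grid sizes and the interval $[a, b]$ are arbitrary choices of ours:

```python
import numpy as np

SQ2PI = np.sqrt(2 * np.pi)

def chi_hat_closed(a, b, y):
    # Closed form from Proposition 12.2, for y != 0
    return 1j * (np.exp(-1j * b * y) - np.exp(-1j * a * y)) / (SQ2PI * y)

def chi_hat_quad(a, b, y, n=200001):
    # Direct trapezoidal quadrature of (1/sqrt(2 pi)) * ∫_a^b e^{-ixy} dx
    xs = np.linspace(a, b, n)
    w = np.full(n, xs[1] - xs[0]); w[0] /= 2; w[-1] /= 2
    return np.sum(np.exp(-1j * xs * y) * w) / SQ2PI

a, b = -0.5, 1.5
err = max(abs(chi_hat_closed(a, b, y) - chi_hat_quad(a, b, y)) for y in (0.3, 1.0, 2.7))

# ||chi_hat||_2^2 ≈ b - a, integrating |chi_hat|^2 over a wide symmetric range
ys = np.linspace(-400, 400, 500000)      # this grid avoids y = 0 exactly
vals = np.abs(chi_hat_closed(a, b, ys)) ** 2
norm_sq = np.sum(vals) * (ys[1] - ys[0])
```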
Corollary 12.3. If $\varphi, \psi \in L_{\mathrm{step}}(\mathbb{R})$, then $\|\hat{\varphi}\|_2 = \|\varphi\|_2$ and $\langle \hat{\varphi}, \hat{\psi}\rangle_2 = \langle \varphi, \psi\rangle_2$.

Proof. Choose a set of disjoint bounded intervals $I_1, \ldots, I_k$ such that
$$\varphi = \sum_{i=1}^{k} a_i \chi_{I_i} \quad \text{and} \quad \psi = \sum_{i=1}^{k} b_i \chi_{I_i}.$$
By the above proposition, $\langle \hat{\chi}_{I_i}, \hat{\chi}_{I_j}\rangle_2 = \langle \chi_{I_i}, \chi_{I_j}\rangle_2$, hence
$$\langle \hat{\varphi}, \hat{\psi}\rangle_2 = \sum_{i,j} a_i \bar{b}_j \langle \hat{\chi}_{I_i}, \hat{\chi}_{I_j}\rangle_2 = \sum_{i,j} a_i \bar{b}_j \langle \chi_{I_i}, \chi_{I_j}\rangle_2 = \langle \varphi, \psi\rangle_2.$$
Theorem 12.4 (Plancherel's Theorem). There exists a unique linear map $\mathcal{F}: L^2 \to L^2$, called the Fourier transform, such that

(a) if $f \in L^1 \cap L^2$, then $\mathcal{F}f = \hat{f}$, and

(b) for all $f \in L^2$, $\|\mathcal{F}f\|_2 = \|f\|_2$.

Proof. Since $L_{\mathrm{step}}$ is dense in $L^2$, we can uniquely extend the map $\varphi \mapsto \hat{\varphi}$ from $L_{\mathrm{step}}$ to $L^2$: if $\varphi_n$ in $L_{\mathrm{step}}$ converges to $f \in L^2$, define $\mathcal{F}f = \lim_{n\to\infty} \hat{\varphi}_n$. Firstly, the convergent sequence $\varphi_n$ must be Cauchy, so $\|\hat{\varphi}_j - \hat{\varphi}_k\|_2 = \|\varphi_j - \varphi_k\|_2 \to 0$, and the sequence $\hat{\varphi}_n$ must also be Cauchy, and hence convergent. Moreover, if $\psi_n$ is another sequence in $L_{\mathrm{step}}$ converging to $f$, then $\|\hat{\varphi}_n - \hat{\psi}_n\|_2 = \|\varphi_n - \psi_n\|_2 \to 0$, and hence $\lim \hat{\varphi}_n = \lim \hat{\psi}_n$. Therefore, $\mathcal{F}f$ is well-defined for all $f \in L^2$.

To prove that $\mathcal{F}$ satisfies (b), just note that $\varphi_n \to f$ and $\hat{\varphi}_n \to \mathcal{F}f$. Since $\|\hat{\varphi}_n\|_2 = \|\varphi_n\|_2$, we get that $\|\mathcal{F}f\|_2 = \|f\|_2$ for all $f$. Also note that since $L_{\mathrm{step}}$ is dense in $L^2$, any bounded operator satisfying (a) (and hence coinciding with $\mathcal{F}$ on $L_{\mathrm{step}}$) must in fact be equal to $\mathcal{F}$. Therefore $\mathcal{F}$ is unique.
It remains to prove that $\mathcal{F}$ satisfies (a) for any $f \in L^1 \cap L^2$. Choose a sequence $\varphi_n$ in $L_{\mathrm{step}}$ such that $\varphi_n \to f$ almost everywhere, and an integrable function $g$ such that $|\varphi_n| \le g$ almost everywhere. (For real-valued $f$, where $f = h - h'$ with $h, h' \in L_{\mathrm{inc}}$ and $\psi_n \to h$, $\psi'_n \to h'$ increasing, we can choose $\varphi_n = \psi_n - \psi'_n$ and $g = f - \psi'_1$.) Then, for each $y$ we can use the DCT with dominating function $g$ to show that
$$\lim_{n\to\infty} \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \varphi_n(x) e^{-ixy}\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{-ixy}\,dx,$$
i.e. $\hat{\varphi}_n(y) \to \hat{f}(y)$ for all $y$. Since $\hat{\varphi}_n \to \mathcal{F}f$ in $L^2$, we can deduce that $\hat{f} = \mathcal{F}f$ almost everywhere (and hence $\hat{f} = \mathcal{F}f$ in $L^2$).
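Plancherel's identity can be observed numerically with the Gaussian $f(x) = e^{-x^2/2}$, which under this normalisation is its own Fourier transform (a standard fact, taken as an assumption of the sketch):

```python
import numpy as np

xs = np.linspace(-12, 12, 4801)
h = xs[1] - xs[0]
f = np.exp(-xs**2 / 2)

def fhat(y):
    # (1/sqrt(2 pi)) * ∫ f(x) e^{-ixy} dx, truncated to [-12, 12] where f is negligible
    return np.sum(f * np.exp(-1j * xs * y)) * h / np.sqrt(2 * np.pi)

# The Gaussian e^{-x^2/2} is (numerically) its own Fourier transform ...
err = max(abs(fhat(y) - np.exp(-y**2 / 2)) for y in (0.0, 0.8, 2.0))

# ... and the L^2 norm is preserved, as Plancherel's theorem asserts
ys = np.linspace(-12, 12, 801)
vals = np.array([abs(fhat(y)) ** 2 for y in ys])
norm_f = np.sqrt(np.sum(np.abs(f) ** 2) * h)            # ||f||_2
norm_fhat = np.sqrt(np.sum(vals) * (ys[1] - ys[0]))     # ||F f||_2
```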
Theorem 12.5 (Fourier Inversion Theorem). There exists a unique map $\mathcal{F}^{-1}: L^2 \to L^2$ such that

(a) if $f \in L^1 \cap L^2$, then $(\mathcal{F}^{-1}f)(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{ixy}\,dx$,

(b) for all $f \in L^2$, $\|\mathcal{F}^{-1}f\|_2 = \|f\|_2$, and

(c) $\mathcal{F}\mathcal{F}^{-1} = \mathcal{F}^{-1}\mathcal{F} = I$ on $L^2$.

Proof. Parts (a) and (b) can be handled just as in the above theorem. To prove (c), first show that $\mathcal{F}\mathcal{F}^{-1} = \mathcal{F}^{-1}\mathcal{F} = I$ holds on $L_{\mathrm{step}}$, which is dense in $L^2$, then use the continuity of $\mathcal{F}$ and $\mathcal{F}^{-1}$. (Note that showing this on $L_{\mathrm{step}}$ is not completely trivial, since $\mathcal{F}\chi_{[a,b]} \notin L^1$. We cannot therefore use the integral formula for $\mathcal{F}^{-1}$ directly; we must again use a sequence converging to $\mathcal{F}\chi_{[a,b]}$ and continuity.)
Note. As shown in the last example of the previous section, $\mathcal{F}^{-1} = \mathcal{F}^*$. Therefore, $\mathcal{F}$ is a unitary transformation.
These lecture notes are based on the lectures of Prof. Dominic Joyce and are made available with his kind permission. However, the lecturer has not endorsed them in any way and their content remains the responsibility of the author.