3 Inner product spaces
3.1 Basic definitions and results
Up to now we have studied vector spaces, linear maps and special classes of linear maps. We can decide whether two vectors are equal, but we have no notion of "length", so we cannot compare two vectors, nor can we say anything about their relative position.
In a vector space one can define the norm of a vector and the inner product of two vectors. The norm lets us measure the length of vectors and compare them. The inner product, on the one hand, induces a norm, so lengths can be measured; on the other hand (at least in the case of real vector spaces), it lets us measure the angle between two vectors, so a full geometry can be constructed. In the case of complex vector spaces the angle between two vectors is not clearly defined, but orthogonality still is.
Definition 3.1. An inner product on a vector space $V$ over the field $F$ is a function $\langle \cdot, \cdot \rangle : V \times V \to F$ with the properties:
• (positivity and definiteness) $\langle v, v \rangle \ge 0$ and $\langle v, v \rangle = 0$ iff $v = 0$.
• (additivity in the first slot) $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ for all $u, v, w \in V$.
• (homogeneity in the first slot) $\langle \alpha v, w \rangle = \alpha \langle v, w \rangle$ for all $\alpha \in F$ and $v, w \in V$.
• (conjugate symmetry) $\langle v, w \rangle = \overline{\langle w, v \rangle}$ for all $v, w \in V$.
An inner product space is a pair $(V, \langle \cdot, \cdot \rangle)$, where $V$ is a vector space and $\langle \cdot, \cdot \rangle$ is an inner product on $V$.
The most important example of an inner product space is $F^n$. Let $v = (v_1, \ldots, v_n)$ and $w = (w_1, \ldots, w_n)$ and define the inner product by
$$\langle v, w \rangle = v_1 \overline{w_1} + \cdots + v_n \overline{w_n},$$
where the bar denotes complex conjugation (and can be omitted when $F = \mathbb{R}$). This is the typical example of an inner product, called the Euclidean inner product, and when $F^n$ is referred to as an inner product space, one should assume that the inner product is the Euclidean one, unless explicitly stated otherwise.
Example 3.2. Let $A \in M_2(\mathbb{R})$, $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$, be a positive definite matrix, that is, $a > 0$ and $\det(A) > 0$. Then for every $u = (u_1, u_2), v = (v_1, v_2) \in \mathbb{R}^2$ we define
$$\langle u, v \rangle = (v_1 \ v_2)\, A \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.$$
It can easily be verified that $\langle \cdot, \cdot \rangle$ is an inner product on the real linear space $\mathbb{R}^2$.
If $A = I_2$ we obtain the usual inner product $\langle u, v \rangle = u_1 v_1 + u_2 v_2$.
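To make this concrete, here is a small numerical sketch in Python/numpy (not part of the original notes; the matrix entries and the helper name `inner` are ours) that spot-checks the axioms for the inner product of Example 3.2:

```python
import numpy as np

# A positive definite matrix A = [[a, b], [b, c]] with a > 0 and det(A) > 0.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(u, v):
    """Inner product <u, v> = (v_1 v_2) A (u_1, u_2)^T from Example 3.2 (real case)."""
    return v @ A @ u

u = np.array([1.0, -2.0])
v = np.array([0.5, 4.0])
w = np.array([3.0, 1.0])
alpha = -1.7

# Spot-check the axioms on sample vectors (a sanity check, not a proof).
assert np.isclose(inner(u + v, w), inner(u, w) + inner(v, w))  # additivity
assert np.isclose(inner(alpha * u, w), alpha * inner(u, w))    # homogeneity
assert np.isclose(inner(u, w), inner(w, u))                    # symmetry (real case)
assert inner(u, u) > 0                                         # positivity for u != 0
print(inner(u, v))
```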
From the definition one can easily deduce the following properties of an
inner product:
$$\langle v, 0 \rangle = \langle 0, v \rangle = 0, \qquad \langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle, \qquad \langle u, \alpha v \rangle = \overline{\alpha}\, \langle u, v \rangle,$$
for all $u, v, w \in V$ and $\alpha \in F$.
Definition 3.3. Let $V$ be a vector space over $F$. A function
$$\| \cdot \| : V \to \mathbb{R}$$
is called a norm on $V$ if:
• (positivity) $\|v\| \ge 0$ for all $v \in V$, and $\|v\| = 0 \Leftrightarrow v = 0$;
• (homogeneity) $\|\alpha v\| = |\alpha| \cdot \|v\|$ for all $\alpha \in F$ and $v \in V$;
• (triangle inequality) $\|u + v\| \le \|u\| + \|v\|$ for all $u, v \in V$.
A normed space is a pair $(V, \| \cdot \|)$, where $V$ is a vector space and $\| \cdot \|$ is a norm on $V$.
Example 3.4. On the real linear space $\mathbb{R}^n$ one can define a norm in several ways. Indeed, for any $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ define its norm as
$$\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$
One can easily verify that the axioms in the definition of a norm are satisfied. This norm is called the Euclidean norm.
More generally, for any $p \in \mathbb{R}$, $p \ge 1$, we can define
$$\|x\| = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p},$$
the so-called $p$-norm on $\mathbb{R}^n$.
Another way to define a norm on $\mathbb{R}^n$ is $\|x\| = \max\{|x_1|, |x_2|, \ldots, |x_n|\}$. This is the so-called maximum norm.
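As an aside, these norms are easy to compare numerically; the following Python/numpy sketch (the function names `p_norm` and `max_norm` are ours, not from the notes) evaluates them on one vector:

```python
import numpy as np

def p_norm(x, p):
    """p-norm on R^n for p >= 1: (|x_1|^p + ... + |x_n|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def max_norm(x):
    """Maximum norm: max(|x_1|, ..., |x_n|)."""
    return np.max(np.abs(x))

x = np.array([3.0, -4.0, 1.0])
print(p_norm(x, 2))   # Euclidean norm, sqrt(26)
print(p_norm(x, 1))   # 1-norm, 8.0
print(max_norm(x))    # maximum norm, 4.0 (the limit of the p-norms as p grows)
```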
Definition 3.5. Let $X$ be a nonempty set. A function $d : X \times X \to \mathbb{R}$ satisfying the following properties:
• (positivity) $d(x, y) \ge 0$ for all $x, y \in X$, and $d(x, y) = 0 \Leftrightarrow x = y$;
• (symmetry) $d(x, y) = d(y, x)$ for all $x, y \in X$;
• (triangle inequality) $d(x, y) \le d(x, z) + d(z, y)$ for all $x, y, z \in X$;
is called a metric or distance on $X$. A set $X$ with a metric defined on it is called a metric space.
Example 3.6. Let $X$ be an arbitrary set. One can define a distance on $X$ by
$$d(x, y) = \begin{cases} 0, & \text{if } x = y \\ 1, & \text{otherwise.} \end{cases}$$
This metric is called the discrete metric on $X$. On $\mathbb{R}^n$ the Chebyshev distance is defined as
$$d(x, y) = \max_{1 \le i \le n} |x_i - y_i|, \qquad x = (x_1, x_2, \ldots, x_n),\ y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n.$$
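For illustration only (numpy assumed; the function names are ours), the two metrics of Example 3.6 can be coded as:

```python
import numpy as np

def discrete_metric(x, y):
    """Discrete metric: 0 if x equals y, 1 otherwise."""
    return 0.0 if np.array_equal(x, y) else 1.0

def chebyshev_distance(x, y):
    """Chebyshev distance on R^n: max over i of |x_i - y_i|."""
    return np.max(np.abs(np.asarray(x) - np.asarray(y)))

x = np.array([1.0, 5.0, -2.0])
y = np.array([4.0, 5.0, 0.0])
print(discrete_metric(x, y))      # 1.0
print(chebyshev_distance(x, y))   # 3.0
```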
In this course we are mainly interested in inner product spaces. We should point out, however, that an inner product on $V$ defines a norm by $\|v\| = \sqrt{\langle v, v \rangle}$ for $v \in V$, and a norm on $V$ defines a metric by $d(v, w) = \|w - v\|$ for $v, w \in V$.
For an inner product space $(V, \langle \cdot, \cdot \rangle)$ the following identity holds:
$$\left\langle \sum_{i=1}^{m} \alpha_i v_i, \sum_{j=1}^{n} \beta_j w_j \right\rangle = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \overline{\beta_j}\, \langle v_i, w_j \rangle.$$
Definition 3.7. Two vectors $u, v \in V$ are said to be orthogonal if $\langle u, v \rangle = 0$.
Theorem 3.8. (Parallelogram law) Let $V$ be an inner product space and $u, v \in V$. Then
$$\|u + v\|^2 + \|u - v\|^2 = 2(\|u\|^2 + \|v\|^2).$$
Proof.
$$\|u + v\|^2 + \|u - v\|^2 = \langle u + v, u + v \rangle + \langle u - v, u - v \rangle$$
$$= \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle + \langle u, u \rangle - \langle u, v \rangle - \langle v, u \rangle + \langle v, v \rangle$$
$$= 2(\|u\|^2 + \|v\|^2).$$
Theorem 3.9. (Pythagorean Theorem) Let $V$ be an inner product space, and $u, v \in V$ orthogonal vectors. Then
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$
Proof.
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + \|v\|^2,$$
since $\langle u, v \rangle = \langle v, u \rangle = 0$ by orthogonality.
Now we are going to prove one of the most important inequalities in mathematics, namely the Cauchy-Schwarz inequality. There are several methods of proof; we will give one related to our aims.
Consider $u, v \in V$ with $v \neq 0$. We want to write $u$ as the sum of a vector collinear with $v$ and a vector orthogonal to $v$. Let $\alpha \in F$ and write $u = \alpha v + (u - \alpha v)$. Imposing now the condition that $v$ is orthogonal to $u - \alpha v$, one obtains
$$0 = \langle u - \alpha v, v \rangle = \langle u, v \rangle - \alpha \|v\|^2,$$
so one has to choose $\alpha = \frac{\langle u, v \rangle}{\|v\|^2}$, and the decomposition is
$$u = \frac{\langle u, v \rangle}{\|v\|^2}\, v + \left( u - \frac{\langle u, v \rangle}{\|v\|^2}\, v \right).$$
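A quick numerical illustration of this decomposition (a Python/numpy sketch with names of our choosing, not part of the notes):

```python
import numpy as np

def decompose_along(u, v):
    """Split u = alpha*v + w with w orthogonal to v (v must be nonzero)."""
    alpha = np.dot(u, v) / np.dot(v, v)   # alpha = <u, v> / ||v||^2
    parallel = alpha * v
    orthogonal = u - parallel
    return parallel, orthogonal

u = np.array([2.0, 3.0, -1.0])
v = np.array([1.0, 1.0, 0.0])
p, w = decompose_along(u, v)
print(p, w)
print(np.dot(w, v))            # approximately 0: w is orthogonal to v
print(np.allclose(p + w, u))   # True: the two pieces recover u
```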
Theorem 3.10. (Cauchy-Schwarz Inequality) Let $V$ be an inner product space and $u, v \in V$. Then
$$|\langle u, v \rangle| \le \|u\| \cdot \|v\|.$$
Equality holds iff one of $u, v$ is a scalar multiple of the other ($u$ and $v$ are collinear).
Proof. Let $u, v \in V$. If $v = 0$ both sides of the inequality are $0$ and the desired result holds. Suppose that $v \neq 0$ and write
$$u = \frac{\langle u, v \rangle}{\|v\|^2}\, v + \left( u - \frac{\langle u, v \rangle}{\|v\|^2}\, v \right).$$
Taking into account that the vectors $\frac{\langle u, v \rangle}{\|v\|^2} v$ and $u - \frac{\langle u, v \rangle}{\|v\|^2} v$ are orthogonal, by the Pythagorean theorem we obtain
$$\|u\|^2 = \left\| \frac{\langle u, v \rangle}{\|v\|^2}\, v \right\|^2 + \left\| u - \frac{\langle u, v \rangle}{\|v\|^2}\, v \right\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2} + \left\| u - \frac{\langle u, v \rangle}{\|v\|^2}\, v \right\|^2 \ge \frac{|\langle u, v \rangle|^2}{\|v\|^2},$$
an inequality equivalent to the one in the theorem.
We have equality iff $u - \frac{\langle u, v \rangle}{\|v\|^2}\, v = 0$, that is, iff $u$ is a scalar multiple of $v$.
3.2 Orthonormal Bases
Definition 3.11. Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space and let $I$ be an arbitrary index set. A family of vectors $A = \{ e_i \in V \mid i \in I \}$ is called an orthogonal family if $\langle e_i, e_j \rangle = 0$ for every $i, j \in I$, $i \neq j$. The family $A$ is called orthonormal if it is orthogonal and $\|e_i\| = 1$ for every $i \in I$.
One of the reasons one studies orthonormal families is that in such special bases computations become much simpler.
Proposition 3.12. If $(e_1, e_2, \ldots, e_m)$ is an orthonormal family of vectors in $V$, then
$$\|\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_m e_m\|^2 = |\alpha_1|^2 + |\alpha_2|^2 + \cdots + |\alpha_m|^2$$
for all $\alpha_1, \alpha_2, \ldots, \alpha_m \in F$.
Proof. Apply the Pythagorean Theorem.
Corollary 3.13. Every orthonormal list of vectors is linearly independent.
Proof. Let $(e_1, e_2, \ldots, e_m)$ be an orthonormal list of vectors in $V$ and $\alpha_1, \alpha_2, \ldots, \alpha_m \in F$ with
$$\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_m e_m = 0.$$
It follows that $|\alpha_1|^2 + |\alpha_2|^2 + \cdots + |\alpha_m|^2 = 0$, that is, $\alpha_j = 0$ for $j = 1, \ldots, m$.
An orthonormal basis of an inner product space $V$ is a basis of $V$ which is also an orthonormal list of vectors of $V$. It is clear that every orthonormal list of vectors of length $\dim V$ is an orthonormal basis (because it is linearly independent, being orthonormal).
Theorem 3.14. Let $(e_1, e_2, \ldots, e_n)$ be an orthonormal basis of an inner product space $V$. If $v = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n \in V$, then
• $\alpha_i = \langle v, e_i \rangle$ for all $i \in \{1, 2, \ldots, n\}$, and
• $\|v\|^2 = \sum_{i=1}^{n} |\langle v, e_i \rangle|^2$.
Proof. Since $v = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n$, by taking the inner product of both sides with $e_i$ we have
$$\langle v, e_i \rangle = \alpha_1 \langle e_1, e_i \rangle + \alpha_2 \langle e_2, e_i \rangle + \cdots + \alpha_i \langle e_i, e_i \rangle + \cdots + \alpha_n \langle e_n, e_i \rangle = \alpha_i.$$
The second assertion follows by applying the Pythagorean Theorem several times.
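The first assertion is easy to test numerically; here is a short sketch (numpy assumed, the data is hypothetical):

```python
import numpy as np

# An orthonormal pair in R^3 (an illustrative choice, not from the notes).
e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

v = 3.0 * e1 - 2.0 * e2                     # a vector lying in span(e1, e2)

# Theorem 3.14: the coordinates are recovered as alpha_i = <v, e_i>.
alphas = np.array([np.dot(v, e1), np.dot(v, e2)])
print(alphas)                               # approximately [ 3. -2.]
print(np.dot(v, v), np.sum(alphas ** 2))    # ||v||^2 equals the sum of |<v, e_i>|^2
```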
By now we have an idea of how useful orthonormal bases are. But how does one find them? The next result, a well-known algorithm in linear algebra called the Gram-Schmidt procedure, answers this question: it gives a method for turning a linearly independent list into an orthonormal one with the same span as the original.
Theorem 3.15. (Gram-Schmidt) If $(v_1, v_2, \ldots, v_m)$ is a linearly independent set of vectors in $V$, then there exists an orthonormal set of vectors $(e_1, \ldots, e_m)$ in $V$ such that
$$\mathrm{span}(v_1, v_2, \ldots, v_k) = \mathrm{span}(e_1, e_2, \ldots, e_k)$$
for every $k \in \{1, 2, \ldots, m\}$.
Proof. Let $(v_1, v_2, \ldots, v_m)$ be a linearly independent set of vectors. The family of orthonormal vectors $(e_1, e_2, \ldots, e_m)$ will be constructed inductively. Start with $e_1 = \frac{v_1}{\|v_1\|}$. Suppose now that $j > 1$ and that an orthonormal family $(e_1, e_2, \ldots, e_{j-1})$ has been constructed such that
$$\mathrm{span}(v_1, v_2, \ldots, v_{j-1}) = \mathrm{span}(e_1, e_2, \ldots, e_{j-1}).$$
Consider
$$e_j = \frac{v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1}}{\| v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1} \|}.$$
Since the list $(v_1, v_2, \ldots, v_m)$ is linearly independent, $v_j$ is not in $\mathrm{span}(v_1, v_2, \ldots, v_{j-1})$, and thus not in $\mathrm{span}(e_1, e_2, \ldots, e_{j-1})$. Hence $e_j$ is well defined and $\|e_j\| = 1$. By direct computation it follows that for $1 \le k < j$ one has
$$\langle e_j, e_k \rangle = \left\langle \frac{v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1}}{\| v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1} \|},\ e_k \right\rangle = \frac{\langle v_j, e_k \rangle - \langle v_j, e_k \rangle}{\| v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1} \|} = 0,$$
thus $(e_1, e_2, \ldots, e_j)$ is an orthonormal family. By the definition of $e_j$ one can see that $v_j \in \mathrm{span}(e_1, e_2, \ldots, e_j)$, which gives (together with the induction hypothesis) that
$$\mathrm{span}(v_1, v_2, \ldots, v_j) \subset \mathrm{span}(e_1, e_2, \ldots, e_j).$$
Both lists being linearly independent (the first by hypothesis and the second by orthonormality), the subspaces above have the same dimension $j$, so they are equal.
Remark 3.16. If in the Gram-Schmidt process we do not normalize the vectors, we obtain an orthogonal basis instead of an orthonormal one.
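A minimal Python/numpy sketch of the procedure (the function name `gram_schmidt` is ours; this is an illustration of Theorem 3.15, not code from the notes):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthonormal list
    with the same span, following the construction in Theorem 3.15."""
    orthonormal = []
    for v in vectors:
        w = v.astype(float).copy()
        for e in orthonormal:
            w -= np.dot(v, e) * e          # subtract <v_j, e_k> e_k
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("the vectors are not linearly independent")
        orthonormal.append(w / norm)       # normalize to obtain e_j
    return orthonormal

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
print(np.round([[np.dot(a, b) for b in es] for a in es], 10))  # identity matrix
```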
Now we can state the main results of this section.
Corollary 3.17. Every finite-dimensional inner product space has an orthonormal basis.
Proof. Choose a basis of $V$, apply the Gram-Schmidt procedure to it, and obtain an orthonormal list of length equal to $\dim V$. It follows that this list is a basis, being linearly independent.
The next proposition shows that any orthonormal list can be extended to
an orthonormal basis.
Proposition 3.18. Every orthonormal family of vectors can be extended to an orthonormal basis of $V$.
Proof. Suppose $(e_1, e_2, \ldots, e_m)$ is an orthonormal family of vectors. Being linearly independent, it can be extended to a basis $(e_1, e_2, \ldots, e_m, v_{m+1}, \ldots, v_n)$. Applying now the Gram-Schmidt procedure to $(e_1, e_2, \ldots, e_m, v_{m+1}, \ldots, v_n)$, we obtain the list $(e_1, e_2, \ldots, e_m, f_{m+1}, \ldots, f_n)$ (note that the Gram-Schmidt procedure leaves the first $m$ entries unchanged, since they are already orthonormal). Hence we have an extension to an orthonormal basis.
Corollary 3.19. Suppose that $T \in \mathrm{End}(V)$. If $T$ has an upper triangular form with respect to some basis of $V$, then $T$ has an upper triangular form with respect to some orthonormal basis of $V$.
Corollary 3.20. Suppose that $V$ is a complex vector space and $T \in \mathrm{End}(V)$. Then $T$ has an upper triangular form with respect to some orthonormal basis of $V$.
3.3 Orthogonal projections and minimization problems
Let $U \subseteq V$ be a subset of an inner product space $V$. The orthogonal complement of $U$, denoted by $U^\perp$, is the set of all vectors in $V$ which are orthogonal to every vector in $U$, i.e.
$$U^\perp = \{ v \in V \mid \langle v, u \rangle = 0,\ \forall u \in U \}.$$
It can easily be verified that $U^\perp$ is a subspace of $V$, that $V^\perp = \{0\}$ and $\{0\}^\perp = V$, as well as that $U_1 \subseteq U_2 \Rightarrow U_2^\perp \subseteq U_1^\perp$.
Theorem 3.21. If $U$ is a subspace of $V$, then
$$V = U \oplus U^\perp.$$
Proof. Suppose that $U$ is a subspace of $V$. We will show first that
$$V = U + U^\perp.$$
Let $\{e_1, \ldots, e_m\}$ be an orthonormal basis of $U$ and $v \in V$. We have
$$v = (\langle v, e_1 \rangle e_1 + \cdots + \langle v, e_m \rangle e_m) + (v - \langle v, e_1 \rangle e_1 - \cdots - \langle v, e_m \rangle e_m).$$
Denote the first vector by $u$ and the second by $w$. Clearly $u \in U$. For each $j \in \{1, 2, \ldots, m\}$ one has
$$\langle w, e_j \rangle = \langle v, e_j \rangle - \langle v, e_j \rangle = 0.$$
Thus $w$ is orthogonal to every vector in the basis of $U$, that is, $w \in U^\perp$; consequently
$$V = U + U^\perp.$$
We show now that $U \cap U^\perp = \{0\}$. Suppose that $v \in U \cap U^\perp$. Then $v$ is orthogonal to every vector in $U$, hence $\langle v, v \rangle = 0$, that is, $v = 0$. The relations $V = U + U^\perp$ and $U \cap U^\perp = \{0\}$ imply the conclusion of the theorem.
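The decomposition used in the proof is constructive; a small numpy sketch (names and data ours) of the projection onto $U$ given an orthonormal basis:

```python
import numpy as np

def project_onto(v, orthonormal_basis):
    """Return (u, w) with u in U, w in U-perp and v = u + w,
    following the decomposition in the proof of Theorem 3.21."""
    u = sum(np.dot(v, e) * e for e in orthonormal_basis)
    return u, v - u

# U = span of two orthonormal vectors in R^3 (hypothetical data).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = np.array([2.0, -1.0, 5.0])

u, w = project_onto(v, [e1, e2])
print(u, w)                           # [ 2. -1.  0.] and [0. 0. 5.]
print(np.dot(w, e1), np.dot(w, e2))   # both 0: w lies in U-perp
```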
Corollary 3.22. If $U_1, U_2$ are subspaces of $V$ then
• $U_1 = (U_1^\perp)^\perp$;
• $(U_1 + U_2)^\perp = U_1^\perp \cap U_2^\perp$;
• $(U_1 \cap U_2)^\perp = U_1^\perp + U_2^\perp$.
Proof. Your job
In a real inner product space we can define the angle between two vectors:
$$\widehat{(v, w)} = \arccos \frac{\langle v, w \rangle}{\|v\| \cdot \|w\|}.$$
We have
$$v \perp w \iff \widehat{(v, w)} = \frac{\pi}{2}.$$
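For instance (numpy assumed, the helper name is ours):

```python
import numpy as np

def angle(v, w):
    """Angle between nonzero vectors of a real inner product space."""
    c = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against rounding errors

v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])
print(np.degrees(angle(v, w)))                                # 45.0
print(np.isclose(angle(v, np.array([0.0, 2.0])), np.pi / 2))  # True for an orthogonal pair
```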
3.4 Linear manifolds
Let V be a vector space over the field F.
Definition 3.23. A set $L = v_0 + V_L = \{ v_0 + v \mid v \in V_L \}$, where $v_0 \in V$ is a vector and $V_L \subset V$ is a subspace of $V$, is called a linear manifold (or linear variety). The subspace $V_L$ is called the director subspace of the linear variety.
Remark 3.24. One can easily verify the following:
• if $v_0 \in V_L$ then $L = V_L$;
• $v_0 \in L$, because $v_0 = v_0 + 0 \in v_0 + V_L$;
• for $v_1, v_2 \in L$ we have $v_1 - v_2 \in V_L$;
• for every $v_1 \in L$ we have $L = v_1 + V_L$;
• the director subspace is uniquely determined by the manifold: if $L_1 = L_2$ then $V_{L_1} = V_{L_2}$.
Definition 3.25. We would like to emphasize that:
1. The dimension of a linear manifold is the dimension of its director subspace.
2. Two linear manifolds $L_1$ and $L_2$ are called orthogonal if $V_{L_1} \perp V_{L_2}$.
3. Two linear manifolds $L_1$ and $L_2$ are called parallel if $V_{L_1} \subset V_{L_2}$ or $V_{L_2} \subset V_{L_1}$.
3.4.1 The equations of a linear manifold
Let $L = v_0 + V_L$ be a linear manifold in a finite-dimensional vector space $V$. For $\dim L = k \le n = \dim V$ one can choose a basis $\{v_1, \ldots, v_k\}$ in the director subspace $V_L$. We have
$$L = \{ v = v_0 + \alpha_1 v_1 + \cdots + \alpha_k v_k \mid \alpha_i \in F,\ i = 1, \ldots, k \}.$$
We can consider an arbitrary fixed basis in $V$, say $E = \{e_1, \ldots, e_n\}$, and if we use column vectors for the coordinates in this basis, i.e. $v_{[E]} = (x_1, \ldots, x_n)^T$, $v_{0[E]} = (x_1^0, \ldots, x_n^0)^T$, $v_{j[E]} = (x_{1j}, \ldots, x_{nj})^T$ for $j = 1, \ldots, k$, one has the parametric equations of the linear manifold
$$\begin{cases} x_1 = x_1^0 + \alpha_1 x_{11} + \cdots + \alpha_k x_{1k} \\ \quad \vdots \\ x_n = x_n^0 + \alpha_1 x_{n1} + \cdots + \alpha_k x_{nk}. \end{cases}$$
The rank of the matrix $(x_{ij})_{i = 1, \ldots, n;\ j = 1, \ldots, k}$ is $k$, because the vectors $v_1, \ldots, v_k$ are linearly independent.
It is worthwhile to mention that:
1. a linear manifold of dimension one is called a line;
2. a linear manifold of dimension two is called a plane;
3. a linear manifold of dimension $k$ is called a $k$-plane;
4. a linear manifold of dimension $n - 1$ in an $n$-dimensional vector space is called a hyperplane.
Theorem 3.26. Let $V$ be an $n$-dimensional vector space over the field $F$. Then any subspace of $V$ is the kernel of a surjective linear map.
Proof. Suppose $V_L$ is a subspace of $V$ of dimension $k$. Choose a basis $\{e_1, \ldots, e_k\}$ in $V_L$ and complete it to a basis $\{e_1, \ldots, e_k, e_{k+1}, \ldots, e_n\}$ of $V$. Consider $U = \mathrm{span}\{e_{k+1}, \ldots, e_n\}$ and let $T : V \to U$ be given by
$$T(e_1) = 0, \ \ldots, \ T(e_k) = 0, \quad T(e_{k+1}) = e_{k+1}, \ \ldots, \ T(e_n) = e_n.$$
Obviously,
$$T(\alpha_1 e_1 + \cdots + \alpha_n e_n) = \alpha_1 T(e_1) + \cdots + \alpha_n T(e_n) = \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n$$
defines a linear map. It is also clear that $\ker T = V_L$, as well as that $T$ is surjective, i.e. $\mathrm{Im}\, T = U$.
Theorem 3.27. Let $V, U$ be two linear spaces over the same field $F$. If $T : V \to U$ is a surjective linear map, then for every $u_0 \in U$ the set $L = \{ v \in V \mid T(v) = u_0 \}$ is a linear manifold.
Proof. $T$ being surjective, there exists $v_0 \in V$ with $T(v_0) = u_0$. We will show that $\{ v - v_0 \mid v \in L \} = \ker T$.
Let $v \in L$. We have $T(v - v_0) = T(v) - T(v_0) = 0$, so $\{ v - v_0 \mid v \in L \} \subseteq \ker T$.
Let $v_1 \in \ker T$, i.e. $T(v_1) = 0$. Write $v_1 = (v_1 + v_0) - v_0$. Since $T(v_1 + v_0) = u_0$, we have $v_1 + v_0 \in L$. Hence $v_1 \in \{ v - v_0 \mid v \in L \}$, or, in other words, $\ker T \subseteq \{ v - v_0 \mid v \in L \}$.
Consequently $L = v_0 + \ker T$, which shows that $L$ is a linear manifold.
The previous theorems give rise to the next:
Theorem 3.28. Let $V$ be a linear space of dimension $n$. Then, for every linear manifold $L \subset V$ of dimension $\dim L = k < n$, there exist an $(n-k)$-dimensional vector space $U$, a surjective linear map $T : V \to U$ and a vector $u \in U$ such that
$$L = \{ v \in V \mid T(v) = u \}.$$
Proof. Indeed, consider $L = v_0 + V_L$, where the director subspace $V_L$ has dimension $k$. Choose a basis $\{e_1, \ldots, e_k\}$ in $V_L$ and complete it to a basis $\{e_1, \ldots, e_k, e_{k+1}, \ldots, e_n\}$ of $V$. Consider $U = \mathrm{span}\{e_{k+1}, \ldots, e_n\}$; obviously $\dim U = n - k$. According to the previous theorem, the linear map
$$T : V \to U, \quad T(\alpha_1 e_1 + \cdots + \alpha_k e_k + \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n) = \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n$$
is surjective and $\ker T = V_L$. Let $T(v_0) = u$. Then, according to the proof of the previous theorem, $L = \{ v \in V \mid T(v) = u \}$.
Remark 3.29. If we choose bases in $V$ and $U$ and write the linear map in matrix form, $M_T v = u$, we obtain the implicit equations of the linear manifold $L$:
$$\begin{cases} a_{11} v_1 + a_{12} v_2 + \cdots + a_{1n} v_n = u_1 \\ \quad \vdots \\ a_{p1} v_1 + a_{p2} v_2 + \cdots + a_{pn} v_n = u_p, \end{cases}$$
where $p = n - k = \dim U = \mathrm{rank}\,(a_{ij})_{i = 1, \ldots, p;\ j = 1, \ldots, n}$.
A hyperplane has only one equation:
$$a_1 v_1 + \cdots + a_n v_n = u_0.$$
The director subspace can be seen as
$$V_L = \{ v = v_1 e_1 + \cdots + v_n e_n \mid f(v) = 0 \} = \ker f,$$
where $f$ is the linear map (linear functional) $f : V \to \mathbb{R}$ with $f(e_1) = a_1, \ldots, f(e_n) = a_n$.
If we think of the hyperplane as a linear manifold in the Euclidean space $\mathbb{R}^n$, its equation can be written as
$$\langle v, a \rangle = u_0, \quad \text{where } a = a_1 e_1 + \cdots + a_n e_n,\ u_0 \in \mathbb{R}.$$
The vector $a$ is called the normal vector to the hyperplane.
Generally, in a Euclidean space the equations of a linear manifold are
$$\begin{cases} \langle v, v_1 \rangle = u_1 \\ \quad \vdots \\ \langle v, v_p \rangle = u_p, \end{cases}$$
where the vectors $v_1, \ldots, v_p$ are linearly independent. The director subspace is given by
$$\begin{cases} \langle v, v_1 \rangle = 0 \\ \quad \vdots \\ \langle v, v_p \rangle = 0, \end{cases}$$
so the vectors $v_1, \ldots, v_p$ are orthogonal to the director subspace $V_L$.
3.5 Orthogonal projections. Distances.
In this section we explain how to measure the distance between certain "linear sets", namely linear manifolds.
Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product space and consider the vectors $v_i \in V$, $i = 1, \ldots, k$.
The determinant
$$G(v_1, \ldots, v_k) = \begin{vmatrix} \langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \ldots & \langle v_1, v_k \rangle \\ \langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \ldots & \langle v_2, v_k \rangle \\ \ldots & \ldots & \ldots & \ldots \\ \langle v_k, v_1 \rangle & \langle v_k, v_2 \rangle & \ldots & \langle v_k, v_k \rangle \end{vmatrix}$$
is called the Gram determinant of the vectors $v_1, \ldots, v_k$.
Proposition 3.30. In an inner product space the vectors $v_1, \ldots, v_k$ are linearly independent iff $G(v_1, \ldots, v_k) \neq 0$.
Proof. Let us consider the homogeneous system
$$G \cdot \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$
where $G$ is the matrix of inner products $(\langle v_i, v_j \rangle)_{i,j}$. This system can be written as
$$\begin{cases} \langle v_1, v \rangle = 0 \\ \quad \vdots \\ \langle v_k, v \rangle = 0, \end{cases} \qquad \text{where } v = x_1 v_1 + \cdots + x_k v_k.$$
The following statements are equivalent: the vectors $v_1, \ldots, v_k$ are linearly dependent $\iff$ there exist $x_1, \ldots, x_k \in F$, not all zero, such that $v = 0$ $\iff$ the homogeneous system has a nontrivial solution $\iff$ $\det G = 0$.
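A small numpy sketch (the function name `gram_det` is ours) that computes Gram determinants and uses Proposition 3.30 as a numerical test of linear independence:

```python
import numpy as np

def gram_det(vectors):
    """Gram determinant G(v_1, ..., v_k) in R^n with the standard inner product."""
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)      # entry (i, j) of V V^T is <v_i, v_j>

independent = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
dependent = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])]

print(gram_det(independent))   # 3.0, nonzero: linearly independent
print(gram_det(dependent))     # approximately 0: linearly dependent
```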
Proposition 3.31. If $e_1, \ldots, e_n$ are linearly independent vectors and $f_1, \ldots, f_n$ are the vectors obtained from them by the Gram-Schmidt orthogonalization process (without normalization), one has
$$G(e_1, \ldots, e_n) = G(f_1, \ldots, f_n) = \|f_1\|^2 \cdot \ldots \cdot \|f_n\|^2.$$
Proof. In $G(f_1, \ldots, f_n)$ replace $f_n$ by $e_n - a_1 f_1 - \cdots - a_{n-1} f_{n-1}$ (this is how $f_n$ is constructed); since adding to one vector a linear combination of the other vectors does not change the Gram determinant, we obtain
$$G(f_1, \ldots, f_n) = G(f_1, \ldots, f_{n-1}, e_n).$$
By an inductive process the relation in the proposition follows. Obviously $G(f_1, \ldots, f_n) = \|f_1\|^2 \cdot \ldots \cdot \|f_n\|^2$, because in that determinant only the diagonal entries $\langle f_1, f_1 \rangle, \ldots, \langle f_n, f_n \rangle$ are nonzero.
Remark 3.32. Observe that:
• $\|f_k\| = \sqrt{\dfrac{G(e_1, \ldots, e_k)}{G(e_1, \ldots, e_{k-1})}}$;
• from $f_k = e_k - a_1 f_1 - \cdots - a_{k-1} f_{k-1} = e_k - v_k$ one obtains $e_k = f_k + v_k$ with $v_k \in \mathrm{span}\{e_1, \ldots, e_{k-1}\}$ and $f_k \in \mathrm{span}\{e_1, \ldots, e_{k-1}\}^\perp$, so $f_k$ is the component of $e_k$ orthogonal to the space generated by $\{e_1, \ldots, e_{k-1}\}$.
3.5.1 Distance problems
The distance between a vector and a subspace
Let $U$ be a subspace of the inner product space $V$. The distance between a vector $v$ and the subspace $U$ is
$$d(v, U) = \inf_{w \in U} d(v, w) = \inf_{w \in U} \|v - w\|.$$
Proposition 3.33. The distance between a vector $v \in V$ and a subspace $U$ is given by
$$d(v, U) = \|v^\perp\| = \sqrt{\frac{G(e_1, \ldots, e_k, v)}{G(e_1, \ldots, e_k)}},$$
where $v = v_1 + v^\perp$ with $v_1 \in U$, $v^\perp \in U^\perp$, and $e_1, \ldots, e_k$ is a basis of $U$.
Proof. First we prove that $\|v^\perp\| = \|v - v_1\| \le \|v - u\|$ for all $u \in U$. Indeed,
$$\|v^\perp\| \le \|v - u\| \iff \langle v^\perp, v^\perp \rangle \le \langle v^\perp + v_1 - u, v^\perp + v_1 - u \rangle \iff \langle v^\perp, v^\perp \rangle \le \langle v^\perp, v^\perp \rangle + \langle v_1 - u, v_1 - u \rangle,$$
where the cross terms vanish because $v_1 - u \in U$ and $v^\perp \in U^\perp$, and the last inequality clearly holds. The second part of the equality, i.e. $\|v^\perp\| = \sqrt{\frac{G(e_1, \ldots, e_k, v)}{G(e_1, \ldots, e_k)}}$, follows from the previous remark.
Definition 3.34. If $e_1, \ldots, e_k$ are vectors in $V$, the volume of the $k$-parallelepiped constructed on the vectors $e_1, \ldots, e_k$ is defined by
$$V_k(e_1, \ldots, e_k) = \sqrt{G(e_1, \ldots, e_k)}.$$
We have the following inductive relation:
$$V_{k+1}(e_1, \ldots, e_k, e_{k+1}) = V_k(e_1, \ldots, e_k)\, d(e_{k+1}, \mathrm{span}\{e_1, \ldots, e_k\}).$$
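Proposition 3.33 and this volume relation are easy to check numerically; a sketch (numpy assumed, names and data ours):

```python
import numpy as np

def gram_det(vectors):
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)

def dist_to_subspace(v, basis):
    """d(v, U) = sqrt(G(e_1, ..., e_k, v) / G(e_1, ..., e_k)), Proposition 3.33."""
    return np.sqrt(gram_det(basis + [v]) / gram_det(basis))

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = np.array([2.0, -1.0, 5.0])

print(dist_to_subspace(v, [e1, e2]))   # 5.0: the distance from v to the xy-plane
# Volume of the 3-parallelepiped = area of the base times that distance.
print(np.sqrt(gram_det([e1, e2, v])), np.sqrt(gram_det([e1, e2])) * 5.0)
```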
The distance between a vector and a linear manifold
Let $L = v_0 + V_L$ be a linear manifold, and let $v$ be a vector in a finite-dimensional inner product space $V$. The distance induced by the norm is invariant under translations, that is, for all $v_1, v_2 \in V$ one has
$$d(v_1, v_2) = d(v_1 + v_0, v_2 + v_0) \iff \|v_1 - v_2\| = \|v_1 + v_0 - (v_2 + v_0)\|.$$
That means that we have
$$d(v, L) = \inf_{w \in L} d(v, w) = \inf_{v_L \in V_L} d(v, v_0 + v_L) = \inf_{v_L \in V_L} d(v - v_0, v_L) = d(v - v_0, V_L).$$
Finally,
$$d(v, L) = d(v - v_0, V_L) = \sqrt{\frac{G(e_1, \ldots, e_k, v - v_0)}{G(e_1, \ldots, e_k)}},$$
where $e_1, \ldots, e_k$ is a basis of $V_L$.
Let us consider the hyperplane $H$ of equation
$$\langle v - v_0, n \rangle = 0.$$
The director subspace is $V_H = \{ v \mid \langle v, n \rangle = 0 \}$, and the distance is
$$d(v, H) = d(v - v_0, V_H).$$
One can decompose $v - v_0 = \alpha n + v_H$, where $v_H$ is the orthogonal projection of $v - v_0$ on $V_H$ and $\alpha n$ is the normal component of $v - v_0$ with respect to $V_H$. It means that
$$d(v, H) = \|\alpha n\|.$$
Let us compute a little now, taking into account the previous observations about the tangential and normal parts:
$$\langle v - v_0, n \rangle = \langle \alpha n + v_H, n \rangle = \alpha \langle n, n \rangle + \langle v_H, n \rangle = \alpha \|n\|^2 + 0.$$
So we obtain
$$\frac{|\langle v - v_0, n \rangle|}{\|n\|} = |\alpha| \, \|n\| = \|\alpha n\|,$$
that is,
$$d(v, H) = \frac{|\langle v - v_0, n \rangle|}{\|n\|}.$$
In the case that we have an orthonormal basis at hand, the equation of the hyperplane $H$ is
$$a_1 x_1 + \cdots + a_n x_n + b = 0,$$
so the relation becomes
$$d(v, H) = \frac{|a_1 v_1 + \cdots + a_n v_n + b|}{\sqrt{a_1^2 + \cdots + a_n^2}}.$$
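In coordinates this is a one-line computation; a small sketch (numpy assumed, the example plane is made up):

```python
import numpy as np

def dist_to_hyperplane(v, a, b):
    """Distance from v to the hyperplane a_1 x_1 + ... + a_n x_n + b = 0,
    written in an orthonormal basis; a is the normal vector."""
    return abs(np.dot(a, v) + b) / np.linalg.norm(a)

# The plane x + 2y + 2z - 3 = 0 in R^3.
a = np.array([1.0, 2.0, 2.0])
b = -3.0
v = np.array([2.0, 1.0, 3.0])
print(dist_to_hyperplane(v, a, b))   # |2 + 2 + 6 - 3| / 3 = 7/3
```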
The distance between two linear manifolds
For $A$ and $B$ sets in a metric space, the distance between them is defined as
$$d(A, B) = \inf\{ d(a, b) \mid a \in A,\ b \in B \}.$$
For two linear manifolds $L_1 = v_1 + V_1$ and $L_2 = v_2 + V_2$ it easily follows that
$$d(L_1, L_2) = d(v_1 + V_1, v_2 + V_2) = d(v_1 - v_2, V_1 - V_2) \quad (3.1)$$
$$= d(v_1 - v_2, V_1 + V_2). \quad (3.2)$$
This gives us the next proposition.
Proposition 3.35. The distance between the linear manifolds $L_1 = v_1 + V_1$ and $L_2 = v_2 + V_2$ is equal to the distance between the vector $v_1 - v_2$ and the sum space $V_1 + V_2$.
If we choose a basis $e_1, \ldots, e_k$ in $V_1 + V_2$, then the following formula holds:
$$d(L_1, L_2) = \sqrt{\frac{G(e_1, \ldots, e_k, v_1 - v_2)}{G(e_1, \ldots, e_k)}}.$$
Some analytic geometry
In this section we are going to apply the distance formulas above in Euclidean spaces. Consider the vector space $\mathbb{R}^n$ with the canonical inner product, that is, for $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$ the inner product is given by
$$\langle x, y \rangle = \sum_{k=1}^{n} x_k y_k.$$
Consider $D_1, D_2$ two lines (one-dimensional linear manifolds), $M$ a point (a zero-dimensional linear manifold, which we identify with the vector $x_M = \overrightarrow{0M}$), $P$ a two-dimensional linear manifold (a plane), and $H$ an $(n-1)$-dimensional linear manifold (a hyperplane). The equations of these linear manifolds are:
$$D_1 : x = x_1 + s d_1$$
$$D_2 : x = x_2 + t d_2$$
$$M : x = x_M$$
$$P : x = x_P + \alpha v_1 + \beta v_2$$
$$H : \langle x, n \rangle + b = 0,$$
where $s, t, \alpha, \beta, b \in \mathbb{R}$. Recall that two linear manifolds are parallel if the director space of one of them is included in the director space of the other.
Now we can write down several formulas for distances between linear
manifolds.
$$d(M, D_1) = \sqrt{\frac{G(x_M - x_1, d_1)}{G(d_1)}};$$
$$d(M, P) = \sqrt{\frac{G(x_M - x_P, v_1, v_2)}{G(v_1, v_2)}};$$
$$d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1, d_2)}{G(d_1, d_2)}} \quad \text{if } D_1 \nparallel D_2;$$
$$d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1)}{G(d_1)}} \quad \text{if } D_1 \parallel D_2;$$
$$d(M, H) = \frac{|\langle x_M, n \rangle + b|}{\|n\|};$$
$$d(D_1, P) = \sqrt{\frac{G(x_1 - x_P, d_1, v_1, v_2)}{G(d_1, v_1, v_2)}} \quad \text{if } D_1 \nparallel P.$$
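As a closing illustration (numpy assumed, names and data ours), the distance between two non-parallel lines in $\mathbb{R}^3$ computed with the Gram-determinant formula above:

```python
import numpy as np

def gram_det(vectors):
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)

def dist_lines(x1, d1, x2, d2):
    """d(D1, D2) for non-parallel lines D_i : x = x_i + t d_i."""
    return np.sqrt(gram_det([x1 - x2, d1, d2]) / gram_det([d1, d2]))

# Two skew lines in R^3.
x1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
x2, d2 = np.array([0.0, 1.0, 2.0]), np.array([0.0, 0.0, 1.0])
print(dist_lines(x1, d1, x2, d2))   # 1.0: the lines are separated by 1 in the y-direction
```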