113-114.docx


Upload: roger-sparks

Post on 12-May-2017


Page 1: 113-114.docx

11.3.1 We begin with a lemma.

Lemma: Let $R$ be a commutative unital ring and let $M$, $N$, and $P$ be left unital $R$-modules. If $\varphi : M \times N \rightarrow P$ is $R$-bilinear, then the induced map $\Phi : M \rightarrow \mathsf{Hom}_R(N,P)$ given by $\Phi(m)(n) = \varphi(m,n)$ is a well-defined $R$-module homomorphism. Proof: To see well-definedness, we need to verify that each $\Phi(m)$ is a module homomorphism. To that end note that $$\Phi(m)(n_1 + r \cdot n_2) = \varphi(m, n_1 + r \cdot n_2) = \varphi(m, n_1) + r \cdot \varphi(m, n_2) = \Phi(m)(n_1) + r \cdot \Phi(m)(n_2).$$ Similarly, to show that $\Phi$ itself is a module homomorphism, note that $$\Phi(m_1 + r \cdot m_2)(n) = \varphi(m_1 + r \cdot m_2, n) = \varphi(m_1, n) + r \cdot \varphi(m_2, n) = \Phi(m_1)(n) + r \cdot \Phi(m_2)(n)$$ for all $n$, so that $\Phi(m_1 + r \cdot m_2) = \Phi(m_1) + r \cdot \Phi(m_2)$. $\square$

[Note to self: In a similar way, if $R$ is a unital ring and $M$, $N$, $P$, $Q$ are unital modules, and $\varphi : M \times N \times P \rightarrow Q$ is trilinear, then the induced map $M \times N \rightarrow \mathsf{Hom}_R(P,Q)$ is bilinear. (So that the induced map $M \rightarrow \mathsf{Hom}_R(N, \mathsf{Hom}_R(P,Q))$ is a module homomorphism, or unilinear, if you will.) That is to say, in a concrete fashion we can think of multilinear maps as the uncurried versions of higher-order functions on modules. (!!!) (I just had a minor epiphany and it made me happy. Okay, so the usual isomorphism $F^n \cong (F^n)^\ast$ is just this lemma applied to the dot product ... that's cool.) Moreover, if $N = P$ and if $M$ and $\mathsf{End}_R(N)$ are $R$-algebras, then the induced map $\Phi : M \rightarrow \mathsf{End}_R(N)$ is an algebra homomorphism if and only if $\varphi(m_1 m_2, n) = \varphi(m_1, \varphi(m_2, n))$ and $\varphi(1, n) = n$.]
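The currying observation in the note above can be made concrete. The following minimal Python sketch (the helper names `dot` and `curry_first` are my own, not from the source) fixes the first argument of the bilinear dot product on $F^3$ and checks that the resulting map is linear:

```python
# Sketch of the currying observation: a bilinear map M x N -> P,
# once its first argument is fixed, yields a linear map N -> P.
# Here the bilinear map is the ordinary dot product on F^3 (F = float).

def dot(v, w):
    """Bilinear map F^3 x F^3 -> F."""
    return sum(a * b for a, b in zip(v, w))

def curry_first(bilinear, v):
    """The induced map Phi(v): fix the first argument, obtaining a functional."""
    return lambda w: bilinear(v, w)

v = (1.0, 2.0, 3.0)
f = curry_first(dot, v)  # f is the linear functional <v, ->

# Linearity of f: f(w1 + c*w2) == f(w1) + c*f(w2)
w1, w2, c = (1.0, 0.0, 1.0), (0.0, 1.0, 1.0), 5.0
lhs = f(tuple(a + c * b for a, b in zip(w1, w2)))
rhs = f(w1) + c * f(w2)
print(lhs == rhs)  # True
```

This is exactly the lemma specialized to $\varphi = {}$ dot product: $v \mapsto \langle v, - \rangle$ lands in the dual space.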

Define $E : V \times V^\ast \rightarrow F$ by $E(v, f) = f(v)$. This map is certainly bilinear, and so by the lemma induces the linear transformation $\overline{E} : V \rightarrow \mathsf{Hom}_F(V^\ast, F) = V^{\ast\ast}$. Since $V$ has finite dimension, and since its dual space $V^\ast$ (hence also $V^{\ast\ast}$) has the same dimension, to see that $\overline{E}$ is an isomorphism of vector spaces it suffices to show that the kernel is trivial. To that end, suppose $v \in \mathsf{ker}\,\overline{E}$. Then we have $\overline{E}(v)(f) = 0$ for all $f \in V^\ast$. In particular, we have $f(v) = 0$ for all $f \in V^\ast$. If there exists a nonzero element $v \in \mathsf{ker}\,\overline{E}$, then by the Building-Up Lemma there is a basis of $V$ containing $v$. In particular, there is a linear transformation $f : V \rightarrow F$ such that $f(v) = 1$. That is, we have $\overline{E}(v)(f) = 1$, so that $\overline{E}(v) \neq 0$, a contradiction. Hence $\overline{E}$ is injective, and so an isomorphism of vector spaces.
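A finite-dimensional instance of the evaluation map can be checked numerically. In this sketch (names are illustrative, not from the source), functionals on $F^3$ are represented by coefficient tuples, and evaluation at a nonzero vector is seen to be a nonzero element of the double dual, mirroring the kernel argument:

```python
# v in V gives the element E(v) of V** defined by E(v)(f) = f(v).
# Functionals on F^3 are represented by coefficient tuples (acting by dot product).

def apply_functional(f, v):
    return sum(a * b for a, b in zip(f, v))

def eval_at(v):
    """The double-dual element induced by v."""
    return lambda f: apply_functional(f, v)

v = (0.0, 2.0, 0.0)   # a nonzero vector
Ev = eval_at(v)

# As in the proof: pick a functional f with f(v) = 1.
f = (0.0, 0.5, 0.0)
print(Ev(f))          # 1.0, so E(v) is nonzero: the kernel is trivial
```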

Note that $(\varphi \circ \psi)^\ast = \psi^\ast \circ \varphi^\ast$, while a ring homomorphism would require $(\varphi \circ \psi)^\ast = \varphi^\ast \circ \psi^\ast$. If $V$ has dimension greater than 1, then $\mathsf{End}_F(V)$ is not a commutative ring. Thus these expressions need not be equal in general. In fact, if we choose $\varphi$ and $\psi$ such that $\varphi \circ \psi \neq \psi \circ \varphi$, then clearly $\psi^\ast \circ \varphi^\ast \neq \varphi^\ast \circ \psi^\ast$, and $\varphi \mapsto \varphi^\ast$ does not preserve products. In particular, $\varphi \mapsto \varphi^\ast$ is not a ring isomorphism if $\mathsf{dim}\,V > 1$. On the other hand, if $\mathsf{dim}\,V = 1$, then $\mathsf{End}_F(V)$ is commutative, and $\varphi \mapsto \varphi^\ast$ is a ring isomorphism.

On the other hand, these rings are clearly isomorphic since $V$ and $V^\ast$ are vector spaces of the same dimension.

Note that $\mathsf{End}_F(V)$ and $\mathsf{End}_F(V^\ast)$ are both $F$-algebras via the usual scalar multiplication by $F$. Fix a basis $B$ of $V$, and identify each linear transformation $\varphi \in \mathsf{End}_F(V)$ with its matrix with respect to this basis. (Likewise for $\mathsf{End}_F(V^\ast)$, using the dual basis $B^\ast$.) Now define $\Psi : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(V^\ast)$ by letting $\Psi(\varphi)$ be the transformation of $V^\ast$ whose matrix with respect to $B^\ast$ is the matrix of $\varphi$ with respect to $B$. It is clear that $\Psi$ is well defined, and moreover is an $F$-vector space homomorphism. Note also that the matrix of a composite is the product of the matrices, so that $\Psi(\varphi \circ \psi) = \Psi(\varphi) \circ \Psi(\psi)$. Thus $\Psi$ is a ring homomorphism; since $\Psi$ is also $F$-linear and $\Psi(1) = 1$, indeed $\Psi$ is an $F$-algebra homomorphism. It remains to be seen that $\Psi$ is an isomorphism; it suffices to show injectivity. To that end, suppose $\Psi(\varphi)(f) = 0$ for all $f \in V^\ast$. Then the matrix of $\varphi$ is zero, and so $\varphi = 0$. Thus $\Psi$ is an $F$-algebra isomorphism. Note that $\Psi$ depends essentially on our choice of a basis $B$, and so is not "natural".
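The failure of the product rule noted above can be seen concretely in coordinates: matrix transpose (which implements the dual map with respect to a basis and its dual basis) reverses composition, $(AB)^\mathsf{T} = B^\mathsf{T} A^\mathsf{T}$. A small Python check, with my own helper names:

```python
# Transpose reverses composition: (AB)^T = B^T A^T, and in general
# (AB)^T != A^T B^T, so transposition is only an anti-homomorphism.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
print(transpose(matmul(A, B)) == matmul(transpose(A), transpose(B)))  # False here
```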

11.3.5

We claim that $V^\ast \cong \prod_{b \in B} F$, where $B$ is a basis of $V$. To prove this, for each $b \in B$ let $F_b$ be a copy of $F$. Now define $\varepsilon_b : V^\ast \rightarrow F_b$ by $\varepsilon_b(f) = f(b)$. By the universal property of direct products, there exists a unique $F$-linear transformation $\Theta : V^\ast \rightarrow \prod_{b \in B} F_b$ such that $\pi_b \circ \Theta = \varepsilon_b$ for all $b \in B$. We claim that $\Theta$ is an isomorphism. To see surjectivity, let $(a_b) \in \prod_{b \in B} F_b$. Now define $f : V \rightarrow F$ by letting $f(b) = a_b$ and extending linearly; certainly $\Theta(f) = (a_b)$. To see injectivity, suppose $f \in \mathsf{ker}\,\Theta$. Then $\Theta(f) = 0$, so that $\pi_b(\Theta(f)) = 0$, and thus $f(b) = 0$ for all $b \in B$. Since $B$ is a basis of $V$, we have $f = 0$. Thus $\Theta$ is an isomorphism, and we have $V^\ast \cong \prod_{b \in B} F$.

By this previous exercise, $\prod_{b \in B} F$ has strictly larger dimension than does $V \cong \bigoplus_{b \in B} F$.
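The isomorphism above sends a functional to its tuple of values on a basis. A finite-dimensional Python sketch (illustrative names, over $F^3$ with the standard basis):

```python
# Theta sends a functional f to the tuple (f(b)) of its values on a basis B;
# extending b |-> a_b linearly inverts it.

basis = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

def theta(f):
    """The tuple of values of f on the basis."""
    return tuple(f(b) for b in basis)

def theta_inverse(values):
    """The functional with the given values on the basis, extended linearly."""
    return lambda v: sum(a * x for a, x in zip(values, v))

f = theta_inverse((2.0, -1.0, 0.0))
print(theta(f))  # (2.0, -1.0, 0.0): Theta recovers the defining tuple
```

In infinite dimensions the same correspondence holds, but only the product (not the direct sum) appears on the right, since a functional may be nonzero on infinitely many basis vectors.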

11.4.1

Let $A = (a_{i,j}) \in \mathsf{Mat}_{n \times n}(F)$.

We begin with a definition. If $A \in \mathsf{Mat}_{n \times n}(F)$, where $n > 1$, then the $(i,j)$-minor of $A$ is the matrix $A_{i,j} \in \mathsf{Mat}_{(n-1) \times (n-1)}(F)$ obtained by deleting the $i$th row and $j$th column of $A$. Recall that the cofactor expansion formula for $\mathsf{det}\,A$ along the $i$th row is $$\mathsf{det}\,A = \sum_{j=1}^n (-1)^{i+j} a_{i,j}\, \mathsf{det}\,A_{i,j}.$$ The analogous expansion along the $j$th column is $$\mathsf{det}\,A = \sum_{i=1}^n (-1)^{i+j} a_{i,j}\, \mathsf{det}\,A_{i,j},$$ which we presently prove to be true. First, note that $(A^\mathsf{T})_{j,i} = (A_{i,j})^\mathsf{T}$; this follows from our definition of minors and the fact that $[A^\mathsf{T}]_{j,i} = a_{i,j}$. Now, expanding along the $j$th row of $A^\mathsf{T}$, we have the following. $$\begin{aligned} \mathsf{det}\,A & = \mathsf{det}\,A^\mathsf{T} \\ & = \sum_{i=1}^n (-1)^{j+i} [A^\mathsf{T}]_{j,i}\, \mathsf{det}\,(A^\mathsf{T})_{j,i} \\ & = \sum_{i=1}^n (-1)^{i+j} a_{i,j}\, \mathsf{det}\,(A_{i,j})^\mathsf{T} \\ & = \sum_{i=1}^n (-1)^{i+j} a_{i,j}\, \mathsf{det}\,A_{i,j}, \end{aligned}$$ as desired.
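The agreement of the two expansions can also be sampled numerically. A minimal Python sketch (function names are my own), implementing both formulas directly from the definition of minors:

```python
# Cofactor expansion: det A computed along any row equals det A computed
# along any column (matrices as lists of lists, 0-indexed).

def minor(A, i, j):
    """The (i,j)-minor: delete row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det_row(A, i=0):
    """Cofactor expansion along row i."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_row(minor(A, i, j))
               for j in range(len(A)))

def det_col(A, j=0):
    """Cofactor expansion along column j."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_col(minor(A, i, j))
               for i in range(len(A)))

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det_row(A, 1), det_col(A, 2))  # both print 25
```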

11.4.2 Let $B$ be the reduced row echelon form of $A$, and let $P$ be invertible such that $B = PA$.

Suppose the columns of $A$ are linearly independent. Now $A$ has column rank $n$. In particular, $B = I$. Now $1 = \mathsf{det}\,B = \mathsf{det}\,P \cdot \mathsf{det}\,A$; so $\mathsf{det}\,A \neq 0$.

We prove the converse contrapositively. Suppose the columns of $A$ are linearly dependent; then the column rank of $A$ is strictly less than $n$, so that $B$ has a row of all zeros. Using the cofactor expansion formula along this row, $\mathsf{det}\,B = 0$. Since $P$ is invertible, its determinant is nonzero; thus $\mathsf{det}\,A = 0$. Thus if $\mathsf{det}\,A \neq 0$, then the columns of $A$ are linearly independent.
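The criterion can be seen on small examples. A short sketch (the `det` helper expands along the first row; names are illustrative), contrasting a matrix with dependent columns against one with independent columns:

```python
# det A = 0 exactly when the columns are linearly dependent; two 3x3 checks.

def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

dependent = [[1, 2, 3],
             [2, 4, 6],   # row 2 = 2 * row 1, so rank < 3: columns dependent
             [0, 1, 1]]
independent = [[1, 0, 0],
               [0, 2, 0],
               [0, 0, 3]]

print(det(dependent), det(independent))  # 0 and 6
```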

11.4.4

Note that if the permutation $\sigma$ is not the identity, then there must exist an element $i$ such that $\sigma(i) < i$, and likewise an element $j$ such that $\sigma(j) > j$. (Otherwise, we can show by induction that $\sigma$ is the identity.) Now recalling the naive formula for computing $\mathsf{det}\,A$, namely $$\mathsf{det}\,A = \sum_{\sigma \in S_n} \mathsf{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)},$$ note that if $A$ is an upper- or lower-triangular matrix, then every summand with $\sigma$ not the identity contains a factor from below (respectively above) the diagonal, so that $\mathsf{det}\,A$ is merely the product of the diagonal entries.

In particular, we have $\mathsf{det}\,I = 1$, and $\mathsf{det}\,E = 1$ if $E$ is an elementary matrix which adds a multiple of one row to another. Note also that the row operation of interchanging two rows is equivalent to three operations which add a multiple of one row to another and one scalar row multiplication by $-1$. Since determinants are multiplicative, the determinant of the elementary matrix achieving this operation is $-1$.
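The decomposition of a row interchange into three row-additions and one scaling by $-1$ can be verified directly. A small sketch on $2 \times 2$ elementary matrices (variable names are my own):

```python
# A row interchange equals three "add a multiple of one row to another"
# operations followed by one scaling of a row by -1 (checked on 2x2 matrices).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

add_r2_to_r1 = [[1, 1], [0, 1]]     # R1 <- R1 + R2, determinant 1
sub_r1_from_r2 = [[1, 0], [-1, 1]]  # R2 <- R2 - R1, determinant 1
scale_r2 = [[1, 0], [0, -1]]        # R2 <- -R2, determinant -1

# Apply the operations in order (leftmost factor acts last).
product = matmul(scale_r2,
                 matmul(add_r2_to_r1,
                        matmul(sub_r1_from_r2, add_r2_to_r1)))
print(product)  # the swap matrix [[0, 1], [1, 0]], of determinant -1
```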

Suppose $\mathsf{det}\,A \neq 0$. In this previous exercise, we saw that the columns of $A$ are linearly independent. In particular, all columns in the reduced row echelon form of $A$ are pivotal, and so $A$ is row equivalent to the identity matrix. Conversely, suppose $A$ is row equivalent to the identity matrix. Then the columns of $A$ are linearly independent, and so $\mathsf{det}\,A \neq 0$.

Suppose now that $A = E_1 E_2 \cdots E_k$, where each $E_i$ is an elementary matrix. Since determinants are multiplicative, it is clear that $\mathsf{det}\,A = \prod_{i=1}^k \mathsf{det}\,E_i$ has the given form.

11.4.5 Evidently, matrix $A$ is in reduced row echelon form after performing a sequence of nine row operations. This sequence required one interchange and three scalar multiplications of rows (the remaining operations add a multiple of one row to another). So $\mathsf{det}\,A$ equals $(-1)$ times the product of the reciprocals of the three scaling factors, times the determinant of the reduced form.

Similarly, $B$ is in reduced row echelon form after performing a sequence of eleven row operations. Thus $\mathsf{det}\,B$ is obtained in the same way from the interchanges and scalings used.
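The bookkeeping used above (each interchange contributes $-1$; scaling a row by $c$ to create a pivot contributes a factor of $c$ to the determinant) is exactly how the determinant falls out of Gaussian elimination. A generic Python sketch, not tied to the specific matrices of this exercise:

```python
# Determinant via row reduction: track interchanges (each contributes -1)
# and pivots (each scaling by 1/pivot contributes the pivot).

def det_by_elimination(A):
    A = [row[:] for row in A]       # work on a copy
    n = len(A)
    det = 1.0
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0              # no pivot in this column: determinant is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det              # an interchange flips the sign
        det *= A[col][col]          # record the pivot before normalizing
        factor = A[col][col]
        A[col] = [x / factor for x in A[col]]
        for r in range(col + 1, n): # row additions leave the determinant fixed
            c = A[r][col]
            A[r] = [x - c * y for x, y in zip(A[r], A[col])]
    return det

print(det_by_elimination([[0.0, 2.0], [3.0, 4.0]]))  # -6.0
```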