Page 1: Introduction to Tensors

5. Introductory Tensor Analysis

5.1 Dyadic Algebra

Consider two vectors, a and b . As we saw in chapter 3 we can write them as follows:

a = a1 ux + a2 uy + a3 uz

b = b1 ux + b2 uy + b3 uz

where each vector has three components in our Cartesian space. If we multiply them in the 'normal' distributive fashion:

a ⊗ b = (a1 ux + a2 uy + a3 uz)(b1 ux + b2 uy + b3 uz)

 = a1 b1 ux ux + a1 b2 ux uy + a1 b3 ux uz
 + a2 b1 uy ux + a2 b2 uy uy + a2 b3 uy uz
 + a3 b1 uz ux + a3 b2 uz uy + a3 b3 uz uz

This is the direct product of a and b, referred to briefly in chapter 2, and the resulting object is called a dyad. Note that, in our Cartesian space, there are now nine scalar coefficients, ai bj, that is, 3 × 3 from the vectors a and b. One represents this compactly as¹:

D = a ⊗ b

Just as ux is termed a unit vector, ux ux is a unit dyad.
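To make the bookkeeping concrete, here is a minimal Python sketch of the dyadic product; the vector components are hypothetical values chosen only for illustration:

```python
# A numeric sketch of the dyadic (direct) product in Cartesian 3-space.
# The dyad D = a b is held as a 3x3 nested list of the coefficients ai bj.

def dyad(a, b):
    """Direct product of two 3-vectors: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

a = [1.0, 2.0, 3.0]   # illustrative components
b = [4.0, 5.0, 6.0]
D = dyad(a, b)
print(D)  # nine coefficients ai bj; e.g. D[0][1] is a1 b2
```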

Is this product commutative, you ask? Let's see:

b ⊗ a = (b1 ux + b2 uy + b3 uz)(a1 ux + a2 uy + a3 uz)

 = b1 a1 ux ux + b1 a2 ux uy + b1 a3 ux uz
 + b2 a1 uy ux + b2 a2 uy uy + b2 a3 uy uz
 + b3 a1 uz ux + b3 a2 uz uy + b3 a3 uz uz

Now we subtract the dyadic products:

1 Do not confuse this notation with the outer product notation of chapter 3. Recall that the outer product of a and b as we have defined it is the product of a and the transpose of b. Note also that we use an underscore here to represent dyadic (and higher) products.


a ⊗ b − b ⊗ a = (a1 b1 − b1 a1) ux ux + (a1 b2 − b1 a2) ux uy + (a1 b3 − b1 a3) ux uz

 + (a2 b1 − b2 a1) uy ux + (a2 b2 − b2 a2) uy uy + (a2 b3 − b2 a3) uy uz

 + (a3 b1 − b3 a1) uz ux + (a3 b2 − b3 a2) uz uy + (a3 b3 − b3 a3) uz uz

The terms with identical subscripts are all zero; however, the terms with non-identical subscripts are not necessarily equal. Therefore the dyadic product is not commutative in general.
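A quick numeric check of this conclusion in Python (vector values illustrative): the difference of the two dyadic products has a zero diagonal but nonzero, antisymmetric off-diagonal terms.

```python
def dyad(a, b):
    """Direct product of two 3-vectors: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
AB = dyad(a, b)
BA = dyad(b, a)
diff = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

# Diagonal terms vanish; off-diagonal terms are generally nonzero,
# so the dyadic product is not commutative.
assert all(diff[i][i] == 0.0 for i in range(3))
assert AB != BA
print(diff)
```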

Now, what would be the result of, say, the inner product of vector c with dyad D ? We define this operation by 'associating' the vector c with the vector 'beside' it in D . Thus, if we premultiply by c :

c ⋅ D = c ⋅ (a b) = (c ⋅ a) b = λ b

Post multiplication gives:

D ⋅ c = (a b) ⋅ c = a (b ⋅ c) = μ a

Thus, this type of inner product is not commutative, i.e.

c⋅D≠D⋅c

unlike the inner product of two vectors. We see that the inner product of a vector with a dyad gives back one of the vectors that make up the dyad, multiplied by a constant. We will have more to say about this shortly.
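In Python the two inner products can be sketched directly from the definitions; the helper names `pre_dot` and `post_dot` and the vector values are ours, chosen only for illustration:

```python
def dot(u, v):
    """Ordinary inner product of two 3-vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def pre_dot(c, a, b):
    """c . (a b) = (c . a) b : b scaled by the scalar c . a."""
    s = dot(c, a)
    return [s * bi for bi in b]

def post_dot(a, b, c):
    """(a b) . c = a (b . c) : a scaled by the scalar b . c."""
    s = dot(b, c)
    return [s * ai for ai in a]

a, b, c = [1.0, 0.0, 2.0], [0.0, 3.0, 1.0], [2.0, 1.0, 1.0]
print(pre_dot(c, a, b))   # a multiple of b
print(post_dot(a, b, c))  # a multiple of a -- generally different
```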

The astute reader will by now be saying to herself “Huh? What is this?”. And well so. To rationalize this in terms of previous discussions of vectors, let's switch to our matrix representations and construct our dyad again:

a = [a1
     a2
     a3]

b = [b1
     b2
     b3]

In order to construct the dyad such that doing an inner product of the dyad with a vector makes sense in terms of matrices we must use the transpose of b :

bT = [b1 b2 b3]

D = a bT


This is, of course, the outer product of chapter 3. We could write this equivalently using the '⊗' operator, as was mentioned in chapter 3. The convention when writing a dyadic product of vectors is not to explicitly indicate that the second vector is actually a transposed vector.

Now, if we do an inner product of D with vector c explicitly in terms of matrices:

D ⋅ c = (a ⊗ b) ⋅ c = [a1
                        a2
                        a3] [b1 b2 b3] ⋅ [c1
                                          c2
                                          c3]

 = λ [a1
      a2
      a3]

we see that the inner product of b with c produces the scalar λ = b ⋅ c in terms of matrices, as we have already seen in the chapter on matrices. We might perhaps make this a bit clearer using Dirac notation:

D = |a><b|

D ⋅ c = |a><b|c> = <b|c> |a>

What about premultiplication? A little thought will show that this must require the use of the transpose of vector c . Again, in the Dirac notation:

D = |a><b|

cT ⋅ D = <c|a><b| = <c|a> <b|

We can do the same type of analysis with cross products:

c × D = (c × a) b = d b = N

D × c = a (b × c) = a f = O

and again we find that the product is not commutative:

c × D ≠ D × c

but this time the result of the product of a dyad with a vector is a new dyad.
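A Python sketch of both cross products, using the Cartesian unit vectors for clarity (the particular choice of a, b, c is illustrative):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dyad(a, b):
    """Direct product: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

a, b, c = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
N = dyad(cross(c, a), b)   # c x D = (c x a) b = d b
O = dyad(a, cross(b, c))   # D x c = a (b x c) = a f
# Both results are dyads (3x3 arrays), and in general N != O.
print(N)
print(O)
```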

The third type of product that we will consider is the same type that we started with: the normal distributive multiplication. Thus we will multiply dyad D by vector c:

c D = c a b

Long multiplication will produce:

c a b = (c1 ux + c2 uy + c3 uz)(a1 ux + a2 uy + a3 uz)(b1 ux + b2 uy + b3 uz)

 = c1 a1 b1 ux ux ux + c1 a1 b2 ux ux uy + c1 a1 b3 ux ux uz
 + ⋯
 + c3 a3 b3 uz uz uz

in which there are now 27, or 3 × 3 × 3, coefficients. This product of three vectors is called a triad. Hopefully, you can see that we can take this as far as we wish to produce tetrads, pentads, etc. We can consider a way to calculate the number of terms or coefficients in each of these objects. Our vectors have three terms, our dyads have 9 terms and our triads have 27 terms. Let us say that a vector has a rank of 1, a dyad a rank of 2 and a triad a rank of 3. Using these numbers we can now say that the number of coefficients in one of these objects is:

ncoefficients = 3^rank

Now, consider the inner product of a vector with a dyad that we just discussed. The same operation on a triad will produce a dyad times a constant (try it for yourself). Thus, the inner product reduces the rank by one to produce a lower ranked object. With this in mind we can see that a scalar will be an object of rank 0 since the inner product of a rank 1 object with a vector (or simply the inner product of a vector with a vector) is a scalar as we have seen in chapter 3.

Our determination of the number of coefficients is a little artificial since we have been using Cartesian space for our deliberations. To be completely general we would write:

ncoefficients = d^rank

where 'd' represents the number of dimensions of the space under consideration. For clarity, however, we will continue to work with Cartesian space.
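As a quick check on the counting rule n = 3^rank, the following Python sketch builds a triad from three illustrative 3D vectors and counts its coefficients:

```python
# Build the triad c a b as a 3x3x3 nested list with entries ci aj bk,
# then confirm the coefficient count matches 3**rank in Cartesian space.
a, b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]  # illustrative
triad = [[[ci * aj * bk for bk in b] for aj in a] for ci in c]

n_terms = sum(len(row) for plane in triad for row in plane)
assert n_terms == 3 ** 3  # 27 coefficients for a rank-3 object

for rank in range(4):
    print(rank, 3 ** rank)  # rank 0..3 -> 1, 3, 9, 27 coefficients
```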

Let's take a closer look at the dyad, D . Our longhand representation of it is:


D = a b = (a1 ux + a2 uy + a3 uz)(b1 ux + b2 uy + b3 uz)

 = a1 b1 ux ux + a1 b2 ux uy + a1 b3 ux uz
 + a2 b1 uy ux + a2 b2 uy uy + a2 b3 uy uz
 + a3 b1 uz ux + a3 b2 uz uy + a3 b3 uz uz

with coefficients and unit dyads, very similar to the longhand representation of vectors. Recall from chapter 3 that a simple Euclidean vector can be represented using a 3 × 1 column matrix. Thus vector a has the components a1, a2 and a3, which we include in a matrix representing the vector:

a = [a1
     a2
     a3]

In a very similar manner we can represent the dyad as a square matrix using the coefficients of the unit dyads:

D = [a1 b1  a1 b2  a1 b3
     a2 b1  a2 b2  a2 b3
     a3 b1  a3 b2  a3 b3]

  = [d11 d12 d13
     d21 d22 d23
     d31 d32 d33]

This makes sense, considering our earlier discussion of the formation of a dyad from two vectors using the Dirac notation. This involves an outer product which, as we have seen in chapter 3, results in a matrix. Since we can represent the dyad as a square matrix we expect that the algebra of the dyad will be identical to that of matrices:

A + B = B + A (commutative)
A + (B + C) = (A + B) + C (associative)
A + 0 = A (identity)
A + (−A) = 0 (additive inverse)
α(A + B) = αA + αB (scalar distributive)
(α + β)A = αA + βA (matrix distributive)
(αβ)A = α(βA) (associative law for multiplication)

The dyad 0 represents the zero dyad with all zero coefficients, as you no doubt already suspected.

So, all dyads can be represented by matrices. How about the reverse? Are all square matrices representations of dyads? From our previous discussion this would require that we be able to factor a dyad into two vectors. In matrix notation this is:

D ⇒ a bT

or

[a1 b1  a1 b2  a1 b3
 a2 b1  a2 b2  a2 b3
 a3 b1  a3 b2  a3 b3] ⇒ [a1
                          a2
                          a3] [b1 b2 b3]

A matrix can hold any values that we want to put into it, so suppose we have the matrix:

[a1 b1  a2 b2  a3 b3
 a2 b1  a2 b2  a2 b3
 a3 b1  a3 b2  a3 b3]

in which the first row differs from the previous matrix. We cannot construct this matrix from the direct product of vectors a and b (except in the trivial case of zero vectors), nor can it be factored into a and b. Therefore we cannot say that, in general, all square matrices are dyads.
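Whether a given 3 × 3 matrix factors into two vectors is a rank-one test: every 2 × 2 minor of a true direct product a bT vanishes. A small Python sketch (the function name and matrix values are ours, for illustration):

```python
def is_dyad(M, tol=1e-12):
    """True if the square matrix M has matrix rank <= 1, i.e. every
    2x2 minor M[i][j]*M[k][l] - M[i][l]*M[k][j] vanishes, which is
    exactly the condition for M to factor as an outer product a b^T."""
    n = len(M)
    for i in range(n):
        for k in range(i + 1, n):
            for j in range(n):
                for l in range(j + 1, n):
                    if abs(M[i][j] * M[k][l] - M[i][l] * M[k][j]) > tol:
                        return False
    return True

# A genuine direct product of a = (1,2,3) and b = (4,5,6) passes the test...
outer = [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
print(is_dyad(outer))                               # True
# ...but the identity matrix, for example, does not factor into two vectors.
print(is_dyad([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # False
```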

There is an operation that we can do on a dyad called contraction. As we have learned, the dyad can be constructed from the direct product of two vectors. The dyad is said to be contracted if the inner product is taken of the two component vectors (using Dirac notation):

D ⇒ a ⊗ b

D(contracted) = <a|b> = η

This reduces the dyad D to a scalar. Of course, for higher-rank objects there are multiple contractions; in general there will be (rank − 1) possible ways to do a contraction. Also, note that the contraction operation reduces the rank by two.
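In matrix terms the contraction of a dyad is simply the trace of its 3 × 3 representation, as a short Python sketch shows (vector values illustrative):

```python
def dyad(a, b):
    """Direct product: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

def contract(D):
    """Contraction of a dyad: the sum of its diagonal coefficients
    (the trace), which equals the inner product <a|b> of its factors."""
    return sum(D[i][i] for i in range(len(D)))

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
eta = contract(dyad(a, b))
print(eta)  # 4 + 10 + 18 = 32.0, the scalar <a|b>
```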

We must point out some potential notational problems before proceeding. First, in our discussion of matrices we distinguished between the matrix product of two matrices and the direct product of two matrices (equation [2-3]). We must also be careful to do so here for dyads. 'Regular' multiplication is the same as matrix multiplication:


A B = [a11 a12 a13   [b11 b12 b13
       a21 a22 a23    b21 b22 b23
       a31 a32 a33]   b31 b32 b33]

 = [a11 b11 + a12 b21 + a13 b31   a11 b12 + a12 b22 + a13 b32   a11 b13 + a12 b23 + a13 b33
    a21 b11 + a22 b21 + a23 b31   a21 b12 + a22 b22 + a23 b32   a21 b13 + a22 b23 + a23 b33
    a31 b11 + a32 b21 + a33 b31   a31 b12 + a32 b22 + a33 b32   a31 b13 + a32 b23 + a33 b33]

 = C

The direct product of two dyads is:

A ⊗ B = [a11 a12 a13   [b11 b12 b13
         a21 a22 a23    b21 b22 b23
         a31 a32 a33]   b31 b32 b33]

 = [a11 B  a12 B  a13 B
    a21 B  a22 B  a23 B
    a31 B  a32 B  a33 B]

where each entry aij B is the full 3 × 3 matrix B scaled by aij, so that

 = [a11 b11  a11 b12  a11 b13  ⋯  a13 b13
    ⋮                              ⋮
    a31 b11  ⋯        ⋯       ⋯  a33 b33]

with 81 terms in all.

Also, in linear algebra it is common to write the premultiplication of a vector by a matrix as:

M x = y

the result of which is a new vector. However, in the context of dyads this would produce a triad, increasing the rank of the object:

M x = O

To produce a vector we must use the inner product notation:

M ⋅ x = y

We must take care not to confuse the two. Our notation here has been to use M to denote a matrix and an underscored M to denote a dyad. Usually the context will tell us which is which; however, in other texts the distinction may not be so clear. Thus A B is used in this text for standard matrix multiplication and A B or A ⊗ B for dyad (or triad, tetrad, etc.) direct-product multiplication.

So, to recap, we have some new mathematical objects developed from the application of the direct product of vectors with each other. Each of these objects has a 'rank' associated with it, which is the power to which the dimensionality of the space is raised in order to generate the number of coefficients of the object. Thus, the dyad results from the direct product of two 3D vectors and has rank 2, the power of 2 in 3². Three vectors give a triad of rank 3 and four vectors give a tetrad of rank 4. Scalars are ranked 0 since they consist of no vectors.

5.2 The gradient of a vector

In chapter 4 we alluded to the gradient calculation:

∇a or grad a

and made the assertion that the result is a dyad. We now show that this is so via the direct product of ∇ and a :

∇ ⊗ a = (∂/∂x ux + ∂/∂y uy + ∂/∂z uz)(ax ux + ay uy + az uz)

 = ∂ax/∂x ux ux + ∂ay/∂x ux uy + ∂az/∂x ux uz
 + ∂ax/∂y uy ux + ∂ay/∂y uy uy + ∂az/∂y uy uz
 + ∂ax/∂z uz ux + ∂ay/∂z uz uy + ∂az/∂z uz uz

The ui uj are the unit dyads as above and the partial derivatives are the components of the dyad. We can compact this a bit using matrix notation:


∇a = D = [∂ax/∂x  ∂ay/∂x  ∂az/∂x
          ∂ax/∂y  ∂ay/∂y  ∂az/∂y
          ∂ax/∂z  ∂ay/∂z  ∂az/∂z]
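This gradient dyad can be approximated numerically. The Python sketch below uses a sample field a = (xy, yz, zx) and a central-difference step size, both illustrative choices of ours rather than anything from the text:

```python
# Approximate the gradient dyad D[i][j] = d a_j / d x_i by central differences.
def field(x, y, z):
    """Illustrative vector field a(x, y, z) = (x*y, y*z, z*x)."""
    return (x * y, y * z, z * x)

def grad_dyad(f, p, h=1e-6):
    """3x3 matrix of partial derivatives of f at point p."""
    D = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bck = list(p); bck[i] -= h
        fa, ba = f(*fwd), f(*bck)
        for j in range(3):
            D[i][j] = (fa[j] - ba[j]) / (2 * h)
    return D

D = grad_dyad(field, [1.0, 2.0, 3.0])
# Analytically at (1, 2, 3): d(ax)/dx = y = 2, d(ay)/dy = z = 3, d(az)/dz = x = 1
print(D)
```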

5.3 Transformations

We have seen in chapter 3 that the norm of a vector, or more generally the inner product of a pair of vectors, is invariant to rotations. Rotation operators are orthogonal, which in visual terms means that an operator and its transpose rotate in opposite directions by the same amount. Also, we have seen that the rotation operation may be considered a rotation of coordinates with a consequent change of basis set. One can also envision a change of coordinates involving translation, or perhaps both translation and rotation together. In magnetic resonance spectroscopy we are primarily concerned with rotations. Intuitively, we expect that the norm of the vector will remain the same in the new coordinate system as in the old one, as it did for rotations alone.

Thus, a vector in coordinate system A is considered to be the same vector in coordinate system B, assuming the beginning and ending points of the vector do not move with the coordinate system change. The components of the vector in each coordinate system will generally not be equal; however, we expect the norm (the length) to remain constant. Let's suppose, then, that we have a 2D vector, a = a1 ux + a2 uy, in coordinate system A. To transform to coordinate system B we use a function of some type:

b1 = b1(a1, a2)
b2 = b2(a1, a2)

and our vector is now:

b = b1 vx + b2 vy

However, we have just said that we expect the norm of the new vector to be the same as the old vector since the vector itself doesn't change as a result of the transformation. An observer in coordinate system A must see the same vector that an observer in B will see. Thus to indicate that these are the same vector we write:

{a = b} ⇒ <a|a> = <b|b> = τ


which is meant to indicate that (although their components are different) their norms are the same and that they are in reality the same vector.

Are there any vectors to which this reasoning might not apply? Yes, the position vector that locates a point in space is one example. The head of the vector is at the point in space and the tail is located at the origin of the coordinate system. Moving the coordinate system (as in translational motion) will potentially move the origin and very likely change the length of the position vector. Thus, our condition for equivalence of vectors in different coordinate systems is not, in general, satisfied for this type of vector. This will not however, be a problem for us as all of our considerations of coordinate changes will involve rotations in which the origins of the old and new coordinates will be at the same point in the space.
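The invariance of the norm under rotation is easy to verify numerically. A Python sketch, using an illustrative rotation about the z axis:

```python
import math

def rotate_z(v, theta):
    """Rotate a 3-vector about the z axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

a = [3.0, 4.0, 12.0]        # illustrative components in system A
b = rotate_z(a, 0.7)        # components change with the basis...
assert abs(norm(a) - norm(b)) < 1e-12  # ...but the norm is invariant
print(norm(a))  # 13.0
```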

Let's apply this idea to our higher-rank objects, starting with the dyad. Thus, we assert that in transforming from coordinate system A to coordinate system B, the dyad in question will remain the same dyad in both coordinate systems, much the same as is the case with vectors. An observer in A sees dyad D and an observer in B sees dyad E. In the case of the vectors we used the inner product operation to reduce them to scalars that were invariant with respect to coordinate changes, so let's try to do the same type of thing with dyads. Our tool for doing so is the dyad contraction. We will contract dyads D and E to scalars d and e and compare them. We begin by supposing that the dyads are, in fact, equal²:

D = a b ⇒ a ⋅ b = d
E = c d ⇒ c ⋅ d = e

{D = E} ⇒ a b = c d

Taking the left inner product with a :

a ⋅ (a b) = a ⋅ (c d)
a² b = (a ⋅ c) d

b = [(a ⋅ c)/a²] d

The term in brackets is the scalar result of an inner product calculation divided by a², another scalar. For convenience we replace this with a single scalar variable:

2 This exposition is that of J.C. Kolecki. See the references.


let χ = (a ⋅ c)/a²

b = χ d

Now we do the right inner product with b :

(a b) ⋅ b = (c d) ⋅ b
a b² = c (d ⋅ b)

a = [(d ⋅ b)/b²] c

Using the result of our left inner product calculation, b = χ d, so that d ⋅ b = b²/χ and:

a = [b²/(χ b²)] c = (1/χ) c

Now we have:

a ⋅ b = [(1/χ) c] ⋅ (χ d) = c ⋅ d

or d = e

Thus, if the dyads are equivalent so are their associated scalars. Presumably the reverse is true as well. If the contractions of each dyad are equal to each other then so are the dyads. We mean this in the same sense in which we discussed vectors. In other words, although the components of D and E may not be the same, they represent the same dyad if their contractions are equivalent.
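A numerical check of this conclusion in Python: rotating both component vectors into a new basis (an illustrative rotation about z) changes the components of the dyad but leaves its contraction unchanged.

```python
import math

def rotate_z(v, theta):
    """Rotate a 3-vector about the z axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]]

def dyad(a, b):
    """Direct product: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

def contract(D):
    """Contraction of a dyad = trace of its matrix representation."""
    return sum(D[i][i] for i in range(3))

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]     # illustrative vectors
D = dyad(a, b)
E = dyad(rotate_z(a, 1.1), rotate_z(b, 1.1))  # the same dyad, new basis
# The components of D and E differ, but the contractions d and e agree.
assert D != E
assert abs(contract(D) - contract(E)) < 1e-12
print(contract(D))
```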

5.4 A Tensor Definition

We can now define a tensor. We mean by this term a mathematical object which is invariant to transformation of basis. We have already seen that the scalar object is invariant to a change of basis as are vectors and dyads. In other words a scalar such as temperature of a cup of tea is the same whether the coordinate system's origin is on the earth or on the moon. Formally, if the temperature in coordinate system A is T and in coordinate system A' it is T' then the transformation from T to T' is:


T' = a T

where a is always unity and T' is therefore equal to T and is said to be a tensor of rank 0.

The vector object is also a tensor if it too can be said to be the same in any basis. In terms of coefficients of a 3D vector the transformation from coordinate basis u to u' is, for vector a :

a = a1 u1 + a2 u2 + a3 u3

a' = a1' u1' + a2' u2' + a3' u3'

ai' = Σj=1..3 cos(ui', uj) aj

using equation 3-27. Intuitively, we know that the vector itself does not change even though its coefficients may do so. Thus a calculation of the norm of the vector will be the same in the new basis as in the old basis. If we have two vectors, a and b, and b has been produced by a change of basis from u to u' then:

a = a1 u1 + a2 u2 + a3 u3
b = b1 u1' + b2 u2' + b3 u3'

|a|² = a1² + a2² + a3²
|b|² = b1² + b2² + b3²

and |a| = |b|

or equivalently, as we showed in Dirac notation in the last section:

{a = b} ⇒ <a|a> = <b|b> = τ

So, if two vectors are equivalent after a basis transformation then they are said to be tensors of rank 1.

We have seen that the dyad, as well, can be invariant to transformation, such that dyad D is equivalent to dyad E if E is produced by a change of basis from D. The dyad D produced from a pair of 3D vectors can conveniently be represented by a 3 × 3 matrix with 9 (or 3²) components. In order to transform D to E we must perform a set of operations that is similar to the vector transformation. In the case of the vector (or rank-1 tensor in our present context) each new component of the vector is a linear combination of all of the old components. The case with tensors is much the same; each component of the new rank-2 tensor is a linear combination of all of the components of the old tensor and is expressed in a similar fashion to vectors:


D = [d11 d12 d13
     d21 d22 d23
     d31 d32 d33]

E = [e11 e12 e13
     e21 e22 e23
     e31 e32 e33]

To transform from D to E :

eij' = Σk=1..3 Σl=1..3 cos(ui', uk) cos(uj', ul) dkl

where, as with the vector case, cos(ui', uk) is the direction cosine of the angle between axis i' in the new set of coordinates and axis k in the old coordinates (see Appendix II). We can simplify the equation a bit:

eij' = Σk=1..3 Σl=1..3 λi'k λj'l dkl

where the cosine terms have been replaced with λ for brevity.
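The double sum can be checked numerically. In the Python sketch below, an illustrative rotation about the z axis supplies the direction cosines λ, a sample d is transformed, and the contraction (trace) is confirmed to survive the change of basis:

```python
import math

# Direction-cosine matrix for a rotation about z: lam[i][k] = cos(u_i', u_k).
theta = 0.5  # illustrative angle
c, s = math.cos(theta), math.sin(theta)
lam = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def transform_rank2(d, lam):
    """e_ij' = sum_k sum_l lam[i][k] * lam[j][l] * d[k][l]"""
    return [[sum(lam[i][k] * lam[j][l] * d[k][l]
                 for k in range(3) for l in range(3))
             for j in range(3)] for i in range(3)]

d = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]  # illustrative
e = transform_rank2(d, lam)
# The contraction (trace) is unchanged: 1 + 5 + 9 = 15.
print(sum(e[i][i] for i in range(3)))
```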

An alternate way to look at how we might define rank-2 tensors is to say that the action of the tensor T on a vector v is to produce a new vector times a scalar. We have already encountered this in projection operators (see eq. 3-22). Recall that we defined the projection operator as:

Pi = ui uiT    or    Pi = |ui><ui|

and its action on vector v is:

Pi v = vi ui    or    Pi v = |ui><ui|v> = vi |ui>

We could say then that the projection operator is a tensor and in terms of our notation for tensors:

Pi = ui uiT

Pi v = vi ui

where we emphasize the tensor character of the operator.
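A minimal Python sketch of the projection operator as a dyad, acting on an illustrative vector v:

```python
def dyad(a, b):
    """Direct product: D[i][j] = a[i] * b[j]."""
    return [[ai * bj for bj in b] for ai in a]

def apply(D, v):
    """Inner product D . v, i.e. ordinary matrix-vector multiplication."""
    return [sum(D[i][j] * v[j] for j in range(3)) for i in range(3)]

ux = [1.0, 0.0, 0.0]
P1 = dyad(ux, ux)          # P1 = ux ux^T = |ux><ux|, a rank-2 object
v = [3.0, 4.0, 5.0]        # illustrative vector
print(apply(P1, v))        # picks out v1 ux
```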


What would happen if we were to change our basis from u to u'? Would the new tensor still give the same results when it operates on the new vector? Intuitively, we would suspect that the answer is yes but let us explore it a little.

We start with a vector a in coordinate system A and do a basis transformation to coordinate system B, in which we measure the vector as b:

a = a1 u1 + a2 u2 + a3 u3
b = b1 u1' + b2 u2' + b3 u3'

Since we are familiar with the effect of the projection operator/tensor on a vector we will use it in our example. We apply the operator in coordinate system A and then in coordinate system B, remembering that the operator must be transformed as well as the vector:

Pi a = ai ui
Pi' b = bi ui'

The transformations are:

bi = Σj=1..3 λij aj

pij' = Σk=1..3 Σl=1..3 λi'k λj'l pkl

where, as before, the λ's are the direction cosines for the indicated axes in the old and new coordinate systems. Let us look closely at one component of the projection operator:

p11' = Σk=1..3 Σl=1..3 λ1'k λ1'l pkl


5.5 Problems

5.6 References

1. J.C. Kolecki, Foundations of Tensor Analysis for Students of Physics and Engineering With an Introduction to the Theory of Relativity, NASA Science and Technical Information, TP-2005-213115.

2. A.I. Borisenko and I.E. Tarapov, Vector and Tensor Analysis with Applications, Dover Publications Inc., 1968.