Appendix A: Brief Review of Tensors

We recall that in describing stresses and strains one must specify not only the magnitude of the quantity, but also the directions associated with it.
Tensor Rank
Tensors may be classified by rank or order according to the particular form of the transformation law
they obey. This classification is also reflected in the number of components a given tensor possesses
in an N-dimensional space. Thus, a tensor of order p has N^p components. For example, in a three-
dimensional Euclidean space, the number of components of a tensor is 3^p. It follows, therefore, that
in three-dimensional space:
• A tensor of order zero has one component and is called a scalar. Physical quantities possessing
magnitude only are represented by scalars.
• A tensor of order one has three components and is called a vector; quantities possessing both
magnitude and direction are represented by vectors. Geometrically, vectors are represented by
directed line segments that obey the Parallelogram Law of addition.
• A tensor of order two has nine components and is typically represented by a matrix.
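As a concrete illustration, the component counts above can be checked with a minimal numpy sketch (the numerical values are arbitrary examples, not quantities from the text):

import numpy as np

# Order 0: a scalar, a single component (3**0 = 1)
temperature = np.float64(300.0)

# Order 1: a vector, three components (3**1 = 3)
force = np.array([1.0, -2.0, 0.5])

# Order 2: a second-order tensor, nine components (3**2 = 9),
# conveniently displayed as a 3 x 3 matrix
stress = np.array([[10.0, 2.0, 0.0],
                   [ 2.0, 5.0, 1.0],
                   [ 0.0, 1.0, 7.0]])

for name, t in [("scalar", temperature), ("vector", force), ("tensor", stress)]:
    print(name, "order =", np.ndim(t), "components =", np.size(t))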
Cartesian Tensors
When only transformations from one homogeneous coordinate system (e.g., a Cartesian coordinate
system) to another are considered, the tensors involved are referred to as Cartesian tensors. The
Cartesian coordinate system can be rectangular (x1, x2, x3) or curvilinear, such as cylindrical (R, θ, z)
or spherical (r, θ, φ).
A.3. Indicial Notation
Index rule
In a given term, a letter index may occur no more than twice.
Range Convention
When an index occurs unrepeated in a term, that index is understood to take on the values
1, 2, · · · , N where N is a specified integer that, depending on the space considered, determines
the range of the index.
Summation Convention
When an index appears twice in a term, that index is understood to take on all the values of its
range, and the resulting terms are summed. For example, Akk = A11 + A22 + · · · + ANN.
Free Indices
By virtue of the range convention, unrepeated indices are free to take the values over the range, that
is, 1, 2, · · · , N . These indices are thus termed “free.” The following items apply to free indices:
• Any equation must have the same free indices in each term.
• The tensorial rank of a given term is equal to the number of free indices.
Dummy Indices
In the summation convention, repeated indices are often referred to as “dummy” indices, since their
replacement by any other letter not appearing as a free index does not change the meaning of the
term in which they occur.
In the following equations, the repeated indices are thus “dummy” indices: Akk = Amm and
aik bkl = ain bnl. In the equation Eij = eim emj, i and j represent free indices and m is a dummy
index. Assuming N = 3 and using the range convention, it follows that Eij = ei1 e1j + ei2 e2j + ei3 e3j.
Care must be taken to avoid breaking grammatical rules in the indicial “language.” For example,
the expression a • b = (ak êk ) • (bk êk ) is erroneous since the summation on the dummy indices is
ambiguous. To avoid such ambiguity, a dummy index can only be paired with one other dummy
index in an expression. A good rule to follow is to use separate dummy indices for each implied
summation in an expression.
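The range and summation conventions map directly onto numpy's einsum index strings, which provides a convenient check on index bookkeeping. The following is a minimal sketch with arbitrary example arrays:

import numpy as np

A = np.random.rand(3, 3)
e = np.random.rand(3, 3)
a = np.random.rand(3)
b = np.random.rand(3)

# Akk: k is a dummy index, so the result is a scalar (the trace)
Akk = np.einsum('kk->', A)              # equals np.trace(A)

# Eij = eim emj: i and j are free indices, m is a dummy index
E = np.einsum('im,mj->ij', e, e)        # equals e @ e

# a . b = ak bk: one implied summation, hence one dummy index pair;
# each implied summation gets its own dummy letter, as the rule above requires
dot = np.einsum('k,k->', a, b)          # equals a @ b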
Contraction of Indices
Contraction refers to the process of summing over a pair of repeated indices. This reduces the order
of a tensor by two.
For example:
• Contracting the indices of Aij (a second-order tensor) leads to Akk (a zeroth-order tensor or
scalar).
• Contracting the indices of Bijk (a third-order tensor) leads to Bikk (a first-order tensor).
• Contracting the indices of Cijkl (a fourth-order tensor) leads to Cijmm (a second-order tensor).
• If i remains a free index, differentiation of a tensor with respect to i produces a tensor of order
one higher. For example
Aj,i = ∂Aj /∂xi   (A.2)
• If i is a dummy index, differentiation of a tensor with respect to i produces a tensor of order
one lower. For example, Ai,i = ∂Ai /∂xi (a scalar).
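A short numpy sketch of contraction (arbitrary example arrays), showing how each contraction lowers the order by two:

import numpy as np

A = np.random.rand(3, 3)         # second-order tensor Aij
B = np.random.rand(3, 3, 3)      # third-order tensor Bijk
C = np.random.rand(3, 3, 3, 3)   # fourth-order tensor Cijkl

Akk   = np.einsum('kk->', A)         # order 2 -> order 0 (scalar)
Bikk  = np.einsum('ikk->i', B)       # order 3 -> order 1 (vector)
Cijmm = np.einsum('ijmm->ij', C)     # order 4 -> order 2

print(Akk.shape, Bikk.shape, Cijmm.shape)   # (), (3,), (3, 3)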
Figure A.1: Rectangular Cartesian coordinate (RCC) system, with coordinate axes x1, x2, x3 and unit base vectors ê1, ê2, ê3.
Any vector in the RCC system may be expressed as a linear combination of three arbitrary,
nonzero, non-coplanar vectors called the base vectors. Base vectors are, by hypothesis, linearly
independent. A set of base vectors in a given coordinate system is said to constitute a basis for that
system. The most frequent choice of base vectors for the RCC system is the set of unit vectors ê1 ,
ê2 , ê3 , directed parallel to the x1 , x2 and x3 coordinate axes, respectively.
Remark
1. The summation convention is very often employed in connection with the representation of
vectors and tensors by indexed base vectors written in symbolic notation. In Euclidean space any
vector is completely specified by its three components. The range on indices is thus 3 (i.e., N = 3).
A point with coordinates (q1, q2, q3) is thus located by a position vector x, where

x = qi êi   (A.5)

in which i is a summed index (i.e., the summation convention applies even though there is no repeated
index on the same kernel).
The base vectors constitute a right-handed unit vector triad, or right orthogonal triad, that satisfies
the following relations:

êi • êj = δij   (A.6)

êi × êj = εijk êk   (A.7)

A set of base vectors satisfying the above conditions is often called an orthonormal basis.
In equation (A.6), δij denotes the Kronecker delta (a second-order tensor typically denoted by
I), defined by
δij = 1 if i = j,   δij = 0 if i ≠ j   (A.8)
In equation (A.7), εijk is the permutation symbol or alternating tensor (a third-order tensor),
that is defined in the following manner:

εijk = +1 if i, j, k are an even permutation of 1, 2, 3
εijk = −1 if i, j, k are an odd permutation of 1, 2, 3
εijk = 0 if i, j, k are not a permutation of 1, 2, 3   (A.9)
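The Kronecker delta and the permutation symbol are easily tabulated numerically. The sketch below (assuming nothing beyond numpy and the standard library) verifies relations (A.6) and (A.7) for the unit base vectors:

import numpy as np
from itertools import permutations

delta = np.eye(3)                                  # Kronecker delta, eq. (A.8)

eps = np.zeros((3, 3, 3))                          # permutation symbol, eq. (A.9)
for i, j, k in permutations(range(3)):
    # +1 for an even permutation of (0, 1, 2), -1 for an odd one
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

e = np.eye(3)                                      # rows are ê1, ê2, ê3

# êi . êj = δij                                     (A.6)
assert np.allclose(np.einsum('ik,jk->ij', e, e), delta)

# êi x êj = εijk êk                                 (A.7)
cross = np.array([[np.cross(e[i], e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(cross, np.einsum('ijk,kl->ijl', eps, e))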
Remarks
1. The Kronecker delta is sometimes called the substitution operator since, for example, δij aj = ai.
In particular,

a • a = ak ak = (a1)² + (a2)² + (a3)²   (A.13)
The determinant of a second-order tensor A may be expanded as

det A = |A| = | A11  A12  A13 |
              | A21  A22  A23 |
              | A31  A32  A33 |

      = A11 A22 A33 + A12 A23 A31 + A13 A21 A32 − A31 A22 A13 − A32 A23 A11 − A33 A21 A12

      = εijk A1i A2j A3k   (A.16)

Note also that εijk εijk = 6.
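Equation (A.16) and the identity εijk εijk = 6 can likewise be checked numerically for an arbitrary 3 × 3 matrix (a minimal sketch):

import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

A = np.random.rand(3, 3)

# det A = εijk A1i A2j A3k                          (A.16)
det_eps = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det_eps, np.linalg.det(A))

# εijk εijk = 6
assert np.einsum('ijk,ijk->', eps, eps) == 6.0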
A.5. Transformation Laws for Cartesian Tensors

Let the primed axes (x′1, x′2, x′3) be obtained from the unprimed axes (x1, x2, x3) by a rotation
characterized by the direction cosines Rij. A first-order tensor (vector) then transforms according to

x′m = Rmj xj   (A.21)

and, conversely,

xk = Rjk x′j   (A.24)

Remark
1. In both of the above transformations, the second index on R is associated with the unprimed system.
In order to gain insight into the direction cosines Rij, we differentiate equation (A.21) with
respect to xi, giving (with due change of dummy indices)

∂x′m /∂xi = Rmj ∂xj /∂xi = Rmj δji = Rmi   (A.25)

Similarly, differentiating equation (A.24) with respect to x′m gives

∂xk /∂x′m = Rjk ∂x′j /∂x′m = Rjk δjm = Rmk   (A.26)
Using the chain rule, it follows that

I = Rᵀ R   (A.28)

implying that R is an orthogonal tensor (i.e., R⁻¹ = Rᵀ). Linear transformations such as those
given by equations (A.21) and (A.24), whose direction cosines satisfy the above equation, are thus
called orthogonal transformations.
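A quick numerical check of (A.21) and (A.28), using an arbitrary rotation of the axes about x3 (a sketch; the angle is an arbitrary choice):

import numpy as np

th = 0.3
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

# orthogonality: Rᵀ R = I, i.e. R⁻¹ = Rᵀ            (A.28)
assert np.allclose(R.T @ R, np.eye(3))

# a vector transforms as x′m = Rmj xj              (A.21)
x = np.array([1.0, 2.0, 3.0])
x_prime = R @ x

# the length (the sole invariant of a vector) is unchanged
assert np.isclose(np.linalg.norm(x_prime), np.linalg.norm(x))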
The transformation rules for second-order Cartesian tensors are derived in the following manner.
Let S be a second-order Cartesian tensor, and let
u = Sv (A.29)
in the unprimed coordinates. Similarly, in primed coordinates let
u′ = S′ v′   (A.30)

Next we desire to relate S′ to S. Using equation (A.21), substitute for u′ and v′ to give

Ru = S′ Rv   (A.31)
But from equation (A.29)
Ru = RSv (A.32)
implying that
RSv = S′ Rv   (A.33)
Since v is an arbitrary vector, and since R is an orthogonal tensor, it follows that
S′ = R S Rᵀ   or   S′ij = Rik Rjl Skl   (A.34)
In a similar manner, the inverse relation is obtained; viz., S = Rᵀ S′ R, or Sij = Rki Rlj S′kl.
Finally, the fourth-order Cartesian tensor C transforms according to the following relations:

C′ijkl = Rip Rjq Rkr Rls Cpqrs   (A.38)

and

Cijkl = Rpi Rqj Rrk Rsl C′pqrs   (A.39)
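The index forms (A.34) and (A.38), together with the inverse relation (A.39), translate directly into einsum calls (a sketch with an arbitrary rotation and arbitrary tensors):

import numpy as np

th = 0.3
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

S = np.random.rand(3, 3)
C = np.random.rand(3, 3, 3, 3)

# S′ij = Rik Rjl Skl                                (A.34)
S_prime = np.einsum('ik,jl,kl->ij', R, R, S)
assert np.allclose(S_prime, R @ S @ R.T)

# C′ijkl = Rip Rjq Rkr Rls Cpqrs                    (A.38)
C_prime = np.einsum('ip,jq,kr,ls,pqrs->ijkl', R, R, R, R, C)

# transforming back with (A.39) recovers C
C_back = np.einsum('pi,qj,rk,sl,pqrs->ijkl', R, R, R, R, C_prime)
assert np.allclose(C_back, C)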
Consider next a second-order tensor A operating on a direction n to produce a vector v; viz.,

v = An   (A.40)

This is shown schematically in Figure A.2.

Figure A.2: The tensor A maps the direction n into the vector v = An.
Remark
1. A may be viewed as a linear vector operator that produces the vector v conjugate to the
direction n.
If v is parallel to n, the above inner product may be expressed as a scalar multiple of n; viz.,

An = λn

or, equivalently,

(A − λI) n = 0

For a nontrivial n to exist, it is necessary that

det (A − λI) = 0   (A.43)

This is called the characteristic equation of A. In light of the symmetry of A, the expansion of
equation (A.43) gives
| (A11 − λ)     A12          A13      |
|   A12       (A22 − λ)      A23      |  =  0   (A.44)
|   A13         A23        (A33 − λ)  |
The evaluation of this determinant leads to a cubic polynomial in λ, known as the characteristic
polynomial of A; viz.,

λ³ − Ī1 λ² + Ī2 λ − Ī3 = 0

in which Ī1 = Akk, Ī3 = det A, and

Ī2 = (1/2) (Aii Ajj − Aij Aij)   (A.47)
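For a symmetric A, the roots of the characteristic polynomial coincide with the eigenvalues returned by a standard solver, as the following numerical sketch (with an arbitrary symmetric matrix) confirms:

import numpy as np

B = np.random.rand(3, 3)
A = 0.5 * (B + B.T)                      # an arbitrary symmetric tensor

I1_bar = np.trace(A)
I2_bar = 0.5 * (np.trace(A)**2 - np.trace(A @ A))    # eq. (A.47)
I3_bar = np.linalg.det(A)

# roots of λ³ − Ī1 λ² + Ī2 λ − Ī3 = 0
roots = np.roots([1.0, -I1_bar, I2_bar, -I3_bar])

assert np.allclose(np.sort(roots.real), np.sort(np.linalg.eigvalsh(A)))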
Remark
1. Eigenvalues and eigenvectors have a useful geometric interpretation in two- and three-
dimensional space. If λ is an eigenvalue of A corresponding to v, then Av = λv, so that depending
on the value of λ, multiplication by A dilates v (if λ > 1), contracts v (if 0 < λ < 1), or reverses
the direction of v (if λ < 0).
Example 1: Invariant of First-Order Tensors
Consider a vector v. If the coordinate axes are rotated, the components of v will change.
However, the length (magnitude) of v remains unchanged. As such, the length is said to be invariant.
In fact, a vector (first-order tensor) has only one invariant, its length.
Example 2: Invariants of Second-Order Tensors
A second-order tensor possesses three invariants. Denoting the tensor by A, its invariants are
(these differ from the ones derived from the characteristic equation of A)

I1 = tr A = Aii

I2 = (1/2) tr(A²) = (1/2) Aik Aki   (A.51)

I3 = (1/3) tr(A³) = (1/3) Aik Akj Aji   (A.52)
Any function of the invariants is also an invariant. To verify that the first invariant is unchanged
under a coordinate transformation, recall that

A′ii = Rip Riq Apq = δpq Apq = App

Similarly, for the third invariant,

A′ik A′km A′mi = (Ril Rkp Alp) (Rkn Rmq Anq) (Rms Rit Ast)
             = Ril Rit Alp Rkp Rkn Anq Rmq Rms Ast
             = δlt Alp δpn Anq δqs Ast
             = Atp Apq Aqt   (A.56)
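The invariance of I1, I2 and I3 under an orthogonal change of axes is easily confirmed numerically (a sketch; the rotation and the tensor are arbitrary):

import numpy as np

th = 0.3
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

A = np.random.rand(3, 3)
A_prime = np.einsum('ik,jl,kl->ij', R, R, A)     # A′ij = Rik Rjl Akl

def invariants(T):
    # I1 = tr T,  I2 = (1/2) tr(T²),  I3 = (1/3) tr(T³)
    return np.trace(T), 0.5 * np.trace(T @ T), np.trace(T @ T @ T) / 3.0

assert np.allclose(invariants(A), invariants(A_prime))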
A.7. Tensor Calculus
Gradient Operator
The linear differential operator
∇ = ê1 ∂/∂x1 + ê2 ∂/∂x2 + ê3 ∂/∂x3 = êi ∂/∂xi   (A.57)
is called the gradient or del operator.
Operating on a scalar field φ, it produces the gradient of φ:

∇φ = grad φ = êi ∂φ/∂xi = êi φ,i   (A.58)
If n = ni êi is a unit vector, the scalar operator
n • ∇ = ni ∂/∂xi   (A.59)
is called the directional derivative operator in the direction n.
If v = vi êi is a vector field, the scalar

∇ • v = div v = ∂vi /∂xi = vi,i   (A.60)
is called the divergence of v.
Similarly, the vector

∇ × u = curl u = εijk êi ∂uk /∂xj = εijk uk,j êi   (A.61)
is called the curl of u.
Remark
1. When using uk,j for ∂uk /∂xj, the indices are reversed in order as compared to the definition
of the vector (cross) product; that is, in (∇ × u)i = εijk uk,j the subscripts on u appear in the
order k, j, whereas εijk carries them in the order j, k.
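For a linear vector field u(x) = Bx the derivatives uk,j are simply the constants Bkj, so (A.60) and (A.61) can be evaluated exactly and compared with a finite-difference estimate (a sketch; B is an arbitrary matrix):

import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

B = np.random.rand(3, 3)                 # u(x) = B x, so that uk,j = Bkj
u = lambda x: B @ x

div_u  = np.trace(B)                               # div u = ui,i          (A.60)
curl_u = np.einsum('ijk,kj->i', eps, B)            # (curl u)i = εijk uk,j (A.61)

# spot-check the divergence with central differences at an arbitrary point
x0, h = np.array([0.2, -0.1, 0.5]), 1e-6
num_div = sum((u(x0 + h * np.eye(3)[j])[j] - u(x0 - h * np.eye(3)[j])[j]) / (2 * h)
              for j in range(3))
assert np.isclose(num_div, div_u)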
Laplacian Operator
The Laplacian operator is defined as

∇²( ) = div grad ( ) = ∇ • ∇ ( ) = ∂²( )/(∂xi ∂xi) = ( ),ii   (A.63)
Let φ(x1 , x2 , x3 ) be a scalar field. The Laplacian of φ is then
∇²φ = (êi ∂/∂xi) • (φ,j êj) = (êi • êj) ∂²φ/(∂xj ∂xi) = φ,ji δij = φ,ii   (A.64)
Let v(x1, x2, x3) be a vector field. The Laplacian of v is the following vector quantity:

∇²v = ∂²vk /(∂xi ∂xi) êk = vk,ii êk   (A.65)
Remark
1. An alternate statement of the Laplacian of a vector is
∇²v = ∇ (∇ • v) − ∇ × (∇ × v)   (A.66)
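Identity (A.66) can be verified symbolically for a sample polynomial field using sympy (a sketch; the field components are arbitrary choices):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

v = [x1**2 * x2, x2 * x3**2, x1 * x2 * x3]      # an arbitrary polynomial vector field

grad = lambda f: [sp.diff(f, xi) for xi in X]                   # ∇f
div  = lambda u: sum(sp.diff(u[i], X[i]) for i in range(3))     # ∇ • u
curl = lambda u: [sp.diff(u[2], x2) - sp.diff(u[1], x3),        # ∇ × u
                  sp.diff(u[0], x3) - sp.diff(u[2], x1),
                  sp.diff(u[1], x1) - sp.diff(u[0], x2)]
lap  = lambda f: sum(sp.diff(f, xi, 2) for xi in X)             # ∇²f

lhs = [lap(vi) for vi in v]                                     # ∇²v, component-wise
rhs = [g - c for g, c in zip(grad(div(v)), curl(curl(v)))]      # ∇(∇ • v) − ∇ × (∇ × v)
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))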