A Some Basic Rules of Tensor Calculus




The tensor calculus is a powerful tool for the description of the fundamentals in con-
tinuum mechanics and the derivation of the governing equations for applied prob-
lems. In general, there are two possibilities for the representation of the tensors and
the tensorial equations:
– the direct (symbolic) notation and
– the index (component) notation
The direct notation operates with scalars, vectors and tensors as physical objects
defined in the three dimensional space. A vector (first rank tensor) a is considered
as a directed line segment rather than a triple of numbers (coordinates). A second
rank tensor A is any finite sum of ordered vector pairs A = a ⊗ b + . . . + c ⊗ d. The
scalars, vectors and tensors are handled as invariant objects, i.e. independent of the
choice of the coordinate system. This is the reason for the use of the direct notation
in the modern literature on mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199,
246, 313, 334] among others.
    The index notation deals with components or coordinates of vectors and tensors.
For a selected basis, e.g. g_i, i = 1, 2, 3, one can write

        a = a^i g_i ,        A = (a^i b^j + . . . + c^i d^j) g_i ⊗ g_j

Here Einstein's summation convention is used: within one expression an index
repeated twice is summed from 1 to 3, e.g.

        a^k g_k ≡ Σ_{k=1}^{3} a^k g_k ,        A_ik b_k ≡ Σ_{k=1}^{3} A_ik b_k

In the above examples k is a so-called dummy index. Within the index notation the
basic operations with tensors are defined with respect to their coordinates, e.g. the
sum of two vectors is computed as the sum of their coordinates, c_i = a_i + b_i. The
introduced basis remains in the background. It must be remembered that a change
of the coordinate system leads to a change of the components of tensors.
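NumPy's `einsum` mirrors the summation convention directly; the arrays below are illustrative numbers, not quantities from the text:

```python
import numpy as np

# illustrative coordinates standing in for a^k, b_k and A_ik
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = np.arange(9.0).reshape(3, 3)

# a^k b_k: the twice-repeated (dummy) index k is summed from 1 to 3
s = np.einsum('k,k->', a, b)       # same as a @ b

# A_ik b_k: k is summed, i remains a free index
v = np.einsum('ik,k->i', A, b)     # same as A @ b

# component-wise sum of two vectors, c_i = a_i + b_i
c = a + b
```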
    In this work we prefer the direct tensor notation over the index one. When solv-
ing applied problems the tensor equations can be “translated” into the language
of matrices for a specified coordinate system. The purpose of this Appendix is to
give a brief guide to the notations and rules of tensor calculus applied throughout
this work. For more comprehensive overviews of tensor calculus we recommend
[54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in
[40, 111, 340], for example. Section A.1 provides a brief overview of basic alge-
braic operations with vectors and second rank tensors. Several rules from tensor
analysis are summarized in Sect. A.2. Basic sets of invariants for different groups
of symmetry transformations are presented in Sect. A.3, where a novel approach to
find the functional basis is discussed.


A.1 Basic Operations of Tensor Algebra

A.1.1 Polar and Axial Vectors
A vector in the three-dimensional Euclidean space is defined as a directed line seg-
ment with specified magnitude (scalar) and direction. The magnitude (the length) of
a vector a is denoted by |a |. Two vectors a and b are equal if they have the same
direction and the same magnitude. The zero vector 0 has a magnitude equal to zero.
In mechanics two types of vectors can be introduced. The vectors of the first type
are directed line segments. These vectors are associated with translations in the
three-dimensional space. Examples of polar vectors include the force, the displacement,
the velocity, the acceleration, the momentum, etc. The second type is used to
characterize spinor motions and related quantities, i.e. the moment, the angular
velocity, the angular momentum, etc. Figure A.1a shows the so-called spin vector a∗ which
represents a rotation about the given axis. The direction of rotation is specified by
the circular arrow and the “magnitude” of rotation is the corresponding length. For
the given spin vector a ∗ the directed line segment a is introduced according to the
following rules [334]:
1. the vector a is placed on the axis of the spin vector,

2. the magnitude of a is equal to the magnitude of a ∗ ,


Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference
frame, c axial vector in the left-screw oriented reference frame


3. the vector a is directed according to the right-handed screw, Fig. A.1b, or
   the left-handed screw, Fig. A.1c.
The selection of one of the two cases in 3. corresponds to the convention of ori-
entation of the reference frame [334] (it should not be confused with the right- or
left-handed triples of vectors or coordinate systems). The directed line segment is
called a polar vector if it does not change by changing the orientation of the refer-
ence frame. The vector is called axial if it changes its sign by changing the
orientation of the reference frame. The above definitions are valid for scalars and
tensors of any rank too. The axial vectors (and tensors) are widely used in the rigid
body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the
asymmetric theory of elasticity, e.g. [231], as well as in dynamics of micro-polar
media, e.g. [108]. When dealing with polar and axial vectors it should be remembered
that they have different physical meanings. Therefore, a sum of a polar and an axial
vector is meaningless.


A.1.2 Operations with Vectors
Addition. For a given pair of vectors a and b of the same type the sum c = a + b
is defined according to one of the rules in Fig. A.2. The sum has the following
properties
– a + b = b + a (commutativity),
– (a + b ) + c = a + (b + c ) (associativity),
– a +0 = a

Multiplication by a Scalar. For any vector a and for any scalar α a vector b = αa
is defined in such a way that
– |b | = |α||a |,
– for α > 0 the direction of b coincides with that of a ,
– for α < 0 the direction of b is opposite to that of a .
For α = 0 the product yields the zero vector, i.e. 0 = 0a. It is easy to verify that

        α(a + b) = αa + αb ,        (α + β)a = αa + βa


Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule


Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and
projection


Scalar (Dot) Product of two Vectors. For any pair of vectors a and b a scalar
α is defined by
                         α = a · b = |a ||b | cos ϕ,
where ϕ is the angle between the vectors a and b. Either of the two angles between
the vectors may be used as ϕ, Fig. A.3a. The properties of the scalar product are
– a · b = b · a (commutativity),
– a · (b + c ) = a · b + a · c (distributivity)
Two nonzero vectors are said to be orthogonal if their scalar product is zero. The
unit vector directed along the vector a is defined by (see Fig. A.3b)

        n_a = a / |a|
The projection of the vector b onto the vector a is the vector (b · n a )n a , Fig. A.3b.
The length of the projection is |b || cos ϕ|.
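A minimal numerical sketch of the scalar product and the projection (NumPy, with arbitrarily chosen vectors):

```python
import numpy as np

a = np.array([3.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

alpha = a @ b                                     # a · b = |a||b| cos ϕ
cos_phi = alpha / (np.linalg.norm(a) * np.linalg.norm(b))

n_a = a / np.linalg.norm(a)                       # unit vector along a
proj = (b @ n_a) * n_a                            # projection of b onto a
```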
Vector (Cross) Product of Two Vectors. For the ordered pair of vectors a and
b the vector c = a × b is defined in the following two steps [334]:
– the spin vector c ∗ is defined in such a way that
  • the axis is orthogonal to the plane spanned on a and b , Fig. A.4a,
  • the circular arrow shows the direction of the “shortest” rotation from a to b ,
     Fig. A.4b,
  • the length is |a ||b | sin ϕ, where ϕ is the angle of the “shortest” rotation from a
     to b ,
– from the resulting spin vector the directed line segment c is constructed according
  to one of the rules listed in Subsect. A.1.1.
The properties of the vector product are

                    a × b = −b × a ,      a × (b + c ) = a × b + a × c

The type of the vector c = a × b can be established for the known types of the
vectors a and b [334].

Figure A.4 Vector product of two vectors. a Plane spanned on two vectors, b spin vector, c
axial vector in the right-screw oriented reference frame

If a and b are polar vectors the result of the cross product will be the axial vector.
An example is the moment of momentum for a mass point m
defined by r × (mv), where r is the position of the mass point and v is its velocity.
The next example is the formula for the distribution of velocities in a rigid body,
v = ω × r. Here the cross product of the axial vector ω (angular velocity) with the
polar vector r (position vector) results in the polar vector v.
     The mixed product of three vectors a , b and c is defined by (a × b ) · c . The result
is a scalar. For the mixed product the following identities are valid

                         a · (b × c ) = b · (c × a ) = c · (a × b )                   (A.1.1)

If the cross product is applied twice, the first operation must be set in parentheses,
e.g., a × (b × c ). The result of this operation is a vector. The following relation can
be applied
                           a × (b × c ) = b (a · c ) − c (a · b )                (A.1.2)
By use of (A.1.1) and (A.1.2) one can calculate

        (a × b) · (c × d) = a · [b × (c × d)]
                          = a · [c (b · d) − d (b · c)]                  (A.1.3)
                          = (a · c)(b · d) − (a · d)(b · c)
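Identities (A.1.1)-(A.1.3) are easy to spot-check numerically; the random vectors here are only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

# mixed product invariance under cyclic permutation, (A.1.1)
m1 = a @ np.cross(b, c)
m2 = b @ np.cross(c, a)
m3 = c @ np.cross(a, b)

# double cross product, (A.1.2)
lhs2 = np.cross(a, np.cross(b, c))
rhs2 = b * (a @ c) - c * (a @ b)

# (A.1.3)
lhs3 = np.cross(a, b) @ np.cross(c, d)
rhs3 = (a @ c) * (b @ d) - (a @ d) * (b @ c)
```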

A.1.3 Bases
Any triple of linearly independent vectors e_1, e_2, e_3 is called a basis. A triple of
vectors e_i is linearly independent if and only if e_1 · (e_2 × e_3) ≠ 0.
      For a given basis e i any vector a can be represented as follows

                              a = a1 e 1 + a2 e 2 + a3 e 3 ≡ a i e i

The numbers ai are called the coordinates of the vector a for the basis e i . In order to
compute the coordinates, the dual (reciprocal) basis e^k is introduced in such a way
that

        e^k · e_i = δ^k_i ,    δ^k_i = 1 for k = i ,    δ^k_i = 0 for k ≠ i


δ^k_i is the Kronecker symbol. The coordinates a^i can be found by

        e^i · a = a · e^i = a^m e_m · e^i = a^m δ^i_m = a^i


For the selected basis e i the dual basis can be found from
        e^1 = (e_2 × e_3) / [(e_1 × e_2) · e_3] ,    e^2 = (e_3 × e_1) / [(e_1 × e_2) · e_3] ,
        e^3 = (e_1 × e_2) / [(e_1 × e_2) · e_3]                          (A.1.4)
By use of the dual basis a vector a can be represented as follows

        a = a_1 e^1 + a_2 e^2 + a_3 e^3 ≡ a_i e^i ,    a_m = a · e_m ,    a^m ≠ a_m

In the special case of orthonormal vectors e_i, i.e. |e_i| = 1 and e_i · e_k = 0 for
i ≠ k, it follows from (A.1.4) that e^k = e_k and consequently a^k = a_k.
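A short NumPy sketch of (A.1.4): for an arbitrarily chosen non-orthogonal basis the computed dual basis satisfies e^k · e_i = δ^k_i; equivalently (an observation, not from the text), the dual basis vectors are the rows of the inverse transposed matrix of basis rows:

```python
import numpy as np

# rows of E form a (non-orthogonal) basis e_1, e_2, e_3 — an arbitrary choice
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
e1, e2, e3 = E

V = np.cross(e1, e2) @ e3            # mixed product (e_1 × e_2) · e_3 ≠ 0
d1 = np.cross(e2, e3) / V            # dual basis by (A.1.4)
d2 = np.cross(e3, e1) / V
d3 = np.cross(e1, e2) / V
D = np.array([d1, d2, d3])           # rows are e^1, e^2, e^3
```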

A.1.4 Operations with Second Rank Tensors
A second rank tensor is a finite sum of ordered vector pairs A = a ⊗ b + . . . + c ⊗ d
[334]. One ordered pair of vectors is called the dyad. The symbol ⊗ is called the
dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are
special cases of the second rank tensor. Any finite sum of more than three dyads can
be reduced to a sum of three dyads. For example, let
        A = Σ_{i=1}^{n} a^(i) ⊗ b^(i)

be a second rank tensor. Introducing a basis e_k, the vectors a^(i) can be represented
by a^(i) = a^(i)_k e_k, where a^(i)_k are the coordinates of the vectors a^(i). Now we may write

        A = Σ_{i=1}^{n} a^(i)_k e_k ⊗ b^(i) = e_k ⊗ Σ_{i=1}^{n} a^(i)_k b^(i) = e_k ⊗ d_k ,
        d_k ≡ Σ_{i=1}^{n} a^(i)_k b^(i)

Addition. The sum of two tensors is defined as the sum of the corresponding
dyads. The sum has the properties of associativity and commutativity. In addition,
for a dyad a ⊗ b the following operation is introduced

             a ⊗ (b + c ) = a ⊗ b + a ⊗ c ,             (a + b ) ⊗ c = a ⊗ c + b ⊗ c
Multiplication by a Scalar. This operation is introduced first for one dyad. For
any scalar α and any dyad a ⊗ b
        α(a ⊗ b) = (αa) ⊗ b = a ⊗ (αb) ,
                                                                         (A.1.5)
        (α + β) a ⊗ b = α a ⊗ b + β a ⊗ b

By setting α = 0 in the first equation of (A.1.5) the zero dyad can be defined, i.e.
0(a ⊗ b ) = 0 ⊗ b = a ⊗ 0 . The above operations can be generalized for any finite
sum of dyads, i.e. for second rank tensors.


Inner Dot Product. For any two second rank tensors A and B the inner dot prod-
uct is specified by A · B . The rule and the result of this operation can be explained
in the special case of two dyads, i.e. by setting A = a ⊗ b and B = c ⊗ d

        A · B = (a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d = α a ⊗ d ,    α ≡ b · c

The result of this operation is a second rank tensor. Note that A · B ≠ B · A in general. This
can be again verified for two dyads. The operation can be generalized for two second
rank tensors as follows
        A · B = ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) · ( Σ_{k=1}^{3} c^(k) ⊗ d^(k) )
              = Σ_{i=1}^{3} Σ_{k=1}^{3} (b^(i) · c^(k)) a^(i) ⊗ d^(k)

Transpose of a Second Rank Tensor. The transpose of a second rank tensor
A is constructed by the following rule

        A^T = ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) )^T = Σ_{i=1}^{3} b^(i) ⊗ a^(i)

Double Inner Dot Product. For any two second rank tensors A and B the double
inner dot product is specified by A ·· B. The result of this operation is a scalar. This
operation can be explained for two dyads as follows

        A ·· B = (a ⊗ b) ·· (c ⊗ d) = (b · c)(a · d)

By analogy to the inner dot product one can generalize this operation for two second
rank tensors. It can be verified that A ·· B = B ·· A for second rank tensors A and
B . For a second rank tensor A and for a dyad a ⊗ b

                                       A ·· a ⊗ b = b · A · a                             (A.1.6)

A scalar product of two second rank tensors A and B is defined by

                                           α = A ·· B T

One can verify that
                            A ·· B T = B T ·· A = B ·· A T
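In an orthonormal basis a second rank tensor can be identified with its 3 × 3 coordinate matrix; then the inner dot product becomes the matrix product and A ·· B becomes tr(A · B). A sketch with illustrative dyads (the arrays are our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

# dyads a⊗b and c⊗d as outer products (orthonormal basis assumed)
A = np.outer(a, b)
B = np.outer(c, d)

# inner dot product: (a⊗b)·(c⊗d) = (b·c) a⊗d  →  ordinary matrix product
AB = A @ B

# double inner dot product: A ·· B = (b·c)(a·d)  →  tr(A·B)
dd = np.trace(A @ B)

# scalar product α = A ·· B^T  →  the component-wise sum Σ A_ij B_ij
alpha = np.trace(A @ B.T)
```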
Dot Products of a Second Rank Tensor and a Vector. The right dot product
of a second rank tensor A and a vector c is defined by
        A · c = ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) · c = Σ_{i=1}^{3} (b^(i) · c) a^(i)

For a single dyad this operation is

                                        a ⊗ b · c = a (b · c )


The left dot product is defined by
        c · A = c · ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) = Σ_{i=1}^{3} (c · a^(i)) b^(i)

The results of these operations are vectors. One can verify that

        A · c ≠ c · A ,        A · c = c · A^T

Cross Products of a Second Rank Tensor and a Vector. The right cross
product of a second rank tensor A and a vector c is defined by
        A × c = ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) × c = Σ_{i=1}^{3} a^(i) ⊗ (b^(i) × c)

The left cross product is defined by
        c × A = c × ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) = Σ_{i=1}^{3} (c × a^(i)) ⊗ b^(i)

The results of these operations are second rank tensors. It can be shown that

                                 A × c = −[c × A T ]T

Trace. The trace of a second rank tensor is defined by
        tr A = tr ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) ) = Σ_{i=1}^{3} a^(i) · b^(i)

By taking the trace of a second rank tensor the dyadic product is replaced by the dot
product. It can be shown that

        tr A = tr A T ,   tr ( A · B ) = tr (B · A ) = tr ( A T · B T ) = A ·· B

Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies
the following equality
                              A = AT
An alternative definition of the symmetric tensor can be given as follows. A second
rank tensor is said to be symmetric if for any vector c ≠ 0 the following equality is
valid
                                   c · A = A · c
An important example of a symmetric tensor is the unit or identity tensor I, which
is defined in such a way that for any vector c

                                        c·I = I·c =c


The representations of the identity tensor are

        I = e_k ⊗ e^k = e^k ⊗ e_k

for any basis e_k and its dual basis e^k, e^k · e_m = δ^k_m. For three orthonormal
vectors m, n and p


                             I = n ⊗n +m ⊗m + p ⊗ p

A symmetric second rank tensor P satisfying the condition P · P = P is called a
projector. Examples of projectors are

                         m ⊗ m,        n ⊗ n + p ⊗ p = I − m ⊗ m,

where m , n and p are orthonormal vectors. The result of the dot product of the tensor
m ⊗ m with any vector a is the projection of the vector a onto the line spanned on
the vector m , i.e. m ⊗ m · a = (a · m )m . The result of (n ⊗ n + p ⊗ p ) · a is the
projection of the vector a onto the plane spanned on the vectors n and p .
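The projector properties can be illustrated numerically (the orthonormal triple here is simply the Cartesian one, an illustrative choice):

```python
import numpy as np

# an orthonormal vector m; the plane projector follows from I − m⊗m
m = np.array([1.0, 0.0, 0.0])

P_line = np.outer(m, m)               # projector onto the line along m
P_plane = np.eye(3) - np.outer(m, m)  # projector onto the plane normal to m

a = np.array([2.0, 3.0, 4.0])
a_line = P_line @ a                   # (a · m) m
a_plane = P_plane @ a                 # component of a in the plane
```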
Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric
if it satisfies the following equality

                                           A = −A T

or if for any vector c
                                         c · A = −A · c
Any skew-symmetric tensor A can be represented by

                                       A = a ×I = I ×a

The vector a is called the associated vector. Any second rank tensor can be uniquely
decomposed into the symmetric and skew-symmetric parts
        A = (1/2)(A + A^T) + (1/2)(A − A^T) = A_1 + A_2 ,

        A_1 = (1/2)(A + A^T) ,    A_1 = A_1^T ,
        A_2 = (1/2)(A − A^T) ,    A_2 = −A_2^T
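The decomposition can be checked directly in coordinates (illustrative random matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

A1 = 0.5 * (A + A.T)    # symmetric part
A2 = 0.5 * (A - A.T)    # skew-symmetric part
```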
Vector Invariant. The vector invariant or “Gibbsian Cross” of a second rank ten-
sor A is defined by
        A_× = ( Σ_{i=1}^{3} a^(i) ⊗ b^(i) )_× = Σ_{i=1}^{3} a^(i) × b^(i)

The result of this operation is a vector. The vector invariant of a symmetric tensor is
the zero vector. The following identities can be verified

        (a × I)_× = −2a ,        a × I × b = b ⊗ a − (a · b) I
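A small NumPy sketch of the tensor a × I and the vector invariant; the helper names `skew` and `vector_invariant` are our own, not from the text:

```python
import numpy as np

def skew(a):
    """Coordinate matrix of the tensor a × I: (a × I) · b = a × b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def vector_invariant(A):
    """Gibbsian cross A_x = A_ij e_i × e_j in an orthonormal basis."""
    e = np.eye(3)
    return sum(A[i, j] * np.cross(e[i], e[j])
               for i in range(3) for j in range(3))

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 0.7, -1.1])

S = skew(a)                                     # associated vector is a
sym = 0.5 * (np.outer(a, b) + np.outer(b, a))   # a symmetric tensor
```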


Linear Transformations of Vectors. A vector valued function of a vector argument
f(a) is called linear if f(α_1 a_1 + α_2 a_2) = α_1 f(a_1) + α_2 f(a_2) for any
two vectors a_1 and a_2 and any two scalars α_1 and α_2. It can be shown that any linear
vector valued function can be represented by f(a) = A · a, where A is a second
rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor A is defined to
be a linear transformation of the vector space into itself.
Determinant and Inverse of a Second Rank Tensor. Let a, b and c be arbitrary
linearly independent vectors. The determinant of a second rank tensor A is
defined by
        det A = [(A · a) · ((A · b) × (A · c))] / [a · (b × c)]
The following identities can be verified

                 det( A T ) = det( A ),         det( A · B ) = det( A ) det(B )

The inverse of a second rank tensor, A^{−1}, is introduced as the solution of the
following equation
                        A^{−1} · A = A · A^{−1} = I
A is invertible if and only if det A ≠ 0. A tensor A with det A = 0 is called
singular. Examples of singular tensors are projectors.
Cayley-Hamilton Theorem. Any second rank tensor satisfies the following
equation
        A^3 − J_1(A) A^2 + J_2(A) A − J_3(A) I = 0 ,                     (A.1.7)
where A^2 = A · A, A^3 = A · A · A and

        J_1(A) = tr A ,    J_2(A) = (1/2)[(tr A)^2 − tr A^2] ,
                                                                         (A.1.8)
        J_3(A) = det A = (1/6)(tr A)^3 − (1/2) tr A tr A^2 + (1/3) tr A^3
The scalar-valued functions Ji ( A ) are called principal invariants of the tensor A .
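The invariants (A.1.8) and the Cayley-Hamilton identity (A.1.7) can be verified numerically for an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
I = np.eye(3)

A2 = A @ A
A3 = A2 @ A

J1 = np.trace(A)
J2 = 0.5 * (np.trace(A) ** 2 - np.trace(A2))
J3 = np.linalg.det(A)

# Cayley-Hamilton (A.1.7): A^3 − J1 A^2 + J2 A − J3 I = 0
residual = A3 - J1 * A2 + J2 * A - J3 * I
```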
Coordinates of Second Rank Tensors. Let e i be a basis and e k the dual basis.
Any two vectors a and b can be represented as follows

        a = a^i e_i = a_j e^j ,    b = b^l e_l = b_m e^m

A dyad a ⊗ b has the following representations

        a ⊗ b = a^i b^j e_i ⊗ e_j = a_i b_j e^i ⊗ e^j = a^i b_j e_i ⊗ e^j = a_i b^j e^i ⊗ e_j

For the representation of a second rank tensor A one of the following four bases can
be used
        e_i ⊗ e_j ,    e^i ⊗ e^j ,    e_i ⊗ e^j ,    e^i ⊗ e_j
With these bases one can write

        A = A^{ij} e_i ⊗ e_j = A_{ij} e^i ⊗ e^j = A^i_{∗j} e_i ⊗ e^j = A_i^{∗j} e^i ⊗ e_j

For a selected basis the coordinates of a second rank tensor can be computed as
follows
        A^{ij} = e^i · A · e^j ,    A_{ij} = e_i · A · e_j ,
        A^i_{∗j} = e^i · A · e_j ,    A_i^{∗j} = e_i · A · e^j

Principal Values and Directions of Symmetric Second Rank Tensors.
Consider a dot product of a second rank tensor A and a unit vector n. The resulting
vector a = A · n differs in general from n both in length and in direction. However,
one can find those unit vectors n for which A · n is collinear with n, i.e.
only the length of n is changed. Such vectors can be found from the equation

        A · n = λ n    or    (A − λ I) · n = 0                           (A.1.9)

The unit vector n is called the principal vector and the scalar λ the principal value
of the tensor A . Let A be a symmetric tensor. In this case the principal values are
real numbers and there exist at least three mutually orthogonal principal vectors.
The principal values can be found as roots of the characteristic polynomial

        det(A − λ I) = −λ^3 + J_1(A) λ^2 − J_2(A) λ + J_3(A) = 0

The principal values are specified by λ_I, λ_II, λ_III. For known principal values and
principal directions the second rank tensor can be represented as follows (spectral
representation)

        A = λ_I n_I ⊗ n_I + λ_II n_II ⊗ n_II + λ_III n_III ⊗ n_III
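In coordinates the spectral representation is the eigendecomposition of the symmetric coordinate matrix; a sketch using `numpy.linalg.eigh` (illustrative matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)                 # a symmetric second rank tensor

lam, N = np.linalg.eigh(A)          # principal values and principal directions
# spectral representation: A = Σ λ_k n_k ⊗ n_k
A_rec = sum(lam[k] * np.outer(N[:, k], N[:, k]) for k in range(3))
```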

Orthogonal Tensors. A second rank tensor Q is said to be orthogonal if it sat-
isfies the equation Q T · Q = I . If Q operates on a vector its length remains un-
changed, i.e. let b = Q · a , then

                     |b |2 = b · b = a · Q T · Q · a = a · a = |a |2

Furthermore, the orthogonal tensor does not change the scalar product of two arbi-
trary vectors. For two vectors a and b as well as a ′ = Q · a and b ′ = Q · b one can
calculate
                           a′ · b′ = a · QT · Q · b = a · b
From the definition of the orthogonal tensor it follows that

        Q^T = Q^{−1} ,    Q^T · Q = Q · Q^T = I ,
        det(Q · Q^T) = (det Q)^2 = det I = 1    ⇒    det Q = ±1

Orthogonal tensors with det Q = 1 are called proper orthogonal or rotation tensors.
The rotation tensors are widely used in the rigid body dynamics, e.g. [333], and in
the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either


the rotation tensor or the composition of the rotation (proper orthogonal tensor) and
the tensor − I . Let P be a rotation tensor, det P = 1, then an orthogonal tensor Q
with det Q = −1 can be composed by

        Q = (−I) · P = P · (−I) ,    det Q = det(−I) det P = −1

For any two orthogonal tensors Q 1 and Q 2 the composition Q 3 = Q 1 · Q 2 is the or-
thogonal tensor, too. This property is used in the theory of symmetry and symmetry
groups, e.g. [232, 331]. Two important examples for orthogonal tensors are
• rotation tensor about a fixed axis
        Q(ψ m) = m ⊗ m + cos ψ (I − m ⊗ m) + sin ψ m × I ,
        −π < ψ < π ,    det Q = 1 ,

  where the unit vector m represents the axis and ψ is the angle of rotation,
• reflection tensor
        Q = I − 2 n ⊗ n ,    det Q = −1 ,
  where the unit vector n represents a normal to the mirror plane.
One can prove the following identities [334]

         (Q · a ) × (Q · b ) = det Q Q · (a × b )
                                                                                 (A.1.10)
         Q · (a × Q T ) = Q · (a × I ) · Q T = det Q [(Q · a ) × I ]
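In matrix form the rotation tensor about a fixed axis is the familiar Rodrigues-type rotation matrix. A sketch (the function name `rotation` is our own choice):

```python
import numpy as np

def rotation(m, psi):
    """Q(ψ m) = m⊗m + cos ψ (I − m⊗m) + sin ψ m × I for a unit vector m."""
    I = np.eye(3)
    M = np.outer(m, m)
    K = np.array([[0.0, -m[2], m[1]],   # coordinate matrix of m × I
                  [m[2], 0.0, -m[0]],
                  [-m[1], m[0], 0.0]])
    return M + np.cos(psi) * (I - M) + np.sin(psi) * K

m = np.array([0.0, 0.0, 1.0])
Q = rotation(m, np.pi / 2)              # quarter turn about the x3-axis

n = np.array([1.0, 0.0, 0.0])
R = np.eye(3) - 2.0 * np.outer(n, n)    # reflection tensor I − 2 n⊗n
```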


A.2 Elements of Tensor Analysis

A.2.1 Coordinate Systems
The vector r characterizing the position of a point P can be represented by use of
the Cartesian coordinates xi as follows, Fig. A.5,

                     r ( x1 , x2 , x3 ) = x1 e 1 + x2 e 2 + x3 e 3 = x i e i

Instead of coordinates xi one can introduce any triple of curvilinear coordinates
q1 , q2 , q3 by means of one-to-one transformations

                  x k = x k ( q1 , q2 , q3 )   ⇔      qk = qk ( x1 , x2 , x3 )

It is assumed that the above transformations are continuous and continuously
differentiable as many times as necessary, and that the Jacobians

        det(∂x^k/∂q^i) ≠ 0 ,    det(∂q^i/∂x^k) ≠ 0

must be valid.

Figure A.5 Cartesian and curvilinear coordinates

With these assumptions the position vector can be considered as a
function of the curvilinear coordinates q^i, i.e. r = r(q^1, q^2, q^3). Surfaces q^1 = const,
q^2 = const, and q^3 = const, Fig. A.5, are called coordinate surfaces. For given
fixed values q^2 = q^2_∗ and q^3 = q^3_∗ a curve can be obtained along which only q^1
varies. This curve is called the q^1-coordinate line, Fig. A.5. Analogously, one can
obtain the q^2- and q^3-coordinate lines. The partial derivatives of the position vector
with respect to the selected coordinates

        r_1 = ∂r/∂q^1 ,    r_2 = ∂r/∂q^2 ,    r_3 = ∂r/∂q^3 ,    r_1 · (r_2 × r_3) ≠ 0

define the tangential vectors to the coordinate lines at the point P, Fig. A.5. The
vectors r_i are used as the local basis at the point P. By use of (A.1.4) the dual basis
r^k can be introduced. The vector dr connecting the point P with a point P′ in the
differential neighborhood of P is defined by

        dr = (∂r/∂q^1) dq^1 + (∂r/∂q^2) dq^2 + (∂r/∂q^3) dq^3 = r_k dq^k

The square of the arc length of the line element in the differential neighborhood of
P is calculated by

                  ds² = dr · dr = (r_i dq^i) · (r_k dq^k) = g_ik dq^i dq^k,

where g_ik ≡ r_i · r_k are the so-called covariant components of the metric tensor.
With g_ik one can represent the basis vectors r_i by the dual basis vectors r^k as follows

                            r_i = (r_i · r_k) r^k = g_ik r^k


Similarly
                  r^i = (r^i · r^k) r_k = g^ik r_k,        g^ik ≡ r^i · r^k,
where g^ik are termed contravariant components of the metric tensor. For the selected
bases r_i and r^k the second rank unit tensor has the following representations

            I = r^i ⊗ r_i = g_ik r^i ⊗ r^k = g^ik r_i ⊗ r_k = r_i ⊗ r^i
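These relations between the two bases and the metric coefficients are easy to probe numerically. The following sketch (an illustration, not taken from the book) computes the covariant basis, the metric, and the dual basis for cylindrical coordinates; the evaluation point is an arbitrary choice.

```python
import numpy as np

# A numerical sketch: covariant basis r_i = dr/dq^i, metric g_ik and dual
# basis r^i = g^ik r_k for cylindrical coordinates q1 = rho, q2 = phi, q3 = z.
def position(q):
    rho, phi, z = q
    return np.array([rho * np.cos(phi), rho * np.sin(phi), z])

q0 = np.array([2.0, 0.7, 1.5])     # arbitrary evaluation point
h = 1e-6
# tangent (covariant) basis vectors via central differences; row i is r_i
r_cov = np.array([(position(q0 + h * e) - position(q0 - h * e)) / (2 * h)
                  for e in np.eye(3)])

g_cov = r_cov @ r_cov.T            # covariant metric g_ik = r_i . r_k
g_con = np.linalg.inv(g_cov)       # contravariant metric g^ik
r_con = g_con @ r_cov              # dual basis r^i = g^ik r_k

# duality r^i . r_k = delta^i_k and the unit tensor I = r_i (x) r^i
assert np.allclose(r_con @ r_cov.T, np.eye(3), atol=1e-4)
I = sum(np.outer(r_cov[i], r_con[i]) for i in range(3))
assert np.allclose(I, np.eye(3), atol=1e-4)
```

For cylindrical coordinates the computed metric is diag(1, ρ², 1), so the dual basis differs from the tangent basis only in the circumferential direction.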


A.2.2 The Hamilton (Nabla) Operator
A scalar field is a function which assigns a scalar to each spatial point P for the
domain of definition. Let us consider a scalar field ϕ(r ) = ϕ(q1 , q2 , q3 ). The total
differential of ϕ by moving from a point P to a point P′ in the differential neighbor-
hood is
              dϕ = ∂ϕ/∂q^1 dq^1 + ∂ϕ/∂q^2 dq^2 + ∂ϕ/∂q^3 dq^3 = ∂ϕ/∂q^k dq^k

Taking into account that dq^k = dr · r^k

                          dϕ = dr · r^k ∂ϕ/∂q^k = dr · ∇ϕ

The vector ∇ϕ is called the gradient of the scalar field ϕ and the invariant operator
∇ (the Hamilton or nabla operator) is defined by

                                  ∇ = r^k ∂/∂q^k

For a vector field a(r) one may write

      da = (dr · r^k) ∂a/∂q^k = dr · (r^k ⊗ ∂a/∂q^k) = dr · ∇ ⊗ a = (∇ ⊗ a)^T · dr,

                              ∇ ⊗ a = r^k ⊗ ∂a/∂q^k

The gradient of a vector field is a second rank tensor. The operation ∇ can be applied
to tensors of any rank. For vectors and tensors the following additional operations
are defined

                            div a ≡ ∇ · a = r^k · ∂a/∂q^k

                            rot a ≡ ∇ × a = r^k × ∂a/∂q^k
The following identities can be verified

              ∇ ⊗ r = r^k ⊗ ∂r/∂q^k = r^k ⊗ r_k = I,        ∇ · r = 3


For a scalar α, a vector a and for a second rank tensor A the following identities are
valid
  ∇(αa) = r^k ⊗ ∂(αa)/∂q^k = (r^k ∂α/∂q^k) ⊗ a + α r^k ⊗ ∂a/∂q^k
        = (∇α) ⊗ a + α ∇ ⊗ a,                                            (A.2.1)

  ∇ · (A · a) = r^k · ∂(A · a)/∂q^k = r^k · ∂A/∂q^k · a + r^k · A · ∂a/∂q^k
              = (∇ · A) · a + A ·· (∂a/∂q^k ⊗ r^k)                       (A.2.2)
              = (∇ · A) · a + A ·· (∇ ⊗ a)^T
Here the identity (A.1.6) is used. For a second rank tensor A and a position vector r
one can prove the following identity

  ∇ · (A × r) = r^k · ∂(A × r)/∂q^k = r^k · ∂A/∂q^k × r + r^k · A × ∂r/∂q^k
              = (∇ · A) × r + r^k · A × r_k = (∇ · A) × r − A_×          (A.2.3)

Here we used the definition of the vector invariant as follows

           A_× = (r_k ⊗ (r^k · A))_× = r_k × (r^k · A) = −r^k · A × r_k


A.2.3 Integral Theorems
Let ϕ(r ), a (r ) and A (r ) be scalar, vector and second rank tensor fields. Let V be
the volume of a bounded domain with a regular surface A(V ) and n be the outer
unit normal to the surface at r . The integral theorems can be summarized as follows
– Gradient Theorems

      ∫_V ∇ϕ dV = ∫_{A(V)} n ϕ dA,        ∫_V ∇ ⊗ a dV = ∫_{A(V)} n ⊗ a dA,

      ∫_V ∇ ⊗ A dV = ∫_{A(V)} n ⊗ A dA

– Divergence Theorems

      ∫_V ∇ · a dV = ∫_{A(V)} n · a dA,        ∫_V ∇ · A dV = ∫_{A(V)} n · A dA

– Curl Theorems

      ∫_V ∇ × a dV = ∫_{A(V)} n × a dA,        ∫_V ∇ × A dV = ∫_{A(V)} n × A dA


A.2.4 Scalar-Valued Functions of Vectors and Second
      Rank Tensors
Let ψ be a scalar valued function of a vector a and a second rank tensor A , i.e.
ψ = ψ(a , A ). Introducing a basis e i the function ψ can be represented as follows

                  ψ(a, A) = ψ(a_i e_i, A_ij e_i ⊗ e_j) = ψ(a_i, A_ij)

The partial derivatives of ψ with respect to a and A are defined according to the
following rule

          dψ = (∂ψ/∂a_i) da_i + (∂ψ/∂A_ij) dA_ij
             = da · (∂ψ/∂a_i e_i) + dA ·· (∂ψ/∂A_ij e_j ⊗ e_i)           (A.2.4)

In the coordinate-free form the above rule can be rewritten as follows

      dψ = da · ∂ψ/∂a + dA ·· (∂ψ/∂A)^T = da · ψ,a + dA ·· (ψ,A)^T       (A.2.5)

with
      ψ,a ≡ ∂ψ/∂a = (∂ψ/∂a_i) e_i,        ψ,A ≡ ∂ψ/∂A = (∂ψ/∂A_ij) e_i ⊗ e_j
One can verify that ψ,a and ψ,A are independent of the choice of the basis. One
may prove the following formulae for the derivatives of the principal invariants of a
second rank tensor A

      J1(A),A = I,        J1(A²),A = 2A^T,        J1(A³),A = 3(A²)^T,
      J2(A),A = J1(A) I − A^T,                                           (A.2.6)
      J3(A),A = (A²)^T − J1(A) A^T + J2(A) I = J3(A)(A^T)^{−1}


A.3 Orthogonal Transformations and Orthogonal
    Invariants
An application of the theory of tensor functions is to find a basic set of scalar invari-
ants for a given group of symmetry transformations, such that each invariant relative
to the same group is expressible as a single-valued function of the basic set. The ba-
sic set of invariants is called a functional basis. To obtain a compact representation
of invariants, it is required that the functional basis be irreducible in the sense that
removing any invariant from the basis would make a complete representation of all
the invariants impossible.
    Such a problem arises in the formulation of constitutive equations for a given
group of material symmetries. For example, the strain energy density of an elastic
non-polar material is a scalar valued function of the second rank symmetric strain
tensor. In the theory of the Cosserat continuum two strain measures are introduced,
where the first strain measure is the polar tensor while the second one is the axial
tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of
two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find
a minimum set of functionally independent invariants for the considered tensorial
arguments.
    For the theory of tensor functions we refer to [71]. Representations of tensor
functions are reviewed in [280, 330]. An orthogonal transformation of a scalar α, a
vector a and a second rank tensor A is defined by [25, 332]

   α′ ≡ (det Q)^ζ α,    a′ ≡ (det Q)^ζ Q · a,    A′ ≡ (det Q)^ζ Q · A · Q^T,    (A.3.1)

where Q is an orthogonal tensor, i.e. Q · Q T = I , det Q = ±1, I is the second
rank unit tensor, ζ = 0 for absolute (polar) scalars, vectors and tensors and ζ = 1
for axial ones. An example of the axial scalar is the mixed product of three polar
vectors, i.e. α = a · (b × c ). A typical example of the axial vector is the cross product
of two polar vectors, i.e. c = a × b . An example of the second rank axial tensor
is the skew-symmetric tensor W = a × I , where a is a polar vector. Consider a
group of orthogonal transformations S (e.g., the material symmetry transformations)
characterized by a set of orthogonal tensors Q . A scalar-valued function of a second
rank tensor f = f(A) is called an orthogonal invariant under the group S if

                    ∀Q ∈ S :        f(A′) = (det Q)^η f(A),              (A.3.2)

where η = 0 if values of f are absolute scalars and η = 1 if values of f are axial
scalars.
    Any second rank tensor B can be decomposed into the symmetric and the skew-
symmetric part, i.e. B = A + a × I , where A is the symmetric tensor and a is the
associated vector. Therefore f (B ) = f ( A , a ). If B is a polar (axial) tensor, then a is
an axial (polar) vector. For the set of second rank tensors and vectors the definition
of an orthogonal invariant (A.3.2) can be generalized as follows

   ∀Q ∈ S :    f(A′_1, A′_2, . . . , A′_n, a′_1, a′_2, . . . , a′_k)
                  = (det Q)^η f(A_1, A_2, . . . , A_n, a_1, a_2, . . . , a_k),    A_i = A_i^T
                                                                         (A.3.3)


A.3.1 Invariants for the Full Orthogonal Group
In [335] orthogonal invariants for different sets of second rank tensors and vectors
with respect to the full orthogonal group are presented. It is shown that orthogonal
invariants are integrals of a generic partial differential equation (basic equations for
invariants). Let us present the following two examples
– Orthogonal invariants of a symmetric second rank tensor A are
                                 Ik = tr A k ,   k = 1, 2, 3
  Instead of Ik it is possible to use the principal invariants Jk defined by (A.1.8).
– Orthogonal invariants of a symmetric second rank tensor A and a vector a are

         I_k = tr A^k,  k = 1, 2, 3,    I_4 = a · a,    I_5 = a · A · a,
                                                                         (A.3.4)
         I_6 = a · A² · a,    I_7 = a · A² · (a × A · a)
  In the above set of invariants only 6 are functionally independent. The relation
  between the invariants (so-called syzygy, [71]) can be formulated as follows
            I_7² = det │ I_4         I_5         I_6      │
                       │ I_5         I_6     a · A³ · a   │ ,            (A.3.5)
                       │ I_6     a · A³ · a  a · A⁴ · a   │

  where a · A 3 · a and a · A 4 · a can be expressed by Il , l = 1, . . . 6 applying the
  Cayley-Hamilton theorem (A.1.7).
The set of invariants for a symmetric second rank tensor A and a vector a can be
applied for a non-symmetric second rank tensor B since it can be represented by
B = A + a × I, A = AT.
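These statements are easy to probe numerically. The following sketch (an illustration, not from the book) checks the invariance of (A.3.4) under a random rotation and the syzygy (A.3.5), written as the Gram determinant of the vectors a, A · a and A² · a.

```python
import numpy as np

# Invariance of I1..I7 from (A.3.4) under a proper orthogonal Q, and the
# syzygy (A.3.5), for random symmetric A and vector a.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
A = S + S.T                                    # symmetric second rank tensor
a = rng.standard_normal(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:                       # force det Q = +1 (a rotation)
    Q[:, 0] *= -1

def invariants(A, a):
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),
                     a @ a, a @ A @ a, a @ A2 @ a,
                     a @ A2 @ np.cross(a, A @ a)])

assert np.allclose(invariants(A, a), invariants(Q @ A @ Q.T, Q @ a))

# syzygy: I7^2 equals the Gram determinant of a, A.a, A^2.a
I = invariants(A, a)
A3, A4 = A @ A @ A, A @ A @ A @ A
M = np.array([[I[3], I[4],       I[5]],
              [I[4], I[5],       a @ A3 @ a],
              [I[5], a @ A3 @ a, a @ A4 @ a]])
assert np.isclose(I[6] ** 2, np.linalg.det(M))
```

The determinant form follows because I_7 is the mixed product of a, A · a and A² · a, and the square of a mixed product is the Gram determinant of the three vectors.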

A.3.2 Invariants for the Transverse Isotropy Group
Transverse isotropy is an important type of the symmetry transformation due to a
variety of applications. Transverse isotropy is usually assumed in constitutive mod-
eling of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], direction-
ally solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric
materials, e.g. [285]. The invariants and generating sets for tensor-valued functions
with respect to different cases of transverse isotropy are discussed in [79, 328] (see
also relevant references therein). In what follows we analyze the problem of a func-
tional basis within the theory of linear first order partial differential equations rather
than the algebra of polynomials. We extend the idea proposed in [335] for the in-
variants with respect to the full orthogonal group to the case of transverse isotropy.
The invariants will be found as integrals of the generic partial differential equa-
tions. Although a functional basis formed by these invariants does not include any
redundant element, functional relations between them may exist. It may therefore
be useful to find simple forms of such relations. We show that the proposed
approach may supply results in a direct, natural manner.


Invariants for a Single Second Rank Symmetric Tensor. Consider the
proper orthogonal tensor which represents a rotation about a fixed axis, i.e.

   Q(ϕm) = m ⊗ m + cos ϕ (I − m ⊗ m) + sin ϕ m × I,      det Q(ϕm) = 1
                                                                         (A.3.6)
where m is assumed to be a constant unit vector (axis of rotation) and ϕ denotes
the angle of rotation about m . The symmetry transformation defined by this tensor
corresponds to the transverse isotropy, whereby five different cases are possible, e.g.
[299, 331]. Let us find scalar-valued functions of a second rank symmetric tensor A
satisfying the condition

f(A′(ϕ)) = f(Q(ϕm) · A · Q^T(ϕm)) = f(A),      A′(ϕ) ≡ Q(ϕm) · A · Q^T(ϕm)
                                                                         (A.3.7)
Equation (A.3.7) must be valid for any angle of rotation ϕ. In (A.3.7) only the left-
hand side depends on ϕ. Therefore its derivative with respect to ϕ can be set to zero,
i.e.
                      df/dϕ = dA′/dϕ ·· (∂f/∂A′)^T = 0                   (A.3.8)
The derivative of A ′ with respect to ϕ can be calculated by the following rules

      dA′(ϕ) = dQ(ϕm) · A · Q^T(ϕm) + Q(ϕm) · A · dQ^T(ϕm),
      dQ(ϕm) = m × Q(ϕm) dϕ    ⇒    dQ^T(ϕm) = −Q^T(ϕm) × m dϕ
                                                                         (A.3.9)
By inserting the above equations into (A.3.8) we obtain
                      (m × A − A × m) ·· (∂f/∂A)^T = 0                   (A.3.10)
Equation (A.3.10) is classified in [92] as a linear homogeneous first order par-
tial differential equation. The characteristic system of (A.3.10) is

                          dA/ds = m × A − A × m                          (A.3.11)
Any system of n linear ordinary differential equations has no more than n − 1
functionally independent integrals [92]. By introducing a basis e_i the tensor A can
be written in the form A = A_ij e_i ⊗ e_j and (A.3.11) is a system of six ordi-
nary differential equations with respect to the coordinates A_ij. The five integrals of
(A.3.11) may be written as follows

                              gi ( A ) = c i ,   i = 1, 2, . . . , 5,

where ci are integration constants. Any function of the five integrals gi is the so-
lution of the partial differential equation (A.3.10). Therefore the five integrals gi
represent the invariants of the symmetric tensor A with respect to the symmetry
transformation (A.3.6). The solutions of (A.3.11) are


              A^k(s) = Q(sm) · A_0^k · Q^T(sm),    k = 1, 2, 3,          (A.3.12)
where A 0 is the initial condition. In order to find the integrals, the variable s must
be eliminated from (A.3.12). Taking into account the following identities
      tr(Q · A^k · Q^T) = tr(Q^T · Q · A^k) = tr A^k,      m · Q(sm) = m,
                                                                         (A.3.13)
      (Q · a) × (Q · b) = (det Q) Q · (a × b)
and using the notation Q_m ≡ Q(sm) the integrals can be found as follows

      tr(A^k)              = tr(A_0^k),    k = 1, 2, 3,
      m · A^l · m          = m · Q_m · A_0^l · Q_m^T · m
                           = m · A_0^l · m,    l = 1, 2,
      m · A² · (m × A · m) = m · Q_m · A_0² · Q_m^T · (m × Q_m · A_0 · Q_m^T · m)
                           = m · A_0² · Q_m^T · ((Q_m · m) × (Q_m · A_0 · m))
                           = m · A_0² · (m × A_0 · m)
                                                                         (A.3.14)
As a result we can formulate the six invariants of the tensor A with respect to the
symmetry transformation (A.3.6) as follows
          I_k = tr(A^k),  k = 1, 2, 3,    I_4 = m · A · m,
                                                                         (A.3.15)
          I_5 = m · A² · m,    I_6 = m · A² · (m × A · m)
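The set (A.3.15) can be checked numerically. The sketch below (an illustration, not from the book) verifies that all six quantities are unchanged by a rotation Q(ϕm) about the axis m, while I_6 flips its sign under the rotation Q(πn) = 2n ⊗ n − I with n orthogonal to m.

```python
import numpy as np

# Invariance of (A.3.15) under rotations about m, and the sign change of I6
# under Q_n = 2 n(x)n - I with n . m = 0.
def rotation(m, phi):
    M = np.outer(m, m)
    W = np.array([[0, -m[2], m[1]],
                  [m[2], 0, -m[0]],
                  [-m[1], m[0], 0]])           # matrix of the tensor m x I
    return M + np.cos(phi) * (np.eye(3) - M) + np.sin(phi) * W

def invariants(A, m):
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),
                     m @ A @ m, m @ A2 @ m,
                     m @ A2 @ np.cross(m, A @ m)])

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
A = S + S.T
m = np.array([0.0, 0.0, 1.0])

Q = rotation(m, 0.9)                           # arbitrary angle of rotation
assert np.allclose(invariants(Q @ A @ Q.T, m), invariants(A, m))

n = np.array([1.0, 0.0, 0.0])                  # n . m = 0
Qn = 2 * np.outer(n, n) - np.eye(3)
Ia, Ib = invariants(A, m), invariants(Qn @ A @ Qn.T, m)
assert np.allclose(Ia[:5], Ib[:5]) and np.isclose(Ib[5], -Ia[5])
```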
The invariants with respect to various symmetry transformations are discussed in
[79]. For the case of the transverse isotropy six invariants are derived in [79] by the
use of another approach. In this sense our result coincides with the result given
in [79]. However, from our derivations it follows that only five invariants listed
in (A.3.15) are functionally independent. Taking into account that I6 is the mixed
product of vectors m , A · m and A 2 · m the relation between the invariants can be
written down as follows
                                                                
            I_6² = det │  1          I_4          I_5      │
                       │ I_4         I_5      m · A³ · m   │             (A.3.16)
                       │ I_5     m · A³ · m   m · A⁴ · m   │


One can verify that m · A³ · m and m · A⁴ · m are transversely isotropic invari-
ants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be
uniquely expressed by I_1, I_2, . . . , I_5 in the following way [54]

      m · A³ · m = J_1 I_5 − J_2 I_4 + J_3,
      m · A⁴ · m = (J_1² − J_2) I_5 − (J_1 J_2 − J_3) I_4 + J_1 J_3,
where J1 , J2 and J3 are the principal invariants of A defined by (A.1.8). Let us
note that the invariant I6 cannot be dropped. In order to verify this, it is enough to
consider two different tensors

                          A    and    B = Q_n · A · Q_n^T,

where

      Q_n ≡ Q(πn) = 2n ⊗ n − I,    n · n = 1,    n · m = 0,    det Q_n = 1

One can prove that the tensor A and the tensor B have the same invariants
I1 , I2 , . . . , I5 . Taking into account that m · Q n = −m and applying the last iden-
tity in (A.3.13) we may write

        I_6(B) = m · B² · (m × B · m) = m · A² · Q_n^T · (m × Q_n · A · m)
               = −m · A² · (m × A · m) = −I_6(A)
We observe that the only difference between the two considered tensors is the sign
of I_6. Therefore, the triples of vectors m, A · m, A² · m and m, B · m, B² · m have
different orientations and cannot be mapped onto each other by a rotation. It should be noted that
the functional relation (A.3.16) would in no way imply that the invariant I6 should
be “dependent” and hence “redundant”, namely should be removed from the basis
(A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of
I6 .
     To describe yielding and failure of oriented solids a dyad M = v ⊗ v has been
used in [53, 75], where the vector v specifies a privileged direction. A plastic po-
tential is assumed to be an isotropic function of the symmetric Cauchy stress tensor
and the tensor generator M. Applying the representation of isotropic functions an
integrity basis including ten invariants was found. In the special case v = m the
number of invariants reduces to the five I_1, I_2, . . . , I_5 defined by (A.3.15). Further de-
tails of this approach and applications in continuum mechanics are given in [59, 71].
However, the problem statement to find an integrity basis of a symmetric tensor A
and a dyad M , i.e. to find scalar valued functions f ( A , M ) satisfying the condition

                f (Q · A · Q T , Q · M · Q T ) = (det Q )η f ( A , M ),
                                                                                        (A.3.17)
                ∀Q ,     Q · QT = I,       det Q = ±1

essentially differs from the problem statement (A.3.7). In order to show this we
take into account that the symmetry group of a dyad M , i.e. the set of orthogonal
solutions of the equation Q · M · Q T = M includes the following elements

      Q_{1,2} = ±I,
      Q_3     = Q(ϕm),    m = v / |v|,                                   (A.3.18)
      Q_4     = Q(πn) = 2n ⊗ n − I,    n · n = 1,    n · v = 0,

where Q ( ϕm ) is defined by (A.3.6). The solutions of the problem (A.3.17) are
           m
automatically the solutions of the following problem

            f(Q_i · A · Q_i^T, M) = (det Q_i)^η f(A, M),    i = 1, 2, 3, 4,


i.e. the problem to find the invariants of A relative to the symmetry group (A.3.18).
However, (A.3.18) includes many more symmetry elements compared to the prob-
lem statement (A.3.7).
     An alternative set of transversely isotropic invariants can be formulated by the
use of the following decomposition

        A = α m ⊗ m + β(I − m ⊗ m) + A_pD + t ⊗ m + m ⊗ t,               (A.3.19)

where α, β, A pD and t are projections of A . With the projectors P 1 = m ⊗ m and
P 2 = I − m ⊗ m we may write

          α    = m · A · m = tr(A · P_1),
          β    = ½ (tr A − m · A · m) = ½ tr(A · P_2),                   (A.3.20)
          A_pD = P_2 · A · P_2 − β P_2,
          t    = m · A · P_2
The decomposition (A.3.19) is the analogue to the following representation of a
vector a

 a = I · a = m ⊗ m · a + ( I − m ⊗ m ) · a = ψm + τ ,
                                              m               ψ = a · m,
                                                                       τ = P2 · a
                                                                          (A.3.21)
Decompositions of the type (A.3.19) are applied in [68, 79]. The projections intro-
duced in (A.3.20) have the following properties

         tr ( A pD ) = 0,   A pD · m = m · A pD = 0 ,     t·m = 0             (A.3.22)
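The decomposition can be checked directly. The following sketch (illustrative, not from the book) builds the projections (A.3.20) for a random symmetric tensor with m = e_3, reassembles A via (A.3.19), and verifies the properties (A.3.22).

```python
import numpy as np

# Projections (A.3.20) of a random symmetric A, reassembly via (A.3.19)
# and the properties (A.3.22), for m = e3.
rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3))
A = S + S.T
m = np.array([0.0, 0.0, 1.0])

P1 = np.outer(m, m)                 # projector m (x) m
P2 = np.eye(3) - P1                 # projector onto the plane of isotropy
alpha = m @ A @ m
beta = 0.5 * (np.trace(A) - alpha)
ApD = P2 @ A @ P2 - beta * P2
t = m @ A @ P2

# (A.3.19): the projections reassemble A
A_rec = alpha * P1 + beta * P2 + ApD + np.outer(t, m) + np.outer(m, t)
assert np.allclose(A_rec, A)

# properties (A.3.22)
assert np.isclose(np.trace(ApD), 0.0)
assert np.allclose(ApD @ m, 0.0) and np.allclose(m @ ApD, 0.0)
assert np.isclose(t @ m, 0.0)
```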

With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed to the
following system of equations
                      
                      dα/ds = 0,
                      dβ/ds = 0,
                      dA_pD/ds = m × A_pD − A_pD × m,                    (A.3.23)
                      dt/ds = m × t
From the first two equations we observe that α and β are transversely isotropic in-
variants. The third equation can be transformed to one scalar and one vector equation
as follows

      dA_pD/ds ·· A_pD = 0    ⇒    d(A_pD ·· A_pD)/ds = 0,        db/ds = m × b

with b ≡ A_pD · t. We observe that tr(A_pD²) = A_pD ·· A_pD is a transversely
isotropic invariant, too. Finally, we have to find the integrals of the following system

                              dt/ds = m × t,
                                                                         (A.3.24)
                              db/ds = m × b
The solutions of (A.3.24) are

                  t(s) = Q(sm) · t_0,        b(s) = Q(sm) · b_0,

where t 0 and b 0 are initial conditions. The vectors t and b belong to the plane of
isotropy, i.e. t · m = 0 and b · m = 0. Therefore, one can verify the following
integrals

   t · t = t_0 · t_0,    b · b = b_0 · b_0,    t · b = t_0 · b_0,
   (t × b) · m = (t_0 × b_0) · m                                         (A.3.25)
We found seven integrals, but only five of them are functionally independent. In
order to formulate the relation between the integrals we compute

                  b · b = t · A_pD² · t,        t · b = t · A_pD · t

For any plane tensor A p satisfying the equations A p · m = m · A p = 0 the Cayley-
Hamilton theorem can be formulated as follows, see e.g. [71]
          A_p² − (tr A_p) A_p + ½ [(tr A_p)² − tr(A_p²)] (I − m ⊗ m) = 0
Since tr A pD = 0 we have

      2 A_pD² = tr(A_pD²)(I − m ⊗ m),        t · A_pD² · t = ½ tr(A_pD²)(t · t)
Because tr(A_pD²) and t · t are already defined, the invariant b · b can be omitted.
The vector t × b is directed along the axis m. Therefore

                  t × b = γ m,        γ = (t × b) · m,
                  γ² = (t × b) · (t × b) = (t · t)(b · b) − (t · b)²

Now we can summarize six invariants and one relation between them as follows
$$\bar I_1 = \alpha, \quad \bar I_2 = \beta, \quad \bar I_3 = \frac{1}{2}\operatorname{tr}(\boldsymbol A_{pD}^2), \quad \bar I_4 = \boldsymbol t\cdot\boldsymbol t = \boldsymbol t\cdot\boldsymbol A\cdot\boldsymbol m,$$
$$\bar I_5 = \boldsymbol t\cdot\boldsymbol A_{pD}\cdot\boldsymbol t, \quad \bar I_6 = (\boldsymbol t\times\boldsymbol A_{pD}\cdot\boldsymbol t)\cdot\boldsymbol m, \qquad \bar I_6^2 = \bar I_4^2\bar I_3 - \bar I_5^2 \qquad (A.3.26)$$
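The relation between the six invariants can be confirmed numerically. In the following sketch (added for illustration, with assumed values of $\boldsymbol A_{pD}$ and of the plane vector $\boldsymbol t$) the identity $\bar I_6^2 = \bar I_4^2\bar I_3 - \bar I_5^2$ holds to machine precision:

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
ApD = np.array([[1.7, -0.4, 0.0],
                [-0.4, -1.7, 0.0],
                [0.0,  0.0, 0.0]])   # plane, symmetric, tr ApD = 0 (illustrative values)
t = np.array([0.3, -1.2, 0.0])       # t lies in the plane of isotropy

I3 = 0.5 * np.trace(ApD @ ApD)
I4 = t @ t
I5 = t @ ApD @ t
I6 = np.cross(t, ApD @ t) @ m

# the relation closing the list (A.3.26)
assert np.isclose(I6**2, I4**2 * I3 - I5**2)
```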

Let us assume that the symmetry transformation $\boldsymbol Q_n \equiv \boldsymbol Q(\pi\boldsymbol n)$ belongs to the symmetry group of the transverse isotropy, as is done in [71, 59]. In this case $f(\boldsymbol A') = f(\boldsymbol Q_n\cdot\boldsymbol A\cdot\boldsymbol Q_n^T) = f(\boldsymbol A)$ must be valid. With $\boldsymbol Q_n\cdot\boldsymbol m = -\boldsymbol m$ we can write
$$\alpha' = \alpha, \quad \beta' = \beta, \quad \boldsymbol A'_{pD} = \boldsymbol Q_n\cdot\boldsymbol A_{pD}\cdot\boldsymbol Q_n^T, \quad \boldsymbol t' = -\boldsymbol Q_n\cdot\boldsymbol t$$


Therefore in (A.3.26) $\bar I'_k = \bar I_k$, $k = 1, 2, \ldots, 5$ and
$$\bar I'_6 = (\boldsymbol t'\times\boldsymbol A'_{pD}\cdot\boldsymbol t')\cdot\boldsymbol m = \left[(\boldsymbol Q_n\cdot\boldsymbol t)\times(\boldsymbol Q_n\cdot\boldsymbol A_{pD}\cdot\boldsymbol t)\right]\cdot\boldsymbol m = (\boldsymbol t\times\boldsymbol A_{pD}\cdot\boldsymbol t)\cdot\boldsymbol Q_n\cdot\boldsymbol m = -(\boldsymbol t\times\boldsymbol A_{pD}\cdot\boldsymbol t)\cdot\boldsymbol m = -\bar I_6$$
Consequently
$$f(\boldsymbol A') = f(\bar I'_1, \bar I'_2, \ldots, \bar I'_5, \bar I'_6) = f(\bar I_1, \bar I_2, \ldots, \bar I_5, -\bar I_6) \;\Rightarrow\; f(\boldsymbol A) = f(\bar I_1, \bar I_2, \ldots, \bar I_5, \bar I_6^2)$$
and $\bar I_6^2$ can be omitted due to the last relation in (A.3.26).
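This sign change can be reproduced numerically. The sketch below is an illustration added here; it assumes the definitions $\boldsymbol t = (\boldsymbol I - \boldsymbol m\otimes\boldsymbol m)\cdot\boldsymbol A\cdot\boldsymbol m$ and $\boldsymbol A_{pD} = \boldsymbol A_p - \frac12(\operatorname{tr}\boldsymbol A_p)(\boldsymbol I - \boldsymbol m\otimes\boldsymbol m)$ introduced before (A.3.24), which are not restated in this excerpt:

```python
import numpy as np

def rot(n, phi):
    """Rodrigues formula: rotation tensor Q(phi*n) about the unit axis n."""
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.cos(phi) * np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * np.outer(n, n)

rng = np.random.default_rng(1)
m = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 0.0])            # n lies in the plane of isotropy
Qn = rot(n, np.pi)                        # Q_n = Q(pi n),  Qn . m = -m

S = rng.standard_normal((3, 3))
A = 0.5 * (S + S.T)                       # arbitrary symmetric A
P = np.eye(3) - np.outer(m, m)

def invariants(A):
    Ap  = P @ A @ P                       # plane part (assumed definition)
    ApD = Ap - 0.5 * np.trace(Ap) * P     # plane deviator (assumed definition)
    t   = P @ A @ m                       # in-plane part of A . m (assumed definition)
    return (0.5 * np.trace(ApD @ ApD), t @ t, t @ ApD @ t,
            np.cross(t, ApD @ t) @ m)

I3, I4, I5, I6 = invariants(A)
I3p, I4p, I5p, I6p = invariants(Qn @ A @ Qn.T)
assert np.allclose([I3p, I4p, I5p], [I3, I4, I5])   # invariant under Q_n
assert np.isclose(I6p, -I6)                          # I6 flips its sign
```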
Invariants for a Set of Vectors and Second Rank Tensors. Setting $\boldsymbol Q = \boldsymbol Q(\varphi\boldsymbol m)$ in (A.3.3) and taking the derivative of (A.3.3) with respect to $\varphi$ results in the following generic partial differential equation
$$\sum_{i=1}^{n}\left(\frac{\partial f}{\partial \boldsymbol A_i}\right)^T\cdot\cdot\,(\boldsymbol m\times\boldsymbol A_i - \boldsymbol A_i\times\boldsymbol m) + \sum_{j=1}^{k}\frac{\partial f}{\partial \boldsymbol a_j}\cdot(\boldsymbol m\times\boldsymbol a_j) = 0 \qquad (A.3.27)$$

The characteristic system of (A.3.27) is
$$\frac{d\boldsymbol A_i}{ds} = \boldsymbol m\times\boldsymbol A_i - \boldsymbol A_i\times\boldsymbol m, \quad i = 1,2,\ldots,n, \qquad \frac{d\boldsymbol a_j}{ds} = \boldsymbol m\times\boldsymbol a_j, \quad j = 1,2,\ldots,k \qquad (A.3.28)$$
The above system is a system of $N$ ordinary differential equations, where $N = 6n + 3k$ is the total number of coordinates of $\boldsymbol A_i$ and $\boldsymbol a_j$ for a selected basis. The system (A.3.28) has no more than $N-1$ functionally independent integrals. Therefore we can formulate:
Theorem A.3.1. A set of $n$ symmetric second rank tensors and $k$ vectors with $N = 6n + 3k$ independent coordinates for a given basis has not more than $N-1$ functionally independent invariants for $N > 1$ and one invariant for $N = 1$ with respect to the symmetry transformation $\boldsymbol Q(\varphi\boldsymbol m)$.
In essence, the proof of this theorem is given within the theory of linear first order
partial differential equations [92].
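The characteristic system (A.3.28) is solved by $\boldsymbol A_i(s) = \boldsymbol Q(s\boldsymbol m)\cdot\boldsymbol A_i^0\cdot\boldsymbol Q^T(s\boldsymbol m)$ and $\boldsymbol a_j(s) = \boldsymbol Q(s\boldsymbol m)\cdot\boldsymbol a_j^0$, in analogy to the solutions of (A.3.24). A finite-difference check of this statement for one tensor and one vector, added here as an illustrative sketch, reads:

```python
import numpy as np

def rot(mv, phi):
    """Rodrigues formula: rotation tensor Q(phi*m) about the unit axis m."""
    K = np.array([[0, -mv[2], mv[1]], [mv[2], 0, -mv[0]], [-mv[1], mv[0], 0]])
    return np.cos(phi) * np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * np.outer(mv, mv)

m = np.array([0.0, 0.0, 1.0])
Mx = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])   # the skew tensor of m x (.) for m = e3

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
A0 = 0.5 * (S + S.T)
a0 = rng.standard_normal(3)

# compare finite-difference derivatives of A(s) = Q.A0.Q^T and a(s) = Q.a0
# with the right-hand sides of (A.3.28)
s, h = 0.8, 1e-6
Q, Qh = rot(m, s), rot(m, s + h)
A, Ah = Q @ A0 @ Q.T, Qh @ A0 @ Qh.T
a, ah = Q @ a0, Qh @ a0
assert np.allclose((Ah - A) / h, Mx @ A - A @ Mx, atol=1e-4)   # dA/ds = m x A - A x m
assert np.allclose((ah - a) / h, np.cross(m, a), atol=1e-4)    # da/ds = m x a
```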
As an example let us consider the set of a symmetric second rank tensor $\boldsymbol A$ and a vector $\boldsymbol a$. This set has eight independent invariants. As a visual aid it is useful to keep in mind that the considered set is equivalent to
$$\boldsymbol A, \quad \boldsymbol a, \quad \boldsymbol A\cdot\boldsymbol a, \quad \boldsymbol A^2\cdot\boldsymbol a$$
Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole. The generic equation (A.3.27) takes the form
$$\left(\frac{\partial f}{\partial \boldsymbol A}\right)^T\cdot\cdot\,(\boldsymbol m\times\boldsymbol A - \boldsymbol A\times\boldsymbol m) + \frac{\partial f}{\partial \boldsymbol a}\cdot(\boldsymbol m\times\boldsymbol a) = 0 \qquad (A.3.29)$$


The characteristic system of (A.3.29) is
$$\frac{d\boldsymbol A}{ds} = \boldsymbol m\times\boldsymbol A - \boldsymbol A\times\boldsymbol m, \qquad \frac{d\boldsymbol a}{ds} = \boldsymbol m\times\boldsymbol a \qquad (A.3.30)$$
This ninth-order system has eight independent integrals. Six of them are invariants of $\boldsymbol A$ and $\boldsymbol a$ with respect to the full orthogonal group; they fix the considered set as a rigid whole. The orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5).
Let us note that the invariant $I_7$ in (A.3.4) cannot be ignored. To verify this it is enough to consider two different sets
$$\boldsymbol A,\ \boldsymbol a \qquad \text{and} \qquad \boldsymbol B = \boldsymbol Q_p\cdot\boldsymbol A\cdot\boldsymbol Q_p^T,\ \boldsymbol a,$$
where $\boldsymbol Q_p = \boldsymbol I - 2\boldsymbol p\otimes\boldsymbol p$, $\boldsymbol p\cdot\boldsymbol p = 1$, $\boldsymbol p\cdot\boldsymbol a = 0$. One can prove that the invariants $I_1, I_2, \ldots, I_6$ are the same for these two sets. The only difference is the invariant $I_7$, i.e. $\boldsymbol a\cdot\boldsymbol B^2\cdot(\boldsymbol a\times\boldsymbol B\cdot\boldsymbol a) = -\boldsymbol a\cdot\boldsymbol A^2\cdot(\boldsymbol a\times\boldsymbol A\cdot\boldsymbol a)$. Therefore the triples of vectors $\boldsymbol a$, $\boldsymbol A\cdot\boldsymbol a$, $\boldsymbol A^2\cdot\boldsymbol a$ and $\boldsymbol a$, $\boldsymbol B\cdot\boldsymbol a$, $\boldsymbol B^2\cdot\boldsymbol a$ have different orientations and cannot be brought into coincidence by a rotation. In order to fix the considered set with respect to the unit vector $\boldsymbol m$ it is enough to fix the next two invariants
$$I_8 = \boldsymbol m\cdot\boldsymbol A\cdot\boldsymbol m, \qquad I_9 = \boldsymbol m\cdot\boldsymbol a \qquad (A.3.31)$$
The eight independent transversely isotropic invariants are (A.3.4), (A.3.5) and
(A.3.31).
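The role of $I_7$ can be made concrete numerically. In the sketch below (added here for illustration) the remaining orthogonal invariants are taken as $\operatorname{tr}\boldsymbol A^k$ and $\boldsymbol a\cdot\boldsymbol A^k\cdot\boldsymbol a$, $k = 1, 2, 3$, an assumption made here about the content of (A.3.4); the mirror reflection $\boldsymbol Q_p$ leaves them unchanged but flips the sign of $I_7$:

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3))
A = 0.5 * (S + S.T)                          # arbitrary symmetric A
a = rng.standard_normal(3)
p = np.cross(a, rng.standard_normal(3))      # unit vector p orthogonal to a
p /= np.linalg.norm(p)
Qp = np.eye(3) - 2 * np.outer(p, p)          # mirror reflection, det Qp = -1
B = Qp @ A @ Qp.T

def I7(A, a):
    return a @ A @ A @ np.cross(a, A @ a)

# tr A^k and a . A^k . a coincide for both sets ...
for k in (1, 2, 3):
    assert np.isclose(np.trace(np.linalg.matrix_power(A, k)),
                      np.trace(np.linalg.matrix_power(B, k)))
    assert np.isclose(a @ np.linalg.matrix_power(A, k) @ a,
                      a @ np.linalg.matrix_power(B, k) @ a)
# ... while I7 changes its sign, so the two sets differ by more than a rotation
assert np.isclose(I7(B, a), -I7(A, a))
```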

A.3.3 Invariants for the Orthotropic Symmetry Group
Consider orthogonal tensors $\boldsymbol Q_1 = \boldsymbol I - 2\boldsymbol n_1\otimes\boldsymbol n_1$ and $\boldsymbol Q_2 = \boldsymbol I - 2\boldsymbol n_2\otimes\boldsymbol n_2$, $\det\boldsymbol Q_1 = \det\boldsymbol Q_2 = -1$. These tensors represent mirror reflections, whereby the unit orthogonal vectors $\pm\boldsymbol n_1$ and $\pm\boldsymbol n_2$ are the normal directions to the mirror planes. The above tensors are the symmetry elements of the orthotropic symmetry group. The invariants must be found from
$$f(\boldsymbol Q_1\cdot\boldsymbol A\cdot\boldsymbol Q_1^T) = f(\boldsymbol Q_2\cdot\boldsymbol A\cdot\boldsymbol Q_2^T) = f(\boldsymbol A)$$
Consequently,
$$f(\boldsymbol Q_1\cdot\boldsymbol Q_2\cdot\boldsymbol A\cdot\boldsymbol Q_2^T\cdot\boldsymbol Q_1^T) = f(\boldsymbol Q_1\cdot\boldsymbol A\cdot\boldsymbol Q_1^T) = f(\boldsymbol Q_2\cdot\boldsymbol A\cdot\boldsymbol Q_2^T) = f(\boldsymbol A)$$
and the tensor $\boldsymbol Q_3 = \boldsymbol Q_1\cdot\boldsymbol Q_2 = 2\boldsymbol n_3\otimes\boldsymbol n_3 - \boldsymbol I$ belongs to the symmetry group, where the unit vector $\boldsymbol n_3$ is orthogonal to $\boldsymbol n_1$ and $\boldsymbol n_2$. Taking into account that $\boldsymbol Q_i\cdot\boldsymbol n_i = -\boldsymbol n_i$ (no summation convention), $\boldsymbol Q_i\cdot\boldsymbol n_j = \boldsymbol n_j$, $i \neq j$, and using the notation $\boldsymbol A'_i = \boldsymbol Q_i\cdot\boldsymbol A\cdot\boldsymbol Q_i^T$ we can write

$$\operatorname{tr}(\boldsymbol A_i'^{\,k}) = \operatorname{tr}(\boldsymbol A^k), \quad k = 1,2,3, \quad i = 1,2,3$$
$$\boldsymbol n_i\cdot\boldsymbol A'_i\cdot\boldsymbol n_i = \boldsymbol n_i\cdot\boldsymbol Q_i\cdot\boldsymbol A\cdot\boldsymbol Q_i^T\cdot\boldsymbol n_i = \boldsymbol n_i\cdot\boldsymbol A\cdot\boldsymbol n_i, \quad i = 1,2,3 \qquad (A.3.32)$$
$$\boldsymbol n_i\cdot\boldsymbol A_i'^{\,2}\cdot\boldsymbol n_i = \boldsymbol n_i\cdot\boldsymbol Q_i\cdot\boldsymbol A^2\cdot\boldsymbol Q_i^T\cdot\boldsymbol n_i = \boldsymbol n_i\cdot\boldsymbol A^2\cdot\boldsymbol n_i, \quad i = 1,2,3$$


The above set includes 9 scalars. The number of independent scalars is 7 due to the obvious relations
$$\operatorname{tr}(\boldsymbol A^k) = \boldsymbol n_1\cdot\boldsymbol A^k\cdot\boldsymbol n_1 + \boldsymbol n_2\cdot\boldsymbol A^k\cdot\boldsymbol n_2 + \boldsymbol n_3\cdot\boldsymbol A^k\cdot\boldsymbol n_3, \qquad k = 1,2,3$$
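The invariance properties (A.3.32) and the reducing relations can be checked numerically; the following sketch (added here for illustration, with the mirror normals taken along the coordinate axes) runs over all three reflections:

```python
import numpy as np

n1, n2, n3 = np.eye(3)                      # orthonormal normals to the mirror planes
Q1 = np.eye(3) - 2 * np.outer(n1, n1)
Q2 = np.eye(3) - 2 * np.outer(n2, n2)
Q3 = Q1 @ Q2                                # = 2 n3 (x) n3 - I
assert np.allclose(Q3, 2 * np.outer(n3, n3) - np.eye(3))

rng = np.random.default_rng(4)
S = rng.standard_normal((3, 3))
A = 0.5 * (S + S.T)                         # arbitrary symmetric A

# all 9 scalars of (A.3.32) are invariant under the three symmetry elements
for Q in (Q1, Q2, Q3):
    Ap = Q @ A @ Q.T
    for k in (1, 2, 3):
        assert np.isclose(np.trace(np.linalg.matrix_power(Ap, k)),
                          np.trace(np.linalg.matrix_power(A, k)))
    for n in (n1, n2, n3):
        assert np.isclose(n @ Ap @ n, n @ A @ n)
        assert np.isclose(n @ Ap @ Ap @ n, n @ A @ A @ n)

# the relations among the listed scalars that reduce 9 to 7 independent ones
for k in (1, 2):
    Ak = np.linalg.matrix_power(A, k)
    assert np.isclose(np.trace(Ak), sum(n @ Ak @ n for n in (n1, n2, n3)))
```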

Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 

A some basic rules of tensor calculus

of the coordinate system leads to the change of the components of tensors.

In this work we prefer the direct tensor notation over the index one.
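The summation convention and the dyadic construction of a second rank tensor can be illustrated numerically. The following sketch is not part of the original appendix; it assumes NumPy and an orthonormal basis, for which the dual basis coincides with the basis itself.

```python
import numpy as np

# Orthonormal basis e_1, e_2, e_3 (rows); the dual basis coincides with it.
e = np.eye(3)

a = np.array([1.0, 2.0, 3.0])   # coordinates a^i
b = np.array([4.0, 0.0, -1.0])  # coordinates b^j

# Einstein summation: a = a^i e_i sums over the repeated index i.
a_vec = np.einsum('i,ij->j', a, e)
assert np.allclose(a_vec, a)

# A second rank tensor as a sum of dyads; here a single dyad A = a (x) b,
# with components A^{ij} = a^i b^j for the orthonormal basis.
A = np.outer(a, b)

# Coordinates of A: A^{ij} = e^i . A . e^j
A_ij = np.einsum('ik,kl,jl->ij', e, A, e)
assert np.allclose(A_ij, A)

print(A[0, 0])  # a^1 b^1 = 4.0
```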
When solving applied problems the tensor equations can be "translated" into the language of matrices for a specified coordinate system. The purpose of this Appendix is to give a brief guide to notations and rules of the tensor calculus applied throughout this work. For more comprehensive overviews on tensor calculus we recommend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in [40, 111, 340], for example. Section A.1 provides a brief overview of basic algebraic operations with vectors and second rank tensors. Several rules from tensor analysis are summarized in Sect. A.2. Basic sets of invariants for different groups of symmetry transformation are presented in Sect. A.3, where a novel approach to find the functional basis is discussed.

A.1 Basic Operations of Tensor Algebra

A.1.1 Polar and Axial Vectors

A vector in the three-dimensional Euclidean space is defined as a directed line segment with specified magnitude (scalar) and direction. The magnitude (the length) of a vector a is denoted by |a|. Two vectors a and b are equal if they have the same direction and the same magnitude. The zero vector 0 has a magnitude equal to zero. In mechanics two types of vectors can be introduced. The vectors of the first type are directed line segments; these vectors are associated with translations in the three-dimensional space. Examples for polar vectors include the force, the displacement, the velocity, the acceleration, the momentum, etc. The second type is used to characterize spinor motions and related quantities, i.e. the moment, the angular velocity, the angular momentum, etc. Figure A.1a shows the so-called spin vector a∗ which represents a rotation about the given axis. The direction of rotation is specified by the circular arrow and the "magnitude" of rotation is the corresponding length. For the given spin vector a∗ the directed line segment a is introduced according to the following rules [334]:
1. the vector a is placed on the axis of the spin vector,
2. the magnitude of a is equal to the magnitude of a∗,
3. the vector a is directed according to the right-handed screw, Fig. A.1b, or the left-handed screw, Fig. A.1c.

Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference frame, c axial vector in the left-screw oriented reference frame

The selection of one of the two cases in 3. corresponds to the convention of orientation of the reference frame [334] (it should not be confused with the right- or left-handed triples of vectors or coordinate systems). The directed line segment is called a polar vector if it does not change by changing the orientation of the reference frame. The vector is called axial if it changes its sign by changing the orientation of the reference frame. The above definitions are valid for scalars and tensors of any rank too. Axial vectors (and tensors) are widely used in rigid body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the asymmetric theory of elasticity, e.g. [231], as well as in dynamics of micro-polar media, e.g. [108]. By dealing with polar and axial vectors it should be remembered that they have different physical meanings; therefore, a sum of a polar and an axial vector has no sense.

A.1.2 Operations with Vectors

Addition. For a given pair of vectors a and b of the same type the sum c = a + b is defined according to one of the rules in Fig. A.2. The sum has the following properties:
– a + b = b + a (commutativity),
– (a + b) + c = a + (b + c) (associativity),
– a + 0 = a

Multiplication by a Scalar. For any vector a and for any scalar α a vector b = αa is defined in such a way that
– |b| = |α||a|,
– for α > 0 the direction of b coincides with that of a,
– for α < 0 the direction of b is opposite to that of a.
For α = 0 the product yields the zero vector, i.e. 0 = 0a. It is easy to verify that

α(a + b) = αa + αb,   (α + β)a = αa + βa

Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule
Scalar (Dot) Product of Two Vectors. For any pair of vectors a and b a scalar α is defined by

α = a · b = |a||b| cos ϕ,

where ϕ is the angle between the vectors a and b. As ϕ one can use any of the two angles between the vectors, Fig. A.3a. The properties of the scalar product are
– a · b = b · a (commutativity),
– a · (b + c) = a · b + a · c (distributivity)
Two nonzero vectors are said to be orthogonal if their scalar product is zero. The unit vector directed along the vector a is defined by (see Fig. A.3b)

n_a = a / |a|

The projection of the vector b onto the vector a is the vector (b · n_a) n_a, Fig. A.3b. The length of the projection is |b| |cos ϕ|.

Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and projection

Vector (Cross) Product of Two Vectors. For the ordered pair of vectors a and b the vector c = a × b is defined in the two following steps [334]:
– the spin vector c∗ is defined in such a way that
  • the axis is orthogonal to the plane spanned on a and b, Fig. A.4a,
  • the circular arrow shows the direction of the "shortest" rotation from a to b, Fig. A.4b,
  • the length is |a||b| sin ϕ, where ϕ is the angle of the "shortest" rotation from a to b,
– from the resulting spin vector the directed line segment c is constructed according to one of the rules listed in Subsect. A.1.1.
The properties of the vector product are

a × b = −b × a,   a × (b + c) = a × b + a × c

The type of the vector c = a × b can be established for the known types of the vectors a and b [334]. If a and b are polar vectors the result of the cross product
will be an axial vector. An example is the moment of momentum for a mass point m defined by r × (mv), where r is the position of the mass point and v = ṙ is the velocity of the mass point. The next example is the formula for the distribution of velocities in a rigid body v = ω × r. Here the cross product of the axial vector ω (angular velocity) with the polar vector r (position vector) results in the polar vector v.

Figure A.4 Vector product of two vectors. a Plane spanned on two vectors, b spin vector, c axial vector in the right-screw oriented reference frame

The mixed product of three vectors a, b and c is defined by (a × b) · c. The result is a scalar. For the mixed product the following identities are valid

a · (b × c) = b · (c × a) = c · (a × b)   (A.1.1)

If the cross product is applied twice, the first operation must be set in parentheses, e.g. a × (b × c). The result of this operation is a vector. The following relation can be applied

a × (b × c) = b(a · c) − c(a · b)   (A.1.2)

By use of (A.1.1) and (A.1.2) one can calculate

(a × b) · (c × d) = a · [b × (c × d)] = a · (c(b · d) − d(b · c)) = (a · c)(b · d) − (a · d)(b · c)   (A.1.3)

A.1.3 Bases

Any triple of linearly independent vectors e_1, e_2, e_3 is called a basis. A triple of vectors e_i is linearly independent if and only if e_1 · (e_2 × e_3) ≠ 0. For a given basis e_i any vector a can be represented as follows

a = a^1 e_1 + a^2 e_2 + a^3 e_3 ≡ a^i e_i

The numbers a^i are called the coordinates of the vector a for the basis e_i. In order to compute the coordinates the dual (reciprocal) basis e^k is introduced in such a way that

e^k · e_i = δ^k_i,   δ^k_i = 1 for k = i,   δ^k_i = 0 for k ≠ i
δ^k_i is the Kronecker symbol. The coordinates a^i can be found by

e^i · a = a · e^i = a^m e_m · e^i = a^m δ^i_m = a^i

For the selected basis e_i the dual basis can be found from

e^1 = (e_2 × e_3) / [(e_1 × e_2) · e_3],   e^2 = (e_3 × e_1) / [(e_1 × e_2) · e_3],   e^3 = (e_1 × e_2) / [(e_1 × e_2) · e_3]   (A.1.4)

By use of the dual basis a vector a can be represented as follows

a = a_1 e^1 + a_2 e^2 + a_3 e^3 ≡ a_i e^i,   a_m = a · e_m,   a_m ≠ a^m in general

In the special case of orthonormal vectors e_i, i.e. |e_i| = 1 and e_i · e_k = 0 for i ≠ k, it follows from (A.1.4) that e^k = e_k and consequently a_k = a^k.

A.1.4 Operations with Second Rank Tensors

A second rank tensor is a finite sum of ordered vector pairs A = a ⊗ b + . . . + c ⊗ d [334]. One ordered pair of vectors is called a dyad. The symbol ⊗ is called the dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are special cases of the second rank tensor. Any finite sum of more than three dyads can be reduced to a sum of three dyads. For example, let

A = ∑_{i=1}^{n} a^(i) ⊗ b^(i)

be a second rank tensor. Introducing a basis e_k the vectors a^(i) can be represented by a^(i) = a_k^(i) e^k, where a_k^(i) are coordinates of the vectors a^(i). Now we may write

A = ∑_{i=1}^{n} a_k^(i) e^k ⊗ b^(i) = e^k ⊗ ∑_{i=1}^{n} a_k^(i) b^(i) = e^k ⊗ d_k,   d_k ≡ ∑_{i=1}^{n} a_k^(i) b^(i)

Addition. The sum of two tensors is defined as the sum of the corresponding dyads. The sum has the properties of associativity and commutativity. In addition, for a dyad a ⊗ b the following operations are introduced

a ⊗ (b + c) = a ⊗ b + a ⊗ c,   (a + b) ⊗ c = a ⊗ c + b ⊗ c

Multiplication by a Scalar. This operation is introduced first for one dyad. For any scalar α and any dyad a ⊗ b

α(a ⊗ b) = (αa) ⊗ b = a ⊗ (αb),   (α + β)(a ⊗ b) = α(a ⊗ b) + β(a ⊗ b)   (A.1.5)

By setting α = 0 in the first equation of (A.1.5) the zero dyad can be defined, i.e. 0(a ⊗ b) = 0 ⊗ b = a ⊗ 0.
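Formula (A.1.4) for the dual basis can be checked numerically. The sketch below (NumPy assumed; the code is not part of the original text) builds the dual basis of a non-orthogonal triple and verifies the duality condition e^k · e_i = δ^k_i as well as the coordinate representation a = a^k e_k with a^k = a · e^k.

```python
import numpy as np

# A non-orthogonal but linearly independent basis e_1, e_2, e_3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([1.0, 1.0, 1.0])

# (A.1.4): e^1 = (e_2 x e_3) / [(e_1 x e_2) . e_3], etc.
vol = np.dot(np.cross(e1, e2), e3)   # mixed product, nonzero for a basis
d1 = np.cross(e2, e3) / vol
d2 = np.cross(e3, e1) / vol
d3 = np.cross(e1, e2) / vol

# Duality: e^k . e_i = delta^k_i
D = np.array([d1, d2, d3])   # rows: dual basis e^k
E = np.array([e1, e2, e3])   # rows: basis e_i
assert np.allclose(D @ E.T, np.eye(3))

# Coordinates of a vector: a^k = a . e^k, and a = a^k e_k reproduces a.
a = np.array([2.0, -1.0, 0.5])
coords = D @ a
assert np.allclose(coords @ E, a)
print(np.round(vol, 6))  # mixed product, here 1.0
```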
The above operations can be generalized for any finite sum of dyads, i.e. for second rank tensors.
Inner Dot Product. For any two second rank tensors A and B the inner dot product is specified by A · B. The rule and the result of this operation can be explained in the special case of two dyads, i.e. by setting A = a ⊗ b and B = c ⊗ d

A · B = (a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d = α a ⊗ d,   α ≡ b · c

The result of this operation is a second rank tensor. Note that A · B ≠ B · A in general. This can again be verified for two dyads. The operation can be generalized for two second rank tensors as follows

A · B = ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) · ( ∑_{k=1}^{3} c^(k) ⊗ d^(k) ) = ∑_{i=1}^{3} ∑_{k=1}^{3} (b^(i) · c^(k)) a^(i) ⊗ d^(k)

Transpose of a Second Rank Tensor. The transpose of a second rank tensor A is constructed by the following rule

A^T = ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) )^T = ∑_{i=1}^{3} b^(i) ⊗ a^(i)

Double Inner Dot Product. For any two second rank tensors A and B the double inner dot product is specified by A ·· B. The result of this operation is a scalar. This operation can be explained for two dyads as follows

A ·· B = (a ⊗ b) ·· (c ⊗ d) = (b · c)(a · d)

By analogy to the inner dot product one can generalize this operation for two second rank tensors. It can be verified that A ·· B = B ·· A for second rank tensors A and B. For a second rank tensor A and for a dyad a ⊗ b

A ·· (a ⊗ b) = b · A · a   (A.1.6)

A scalar product of two second rank tensors A and B is defined by

α = A ·· B^T

One can verify that

A ·· B^T = B^T ·· A = B ·· A^T

Dot Products of a Second Rank Tensor and a Vector. The right dot product of a second rank tensor A and a vector c is defined by

A · c = ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) · c = ∑_{i=1}^{3} (b^(i) · c) a^(i)

For a single dyad this operation is

(a ⊗ b) · c = a (b · c)
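The dyad rules above can be checked componentwise. The following NumPy sketch (an addition, not part of the original text) verifies the inner dot product of two dyads, the symmetry of the double inner dot product, and identity (A.1.6).

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

A = np.outer(a, b)   # dyad a (x) b
B = np.outer(c, d)   # dyad c (x) d

# Inner dot product of dyads: (a(x)b) . (c(x)d) = (b.c) a(x)d
assert np.allclose(A @ B, np.dot(b, c) * np.outer(a, d))

# Double inner dot product: A .. B = (b.c)(a.d); in components sum_ij A_ij B_ji
ddot = np.einsum('ij,ji->', A, B)
assert np.isclose(ddot, np.dot(b, c) * np.dot(a, d))
assert np.isclose(ddot, np.einsum('ij,ji->', B, A))   # A..B = B..A

# (A.1.6): A .. (a(x)b) = b . A . a for any second rank tensor A
C = rng.standard_normal((3, 3))
assert np.isclose(np.einsum('ij,ji->', C, np.outer(a, b)), b @ C @ a)
print("dyad product rules verified")
```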
The left dot product is defined by

c · A = c · ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) = ∑_{i=1}^{3} (c · a^(i)) b^(i)

The results of these operations are vectors. One can verify that

A · c ≠ c · A,   A · c = c · A^T

Cross Products of a Second Rank Tensor and a Vector. The right cross product of a second rank tensor A and a vector c is defined by

A × c = ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) × c = ∑_{i=1}^{3} a^(i) ⊗ (b^(i) × c)

The left cross product is defined by

c × A = c × ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) = ∑_{i=1}^{3} (c × a^(i)) ⊗ b^(i)

The results of these operations are second rank tensors. It can be shown that

A × c = −[c × A^T]^T

Trace. The trace of a second rank tensor is defined by

tr A = tr ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) ) = ∑_{i=1}^{3} a^(i) · b^(i)

By taking the trace of a second rank tensor the dyadic product is replaced by the dot product. It can be shown that

tr A = tr A^T,   tr(A · B) = tr(B · A) = tr(A^T · B^T) = A ·· B

Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies the following equality

A = A^T

An alternative definition can be given as follows: a second rank tensor is said to be symmetric if for any vector c ≠ 0 the following equality is valid

c · A = A · c

An important example of a symmetric tensor is the unit or identity tensor I, which is defined in such a way that for any vector c

c · I = I · c = c
The representations of the identity tensor are

I = e^k ⊗ e_k = e_k ⊗ e^k

for any basis e_k and its dual e^k, e^k · e_m = δ^k_m. For three orthonormal vectors m, n and p

I = n ⊗ n + m ⊗ m + p ⊗ p

A symmetric second rank tensor P satisfying the condition P · P = P is called a projector. Examples of projectors are

m ⊗ m,   n ⊗ n + p ⊗ p = I − m ⊗ m,

where m, n and p are orthonormal vectors. The result of the dot product of the tensor m ⊗ m with any vector a is the projection of the vector a onto the line spanned on the vector m, i.e. m ⊗ m · a = (a · m)m. The result of (n ⊗ n + p ⊗ p) · a is the projection of the vector a onto the plane spanned on the vectors n and p.

Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric if it satisfies the following equality

A = −A^T

or if for any vector c

c · A = −A · c

Any skew-symmetric tensor A can be represented by

A = a × I = I × a

The vector a is called the associated vector. Any second rank tensor can be uniquely decomposed into the symmetric and skew-symmetric parts

A = ½(A + A^T) + ½(A − A^T) = A_1 + A_2,
A_1 = ½(A + A^T),   A_1 = A_1^T,
A_2 = ½(A − A^T),   A_2 = −A_2^T

Vector Invariant. The vector invariant or "Gibbsian Cross" of a second rank tensor A is defined by

A_× = ( ∑_{i=1}^{3} a^(i) ⊗ b^(i) )_× = ∑_{i=1}^{3} a^(i) × b^(i)

The result of this operation is a vector. The vector invariant of a symmetric tensor is the zero vector. The following identities can be verified

(a × I)_× = −2a,   a × I × b = b ⊗ a − (a · b) I
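The decomposition into symmetric and skew-symmetric parts, the associated vector, and the identity (a × I)_× = −2a can be verified numerically. The sketch below assumes NumPy and a Cartesian orthonormal basis, so tensors are plain 3×3 arrays.

```python
import numpy as np

def vector_invariant(A):
    # Gibbsian cross A_x: replace the dyadic product by the cross product;
    # in Cartesian components A_x = (A_23 - A_32, A_31 - A_13, A_12 - A_21)
    return np.array([A[1, 2] - A[2, 1],
                     A[2, 0] - A[0, 2],
                     A[0, 1] - A[1, 0]])

def skew_from_vector(a):
    # a x I, the skew-symmetric tensor with associated vector a:
    # (a x I) . v = a x v for every vector v
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
a, v = rng.standard_normal((2, 3))

# Unique decomposition into symmetric and skew-symmetric parts
A1, A2 = 0.5 * (A + A.T), 0.5 * (A - A.T)
assert np.allclose(A1, A1.T) and np.allclose(A2, -A2.T)
assert np.allclose(A1 + A2, A)

K = skew_from_vector(a)
assert np.allclose(K @ v, np.cross(a, v))          # (a x I) . v = a x v
assert np.allclose(vector_invariant(K), -2.0 * a)  # (a x I)_x = -2a
assert np.allclose(vector_invariant(A1), 0.0)      # symmetric part: zero vector
print("skew/vector-invariant identities verified")
```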
Linear Transformations of Vectors. A vector valued function of a vector argument f(a) is said to be linear if f(α_1 a_1 + α_2 a_2) = α_1 f(a_1) + α_2 f(a_2) for any two vectors a_1 and a_2 and any two scalars α_1 and α_2. It can be shown that any linear vector valued function can be represented by f(a) = A · a, where A is a second rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor A is defined to be the linear transformation of the vector space into itself.

Determinant and Inverse of a Second Rank Tensor. Let a, b and c be arbitrary linearly independent vectors. The determinant of a second rank tensor A is defined by

det A = [(A · a) · ((A · b) × (A · c))] / [a · (b × c)]

The following identities can be verified

det(A^T) = det(A),   det(A · B) = det(A) det(B)

The inverse of a second rank tensor, A^{−1}, is introduced as the solution of the following equation

A^{−1} · A = A · A^{−1} = I

A is invertible if and only if det A ≠ 0. A tensor A with det A = 0 is called singular. Examples of singular tensors are projectors.

Cayley-Hamilton Theorem. Any second rank tensor satisfies the following equation

A^3 − J_1(A) A^2 + J_2(A) A − J_3(A) I = 0,   (A.1.7)

where A^2 = A · A, A^3 = A · A · A and

J_1(A) = tr A,   J_2(A) = ½[(tr A)^2 − tr A^2],
J_3(A) = det A = ⅙(tr A)^3 − ½ tr A tr A^2 + ⅓ tr A^3   (A.1.8)

The scalar-valued functions J_i(A) are called principal invariants of the tensor A.

Coordinates of Second Rank Tensors. Let e_i be a basis and e^k the dual basis. Any two vectors a and b can be represented as follows

a = a^i e_i = a_j e^j,   b = b^l e_l = b_m e^m

A dyad a ⊗ b has the following representations

a ⊗ b = a^i b^j e_i ⊗ e_j = a^i b_j e_i ⊗ e^j = a_i b^j e^i ⊗ e_j = a_i b_j e^i ⊗ e^j

For the representation of a second rank tensor A one of the following four bases can be used

e_i ⊗ e_j,   e^i ⊗ e^j,   e^i ⊗ e_j,   e_i ⊗ e^j

With these bases one can write
A = A^{ij} e_i ⊗ e_j = A_{ij} e^i ⊗ e^j = A_i^{∗j} e^i ⊗ e_j = A^i_{∗j} e_i ⊗ e^j

For a selected basis the coordinates of a second rank tensor can be computed as follows

A^{ij} = e^i · A · e^j,   A_{ij} = e_i · A · e_j,   A_i^{∗j} = e_i · A · e^j,   A^i_{∗j} = e^i · A · e_j

Principal Values and Directions of Symmetric Second Rank Tensors. Consider a dot product of a second rank tensor A and a unit vector n. The resulting vector a = A · n differs in general from n both by the length and the direction. However, one can find those unit vectors n for which A · n is collinear with n, i.e. only the length of n is changed. Such vectors can be found from the equation

A · n = λn   or   (A − λI) · n = 0   (A.1.9)

The unit vector n is called the principal vector and the scalar λ the principal value of the tensor A. Let A be a symmetric tensor. In this case the principal values are real numbers and there exist at least three mutually orthogonal principal vectors. The principal values can be found as roots of the characteristic polynomial

det(A − λI) = −λ^3 + J_1(A)λ^2 − J_2(A)λ + J_3(A) = 0

The principal values are specified by λ_I, λ_II, λ_III. For known principal values and principal directions the second rank tensor can be represented as follows (spectral representation)

A = λ_I n_I ⊗ n_I + λ_II n_II ⊗ n_II + λ_III n_III ⊗ n_III

Orthogonal Tensors. A second rank tensor Q is said to be orthogonal if it satisfies the equation Q^T · Q = I. If Q operates on a vector its length remains unchanged, i.e. let b = Q · a, then

|b|^2 = b · b = a · Q^T · Q · a = a · a = |a|^2

Furthermore, the orthogonal tensor does not change the scalar product of two arbitrary vectors.
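The Cayley-Hamilton theorem (A.1.7) with the invariants (A.1.8), and the spectral representation of a symmetric tensor, can both be confirmed numerically. The following NumPy sketch (not part of the original text) works in a Cartesian basis.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))

# Principal invariants (A.1.8)
J1 = np.trace(B)
J2 = 0.5 * (np.trace(B) ** 2 - np.trace(B @ B))
J3 = np.linalg.det(B)
# The trace formula for J3 agrees with the determinant
assert np.isclose(J3, np.trace(B) ** 3 / 6
                  - np.trace(B) * np.trace(B @ B) / 2
                  + np.trace(B @ B @ B) / 3)

# Cayley-Hamilton (A.1.7): A^3 - J1 A^2 + J2 A - J3 I = 0
R = B @ B @ B - J1 * (B @ B) + J2 * B - J3 * np.eye(3)
assert np.allclose(R, 0.0)

# Spectral representation of a symmetric tensor
A = 0.5 * (B + B.T)
lam, N = np.linalg.eigh(A)   # columns of N: orthonormal principal vectors
for k in range(3):
    assert np.allclose(A @ N[:, k], lam[k] * N[:, k])    # (A.1.9)
A_spec = sum(lam[k] * np.outer(N[:, k], N[:, k]) for k in range(3))
assert np.allclose(A_spec, A)
print("Cayley-Hamilton and spectral representation verified")
```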
For two vectors a and b as well as a′ = Q · a and b′ = Q · b one can calculate

a′ · b′ = a · Q^T · Q · b = a · b

From the definition of the orthogonal tensor it follows that

Q^T = Q^{−1},   Q^T · Q = Q · Q^T = I,
det(Q · Q^T) = (det Q)^2 = det I = 1 ⇒ det Q = ±1

Orthogonal tensors with det Q = 1 are called proper orthogonal or rotation tensors. The rotation tensors are widely used in rigid body dynamics, e.g. [333], and in the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either
the rotation tensor or the composition of the rotation (proper orthogonal tensor) and the tensor −I. Let P be a rotation tensor, det P = 1; then an orthogonal tensor Q with det Q = −1 can be composed by

Q = (−I) · P = P · (−I),   det Q = det(−I) det P = −1

For any two orthogonal tensors Q_1 and Q_2 the composition Q_3 = Q_1 · Q_2 is an orthogonal tensor, too. This property is used in the theory of symmetry and symmetry groups, e.g. [232, 331]. Two important examples of orthogonal tensors are
• the rotation tensor about a fixed axis

Q(ψm) = m ⊗ m + cos ψ (I − m ⊗ m) + sin ψ m × I,   −π < ψ < π,   det Q = 1,

where the unit vector m represents the axis and ψ is the angle of rotation,
• the reflection tensor

Q = I − 2n ⊗ n,   det Q = −1,

where the unit vector n represents a normal to the mirror plane.
One can prove the following identities [334]

(Q · a) × (Q · b) = det Q Q · (a × b),   Q · (a × I) · Q^T = det Q [(Q · a) × I]   (A.1.10)

A.2 Elements of Tensor Analysis

A.2.1 Coordinate Systems

The vector r characterizing the position of a point P can be represented by use of the Cartesian coordinates x_i as follows, Fig. A.5,

r(x_1, x_2, x_3) = x_1 e_1 + x_2 e_2 + x_3 e_3 = x_i e_i

Instead of the coordinates x_i one can introduce any triple of curvilinear coordinates q^1, q^2, q^3 by means of one-to-one transformations

x_k = x_k(q^1, q^2, q^3) ⇔ q^k = q^k(x_1, x_2, x_3)

It is assumed that the above transformations are continuous and continuously differentiable as many times as necessary, and that for the Jacobians

det(∂x_k/∂q^i) ≠ 0,   det(∂q^i/∂x_k) ≠ 0
must be valid. With these assumptions the position vector can be considered as a function of the curvilinear coordinates q^i, i.e. r = r(q^1, q^2, q^3).

Figure A.5 Cartesian and curvilinear coordinates

Surfaces q^1 = const, q^2 = const, and q^3 = const, Fig. A.5, are called coordinate surfaces. For given fixed values q^2 = q^2_∗ and q^3 = q^3_∗ a curve can be obtained along which only q^1 varies. This curve is called the q^1-coordinate line, Fig. A.5. Analogously, one can obtain the q^2- and q^3-coordinate lines. The partial derivatives of the position vector with respect to the selected coordinates

r_1 = ∂r/∂q^1,   r_2 = ∂r/∂q^2,   r_3 = ∂r/∂q^3,   r_1 · (r_2 × r_3) ≠ 0

define the tangential vectors to the coordinate lines in a point P, Fig. A.5. The vectors r_i are used as the local basis in the point P. By use of (A.1.4) the dual basis r^k can be introduced. The vector dr connecting the point P with a point P′ in the differential neighborhood of P is defined by

dr = (∂r/∂q^1) dq^1 + (∂r/∂q^2) dq^2 + (∂r/∂q^3) dq^3 = r_k dq^k

The square of the arc length of the line element in the differential neighborhood of P is calculated by

ds^2 = dr · dr = (r_i dq^i) · (r_k dq^k) = g_{ik} dq^i dq^k,

where g_{ik} ≡ r_i · r_k are the so-called covariant components of the metric tensor. With g_{ik} one can represent the basis vectors r_i by the dual basis vectors r^k as follows

r_i = (r_i · r_k) r^k = g_{ik} r^k
Similarly

r^i = (r^i · r^k) r_k = g^{ik} r_k,   g^{ik} ≡ r^i · r^k,

where g^{ik} are termed contravariant components of the metric tensor. For the selected bases r_i and r^k the second rank unit tensor has the following representations

I = r^i ⊗ r_i = r^i ⊗ g_{ik} r^k = g_{ik} r^i ⊗ r^k = g^{ik} r_i ⊗ r_k = r_i ⊗ r^i

A.2.2 The Hamilton (Nabla) Operator

A scalar field is a function which assigns a scalar to each spatial point P of the domain of definition. Let us consider a scalar field ϕ(r) = ϕ(q^1, q^2, q^3). The total differential of ϕ by moving from a point P to a point P′ in the differential neighborhood is

dϕ = (∂ϕ/∂q^1) dq^1 + (∂ϕ/∂q^2) dq^2 + (∂ϕ/∂q^3) dq^3 = (∂ϕ/∂q^k) dq^k

Taking into account that dq^k = dr · r^k,

dϕ = dr · r^k (∂ϕ/∂q^k) = dr · ∇ϕ

The vector ∇ϕ is called the gradient of the scalar field ϕ and the invariant operator ∇ (the Hamilton or nabla operator) is defined by

∇ = r^k ∂/∂q^k

For a vector field a(r) one may write

da = (dr · r^k) ∂a/∂q^k = dr · ( r^k ⊗ ∂a/∂q^k ) = dr · ∇ ⊗ a = (∇ ⊗ a)^T · dr,
∇ ⊗ a = r^k ⊗ ∂a/∂q^k

The gradient of a vector field is a second rank tensor. The operation ∇ can be applied to tensors of any rank. For vectors and tensors the following additional operations are defined

div a ≡ ∇ · a = r^k · ∂a/∂q^k,
rot a ≡ ∇ × a = r^k × ∂a/∂q^k

The following identities can be verified

∇ ⊗ r = r^k ⊗ ∂r/∂q^k = r^k ⊗ r_k = I,   ∇ · r = 3
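The local basis, metric, and the identity ∇ ⊗ r = r^k ⊗ r_k = I can be illustrated for a concrete curvilinear coordinate system. The sketch below (NumPy assumed, not part of the original appendix) uses cylindrical coordinates and finite differences for the tangent vectors.

```python
import numpy as np

# Cylindrical coordinates q = (r, phi, z): x(q) = (q1 cos q2, q1 sin q2, q3)
q = np.array([2.0, 0.7, -1.0])

def position(q):
    return np.array([q[0] * np.cos(q[1]), q[0] * np.sin(q[1]), q[2]])

# Tangent vectors r_i = dr/dq^i by central finite differences
h = 1e-6
r = np.array([(position(q + h * e) - position(q - h * e)) / (2 * h)
              for e in np.eye(3)])

# Dual basis r^k by (A.1.4)
vol = np.dot(np.cross(r[0], r[1]), r[2])
dual = np.array([np.cross(r[1], r[2]),
                 np.cross(r[2], r[0]),
                 np.cross(r[0], r[1])]) / vol

# Covariant metric g_ik = r_i . r_k; for cylindrical coordinates diag(1, r^2, 1)
g = r @ r.T
assert np.allclose(g, np.diag([1.0, q[0] ** 2, 1.0]), atol=1e-5)

# nabla (x) r = r^k (x) r_k = I
grad_r = sum(np.outer(dual[k], r[k]) for k in range(3))
assert np.allclose(grad_r, np.eye(3), atol=1e-5)
print("cylindrical local basis checks passed")
```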
For a scalar α, a vector a and a second rank tensor A the following identities are valid

∇ ⊗ (αa) = r^k ⊗ ∂(αa)/∂q^k = (∂α/∂q^k) r^k ⊗ a + α r^k ⊗ ∂a/∂q^k = (∇α) ⊗ a + α ∇ ⊗ a,   (A.2.1)

∇ · (A · a) = r^k · ∂(A · a)/∂q^k = r^k · (∂A/∂q^k) · a + r^k · A · ∂a/∂q^k
= (∇ · A) · a + A ·· ( ∂a/∂q^k ⊗ r^k ) = (∇ · A) · a + A ·· (∇ ⊗ a)^T   (A.2.2)

Here the identity (A.1.6) is used. For a second rank tensor A and a position vector r one can prove the following identity

∇ · (A × r) = r^k · ∂(A × r)/∂q^k = r^k · (∂A/∂q^k) × r + r^k · A × (∂r/∂q^k)
= (∇ · A) × r + r^k · A × r_k = (∇ · A) × r − A_×   (A.2.3)

Here we used the definition of the vector invariant as follows

A_× = (r^k ⊗ r_k · A)_× = r^k × (r_k · A) = −r^k · A × r_k

A.2.3 Integral Theorems

Let ϕ(r), a(r) and A(r) be scalar, vector and second rank tensor fields. Let V be the volume of a bounded domain with a regular surface A(V) and n be the outer unit normal to the surface at r. The integral theorems can be summarized as follows:
– Gradient Theorems

∫_V ∇ϕ dV = ∫_{A(V)} n ϕ dA,   ∫_V ∇ ⊗ a dV = ∫_{A(V)} n ⊗ a dA,   ∫_V ∇ ⊗ A dV = ∫_{A(V)} n ⊗ A dA

– Divergence Theorems

∫_V ∇ · a dV = ∫_{A(V)} n · a dA,   ∫_V ∇ · A dV = ∫_{A(V)} n · A dA
– Curl Theorems

∫_V ∇ × a dV = ∫_{A(V)} n × a dA,   ∫_V ∇ × A dV = ∫_{A(V)} n × A dA

A.2.4 Scalar-Valued Functions of Vectors and Second Rank Tensors

Let ψ be a scalar-valued function of a vector a and a second rank tensor A, i.e. ψ = ψ(a, A). Introducing a basis e_i the function ψ can be represented as follows

ψ(a, A) = ψ(a^i e_i, A^{ij} e_i ⊗ e_j) = ψ(a^i, A^{ij})

The partial derivatives of ψ with respect to a and A are defined according to the following rule

dψ = (∂ψ/∂a^i) da^i + (∂ψ/∂A^{ij}) dA^{ij} = da · (∂ψ/∂a^i) e^i + dA ·· (∂ψ/∂A^{ij}) e^j ⊗ e^i   (A.2.4)

In the coordinates-free form the above rule can be rewritten as follows

dψ = da · (∂ψ/∂a) + dA ·· (∂ψ/∂A)^T = da · ψ,_a + dA ·· (ψ,_A)^T   (A.2.5)

with

ψ,_a ≡ ∂ψ/∂a = (∂ψ/∂a^i) e^i,   ψ,_A ≡ ∂ψ/∂A = (∂ψ/∂A^{ij}) e^i ⊗ e^j

One can verify that ψ,_a and ψ,_A are independent from the choice of the basis. One may prove the following formulae for the derivatives of the principal invariants of a second rank tensor A

J_1(A),_A = I,   J_1(A^2),_A = 2A^T,   J_1(A^3),_A = 3(A^2)^T,
J_2(A),_A = J_1(A) I − A^T,   (A.2.6)
J_3(A),_A = (A^2)^T − J_1(A) A^T + J_2(A) I = J_3(A)(A^T)^{−1}

A.3 Orthogonal Transformations and Orthogonal Invariants

An application of the theory of tensor functions is to find a basic set of scalar invariants for a given group of symmetry transformations, such that each invariant relative
to the same group is expressible as a single-valued function of the basic set. The basic set of invariants is called a functional basis. To obtain a compact representation of invariants, it is required that the functional basis is irreducible in the sense that removing any one invariant from the basis will imply that a complete representation for all the invariants is no longer possible. Such a problem arises in the formulation of constitutive equations for a given group of material symmetries. For example, the strain energy density of an elastic non-polar material is a scalar-valued function of the second rank symmetric strain tensor. In the theory of the Cosserat continuum two strain measures are introduced, where the first strain measure is a polar tensor while the second one is an axial tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find a minimum set of functionally independent invariants for the considered tensorial arguments. For the theory of tensor functions we refer to [71]. Representations of tensor functions are reviewed in [280, 330].

An orthogonal transformation of a scalar α, a vector a and a second rank tensor A is defined by [25, 332]

α′ ≡ (det Q)^ζ α,   a′ ≡ (det Q)^ζ Q · a,   A′ ≡ (det Q)^ζ Q · A · Q^T,   (A.3.1)

where Q is an orthogonal tensor, i.e. Q · Q^T = I, det Q = ±1, I is the second rank unit tensor, ζ = 0 for absolute (polar) scalars, vectors and tensors and ζ = 1 for axial ones. An example of the axial scalar is the mixed product of three polar vectors, i.e. α = a · (b × c). A typical example of the axial vector is the cross product of two polar vectors, i.e. c = a × b. An example of the second rank axial tensor is the skew-symmetric tensor W = a × I, where a is a polar vector.
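The polar/axial transformation rules (A.3.1) can be made concrete with an improper orthogonal tensor. The NumPy sketch below (an addition, not part of the original text) builds Q as the composition of a rotation about a fixed axis and −I, then checks identity (A.1.10) for the cross product and the sign change of the mixed product (an axial scalar).

```python
import numpy as np

def rotation_about_axis(m, psi):
    # Q(psi m) = m(x)m + cos psi (I - m(x)m) + sin psi m x I  (Subsect. A.1.4)
    M, I = np.outer(m, m), np.eye(3)
    mxI = np.array([[0.0, -m[2], m[1]],
                    [m[2], 0.0, -m[0]],
                    [-m[1], m[0], 0.0]])
    return M + np.cos(psi) * (I - M) + np.sin(psi) * mxI

rng = np.random.default_rng(7)
a, b, c = rng.standard_normal((3, 3))

P = rotation_about_axis(np.array([0.0, 0.0, 1.0]), 0.9)   # det P = +1
Q = -np.eye(3) @ P                                        # improper, det Q = -1
assert np.isclose(np.linalg.det(Q), -1.0)
assert np.allclose(Q.T @ Q, np.eye(3))

# Axial vector: (Q.a) x (Q.b) = det Q  Q.(a x b), cf. (A.1.10)
assert np.allclose(np.cross(Q @ a, Q @ b),
                   np.linalg.det(Q) * (Q @ np.cross(a, b)))

# Axial scalar: the mixed product of three polar vectors transforms
# with the factor det Q, i.e. it changes sign under an improper Q
alpha = np.dot(np.cross(a, b), c)
alpha_t = np.dot(np.cross(Q @ a, Q @ b), Q @ c)
assert np.isclose(alpha_t, np.linalg.det(Q) * alpha)
print("polar/axial transformation rules verified")
```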
Consider a group of orthogonal transformations S (e.g. the material symmetry transformations) characterized by a set of orthogonal tensors Q. A scalar-valued function of a second rank tensor f = f(A) is called an orthogonal invariant under the group S if

∀Q ∈ S:   f(A′) = (det Q)^η f(A),   (A.3.2)

where η = 0 if values of f are absolute scalars and η = 1 if values of f are axial scalars. Any second rank tensor B can be decomposed into a symmetric and a skew-symmetric part, i.e. B = A + a × I, where A is the symmetric tensor and a is the associated vector. Therefore f(B) = f(A, a). If B is a polar (axial) tensor, then a is an axial (polar) vector. For a set of second rank tensors and vectors the definition of an orthogonal invariant (A.3.2) can be generalized as follows

∀Q ∈ S:   f(A′_1, A′_2, . . . , A′_n, a′_1, a′_2, . . . , a′_k) = (det Q)^η f(A_1, A_2, . . . , A_n, a_1, a_2, . . . , a_k),   A_i = A_i^T   (A.3.3)
A.3.1 Invariants for the Full Orthogonal Group

In [335] orthogonal invariants for different sets of second rank tensors and vectors with respect to the full orthogonal group are presented. It is shown that orthogonal invariants are integrals of a generic partial differential equation (basic equation for invariants). Let us present two examples:
– Orthogonal invariants of a symmetric second rank tensor A are

      Iₖ = tr Aᵏ,   k = 1, 2, 3

  Instead of Iₖ it is possible to use the principal invariants Jₖ defined by (A.1.8).
– Orthogonal invariants of a symmetric second rank tensor A and a vector a are

      Iₖ = tr Aᵏ,   k = 1, 2, 3,   I₄ = a · a,   I₅ = a · A · a,
      I₆ = a · A² · a,   I₇ = a · A² · (a × A · a)        (A.3.4)

  In the above set of invariants only 6 are functionally independent. The relation between the invariants (the so-called syzygy, [71]) can be formulated as follows

      I₇² = det ⎡ I₄          I₅          I₆         ⎤
                ⎢ I₅          I₆          a · A³ · a ⎥        (A.3.5)
                ⎣ I₆          a · A³ · a  a · A⁴ · a ⎦

  where a · A³ · a and a · A⁴ · a can be expressed through Iₗ, l = 1, …, 6, by applying the Cayley-Hamilton theorem (A.1.7).

The set of invariants for a symmetric second rank tensor A and a vector a can be applied to a non-symmetric second rank tensor B, since B can be represented as B = A + a × I, A = Aᵀ.

A.3.2 Invariants for the Transverse Isotropy Group

Transverse isotropy is an important type of symmetry transformation due to a variety of applications. Transverse isotropy is usually assumed in constitutive modeling of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], directionally solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric materials, e.g. [285]. The invariants and generating sets for tensor-valued functions with respect to different cases of transverse isotropy are discussed in [79, 328] (see also relevant references therein).
In what follows we analyze the problem of a functional basis within the theory of linear first order partial differential equations rather than within the algebra of polynomials. We develop the idea proposed in [335] for invariants with respect to the full orthogonal group to the case of transverse isotropy. The invariants will be found as integrals of the generic partial differential equations. Although a functional basis formed by these invariants does not include any redundant element, functional relations between the invariants may exist. It may therefore be useful to find simple forms of such relations. We show that the proposed approach supplies results in a direct, natural manner.
Invariants for a Single Second Rank Symmetric Tensor. Consider the proper orthogonal tensor which represents a rotation about a fixed axis, i.e.

    Q(φm) = m ⊗ m + cos φ (I − m ⊗ m) + sin φ m × I,   det Q(φm) = 1,        (A.3.6)

where m is assumed to be a constant unit vector (the axis of rotation) and φ denotes the angle of rotation about m. The symmetry transformation defined by this tensor corresponds to transverse isotropy, whereby five different cases are possible, e.g. [299, 331]. Let us find the scalar-valued functions of a second rank symmetric tensor A satisfying the condition

    f(A′(φ)) = f(Q(φm) · A · Qᵀ(φm)) = f(A),   A′(φ) ≡ Q(φm) · A · Qᵀ(φm)        (A.3.7)

Equation (A.3.7) must be valid for any angle of rotation φ. In (A.3.7) only the left-hand side depends on φ. Therefore its derivative with respect to φ can be set to zero, i.e.

    df/dφ = (dA′/dφ) ·· (∂f/∂A′)ᵀ = 0        (A.3.8)

The derivative of A′ with respect to φ can be calculated by the following rules

    dA′(φ) = dQ(φm) · A · Qᵀ(φm) + Q(φm) · A · dQᵀ(φm),
    dQ(φm) = m × Q(φm) dφ   ⇒   dQᵀ(φm) = −Qᵀ(φm) × m dφ        (A.3.9)

By inserting the above equations into (A.3.8) we obtain

    (m × A − A × m) ·· (∂f/∂A)ᵀ = 0        (A.3.10)

Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is

    dA/ds = m × A − A × m        (A.3.11)

Any system of n linear ordinary differential equations has not more than n − 1 functionally independent integrals [92]. By introducing a basis eᵢ the tensor A can be written in the form A = Aᵢⱼ eᵢ ⊗ eⱼ, and (A.3.11) is a system of six ordinary differential equations with respect to the coordinates Aᵢⱼ. The five integrals of (A.3.11) may be written as follows

    gᵢ(A) = cᵢ,   i = 1, 2, …, 5,

where cᵢ are integration constants.
Any function of the five integrals gᵢ is a solution of the partial differential equation (A.3.10). Therefore the five integrals gᵢ represent the invariants of the symmetric tensor A with respect to the symmetry transformation (A.3.6). The solutions of (A.3.11) are
    Aᵏ(s) = Q(sm) · A₀ᵏ · Qᵀ(sm),   k = 1, 2, 3,        (A.3.12)

where A₀ is the initial condition. In order to find the integrals, the variable s must be eliminated from (A.3.12). Taking into account the following identities

    tr (Q · Aᵏ · Qᵀ) = tr (Qᵀ · Q · Aᵏ) = tr Aᵏ,   m · Q(sm) = m,
    (Q · a) × (Q · b) = (det Q) Q · (a × b)        (A.3.13)

and using the notation Q_m ≡ Q(sm), the integrals can be found as follows

    tr Aᵏ(s) = tr A₀ᵏ,   k = 1, 2, 3,
    m · Aˡ(s) · m = m · Q_m · A₀ˡ · Q_mᵀ · m = m · A₀ˡ · m,   l = 1, 2,
    m · A²(s) · (m × A(s) · m) = m · Q_m · A₀² · Q_mᵀ · (m × Q_m · A₀ · Q_mᵀ · m)
                               = m · A₀² · Q_mᵀ · ((Q_m · m) × (Q_m · A₀ · m))
                               = m · A₀² · (m × A₀ · m)        (A.3.14)

As a result we can formulate the six invariants of the tensor A with respect to the symmetry transformation (A.3.6) as follows

    Iₖ = tr Aᵏ,   k = 1, 2, 3,   I₄ = m · A · m,
    I₅ = m · A² · m,   I₆ = m · A² · (m × A · m)        (A.3.15)

The invariants with respect to various symmetry transformations are discussed in [79]. For the case of transverse isotropy six invariants are derived in [79] by the use of another approach. In this sense our result coincides with the result given in [79]. However, from our derivations it follows that only five invariants listed in (A.3.15) are functionally independent. Taking into account that I₆ is the mixed product of the vectors m, A · m and A² · m, the relation between the invariants can be written as follows

    I₆² = det ⎡ 1           I₄          I₅         ⎤
              ⎢ I₄          I₅          m · A³ · m ⎥        (A.3.16)
              ⎣ I₅          m · A³ · m  m · A⁴ · m ⎦

One can verify that m · A³ · m and m · A⁴ · m are transversely isotropic invariants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be uniquely expressed through I₁, I₂, …, I₅ in the following way [54]

    m · A³ · m = J₁ I₅ + J₂ I₄ + J₃,
    m · A⁴ · m = (J₁² + J₂) I₅ + (J₁ J₂ + J₃) I₄ + J₁ J₃,

where J₁, J₂ and J₃ are the principal invariants of A defined by (A.1.8).
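Both the invariance of the set (A.3.15) and the relation (A.3.16) can be checked numerically. Since I₆ is the mixed product of m, A · m and A² · m, (A.3.16) is the Gram determinant of these three vectors, which the following numpy sketch (our illustration) exploits:

```python
import numpy as np

def rotation(m, phi):
    # Q(phi*m) = m⊗m + cos(phi)(I - m⊗m) + sin(phi) m×I, cf. (A.3.6)
    K = np.array([[0.0, -m[2], m[1]],
                  [m[2], 0.0, -m[0]],
                  [-m[1], m[0], 0.0]])        # (m×I)·v = m×v
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * K

def invariants(A, m):
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),   # I1..I3
                     m @ A @ m, m @ A2 @ m,                         # I4, I5
                     (A2 @ m) @ np.cross(m, A @ m)])                # I6

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = B + B.T                                  # random symmetric tensor
m = rng.standard_normal(3)
m /= np.linalg.norm(m)                       # unit axis of transverse isotropy

# invariance under an arbitrary rotation about m
Q = rotation(m, 0.73)
assert np.allclose(invariants(Q @ A @ Q.T, m), invariants(A, m))

# relation (A.3.16): I6^2 is the Gram determinant of m, A·m, A²·m
I1, I2, I3, I4, I5, I6 = invariants(A, m)
G = np.column_stack([m, A @ m, A @ A @ m])
assert np.isclose(I6**2, np.linalg.det(G.T @ G))
```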
Let us note that the invariant I6 cannot be dropped. In order to verify this, it is enough to consider two different tensors
A and B = Q_n · A · Q_nᵀ, where

    Q_n ≡ Q(πn) = 2n ⊗ n − I,   n · n = 1,   n · m = 0,   det Q_n = 1

One can prove that the tensor A and the tensor B have the same invariants I₁, I₂, …, I₅. Taking into account that m · Q_n = −m and applying the last identity in (A.3.13) we may write

    I₆(B) = m · B² · (m × B · m) = m · A² · Q_nᵀ · (m × Q_n · A · m)
          = −m · A² · (m × A · m) = −I₆(A)

We observe that the only difference between the two considered tensors is the sign of I₆. Therefore, the triples of vectors m, A · m, A² · m and m, B · m, B² · m have different orientations and cannot be combined by a rotation. It should be noted that the functional relation (A.3.16) in no way implies that the invariant I₆ is "dependent" and hence "redundant", i.e. that it should be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of I₆.
    To describe yielding and failure of oriented solids a dyad M = v ⊗ v has been used in [53, 75], where the vector v specifies a privileged direction. A plastic potential is assumed to be an isotropic function of the symmetric Cauchy stress tensor and the tensor generator M. Applying the representation of isotropic functions, an integrity basis including ten invariants was found. In the special case v = m the number of invariants reduces to the five invariants I₁, I₂, …, I₅ defined by (A.3.15). Further details of this approach and applications in continuum mechanics are given in [59, 71]. However, the problem statement to find an integrity basis of a symmetric tensor A and a dyad M, i.e. to find the scalar-valued functions f(A, M) satisfying the condition

    f(Q · A · Qᵀ, Q · M · Qᵀ) = (det Q)^η f(A, M),   ∀ Q,   Q · Qᵀ = I,   det Q = ±1        (A.3.17)

essentially differs from the problem statement (A.3.7). In order to show this we take into account that the symmetry group of the dyad M, i.e.
the set of orthogonal solutions of the equation Q · M · Qᵀ = M, includes the following elements

    Q₁,₂ = ±I,
    Q₃ = Q(φm),   m = v/|v|,        (A.3.18)
    Q₄ = Q(πn) = 2n ⊗ n − I,   n · n = 1,   n · v = 0,

where Q(φm) is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically the solutions of the following problem

    f(Qᵢ · A · Qᵢᵀ, M) = (det Qᵢ)^η f(A, M),   i = 1, 2, 3, 4,
i.e. the problem of finding the invariants of A relative to the symmetry group (A.3.18). However, (A.3.18) includes many more symmetry elements compared to the problem statement (A.3.7).
    An alternative set of transversely isotropic invariants can be formulated by the use of the following decomposition

    A = α m ⊗ m + β (I − m ⊗ m) + A_pD + t ⊗ m + m ⊗ t,        (A.3.19)

where α, β, A_pD and t are projections of A. With the projectors P₁ = m ⊗ m and P₂ = I − m ⊗ m we may write

    α = m · A · m = tr (A · P₁),
    β = ½ (tr A − m · A · m) = ½ tr (A · P₂),        (A.3.20)
    A_pD = P₂ · A · P₂ − β P₂,
    t = m · A · P₂

The decomposition (A.3.19) is the analogue of the following representation of a vector a

    a = I · a = m ⊗ m · a + (I − m ⊗ m) · a = ψ m + τ,   ψ = a · m,   τ = P₂ · a        (A.3.21)

Decompositions of the type (A.3.19) are applied in [68, 79]. The projections introduced in (A.3.20) have the following properties

    tr A_pD = 0,   A_pD · m = m · A_pD = 0,   t · m = 0        (A.3.22)

With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed into the following system of equations

    dα/ds = 0,
    dβ/ds = 0,
    dA_pD/ds = m × A_pD − A_pD × m,        (A.3.23)
    dt/ds = m × t

From the first two equations we observe that α and β are transversely isotropic invariants. The third equation can be transformed into one scalar and one vector equation as follows

    (dA_pD/ds) ·· A_pD = 0   ⇒   d(A_pD ·· A_pD)/ds = 0,   db/ds = m × b

with b ≡ A_pD · t. We observe that tr (A_pD²) = A_pD ·· A_pD is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system
    dt/ds = m × t,   db/ds = m × b        (A.3.24)

The solutions of (A.3.24) are

    t(s) = Q(sm) · t₀,   b(s) = Q(sm) · b₀,

where t₀ and b₀ are initial conditions. The vectors t and b belong to the plane of isotropy, i.e. t · m = 0 and b · m = 0. Therefore, one can verify the following integrals

    t · t = t₀ · t₀,   b · b = b₀ · b₀,   t · b = t₀ · b₀,   (t × b) · m = (t₀ × b₀) · m        (A.3.25)

We found seven integrals, but only five of them are functionally independent. In order to formulate the relation between the integrals we compute

    b · b = t · A_pD² · t,   t · b = t · A_pD · t

For any plane tensor A_p satisfying the equations A_p · m = m · A_p = 0 the Cayley-Hamilton theorem can be formulated as follows, see e.g. [71]

    A_p² − (tr A_p) A_p + ½ [(tr A_p)² − tr (A_p²)] (I − m ⊗ m) = 0

Since tr A_pD = 0 we have

    2 A_pD² = tr (A_pD²) (I − m ⊗ m),   t · A_pD² · t = ½ tr (A_pD²) (t · t)

Because tr (A_pD²) and t · t are already defined, the invariant b · b can be omitted. The vector t × b is spanned on the axis m. Therefore

    t × b = γ m,   γ = (t × b) · m,   γ² = (t × b) · (t × b) = (t · t)(b · b) − (t · b)²

Now we can summarize the six invariants and one relation between them as follows

    Ī₁ = α,   Ī₂ = β,   Ī₃ = ½ tr (A_pD²),   Ī₄ = t · t = t · A · m,
    Ī₅ = t · A_pD · t,   Ī₆ = (t × A_pD · t) · m,        (A.3.26)
    Ī₆² = Ī₄² Ī₃ − Ī₅²

Let us assume that the symmetry transformation Q_n ≡ Q(πn) belongs to the symmetry group of transverse isotropy, as was assumed in [71, 59]. In this case f(A′) = f(Q_n · A · Q_nᵀ) = f(A) must be valid. With Q_n · m = −m we can write

    α′ = α,   β′ = β,   A′_pD = Q_n · A_pD · Q_nᵀ,   t′ = −Q_n · t
Therefore in (A.3.26) Īₖ′ = Īₖ, k = 1, 2, …, 5, and

    Ī₆′ = (t′ × A′_pD · t′) · m = ((Q_n · t) × (Q_n · A_pD · t)) · m
        = (t × A_pD · t) · Q_n · m = −(t × A_pD · t) · m = −Ī₆

Consequently

    f(A′) = f(Ī₁′, Ī₂′, …, Ī₅′, Ī₆′) = f(Ī₁, Ī₂, …, Ī₅, −Ī₆)   ⇒   f(A) = f(Ī₁, Ī₂, …, Ī₅, Ī₆²)

and Ī₆² can be omitted due to the last relation in (A.3.26).

Invariants for a Set of Vectors and Second Rank Tensors. Setting Q = Q(φm) in (A.3.3) and taking the derivative of (A.3.3) with respect to φ results in the following generic partial differential equation

    Σᵢ₌₁ⁿ (m × Aᵢ − Aᵢ × m) ·· (∂f/∂Aᵢ)ᵀ + Σⱼ₌₁ᵏ (m × aⱼ) · (∂f/∂aⱼ) = 0        (A.3.27)

The characteristic system of (A.3.27) is

    dAᵢ/ds = m × Aᵢ − Aᵢ × m,   i = 1, 2, …, n,
    daⱼ/ds = m × aⱼ,   j = 1, 2, …, k        (A.3.28)

The above system is a system of N ordinary differential equations, where N = 6n + 3k is the total number of coordinates of Aᵢ and aⱼ for a selected basis. The system (A.3.28) has not more than N − 1 functionally independent integrals. Therefore we can formulate:

Theorem A.3.1. A set of n symmetric second rank tensors and k vectors with N = 6n + 3k independent coordinates for a given basis has not more than N − 1 functionally independent invariants for N > 1 and one invariant for N = 1 with respect to the symmetry transformation Q(φm).

In essence, the proof of this theorem is given within the theory of linear first order partial differential equations [92]. As an example let us consider the set of a symmetric second rank tensor A and a vector a. This set has eight independent invariants. For a visual perception it is useful to keep in mind that the considered set is equivalent to

    A,   a,   A · a,   A² · a

Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole.
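The count N − 1 = 8 of Theorem A.3.1 for one symmetric tensor and one vector (N = 9) can be illustrated numerically: the orthogonal invariants (A.3.4), supplemented by m · A · m and m · a (the invariants (A.3.31) below), are unchanged by any rotation Q(φm). A numpy sketch (our illustration):

```python
import numpy as np

def rotation(m, phi):
    # Q(phi*m) = m⊗m + cos(phi)(I - m⊗m) + sin(phi) m×I, cf. (A.3.6)
    K = np.array([[0.0, -m[2], m[1]],
                  [m[2], 0.0, -m[0]],
                  [-m[1], m[0], 0.0]])
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * K

def invariants(A, a, m):
    A2 = A @ A
    return np.array([
        np.trace(A), np.trace(A2), np.trace(A2 @ A),   # I1, I2, I3
        a @ a, a @ A @ a, a @ A2 @ a,                  # I4, I5, I6
        (A2 @ a) @ np.cross(a, A @ a),                 # I7
        m @ A @ m, m @ a])                             # I8, I9

rng = np.random.default_rng(5)
C = rng.standard_normal((3, 3))
A = C + C.T
a = rng.standard_normal(3)
m = rng.standard_normal(3)
m /= np.linalg.norm(m)

Q = rotation(m, 1.3)   # arbitrary angle of rotation about m
assert np.allclose(invariants(Q @ A @ Q.T, Q @ a, m), invariants(A, a, m))
```

The nine listed scalars are transversely isotropic invariants; in agreement with the theorem, only eight of them are functionally independent because I₇ is tied to I₄, …, I₆ by the syzygy (A.3.5).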
The generic equation (A.3.27) takes the form

    (m × A − A × m) ·· (∂f/∂A)ᵀ + (m × a) · (∂f/∂a) = 0        (A.3.29)
The characteristic system of (A.3.29) is

    dA/ds = m × A − A × m,   da/ds = m × a        (A.3.30)

This system of ninth order has eight independent integrals. Six of them are invariants of A and a with respect to the full orthogonal group. They fix the considered set as a rigid whole. These orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5). Let us note that the invariant I₇ in (A.3.4) cannot be ignored. To verify this it is enough to consider two different sets

    A, a   and   B = Q_p · A · Q_pᵀ, a,

where Q_p = I − 2p ⊗ p, p · p = 1, p · a = 0. One can prove that the invariants I₁, I₂, …, I₆ are the same for these two sets. The only difference is the invariant I₇, i.e.

    a · B² · (a × B · a) = −a · A² · (a × A · a)

Therefore the triples of vectors a, A · a, A² · a and a, B · a, B² · a have different orientations and cannot be combined by a rotation. In order to fix the considered set with respect to the unit vector m it is enough to fix the next two invariants

    I₈ = m · A · m,   I₉ = m · a        (A.3.31)

The eight independent transversely isotropic invariants are (A.3.4), (A.3.5) and (A.3.31).

A.3.3 Invariants for the Orthotropic Symmetry Group

Consider the orthogonal tensors Q₁ = I − 2n₁ ⊗ n₁ and Q₂ = I − 2n₂ ⊗ n₂, det Q₁ = det Q₂ = −1. These tensors represent mirror reflections, whereby the unit orthogonal vectors ±n₁ and ±n₂ are the normal directions to the mirror planes. The above tensors are the symmetry elements of the orthotropic symmetry group. The invariants must be found from

    f(Q₁ · A · Q₁ᵀ) = f(Q₂ · A · Q₂ᵀ) = f(A)

Consequently,

    f(Q₁ · Q₂ · A · Q₂ᵀ · Q₁ᵀ) = f(Q₁ · A · Q₁ᵀ) = f(Q₂ · A · Q₂ᵀ) = f(A)

and the tensor Q₃ = Q₁ · Q₂ = 2n₃ ⊗ n₃ − I belongs to the symmetry group, where the unit vector n₃ is orthogonal to n₁ and n₂.
Taking into account that Qᵢ · nᵢ = −nᵢ (no summation convention), Qᵢ · nⱼ = nⱼ for i ≠ j, and using the notation A′ᵢ = Qᵢ · A · Qᵢᵀ, we can write

    tr (A′ᵢᵏ) = tr Aᵏ,   k = 1, 2, 3,   i = 1, 2, 3,
    nᵢ · A′ᵢ · nᵢ = nᵢ · Qᵢ · A · Qᵢᵀ · nᵢ = nᵢ · A · nᵢ,   i = 1, 2, 3,        (A.3.32)
    nᵢ · A′ᵢ² · nᵢ = nᵢ · Qᵢ · A² · Qᵢᵀ · nᵢ = nᵢ · A² · nᵢ,   i = 1, 2, 3
The above set includes 9 scalars. The number of independent scalars is 7 due to the obvious relations

    tr Aᵏ = n₁ · Aᵏ · n₁ + n₂ · Aᵏ · n₂ + n₃ · Aᵏ · n₃,   k = 1, 2, 3
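The nine scalars of (A.3.32) and the three trace relations can be verified numerically; in the following numpy sketch (our illustration) the vectors n₁, n₂, n₃ are taken as the Cartesian basis:

```python
import numpy as np

rng = np.random.default_rng(6)
C = rng.standard_normal((3, 3))
A = C + C.T                                   # random symmetric tensor
n = np.eye(3)                                 # orthonormal triple n1, n2, n3

def reflection(v):
    return np.eye(3) - 2.0 * np.outer(v, v)   # mirror reflection, det = -1

def scalars(A):
    powers = [A, A @ A, A @ A @ A]
    traces = [np.trace(P) for P in powers]
    quad = [n[i] @ P @ n[i] for P in powers[:2] for i in range(3)]
    return np.array(traces + quad)            # the 9 scalars of (A.3.32)

# invariance under both mirror reflections of the orthotropic group
for Q in (reflection(n[0]), reflection(n[1])):
    assert np.allclose(scalars(Q @ A @ Q.T), scalars(A))

# only 7 are independent: tr A^k = sum_i n_i · A^k · n_i, k = 1, 2, 3
for P in (A, A @ A, A @ A @ A):
    assert np.isclose(np.trace(P), sum(n[i] @ P @ n[i] for i in range(3)))
```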