Sharipov-Course of Differential Geometry
SHARIPOV R. A.

COURSE OF DIFFERENTIAL GEOMETRY

The Textbook

Ufa 1996
MSC 97U20
UDC 514.7
Sharipov R. A. Course of Differential Geometry: the textbook / Publ. of
Bashkir State University — Ufa, 1996. — pp. 132. — ISBN 5-7477-0129-0.
CONTENTS. ..... 3.
PREFACE. ..... 5.
CHAPTER I. CURVES IN THREE-DIMENSIONAL SPACE. ..... 6.
§ 1. Curves. Methods of defining a curve. Regular and singular points of a curve. ..... 6.
§ 2. The length integral and the natural parametrization of a curve. ..... 10.
§ 3. Frenet frame. The dynamics of Frenet frame. Curvature and torsion of a spacial curve. ..... 12.
§ 4. The curvature center and the curvature radius of a spacial curve. The evolute and the evolvent of a curve. ..... 14.
§ 5. Curves as trajectories of material points in mechanics. ..... 16.
CHAPTER II. ELEMENTS OF VECTORIAL AND TENSORIAL ANALYSIS. ..... 18.
§ 1. Vectorial and tensorial fields in the space. ..... 18.
§ 2. Tensor product and contraction. ..... 20.
§ 3. The algebra of tensor fields. ..... 24.
§ 4. Symmetrization and alternation. ..... 26.
§ 5. Differentiation of tensor fields. ..... 28.
§ 6. The metric tensor and the volume pseudotensor. ..... 31.
§ 7. The properties of pseudotensors. ..... 34.
§ 8. A note on the orientation. ..... 35.
§ 9. Raising and lowering indices. ..... 36.
§ 10. Gradient, divergency and rotor. Some identities of the vectorial analysis. ..... 38.
§ 11. Potential and vorticular vector fields. ..... 41.
CHAPTER III. CURVILINEAR COORDINATES. ..... 45.
§ 1. Some examples of curvilinear coordinate systems. ..... 45.
§ 2. Moving frame of a curvilinear coordinate system. ..... 48.
§ 3. Change of curvilinear coordinates. ..... 52.
§ 4. Vectorial and tensorial fields in curvilinear coordinates. ..... 55.
§ 5. Differentiation of tensor fields in curvilinear coordinates. ..... 57.
§ 6. Transformation of the connection components under a change of a coordinate system. ..... 62.
§ 7. Concordance of metric and connection. Another formula for Christoffel symbols. ..... 63.
§ 8. Parallel translation. The equation of a straight line in curvilinear coordinates. ..... 65.
§ 9. Some calculations in polar, cylindrical, and spherical coordinates. ..... 70.
PREFACE
This book was planned as the third book in a series of three textbooks for
three basic geometric disciplines of university education. These are
– «Course of analytical geometry»¹;
– «Course of linear algebra and multidimensional geometry»;
– «Course of differential geometry».
This book is devoted to a first acquaintance with differential geometry.
Therefore it begins with the theory of curves in three-dimensional Euclidean space
E. Then the vectorial analysis in E is presented both in Cartesian and curvilinear
coordinates, and afterwards the theory of surfaces in the space E is considered.
The newly fashionable approach starting with the concept of a differentiable
manifold, in my opinion, is not suitable for an introduction to the subject. In
this way too much effort is spent on assimilating this rather abstract notion
and the rather special methods associated with it, while the essential content
of the subject is postponed for a later time. I think it is more important to make
faster acquaintance with other elements of modern geometry such as the vectorial
and tensorial analysis, covariant differentiation, and the theory of Riemannian
curvature. The restriction of the dimension to the cases n = 2 and n = 3 is
not an essential obstacle for this purpose. The further passage from surfaces to
higher-dimensional manifolds becomes more natural and simple.
I am grateful to D. N. Karbushev, R. R. Bakhitov, S. Yu. Ubiyko, D. I. Borisov
(http://borisovdi.narod.ru), and Yu. N. Polyakov for reading and correcting the
manuscript of the Russian edition of this book.
November, 1996;
December, 2004. R. A. Sharipov.
¹ Russian versions of the second and the third books were written in 1996, but the first book
is not yet written. I understand it as my duty to complete the series, but I have not had enough
time all these years since 1996.
CHAPTER I

CURVES IN THREE-DIMENSIONAL SPACE
We have one degree of freedom when choosing a point on the curve (1.1); our
choice is determined by the value of the numeric parameter t taken from some
interval, e. g. from the unit interval [0, 1] on the real axis R. Points of the curve
(1.1) are given by their radius-vectors r = r(t) whose components x1(t), x2(t),
x3(t) are functions of the parameter t.
The continuity of the curve (1.1) means that the functions x1(t), x2 (t), x3(t)
should be continuous. However, this condition is too weak. Among continuous
curves there are some instances which do not agree with our intuitive understand-
ing of a curve. In the course of mathematical analysis the Peano curve is often
considered as an example (see [2]). This is a continuous parametric curve on a
plane such that it is enclosed within a unit square, has no self intersections, and
passes through each point of this square. In order to avoid such unusual curves
the functions xi(t) in (1.1) are assumed to be continuously differentiable (of class C1)
functions or, at least, piecewise continuously differentiable functions.
Now let’s consider another method of defining a curve. An arbitrary point of
the space E is given by three arbitrary parameters x1, x2, x3 — its coordinates.
We can restrict the degree of arbitrariness by considering a set of points whose
coordinates x1, x2, x3 satisfy an equation of the form F(x1, x2, x3) = 0. A curve is
singled out by a system of two such equations:

    F(x1, x2, x3) = 0,        G(x1, x2, x3) = 0.        (1.3)

Setting x1 = t and solving these equations with respect to x2 and x3, we get two
functions x2(t) and x3(t).
Hence, the same curve can be given in vectorial-parametric form:

    r = r(t) = ( t, x2(t), x3(t) ).
    x1 − x1(t) = 0,        x1 − x1(t) = 0,
                                                (1.4)
    x2 − x2(t) = 0,        x3 − x3(t) = 0.
Excluding the parameter t from the first system of equations (1.4), we obtain some
functional relation for the two variables x1 and x2. We can write it as F(x1, x2) = 0.
Similarly, the second system reduces to the equation G(x1, x3) = 0. Both these
equations constitute a system, which is a special instance of (1.3):

    F(x1, x2) = 0,        G(x1, x3) = 0.
Copyright © Sharipov R. A., 1996, 2004.
At t = 0 both curves (1.7) pass through the origin and the tangent vectors of both
curves at the origin are equal to zero. However, the behavior of these curves near
the origin is quite different: the first curve has a beak-like fracture at the origin,
is only a necessary, but not a sufficient, condition for a parametric curve to have a
singularity at the point r(t). The opposite condition

    τ(t) ≠ 0        (1.9)

guarantees that the point r(t) is free of singularities. Therefore, those points of a
parametric curve where the condition (1.9) is fulfilled are called regular points.
Let’s study the problem of separating regular and singular points on a curve
given by a system of equations (1.3). Let A = (a1, a2, a3) be a point of such a
curve. The functions F(x1, x2, x3) and G(x1, x2, x3) in (1.3) are assumed to be
continuously differentiable. The matrix

    J = | ∂F/∂x1   ∂F/∂x2   ∂F/∂x3 |
        | ∂G/∂x1   ∂G/∂x2   ∂G/∂x3 |        (1.10)
is called the Jacobi matrix of the system (1.3). If the minor

    M1 = det | ∂F/∂x2   ∂F/∂x3 |
             | ∂G/∂x2   ∂G/∂x3 |

of the Jacobi matrix is nonzero, the equations (1.3) can be resolved with respect to
x2 and x3 in some neighborhood of the point A. Then we have three functions
x1 = t, x2 = x2(t), x3 = x3 (t) which determine the parametric representation of
our curve. This fact follows from the theorem on implicit functions (see [2]). Note
that the tangent vector of the curve in this parametrization

    τ = ( 1, ẋ2(t), ẋ3(t) ) ≠ 0

is nonzero because of its first component. This means that the condition M1 ≠ 0
is sufficient for the point A to be a regular point of a curve given by the system of
equations (1.3). Remember that the Jacobi matrix (1.10) has two other minors:
    M2 = det | ∂F/∂x3   ∂F/∂x1 | ,        M3 = det | ∂F/∂x1   ∂F/∂x2 | .
             | ∂G/∂x3   ∂G/∂x1 |                   | ∂G/∂x1   ∂G/∂x2 |
For both of them similar propositions hold. Therefore, we can formulate
the following theorem.
Theorem 1.1. A curve given by a system of equations (1.3) is regular at all
points, where the rank of its Jacobi matrix (1.10) is equal to 2.
A plane curve lying on the plane x3 = 0 can be defined by one equation
F (x1, x2) = 0. The second equation here reduces to x3 = 0. Therefore,
G(x1, x2, x3) = x3. The Jacobi matrix for the system (1.3) in this case is

    J = | ∂F/∂x1   ∂F/∂x2   0 |
        |    0        0     1 |        (1.11)
If rank J = 2, this means that at least one of two partial derivatives in the matrix
(1.11) is nonzero. These derivatives form the gradient vector for the function F :
    grad F = ( ∂F/∂x1 , ∂F/∂x2 ).
Here ϕ′(t̃) is the derivative of the function ϕ(t̃). The formula (2.1) is known
as the transformation rule for the tangent vector of a curve under a change of
parametrization.
A monotonically decreasing function ϕ(t̃) can also be used for the reparametrization
of curves. In this case ϕ(ã) = b and ϕ(b̃) = a, i. e. the beginning point and the
ending point of a curve are exchanged. Such reparametrizations are called changing
the orientation of a curve.
From the formula (2.1), we see that the tangent vector τ̃(t̃) can vanish at some
points of the curve due to the derivative ϕ′(t̃) even when τ(ϕ(t̃)) is nonzero.
Certainly, such points are not actually the singular points of a curve. In order
to exclude such formal singularities, only those reparametrizations of a curve
are admitted for which the function ϕ(t̃) is a strictly monotonic function, i. e.
ϕ′(t̃) > 0 or ϕ′(t̃) < 0.
The formula (2.1) means that the tangent vector of a curve at its regular point
depends not only on the geometry of the curve, but also on its parametrization.
However, the effect of the parametrization is not so big: it can only multiply the
vector τ by a numeric factor. Therefore, the natural question arises: is there some
preferable parametrization on a curve? The answer to this question is given by the length
integral.
Let’s consider a segment of a parametric curve of the smoothness class C 1 with
the parameter t running over the segment [a, b] of real numbers. Let

    a = t0 < t1 < . . . < tn = b

be a series of points breaking this segment into n parts. The points r(t0), . . . , r(tn)
on the curve define a polygonal line with n segments whose lengths are
Lk = |r(tk) − r(tk−1)|. Denote Δtk = tk − tk−1 and let ε be the maximum of the Δtk:

    ε = max Δtk ,    k = 1, . . . , n.
    L = lim_{ε→0} Σ_{k=1}^{n} Lk = ∫_a^b |τ(t)| dt.        (2.3)
It is natural to take the quantity L in (2.3) for the length of the curve AB. Note
that if we reparametrize a curve according to the formula (2.1), this leads to a
change of variable in the integral. Nevertheless, the value of the integral L remains
unchanged. Hence, the length of a curve is its geometric invariant, which does not
depend on the way it is parametrized.
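This invariance can be checked numerically. The following sketch (not from the text; the half circle and the substitution t = u² are arbitrary illustrative choices) evaluates the integral (2.3) for the same arc with two different parametrizations:

```python
import math

# Length of the half circle r(t) = (cos t, sin t), t in [0, pi], computed
# (a) with the parameter t, where |r'(t)| = 1, and
# (b) with t = u^2, u in [0, sqrt(pi)], where |dr/du| = |r'(u^2)| * 2u = 2u.

def length(tau_mod, a, b, n=10000):
    # midpoint rule for the integral of |tau| over [a, b]
    h = (b - a) / n
    return sum(tau_mod(a + (i + 0.5) * h) for i in range(n)) * h

L1 = length(lambda t: 1.0, 0.0, math.pi)
L2 = length(lambda u: 2.0 * u, 0.0, math.sqrt(math.pi))
print(L1, L2)  # both approximate pi
```

Both values agree with the exact length π, illustrating that the integral (2.3) does not depend on the choice of parametrization.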
The length integral (2.3) defines the preferable way for parameterizing a curve
in the Euclidean space E. Let’s denote by s(t) an antiderivative of the function
|τ(t)|:
    s(t) = ∫_{t0}^{t} |τ(t)| dt.        (2.4)
Definition 2.1. The quantity s determined by the integral (2.4) is called the
natural parameter of a curve in the Euclidean space E.
Note that once the reference point r(t0) and some direction (orientation) on a
curve have been chosen, the value of the natural parameter depends only on the
point of the curve. The change of s for −s then means changing the orientation of
the curve for the opposite one.
Let’s differentiate the integral (2.4) with respect to its upper limit t. As a result
we obtain the following relationship:
    ds/dt = |τ(t)|.        (2.5)
Now, using the formula (2.5), we can calculate the tangent vector of a curve in its
natural parametrization, i. e. when s is used instead of t as a parameter:
    dr/ds = dr/dt · dt/ds = (dr/dt) / (ds/dt) = τ / |τ| .        (2.6)
From the formula (2.6), we see that the tangent vector of a curve in its natural
parametrization is a unit vector at all regular points. At singular points this vector
is not defined at all.
    τ(s) = dr/ds .        (3.1)
Let’s differentiate the vector τ (s) with respect to s and then apply the following
lemma to its derivative τ′(s).
Lemma 3.1. The derivative of a vector of a constant length is a vector perpen-
dicular to the original one.
Proof. In order to prove the lemma we choose some standard rectangular
Cartesian coordinate system in E. Then
    d/ds |τ(s)|² = d/ds ( (τ1)² + (τ2)² + (τ3)² ) =
        = 2 τ1 (τ1)′ + 2 τ2 (τ2)′ + 2 τ3 (τ3)′ = 0.
One can easily see that this relationship is equivalent to (τ(s) | τ′(s)) = 0. Hence,
τ(s) ⊥ τ′(s). The lemma is proved.
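Lemma 3.1 is easy to illustrate numerically; the unit vector v(t) = (cos t, sin t, 0) below is an arbitrary example of a vector function of constant length:

```python
import math

t, h = 0.9, 1e-6
v = (math.cos(t), math.sin(t), 0.0)
# central finite difference approximation of the derivative v'(t)
dv = ((math.cos(t + h) - math.cos(t - h)) / (2 * h),
      (math.sin(t + h) - math.sin(t - h)) / (2 * h),
      0.0)
dot = sum(a * b for a, b in zip(v, dv))  # (v | v') should vanish
```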
Due to the above lemma the vector τ′(s) is perpendicular to the unit vector
τ(s). If the length of τ′(s) is nonzero, one can represent it as

    τ′(s) = k(s) · n(s),        (3.2)

where k(s) = |τ′(s)| and |n(s)| = 1. The scalar quantity k(s) = |τ′(s)| in formula
(3.2) is called the curvature of a curve, while the unit vector n(s) is called its
primary normal vector or simply the normal vector of a curve at the point r(s).
The unit vectors τ(s) and n(s) are orthogonal to each other. We can complement
them by the third unit vector b(s) so that τ, n, b become a right triple¹:

    b(s) = τ(s) × n(s).        (3.3)
The vector b(s) defined by the formula (3.3) is called the secondary normal
vector or the binormal vector of a curve. Vectors τ (s), n(s), b(s) compose an
orthonormal right basis attached to the point r(s).
Bases, which are attached to some points, are usually called frames. One should
distinguish frames from coordinate systems. Cartesian coordinate systems are also
defined by choosing some point (an origin) and some basis. However, coordinate
systems are used for describing the points of the space through their coordinates.
The purpose of frames is different. They are used to expand vectors which,
by their nature, are attached to the same points as the vectors of the frame.
Isolated frames are rarely considered; frames usually arise within families
of frames: typically, at each point of some set (a curve, a surface, or even the whole
space) there arises some frame attached to this point. The frame τ(s), n(s), b(s)
is an example of such a frame. It is called the Frenet frame of a curve. This is a
moving frame: in a typical situation the vectors of this frame change when we move
the attachment point along the curve.
Let's consider the derivative n′(s). This vector attached to the point r(s) can
be expanded in the Frenet frame at that point. Due to the lemma 3.1 the vector
n′(s) is orthogonal to the vector n(s). Therefore its expansion has the form

    n′(s) = α · τ(s) + κ(s) · b(s).        (3.4)
The quantity α in formula (3.4) can be expressed through the curvature of the
¹ A non-coplanar ordered triple of vectors a1, a2, a3 is called a right triple if, upon moving
these vectors to a common origin, when looking from the end of the third vector a3, we see the
shortest rotation from a1 to a2 as a counterclockwise rotation.
Hence, for the expansion of the vector b′(s) in the Frenet frame we get
Let’s gather the equations (3.2), (3.6), and (3.7) into a system:
    τ′(s) = k(s) · n(s),
    n′(s) = −k(s) · τ(s) + κ(s) · b(s),        (3.8)
    b′(s) = −κ(s) · n(s).
The equations (3.8) relate the vectors τ (s), n(s), b(s) and their derivatives with
respect to s. These differential equations describe the dynamics of the Frenet
frame. They are called the Frenet equations. The equations (3.8) should be
complemented with the equation (3.1) which describes the dynamics of the point
r(s) (the point to which the vectors of the Frenet frame are attached).
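As a worked example (the helix is not from the text), the first Frenet equation can be checked numerically for the helix r(t) = (cos t, sin t, t), whose natural parameter is s = √2·t and whose curvature is k = 1/2:

```python
import math

SQRT2 = math.sqrt(2.0)

def r(s):
    # the helix in natural parametrization: t = s / sqrt(2)
    t = s / SQRT2
    return (math.cos(t), math.sin(t), t)

def diff(f, s, h=1e-5):
    # central finite difference for d/ds of a vector function
    return tuple((a - b) / (2 * h) for a, b in zip(f(s + h), f(s - h)))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

s0 = 1.0
tau = diff(r, s0)                         # formula (3.1): tau = dr/ds
tau_prime = diff(lambda s: diff(r, s), s0)
k = norm(tau_prime)                       # first equation of (3.8): |tau'| = k
```

Here |τ(s)| comes out equal to 1 (the tangent vector is unit in natural parametrization) and k is close to 1/2.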
Let’s consider the circle of the radius R with the center at the origin lying in the
coordinate plane x3 = 0. It is convenient to define this circle as follows:
    r(s) = ( R cos(s/R), R sin(s/R), 0 ),        (4.2)
here s is the natural parameter. Substituting (4.2) into (3.1) and then into (3.2),
we find the unit tangent vector τ (s) and the primary normal vector n(s):
    τ(s) = ( −sin(s/R), cos(s/R), 0 ),        n(s) = ( −cos(s/R), −sin(s/R), 0 ).        (4.3)
Now, substituting (4.3) into the formula (4.1), we calculate the curvature of a
circle: k(s) = 1/R = const. The curvature k of a circle is constant, and the inverse
curvature 1/k coincides with its radius.
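This computation is easy to repeat numerically; the radius R = 2.5 below is an arbitrary example value:

```python
import math

R = 2.5

def tau(s):
    # the unit tangent vector (4.3) of the circle (4.2)
    return (-math.sin(s / R), math.cos(s / R), 0.0)

def dtau_ds(s, h=1e-6):
    # central finite difference for tau'(s)
    return tuple((a - b) / (2 * h) for a, b in zip(tau(s + h), tau(s - h)))

k = math.sqrt(sum(c * c for c in dtau_ds(1.0)))  # k(s) = |tau'(s)|
print(k)  # approximately 1/R = 0.4
```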
Let's make a step of length 1/k from the point r(s) on a circle in the
direction of its primary normal vector n(s). It is easy to see that we come to the
center of the circle. Let's make the same step for an arbitrary spacial curve. As a
result of this step we come from the initial point r(s) on the curve to the point
with the following radius-vector:
    ρ(s) = r(s) + n(s)/k(s) .        (4.4)
Certainly, this can be done only for those points of a curve where k(s) ≠ 0. The
analogy with a circle induces the following terminology: the quantity R(s) =
1/k(s) is called the curvature radius, the point with the radius-vector (4.4) is
called the curvature center of a curve at the point r(s).
In the case of an arbitrary curve its curvature center is not a fixed point. When
parameter s is varied, the curvature center of the curve moves in the space drawing
another curve, which is called the evolute of the original curve. The formula (4.4)
is a vectorial-parametric equation of the evolute. However, note that the natural
parameter s of the original curve is not a natural parameter for its evolute.
Suppose that some spacial curve r(t) is given. A curve r̃(s̃) whose evolute ρ̃(s̃)
coincides with the curve r(t) is called an evolvent of the curve r(t). The problem
of constructing the evolute of a given curve is solved by the formula (4.4). The
inverse problem of constructing an evolvent for a given curve appears to be more
complicated. It is effectively solved only in the case of a planar curve.
Let r(s) be a vector-function defining some planar curve in natural parametri-
zation and let r̃(s̃) be its evolvent in its own natural parametrization. The two
natural parameters s and s̃ are related to each other by some function ϕ in the form
of the relationship s̃ = ϕ(s). Let ψ = ϕ⁻¹ be the inverse function for ϕ, then
s = ψ(s̃). Using the formula (4.4), now we obtain
    r(ψ(s̃)) = r̃(s̃) + ñ(s̃)/k̃(s̃) .        (4.5)
Let's differentiate the relationship (4.5) with respect to s̃ and then apply the
formula (3.1) and the Frenet equations written in form of (4.1):

    ψ′(s̃) · τ(ψ(s̃)) = d/ds̃ ( 1/k̃(s̃) ) · ñ(s̃).
Here τ (ψ(s̃)) and ñ(s̃) both are unit vectors which are collinear due to the above
relationship. Hence, we have the following two equalities:
    ñ(s̃) = ±τ(ψ(s̃)),        ψ′(s̃) = ± d/ds̃ ( 1/k̃(s̃) ).        (4.6)
Integrating the second equality (4.6), we obtain

    1/k̃(s̃) = ±(ψ(s̃) − C).        (4.7)
Here C is a constant of integration. Let’s combine (4.7) with the first relationship
(4.6) and substitute it into the formula (4.5):
Then we substitute s̃ = ϕ(s) into the above formula and denote ρ(s) = r̃(ϕ(s)).
As a result we obtain the following equality:
The formula (4.8) is a parametric equation for the evolvent of a planar curve r(s).
The presence of an arbitrary constant in the equation (4.8) means that the evolvent
is not unique: each curve has a family of evolvents. This fact is valid for non-planar
curves as well. However, we should emphasize that the formula (4.8) cannot be
applied to general spacial curves.
The time derivative of the velocity vector is called the acceleration vector:
    a(t) = dv/dt = v̇(t) = ( a1(t), a2(t), a3(t) ).        (5.2)
Taking into account the formula (5.4) and the first Frenet equation, these expres-
sions can be rewritten as
The second formula (5.5) determines the expansion of the acceleration vector into
two components. The first component is tangent to the trajectory; it is called the
tangential acceleration. The second component is perpendicular to the trajectory
and directed toward the curvature center; it is called the centripetal acceleration.
It is important to note that the centripetal acceleration is determined by the
modulus of the velocity and by the geometry of the trajectory (by its curvature).
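For uniform circular motion this decomposition can be checked directly: the tangential acceleration vanishes, and the modulus of the acceleration equals |v|²·k = |v|²/R. A numerical sketch (R and ω are arbitrary example values):

```python
import math

R, w = 2.0, 3.0  # radius and angular velocity of the motion

def r(t):
    return (R * math.cos(w * t), R * math.sin(w * t), 0.0)

def diff(f, t, h=1e-5):
    # central finite difference for d/dt of a vector function
    return tuple((a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h)))

t0 = 0.4
v = diff(r, t0)                      # velocity vector
a = diff(lambda t: diff(r, t), t0)   # acceleration vector (5.2)
speed = math.sqrt(sum(c * c for c in v))
a_mod = math.sqrt(sum(c * c for c in a))
# speed = R*w, and a_mod = speed**2 / R, i.e. |v|^2 times the curvature 1/R
```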
CHAPTER II

ELEMENTS OF VECTORIAL AND TENSORIAL ANALYSIS
where x = (x1, x2, x3) are the components of the radius-vector of an arbitrary
point of the space E. Writing F(x) instead of F(x1 , x2, x3), we make all formulas
more compact.
The vectorial nature of the field F reveals itself when we replace one coordinate
system by another. Let (1.1) be the coordinates of a vector field in some
coordinate system O, e1, e2, e3 and let Õ, ẽ1, ẽ2, ẽ3 be some other coordinate
system. The transformation rule for the components of a vectorial field under a
change of a Cartesian coordinate system is written as follows:
    F^i(x) = Σ_{j=1}^{3} S^i_j F̃^j(x̃),
    x^i = Σ_{j=1}^{3} S^i_j x̃^j + a^i.        (1.2)
Here S^i_j are the components of the transition matrix relating the basis e1, e2, e3
with the new basis ẽ1, ẽ2, ẽ3, while a1, a2, a3 are the components of the vector
OÕ in the basis e1, e2, e3.
The formula (1.2) combines the transformation rule for the components of a
vector under a change of a basis and the transformation rule for the coordinates of
a point under a change of a Cartesian coordinate system (see [1]). The arguments
x and x̃ beside the vector components F^i and F̃^i in (1.2) are an important novelty
as compared to [1]. It is due to the fact that here we deal with vector fields, not
with separate vectors.
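The first formula in (1.2) can be illustrated with a concrete transition matrix; the rotation S below and the components F̃ are arbitrary example values, not taken from the text:

```python
import math

phi = 0.3
# transition matrix of a rotation about the third coordinate axis
S = [[math.cos(phi), -math.sin(phi), 0.0],
     [math.sin(phi),  math.cos(phi), 0.0],
     [0.0,            0.0,           1.0]]

F_tilde = [2.0, -1.0, 0.5]  # components of the field in the new basis

# F^i = sum_j S^i_j * F~^j  (the first formula in (1.2))
F = [sum(S[i][j] * F_tilde[j] for j in range(3)) for i in range(3)]

length_old = math.sqrt(sum(c * c for c in F))
length_new = math.sqrt(sum(c * c for c in F_tilde))
# for an orthogonal transition matrix the length of the vector is preserved
```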
Not only vectors can be associated with the points of the space E. In linear
algebra along with vectors one considers covectors, linear operators, bilinear forms
and quadratic forms. Associating some covector with each point of E, we get a
covector field. If we associate some linear operator with each point of the space,
we get an operator field. And finally, associating a bilinear (quadratic) form with
each point of E, we obtain a field of bilinear (quadratic) forms. Any choice of a
Cartesian coordinate system O, e1, e2, e3 assumes the choice of a basis e1 , e2, e3,
while the basis defines the numeric representations for all of the above objects:
for a covector this is the list of its components, for linear operators, bilinear
and quadratic forms these are their matrices. Therefore defining a covector field
F is equivalent to defining three functions F1(x), F2 (x), F3(x) that transform
according to the following rule under a change of a coordinate system:
    F_i(x) = Σ_{j=1}^{3} T^j_i F̃_j(x̃),
    x^i = Σ_{j=1}^{3} S^i_j x̃^j + a^i.        (1.3)
In the case of an operator field F the transformation formula for the components of
its matrix under a change of a coordinate system has the following form:
    F^i_j(x) = Σ_{p=1}^{3} Σ_{q=1}^{3} S^i_p T^q_j F̃^p_q(x̃),
    x^i = Σ_{p=1}^{3} S^i_p x̃^p + a^i.        (1.4)
For a field of bilinear (quadratic) forms F the transformation rule for its compo-
nents under a change of Cartesian coordinates looks like
    F_{ij}(x) = Σ_{p=1}^{3} Σ_{q=1}^{3} T^p_i T^q_j F̃_{pq}(x̃),
    x^i = Σ_{p=1}^{3} S^i_p x̃^p + a^i.        (1.5)
Each of the relationships (1.2), (1.3), (1.4), and (1.5) consists of two formulas.
The first formula relates the components of a field, which are the functions of
two different sets of arguments x = (x1, x2, x3) and x̃ = (x̃1 , x̃2, x̃3). The second
formula establishes the functional dependence of these two sets of arguments.
The first formulas in (1.2), (1.3), and (1.4) are different. However, one can see
some regular pattern in them. The number of summation signs and the number
of summation indices in their right hand sides are determined by the number of
indices in the components of a field F. The total number of transition matrices
used in the right hand sides of these formulas is also determined by the number of
indices in the components of F. Thus, each upper index of F implies the usage of
the transition matrix S, while each lower index of F means that the inverse matrix
T = S −1 is used.
The number of indices of the field F in the above examples doesn’t exceed
two. However, the regular pattern detected in the transformation rules for the
components of F can be generalized for the case of an arbitrary number of indices:
    F^{i1 … ir}_{j1 … js} = Σ_{p1 … pr} Σ_{q1 … qs} S^{i1}_{p1} … S^{ir}_{pr} T^{q1}_{j1} … T^{qs}_{js} F̃^{p1 … pr}_{q1 … qs}.        (1.6)
The formula (1.6) comprises the multiple summation with respect to (r +s) indices
p1 , . . . , pr and q1, . . . , qs each of which runs from 1 to 3.
Definition 1.1. A tensor of the type (r, s) is a geometric object F whose
components in each basis are enumerated by (r + s) indices and obey the transfor-
mation rule (1.6) under a change of basis.
Lower indices in the components of a tensor are called covariant indices, upper
indices are called contravariant indices respectively. Generalizing the concept of
a vector field, we can attach some tensor of the type (r, s) to each point of the
space. As a result we get the concept of a tensor field. This concept is convenient
because it describes in the unified way any vectorial and covectorial fields, operator
fields, and arbitrary fields of bilinear (quadratic) forms. Vectorial fields are fields
of the type (1, 0), covectorial fields have the type (0, 1), operator fields are of the
type (1, 1), and, finally, any field of bilinear (quadratic) forms is of the type (0, 2).
Tensor fields of some other types are also meaningful. In Chapter IV we consider
the curvature field with four indices.
Passing from separate tensors to tensor fields, the components in formula (1.6)
acquire arguments. Now this formula should be written as a couple of two
relationships similar to (1.2), (1.3), (1.4), or (1.5):
    F^{i1 … ir}_{j1 … js}(x) = Σ_{p1 … pr} Σ_{q1 … qs} S^{i1}_{p1} … S^{ir}_{pr} T^{q1}_{j1} … T^{qs}_{js} F̃^{p1 … pr}_{q1 … qs}(x̃),
    x^i = Σ_{j=1}^{3} S^i_j x̃^j + a^i.        (1.7)
The formula (1.7) expresses the transformation rule for the components of a
tensorial field of the type (r, s) under a change of Cartesian coordinates.
The simplest type of tensorial fields is the type (0, 0). Such fields are
called scalar fields. Their components have no indices at all, i. e. they are numeric
functions in the space E.
If we denote by c̃pq(x̃) the product of ãp(x̃) and b̃q(x̃), then we find that the
quantities cij(x) and c̃pq(x̃) are related by the formula (1.5). This means that taking
two covectorial fields one can compose a field of bilinear forms by multiplying the
components of these two covectorial fields in an arbitrary Cartesian coordinate
system. This operation is called the tensor product of the fields a and b. Its result
is denoted as c = a ⊗ b.
The above trick of multiplying components can be applied to an arbitrary pair
of tensor fields. Suppose we have a tensorial field A of the type (r, s) and another
tensorial field B of the type (m, n). Denote
    C^{i1 … ir ir+1 … ir+m}_{j1 … js js+1 … js+n}(x) = A^{i1 … ir}_{j1 … js}(x) B^{ir+1 … ir+m}_{js+1 … js+n}(x).        (2.2)
Definition 2.1. The tensor field C of the type (r+m, s+n) whose components
are determined by the formula (2.2) is called the tensor product of the fields A
and B. It is denoted C = A ⊗ B.
This definition should be checked for correctness. We should make sure that
the components of the field C are transformed according to the rule (1.7) when we
pass from one Cartesian coordinate system to another. The transformation rule
(1.7), when applied to the fields A and B, yields
    A^{i1 … ir}_{j1 … js} = Σ_{p, q} S^{i1}_{p1} … S^{ir}_{pr} T^{q1}_{j1} … T^{qs}_{js} Ã^{p1 … pr}_{q1 … qs},

    B^{ir+1 … ir+m}_{js+1 … js+n} = Σ_{p, q} S^{ir+1}_{pr+1} … S^{ir+m}_{pr+m} T^{qs+1}_{js+1} … T^{qs+n}_{js+n} B̃^{pr+1 … pr+m}_{qs+1 … qs+n}.
The summation in the right hand sides of these formulas is carried out with respect
to each double index which enters the formula twice: once as an upper index
and once as a lower index. Multiplying these two formulas, we get exactly the
transformation rule (1.7) for the components of C.
Theorem 2.1. The operation of tensor product is associative, this means that
(A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).
Proof. Let A be a tensor of the type (r, s), let B be a tensor of the type
(m, n), and let C be a tensor of the type (p, q). Then one can write the following
obvious numeric equality for their components:
    ( A^{i1 … ir}_{j1 … js} B^{ir+1 … ir+m}_{js+1 … js+n} ) C^{ir+m+1 … ir+m+p}_{js+n+1 … js+n+q} =
                                                                        (2.3)
        = A^{i1 … ir}_{j1 … js} ( B^{ir+1 … ir+m}_{js+1 … js+n} C^{ir+m+1 … ir+m+p}_{js+n+1 … js+n+q} ).
As we see in (2.3), the associativity of the tensor product follows from the
associativity of the multiplication of numbers.
The tensor product is not commutative. One can easily construct an example
illustrating this fact. Let’s consider two covectorial fields a and b with the
following components in some coordinate system: a = (1, 0, 0) and b = (0, 1, 0).
Denote c = a ⊗ b and d = b ⊗ a. Then for c12 and d12 with the use of the formula
(2.2) we derive: c12 = 1 and d12 = 0. Hence, c ≠ d and a ⊗ b ≠ b ⊗ a.
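The covector example above is small enough to compute directly from formula (2.2):

```python
a = (1, 0, 0)  # components of the covectorial field a
b = (0, 1, 0)  # components of the covectorial field b

# c_ij = a_i * b_j and d_ij = b_i * a_j, as in formula (2.2)
c = [[a[i] * b[j] for j in range(3)] for i in range(3)]
d = [[b[i] * a[j] for j in range(3)] for i in range(3)]

print(c[0][1], d[0][1])  # c12 = 1, d12 = 0, hence c != d
```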
Let’s consider an operator field F. Its components Fji(x) are the components of
the operator F(x) in the basis e1, e2, e3 . It is known that the trace of the matrix
Fji(x) is a scalar invariant of the operator F(x) (see [1]). Therefore, the formula
    f(x) = tr F(x) = Σ_{i=1}^{3} F^i_i(x)        (2.4)
determines a scalar field f(x) in the space E. The sum similar to (2.4) can be
written for an arbitrary tensorial field F with at least one upper index and at least
one lower index in its components:
    H^{i1 … ir−1}_{j1 … js−1}(x) = Σ_{k=1}^{3} F^{i1 … im−1 k im … ir−1}_{j1 … jn−1 k jn … js−1}(x).        (2.5)
In the formula (2.5) the summation index k is placed in the m-th upper position
and in the n-th lower position. The succeeding indices im, …, ir−1 and jn, …, js−1
in the components of the field F are shifted one position to the right as
compared to their positions in the left hand side of the equality (2.5).
Definition 2.2. The tensor field H whose components are calculated according
to the formula (2.5) from the components of the tensor field F is called the
contraction of the field F with respect to m-th and n-th indices.
Like the definition 2.1, this definition should be tested for correctness. Let’s
verify that the components of the field H are transformed according to the
formula (1.7). For this purpose we write the transformation rule (1.7) applied to
the components of the field F in the right hand side of the formula (2.5):
    F^{i1 … im−1 k im … ir−1}_{j1 … jn−1 k jn … js−1} = Σ_{α, β} Σ_{p1 … pr−1} Σ_{q1 … qs−1} S^{i1}_{p1} … S^{im−1}_{pm−1} S^k_α S^{im}_{pm} … S^{ir−1}_{pr−1} ×
        × T^{q1}_{j1} … T^{qn−1}_{jn−1} T^β_k T^{qn}_{jn} … T^{qs−1}_{js−1} F̃^{p1 … pm−1 α pm … pr−1}_{q1 … qn−1 β qn … qs−1}.
In order to derive this formula from (1.7) we substitute the index k into the m-th
and n-th positions, then we shift all succeeding indices one position to the right.
In order to have more similarity of left and right hand sides of this formula we
shift summation indices as well. It is clear that such redesignation of summation
indices does not change the value of the sum.
Now in order to complete the contraction procedure we should produce the
summation with respect to the index k. In the right hand side of the formula the
sum over k can be calculated explicitly due to the formula
$$ \sum_{k=1}^{3} S^{k}_{\alpha}\, T^{\beta}_{k} = \delta^{\beta}_{\alpha}, \eqno(2.6) $$
which means T = S −1 . Due to (2.6) upon calculating the sum over k one can
calculate the sums over β and α. Therein we take into account that
$$ \sum_{\alpha=1}^{3} \tilde F^{p_1 \ldots p_{m-1}\, \alpha\, p_m \ldots p_{r-1}}_{q_1 \ldots q_{n-1}\, \alpha\, q_n \ldots q_{s-1}} = \tilde H^{p_1 \ldots p_{r-1}}_{q_1 \ldots q_{s-1}}. $$
As a result we get a formula which exactly coincides with the transformation rule (1.7)
written with respect to the components of the field H. The correctness of the
definition 2.2 is proved.
The operation of contraction introduced by the definition 2.2 implies that the
positions of two indices are specified. One of these indices should be an upper
index, the other index should be a lower index. The letter C is used as a
contraction sign. The formula (2.5) then is abbreviated as follows:
$$ H = C_{m,n}(F) = C(F). $$
The numbers m and n are often omitted since they are usually known from the
context.
A tensorial field of the type (1, 1) can be contracted in the unique way. For a
tensorial field F of the type (2, 2) we have two ways of contracting. As a result
of these two contractions, in general, we obtain two different tensorial fields of the
type (1, 1). These tensorial fields can be contracted again. As a result we obtain
the complete contractions of the field F, they are scalar fields. A field of the type
(2, 2) can have two complete contractions. In the general case a field of the type
(n, n) has n! complete contractions.
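For readers who wish to experiment, the contractions just described are easy to reproduce numerically. Below is a small sketch in which `numpy.einsum` plays the role of the sums in (2.5); the array F and the particular index choices are illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# components of a field of the type (2,2) at one point: F[i1, i2, j1, j2]
F = rng.standard_normal((3, 3, 3, 3))

# two of the possible contractions of a (2,2) tensor give (1,1) tensors:
C1 = np.einsum('ikjk->ij', F)   # contract 2nd upper index with 2nd lower index
C2 = np.einsum('ikkj->ij', F)   # contract 2nd upper index with 1st lower index

# contracting once more yields complete contractions -- scalar values:
s1 = np.einsum('ii->', C1)
s2 = np.einsum('ii->', C2)

# they agree with the direct complete contractions of F:
assert np.allclose(s1, np.einsum('ikik->', F))
assert np.allclose(s2, np.einsum('ikki->', F))
# in general the two complete contractions are different numbers:
assert not np.isclose(s1, s2)
```

The last assertion illustrates that, as stated above, the two contraction paths generally lead to different scalar fields.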
The operations of tensor product and contraction often arise in a natural way
without any special intension. For example, suppose that we are given a vector
field v and a covector field w in the space E. This means that at each point we
have a vector and a covector attached to this point. By calculating the scalar
products of these vectors and covectors we get a scalar field f = hw | vi. In
coordinate form such a scalar field is calculated by means of the formula
$$ f = \sum_{i=1}^{3} w_i\, v^i. \eqno(2.7) $$
From the formula (2.7), it is clear that f = C(w ⊗ v). The scalar product
f = hw | vi is the contraction of the tensor product of the fields w and v. In a
similar way, if an operator field F and a vector field v are given, then applying F
to v we get another vector field u = F v, where
$$ u^i = \sum_{j=1}^{3} F^i_j\, v^j. $$
In this case we can write u = C(F ⊗ v), although this writing cannot be uniquely
interpreted. Apart from u = F v, it can mean the product of v by the trace of the
operator field F.
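The ambiguity of the notation u = C(F ⊗ v) can be made explicit in coordinates; the following sketch, with illustrative numeric data, computes both possible interpretations.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))   # operator field at a point: F[i, j] = F^i_j
v = rng.standard_normal(3)        # vector field at the same point

# the tensor product F ⊗ v has components F^i_j v^k and admits two contractions:
u1 = np.einsum('ij,j->i', F, v)   # contract j with k: u = F v
u2 = np.einsum('ii,k->k', F, v)   # contract i with j: u = tr(F) * v

assert np.allclose(u1, F @ v)
assert np.allclose(u2, np.trace(F) * v)
```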
§ 3. THE ALGEBRA OF TENSOR FIELDS.

Let A and B be two tensorial fields of the type (r, s). At each point their components can be added:
$$ C^{i_1 \ldots i_r}_{j_1 \ldots j_s} = A^{i_1 \ldots i_r}_{j_1 \ldots j_s} + B^{i_1 \ldots i_r}_{j_1 \ldots j_s}. \eqno(3.1) $$
Definition 3.1. The tensor field C of the type (r, s) whose components are
calculated according to the formula (3.1) is called the sum of the fields A and B
of the type (r, s).
One can easily verify the transformation rule (1.7) for the components of the
field C. It is sufficient to write this rule for the components of A and B and then
add the two formulas. Therefore, the definition 3.1 is consistent.
The sum of tensor fields is commutative and associative. This fact follows
from the commutativity and associativity of the addition of numbers applied to
each component.
Let's denote by T_{(r,s)} the set of tensor fields of the type (r, s). The tensor
multiplication introduced by the definition 2.1 is the following binary operation:
$$ \otimes :\ T_{(r,s)} \times T_{(m,n)} \to T_{(r+m,\, s+n)}. \eqno(3.2) $$
The operations of tensor addition and tensor multiplication (3.2) are related to
each other by the distributivity laws:
$$ (A + B) \otimes C = A \otimes C + B \otimes C, \qquad C \otimes (A + B) = C \otimes A + C \otimes B. \eqno(3.3) $$
The distributivity laws (3.3) follow from the distributivity of the multiplication of
numbers. Their proof is given by the following obvious formulas:
$$ \bigl(A^{i_1 \ldots i_r}_{j_1 \ldots j_s} + B^{i_1 \ldots i_r}_{j_1 \ldots j_s}\bigr)\, C^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}} = A^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, C^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}} + B^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, C^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}}, $$
$$ C^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, \bigl(A^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}} + B^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}}\bigr) = C^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, A^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}} + C^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, B^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}}. $$
Due to (3.2) the set of scalar fields K = T(0,0) (which is simply the set
of numeric functions) is closed with respect to tensor multiplication ⊗, which
coincides here with the regular multiplication of numeric functions. The set K is
a commutative ring (see [3]) with unity. The constant function equal to 1 at
each point of the space E plays the role of the unit element in this ring.
Let’s set m = n = 0 in the formula (3.2). In this case it describes the
multiplication of tensor fields from T(r,s) by numeric functions from the ring
K. The tensor product of a field A and a scalar field ξ ∈ K is commutative:
A ⊗ ξ = ξ ⊗ A. Therefore, the multiplication of tensor fields by numeric functions
is denoted by the standard multiplication sign: ξ ⊗ A = ξ · A. The operation of
addition and the operation of multiplication by scalar fields in the set T(r,s) possess
the following properties:
(1) A + B = B + A;
(2) (A + B) + C = A + (B + C);
(3) there exists a field 0 ∈ T(r,s) such that A + 0 = A for an arbitrary tensor
field A ∈ T(r,s) ;
(4) for any tensor field A ∈ T(r,s) there exists an opposite field A0 such that
A + A0 = 0;
(5) ξ · (A + B) = ξ · A + ξ · B for any function ξ from the ring K and for any
two fields A, B ∈ T(r,s) ;
(6) (ξ + ζ) · A = ξ · A + ζ · A for any tensor field A ∈ T(r,s) and for any two
functions ξ, ζ ∈ K;
(7) (ξ ζ)·A = ξ ·(ζ ·A) for any tensor field A ∈ T(r,s) and for any two functions
ξ, ζ ∈ K;
(8) 1 · A = A for any field A ∈ T(r,s) .
The tensor field with identically zero components plays the role of zero element
in the property (3). The field A0 in the property (4) is defined as a field whose
components are obtained from the components of A by changing the sign.
The properties (1)-(8) listed above almost literally coincide with the axioms of
a linear vector space (see [1]). The only discrepancy is that the set of functions K
is a ring, not a numeric field as it should be in the case of a linear vector space.
The sets defined by the axioms (1)-(8) for some ring K are called modules over
the ring K or K-modules. Thus, each of the sets T(r,s) is a module over the ring of
scalar functions K = T(0,0) .
The ring K = T(0,0) comprises the subset of constant functions which is
naturally identified with the set of real numbers R. Therefore the set of tensor
fields T(r,s) in the space E is a linear vector space over the field of real numbers R.
If r ≥ 1 and s ≥ 1, then in the set T_{(r,s)} the operations of contraction with
respect to various pairs of indices are defined. These operations are linear, i.e. the
following relationships are fulfilled:
$$ C(A + B) = C(A) + C(B), \qquad C(\xi \cdot A) = \xi \cdot C(A). \eqno(3.4) $$
The relationships (3.4) are proved by direct calculations in coordinates. For the
field C = A + B from (2.5) we derive
$$ H^{i_1 \ldots i_{r-1}}_{j_1 \ldots j_{s-1}} = \sum_{k=1}^{3} C^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}} = \sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}} + \sum_{k=1}^{3} B^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}}. $$
This equality proves the first relationship (3.4). In order to prove the second one
we take C = ξ · A. Then the second relationship (3.4) is derived as a result of the
following calculations:
$$ H^{i_1 \ldots i_{r-1}}_{j_1 \ldots j_{s-1}} = \sum_{k=1}^{3} C^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}} = \sum_{k=1}^{3} \xi\, A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}} = \xi \sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}}. $$
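The linearity relationships (3.4) can also be checked numerically. Here is a sketch for a field of the type (1, 2), contracted over its upper index and its first lower index; the choice of type, indices, and data is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))   # components A^i_{jk} of a (1,2) field at a point
B = rng.standard_normal((3, 3, 3))
xi = 1.7                             # value of a scalar field at that point

def contract(T):
    # contraction over the upper index and the first lower index
    return np.einsum('iik->k', T)

# the relationships (3.4):
assert np.allclose(contract(A + B), contract(A) + contract(B))
assert np.allclose(contract(xi * A), xi * contract(A))
```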
The tensor product of two tensors from T(r,s) belongs to T(r,s) only if r = s = 0
(see formula (3.2)). In all other cases one cannot perform the tensor multiplication
staying within one K-module T(r,s) . In order to avoid this restriction the following
direct sum is usually considered:
$$ T = \bigoplus_{r=0}^{\infty} \bigoplus_{s=0}^{\infty} T_{(r,s)}. \eqno(3.5) $$
The set (3.5) consists of finite formal sums A^{(1)} + ... + A^{(k)}, where each summand
belongs to some of the K-modules T_{(r,s)}. The operation of tensor product is
extended to the K-module T by means of the formula
$$ \bigl(A^{(1)} + \ldots + A^{(k)}\bigr) \otimes \bigl(B^{(1)} + \ldots + B^{(q)}\bigr) = \sum_{i=1}^{k} \sum_{j=1}^{q} A^{(i)} \otimes B^{(j)}. $$
Let’s rename the summation indices pm and pn in this formula: let’s denote pm
by pn and vice versa. As a result the S matrices will be arranged in the order
of increasing numbers of their upper and lower indices. However, the indices pm
and pn in Ãpq11...
... pr
qs will exchange their positions. It is clear that the procedure of
renaming summation indices does not change the value of the sum:
$$ B^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \sum_{\substack{p_1 \ldots p_r \\ q_1 \ldots q_s}} S^{i_1}_{p_1} \ldots S^{i_r}_{p_r}\; T^{q_1}_{j_1} \ldots T^{q_s}_{j_s}\; \tilde A^{p_1 \ldots\, p_n \ldots\, p_m \ldots\, p_r}_{q_1 \ldots\, q_s}, $$
where the numbers σ(1), ..., σ(r) and τ(1), ..., τ(s) are obtained by applying σ
and τ to the numbers 1, ..., r and 1, ..., s.
$$ B = \frac{1}{|G|} \sum_{\varepsilon \in G} \varepsilon(A), \eqno(4.5) $$
$$ B = \frac{1}{|G|} \sum_{\varepsilon \in G} (-1)^{\varepsilon}\, \varepsilon(A). \eqno(4.6) $$
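The averages (4.5) and (4.6) over a permutation group are straightforward to implement. In the sketch below G is taken to be the full symmetric group acting on all indices of an array; the function name and the sample data are ours, not the book's.

```python
import numpy as np
from math import factorial
from itertools import permutations

def average(A, alternate=False):
    """Symmetrization (4.5) of the array A; with alternate=True,
    the alternation (4.6), each term weighted by the permutation parity."""
    n = A.ndim
    result = np.zeros_like(A)
    for perm in permutations(range(n)):
        # parity of perm, computed by counting inversions
        inversions = sum(1 for a in range(n)
                           for b in range(a + 1, n) if perm[a] > perm[b])
        weight = (-1) ** inversions if alternate else 1
        result += weight * np.transpose(A, perm)
    return result / factorial(n)

A = np.random.default_rng(3).standard_normal((3, 3, 3))
S, W = average(A), average(A, alternate=True)
# the results are totally symmetric / totally skew-symmetric:
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
assert np.allclose(W, -np.transpose(W, (1, 0, 2)))
```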
field A in the definition 5.1. The components of a field of the class C^m are
functions of the class C^m in any Cartesian coordinate system. This fact proves
that the definition 5.1 is consistent.
Let’s consider a differentiable tensor field of the type (r, s) and let’s consider all
of the partial derivatives of its components:
$$ B^{i_1 \ldots i_r}_{j_1 \ldots j_s\, j_{s+1}} = \frac{\partial A^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^{j_{s+1}}}. \eqno(5.1) $$
The number of such partial derivatives (5.1) is the same in all Cartesian coordinate
systems. This number coincides with the number of components of a tensor field
of the type (r, s + 1). This coincidence is not accidental.
Theorem 5.1. The partial derivatives of the components of a differentiable
tensor field A of the type (r, s) calculated in an arbitrary Cartesian coordinate
system according to the formula (5.1) are the components of another tensor field
B of the type (r, s + 1).
Proof. The proof consists in checking the transformation rule (1.7) for the
quantities B^{i_1...i_r}_{j_1...j_s j_{s+1}} in (5.1). Let O, e_1, e_2, e_3 and O′, ẽ_1, ẽ_2, ẽ_3 be two Cartesian
coordinate systems. By tradition we denote by S and T the direct and inverse
transition matrices. Let's write the first relationship (1.7) for the field A and
differentiate both sides of it with respect to the variable x^{j_{s+1}}:
$$ \frac{\partial A^{i_1 \ldots i_r}_{j_1 \ldots j_s}(x)}{\partial x^{j_{s+1}}} = \sum_{\substack{p_1 \ldots p_r \\ q_1 \ldots q_s}} S^{i_1}_{p_1} \ldots S^{i_r}_{p_r}\; T^{q_1}_{j_1} \ldots T^{q_s}_{j_s}\; \frac{\partial \tilde A^{p_1 \ldots p_r}_{q_1 \ldots q_s}(\tilde x)}{\partial x^{j_{s+1}}}. $$
In order to calculate the derivative in the right hand side we apply the chain rule
that determines the derivatives of a composite function:
$$ \frac{\partial \tilde A^{p_1 \ldots p_r}_{q_1 \ldots q_s}(\tilde x)}{\partial x^{j_{s+1}}} = \sum_{q_{s+1}=1}^{3} \frac{\partial \tilde x^{q_{s+1}}}{\partial x^{j_{s+1}}}\; \frac{\partial \tilde A^{p_1 \ldots p_r}_{q_1 \ldots q_s}(\tilde x)}{\partial \tilde x^{q_{s+1}}}. \eqno(5.2) $$
The variables x = (x^1, x^2, x^3) and x̃ = (x̃^1, x̃^2, x̃^3) are related as follows:
$$ x^i = \sum_{j=1}^{3} S^i_j\, \tilde x^j + a^i, \qquad \tilde x^i = \sum_{j=1}^{3} T^i_j\, x^j + \tilde a^i. $$
One of these two relationships is included into (1.7), the second being the inversion
of the first one. The components of the transition matrices S and T in these
formulas are constants, therefore, we have
$$ \frac{\partial \tilde x^{q_{s+1}}}{\partial x^{j_{s+1}}} = T^{q_{s+1}}_{j_{s+1}}. \eqno(5.3) $$
Let's substitute (5.3) into (5.2), then substitute the result into the above expression
for the derivatives ∂A^{i_1...i_r}_{j_1...j_s}/∂x^{j_{s+1}}. This yields the equality
$$ B^{i_1 \ldots i_r}_{j_1 \ldots j_s\, j_{s+1}} = \sum_{\substack{p_1 \ldots p_r \\ q_1 \ldots q_{s+1}}} S^{i_1}_{p_1} \ldots S^{i_r}_{p_r}\; T^{q_1}_{j_1} \ldots T^{q_{s+1}}_{j_{s+1}}\; \tilde B^{p_1 \ldots p_r}_{q_1 \ldots q_{s+1}}, $$
which coincides exactly with the transformation rule (1.7) applied to the quantities
(5.1). The theorem is proved.
The passage from A to B in (5.1) adds one covariant index js+1 . This is the
reason why the tensor field B is called the covariant differential of the field A.
The covariant differential is denoted as B = ∇A. The upside-down triangle ∇ is
a special symbol, it is called nabla. In writing the components of B the additional
covariant index is written beside the nabla sign:
$$ B^{i_1 \ldots i_r}_{j_1 \ldots j_s\, k} = \nabla_k A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. \eqno(5.4) $$
Due to (5.1) the sign ∇_k in the formula (5.4) replaces the differentiation operator:
∇_k = ∂/∂x^k. However, a special name is reserved for ∇_k: it is called the operator
of covariant differentiation or the covariant derivative. Below (in Chapter III) we
shall see that the concept of covariant derivative can be extended so that it will
not coincide with the partial derivative any more.
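In Cartesian coordinates the covariant differential is just the array of partial derivatives (5.1). The following sketch approximates ∇A for an illustrative vector field by central finite differences; the field and the point are our own choices.

```python
import numpy as np

def A(x):
    # an illustrative vector field with components A^i(x)
    return np.array([x[0] * x[1], x[1] ** 2, x[0] * x[2]])

x0 = np.array([1.0, 2.0, 3.0])
h = 1e-5

B = np.zeros((3, 3))                      # B[i, j] = ∇_j A^i = ∂A^i/∂x^j
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    B[:, j] = (A(x0 + e) - A(x0 - e)) / (2 * h)

expected = np.array([[2.0, 1.0, 0.0],     # hand-computed partial derivatives at x0
                     [0.0, 4.0, 0.0],
                     [3.0, 0.0, 1.0]])
assert np.allclose(B, expected, atol=1e-6)
```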
Let A be a differentiable tensor field of the type (r, s) and let X be some
arbitrary vector field. Let’s consider the tensor product ∇A ⊗ X. This is the
tensor field of the type (r + 1, s + 1). The covariant differentiation adds one
covariant index, while the tensor multiplication adds one contravariant index. We
denote by ∇X A = C(∇A ⊗ X) the contraction of the field ∇A ⊗ X with respect
to these two additional indices. The field B = ∇XA has the same type (r, s) as
the original field A. Upon choosing some Cartesian coordinate system we can
write the relationship B = ∇XA in coordinate form:
$$ B^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \sum_{q=1}^{3} X^q\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. \eqno(5.5) $$
The tensor field B = ∇X A with components (5.5) is called the covariant derivative
of the field A along the vector field X.
Theorem 5.2. The operation of covariant differentiation of tensor fields possesses
the following properties:
(1) ∇X (A + B) = ∇X A + ∇X B;
(2) ∇X+Y A = ∇X A + ∇Y A;
(3) ∇ξ·X A = ξ · ∇X A;
(4) ∇X (A ⊗ B) = ∇X A ⊗ B + A ⊗ ∇XB;
(5) ∇X C(A) = C(∇X A);
where A and B are arbitrary differentiable tensor fields, while X and Y are
arbitrary vector fields and ξ is an arbitrary scalar field.
Proof. It is convenient to carry out the proof of the theorem in some Cartesian
coordinate system. Let C = A + B. The property (1) follows from the relationship
$$ \sum_{q=1}^{3} X^q\, \frac{\partial C^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^q} = \sum_{q=1}^{3} X^q\, \frac{\partial A^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^q} + \sum_{q=1}^{3} X^q\, \frac{\partial B^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^q}. $$
Denote Z = X + Y and then we derive the property (2) from the relationship
$$ \sum_{q=1}^{3} Z^q\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \sum_{q=1}^{3} X^q\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s} + \sum_{q=1}^{3} Y^q\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. $$
In a similar way, for Z = ξ · X we have
$$ \sum_{q=1}^{3} \bigl(\xi\, X^q\bigr)\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \xi \sum_{q=1}^{3} X^q\, \nabla_q A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. $$
This relationship is equivalent to the property (3) in the statement of the theorem.
In order to prove the fourth property in the theorem one should carry out the
following calculations with the components of A, B and X:
$$ \sum_{q=1}^{3} X^q\, \frac{\partial}{\partial x^q}\Bigl(A^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, B^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}}\Bigr) = \Bigl(\sum_{q=1}^{3} X^q\, \frac{\partial A^{i_1 \ldots i_r}_{j_1 \ldots j_s}}{\partial x^q}\Bigr)\, B^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}} + A^{i_1 \ldots i_r}_{j_1 \ldots j_s}\, \Bigl(\sum_{q=1}^{3} X^q\, \frac{\partial B^{i_{r+1} \ldots i_{r+m}}_{j_{s+1} \ldots j_{s+n}}}{\partial x^q}\Bigr). $$
And finally, the following series of calculations
$$ \sum_{q=1}^{3} X^q\, \frac{\partial}{\partial x^q}\Bigl(\sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}}\Bigr) = \sum_{k=1}^{3} \sum_{q=1}^{3} X^q\, \frac{\partial A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, k\, j_n \ldots j_{s-1}}}{\partial x^q} $$
proves the fifth property. This completes the proof of the theorem as a whole.
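For scalar fields (type (0, 0)) the properties of theorem 5.2 reduce to familiar rules of calculus. Below is a quick symbolic check of the linearity property (1) and of the Leibniz property (4) with sympy; the vector field X and the scalar fields are illustrative choices.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = (x1, x2, x3)
X = [x2, x1, sp.Integer(1)]               # an illustrative vector field

def nabla_X(f):
    # covariant derivative (5.5) of a scalar field along X
    return sum(X[q] * sp.diff(f, coords[q]) for q in range(3))

A = x1 * x2
B = sp.sin(x3)

# property (1): linearity, and property (4): the Leibniz rule
assert sp.simplify(nabla_X(A + B) - (nabla_X(A) + nabla_X(B))) == 0
assert sp.simplify(nabla_X(A * B) - (nabla_X(A) * B + A * nabla_X(B))) == 0
```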
§ 6. THE METRIC TENSOR AND THE VOLUME PSEUDOTENSOR.

Theorem 6.1. The components of the inverse Gram matrix ĝ are transformed
as the components of a tensor field of the type (2, 0) under a change of coordinates.
Proof. Let’s write the transformation rule (1.7) for the components of the
metric tensor g:
$$ g_{ij} = \sum_{p=1}^{3} \sum_{q=1}^{3} T^p_i\, T^q_j\; \tilde g_{pq}. $$
In matrix form this relationship is written as
$$ g = T^{\operatorname{tr}}\, \tilde g\; T. \eqno(6.4) $$
Since g, g̃, and T are non-degenerate, we can pass to the inverse matrices:
$$ g^{ij} = \sum_{p=1}^{3} \sum_{q=1}^{3} S^i_p\, S^j_q\; \tilde g^{pq}. \eqno(6.6) $$
The relationship (6.6) is exactly the transformation rule (1.7) written for the
components of a tensor field of the type (2, 0). Thus, the theorem is proved.
The tensor field ĝ = g−1 with the components g ij is called the inverse metric
tensor or the dual metric tensor. The existence of the inverse metric tensor also
follows from the nature of the space E which has the pre-built scalar product.
Both tensor fields g and ĝ are symmetric. The symmetry of g_{ij} with respect to
the indices i and j follows from (6.1) and from the properties of a scalar product.
The matrix inverse to a symmetric matrix is symmetric too. Therefore,
the components of the inverse metric tensor g^{ij} are also symmetric with respect to
the indices i and j.
The components of the tensors g and ĝ in any Cartesian coordinate system are
constants. Therefore, we have
∇g = 0, ∇ĝ = 0. (6.7)
These relationships follow from the formula (5.1) for the components of the
covariant differential in Cartesian coordinates.
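Theorem 6.1 is easy to illustrate numerically: pick a random non-degenerate transition matrix, transform the Gram matrix by (6.4), and check that the inverse matrices obey the rule (6.6). The matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((3, 3))          # direct transition matrix (non-degenerate here)
T = np.linalg.inv(S)                     # inverse transition matrix

g_tilde = np.eye(3)                      # Gram matrix in the second coordinate system
g = T.T @ g_tilde @ T                    # the rule (6.4): g = T^tr g̃ T

# the inverse matrices satisfy g^{ij} = sum of S^i_p S^j_q g̃^{pq}, the rule (6.6):
lhs = np.linalg.inv(g)
rhs = np.einsum('ip,jq,pq->ij', S, S, np.linalg.inv(g_tilde))
assert np.allclose(lhs, rhs)
```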
In the course of analytical geometry (see, for instance, [4]) the indexed object
εijk is usually considered, which is called the Levi-Civita symbol. Its nonzero
components are determined by the parity of the transposition of indices:
$$ \varepsilon_{ijk} = \varepsilon^{ijk} = \begin{cases} \;\;\,0 & \text{if } i = j,\ i = k,\ \text{or } j = k,\\ \;\;\,1 & \text{if } (ijk) \text{ is even, i.e. } \operatorname{sign}(ijk) = 1,\\ -1 & \text{if } (ijk) \text{ is odd, i.e. } \operatorname{sign}(ijk) = -1. \end{cases} \eqno(6.8) $$
Recall that the Levi-Civita symbol (6.8) is used for calculating the vectorial
product and the mixed product through the coordinates of vectors in a rectangular
Cartesian coordinate system with a right orthonormal basis e_1, e_2, e_3:
$$ [X, Y] = \sum_{i=1}^{3} e_i \Bigl(\sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, X^j\, Y^k\Bigr), \qquad (X, Y, Z) = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, X^i\, Y^j\, Z^k. \eqno(6.9) $$
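The formulas (6.8) and (6.9) translate directly into numpy. Below the Levi-Civita symbol is built as a 3 × 3 × 3 array and used to compute a vector product and a mixed product; the sample vectors are illustrative.

```python
import numpy as np

# the Levi-Civita symbol (6.8), indices running over 0, 1, 2
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:   # even permutations
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0                              # odd permutations

X = np.array([1.0, 2.0, 3.0])
Y = np.array([0.0, 1.0, 4.0])
Z = np.array([2.0, 0.0, 1.0])

cross = np.einsum('ijk,j,k->i', eps, X, Y)       # the first formula (6.9)
mixed = np.einsum('ijk,i,j,k->', eps, X, Y, Z)   # the second formula (6.9)

assert np.allclose(cross, np.cross(X, Y))
assert np.isclose(mixed, np.linalg.det(np.array([X, Y, Z])))
```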
The usage of upper or lower indices in writing the components of the Levi-
Civita symbol in (6.8) and (6.9) makes no difference since they do not define
a tensor. However, there is a tensorial object associated with the Levi-Civita
symbol. In order to construct such an object we apply the relationship which is
usually proved in analytical geometry:
$$ \sum_{p=1}^{3} \sum_{q=1}^{3} \sum_{l=1}^{3} \varepsilon_{pql}\, M^p_i\, M^q_j\, M^l_k = \det M \cdot \varepsilon_{ijk} \eqno(6.10) $$
(see proof in [4]). Here M is some square 3 × 3 matrix. The matrix M can be
the matrix of the components for some tensorial field of the type (2, 0), (1, 1), or
(0, 2). However, it can be a matrix without any tensorial interpretation as well.
The relationship (6.10) is valid for any square 3 × 3 matrix.
Using the Levi-Civita symbol and the matrix of the metric tensor g in some
Cartesian coordinate system, we construct the following quantities:
$$ \omega_{ijk} = \sqrt{\det g}\; \varepsilon_{ijk}. \eqno(6.11) $$
Then we study how the quantities ω_{ijk} and ω̃_{pql} constructed in two different
Cartesian coordinate systems O, e_1, e_2, e_3 and O′, ẽ_1, ẽ_2, ẽ_3 are related to each
other. From the identity (6.10) we derive
$$ \sum_{p=1}^{3} \sum_{q=1}^{3} \sum_{l=1}^{3} T^p_i\, T^q_j\, T^l_k\; \tilde\omega_{pql} = \sqrt{\det \tilde g}\;\, \det T\;\, \varepsilon_{ijk}. \eqno(6.12) $$
In order to transform the sum (6.12) further, we use the relationship (6.4); as
an immediate consequence of it we obtain the formula det g = (det T)^2 det g̃.
Applying this formula to (6.12), we derive
$$ \sum_{p=1}^{3} \sum_{q=1}^{3} \sum_{l=1}^{3} T^p_i\, T^q_j\, T^l_k\; \tilde\omega_{pql} = \operatorname{sign}(\det T)\, \sqrt{\det g}\; \varepsilon_{ijk}. \eqno(6.13) $$
Note that the right hand side of the relationship (6.13) differs from ωijk in (6.11)
only by the sign of the determinant: sign(det T ) = sign(det S) = ±1. Therefore,
we can write the relationship (6.13) as
$$ \omega_{ijk} = \operatorname{sign}(\det S) \sum_{p=1}^{3} \sum_{q=1}^{3} \sum_{l=1}^{3} T^p_i\, T^q_j\, T^l_k\; \tilde\omega_{pql}. \eqno(6.14) $$
Though the difference is only in sign, the relationship (6.14) differs from the
transformation rule (1.6) for the components of a tensor of the type (0, 3). The
formula (6.14) gives the cause for modifying the transformation rule (1.6):
$$ F^{i_1 \ldots i_r}_{j_1 \ldots j_s} = (-1)^S \sum_{\substack{p_1 \ldots p_r \\ q_1 \ldots q_s}} S^{i_1}_{p_1} \ldots S^{i_r}_{p_r}\; T^{q_1}_{j_1} \ldots T^{q_s}_{j_s}\; \tilde F^{p_1 \ldots p_r}_{q_1 \ldots q_s}. \eqno(6.15) $$
Here (−1)S = sign(det S) = ±1. The corresponding modification for the concept
of a tensor is given by the following definition.
Definition 6.1. A pseudotensor F of the type (r, s) is a geometric object
whose components in an arbitrary basis are enumerated by (r + s) indices and
obey the transformation rule (6.15) under a change of basis.
Once some pseudotensor of the type (r, s) is given at each point of the space E,
we have a pseudotensorial field of the type (r, s). Due to the above definition 6.1
and due to (6.14) the quantities ωijk from (6.11) define a pseudotensorial field ω
of the type (0, 3). This field is called the volume pseudotensor. Like the metric
tensors g and ĝ, the volume pseudotensor is a special field pre-built into the space E.
Its existence is due to the existence of the pre-built scalar product in E.
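The pseudotensorial transformation rule (6.14) for ω can be verified numerically. The sketch below builds ω by (6.11) in two coordinate systems related by a random transition matrix (all data illustrative).

```python
import numpy as np

eps = np.zeros((3, 3, 3))                # the Levi-Civita symbol (6.8)
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(5)
S = rng.standard_normal((3, 3))          # direct transition matrix
T = np.linalg.inv(S)

g_tilde = np.eye(3)                      # Gram matrix in the second system
g = T.T @ g_tilde @ T                    # the rule (6.4)

omega = np.sqrt(np.linalg.det(g)) * eps              # (6.11), first system
omega_tilde = np.sqrt(np.linalg.det(g_tilde)) * eps  # (6.11), second system

# the pseudotensorial rule (6.14):
rhs = np.sign(np.linalg.det(S)) * np.einsum('pi,qj,lk,pql->ijk',
                                            T, T, T, omega_tilde)
assert np.allclose(omega, rhs)
```

Without the factor sign(det S) the check fails whenever det S < 0, which is exactly the difference between the rules (1.6) and (6.15).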
The formula (2.5) defines the operation of contraction for a field F of the type
(r, s), where r ≥ 1 and s ≥ 1. The operation of contraction (2.5) is applicable both
to tensorial and to pseudotensorial fields. It has the following properties:
(1) the contraction of a tensorial field is a tensorial field;
(2) the contraction of a pseudotensorial field is a pseudotensorial field.
The operation of contraction extended to the case of pseudotensorial fields preserves
its linearity given by the equalities (3.4).
The covariant differentiation of pseudotensorial fields in a Cartesian coordinate
system is determined by the formula (5.1). The covariant differential ∇A of a
tensorial field is a tensorial field; the covariant differential of a pseudotensorial
field is a pseudotensorial field. It is convenient to express the properties of the
covariant differential through the properties of the covariant derivative ∇X in the
direction of a field X. Now X is either a vectorial or a pseudovectorial field. All
propositions of the theorem 5.2 for ∇X remain valid.
Let X and Y be two vectorial fields. Then we can define the vectorial field Z
with the following components:
$$ Z^q = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} g^{qi}\, \omega_{ijk}\, X^j\, Y^k. \eqno(8.2) $$
From (8.2), it is easy to see that Z is derived as the contraction of the field
ĝ ⊗ ω ⊗ X ⊗ Y. In a rectangular Cartesian coordinate system with right-oriented
orthonormal basis the formula (8.2) takes the form of the well-known formula for
the components of the vector product Z = [X, Y] (see [4] and the formula (6.9)
above). In a space without preferable orientation, where ωijk is given by the
formula (6.11), the vector product of two vectors is a pseudovector.
Now let’s consider three vectorial fields X, Y, and Z and let’s construct the
scalar field u by means of the following formula:
$$ u = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \omega_{ijk}\, X^i\, Y^j\, Z^k. \eqno(8.3) $$
§ 9. RAISING AND LOWERING INDICES.

Let A be a tensorial or a pseudotensorial field of the type (r, s) and let r ≥ 1.
Using the metric tensor g, one can construct the field B = C(A ⊗ g) of the type
(r − 1, s + 1) according to the formula:
$$ B^{i_1 \ldots i_{r-1}}_{j_1 \ldots j_{s+1}} = \sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_m \ldots i_{r-1}}_{j_1 \ldots j_{n-1}\, j_{n+1} \ldots j_{s+1}}\; g_{k\, j_n}. \eqno(9.1) $$
The passage from the field A to the field B according to the formula (9.1) is called
the index lowering procedure of the m-th upper index to the n-th lower position.
Using the inverse metric tensor, one can invert the operation (9.1). Let B be a
tensorial or a pseudotensorial field of the type (r, s) and let s > 1. Then we define
the field A = C(B ⊗ ĝ) of the type (r + 1, s − 1) according to the formula:
$$ A^{i_1 \ldots i_{r+1}}_{j_1 \ldots j_{s-1}} = \sum_{q=1}^{3} B^{i_1 \ldots i_{m-1}\, i_{m+1} \ldots i_{r+1}}_{j_1 \ldots j_{n-1}\, q\, j_n \ldots j_{s-1}}\; g^{q\, i_m}. \eqno(9.2) $$
The passage from the field B to the field A according to the formula (9.2) is called
the index raising procedure of the n-th lower index to the m-th upper position.
The operations of lowering and raising indices are inverse to each other. Indeed,
we can perform the following calculations:
$$ C^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \sum_{q=1}^{3} \sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_{m+1} \ldots i_r}_{j_1 \ldots j_s}\; g_{kq}\; g^{q\, i_m} = \sum_{k=1}^{3} A^{i_1 \ldots i_{m-1}\, k\, i_{m+1} \ldots i_r}_{j_1 \ldots j_s}\; \delta^{i_m}_{k} = A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. $$
The above calculations show that applying the index lowering and the index
raising procedures successively A → B → C, we get the field C = A. Applying
the same procedures in the reverse order yields the same result. This follows from
the calculations just below:
$$ C^{i_1 \ldots i_r}_{j_1 \ldots j_s} = \sum_{k=1}^{3} \sum_{q=1}^{3} A^{i_1 \ldots i_r}_{j_1 \ldots j_{n-1}\, q\, j_{n+1} \ldots j_s}\; g^{qk}\; g_{k\, j_n} = \sum_{q=1}^{3} A^{i_1 \ldots i_r}_{j_1 \ldots j_{n-1}\, q\, j_{n+1} \ldots j_s}\; \delta^{q}_{j_n} = A^{i_1 \ldots i_r}_{j_1 \ldots j_s}. $$
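The round-trip calculations above have a short numerical counterpart. Here the lowering (9.1) and raising (9.2) of the single upper index of a (1, 1) field are composed and shown to return the original components; the Gram matrix is a random positive definite one, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3))
g = M @ M.T + 3 * np.eye(3)            # a symmetric positive definite Gram matrix
g_inv = np.linalg.inv(g)

A = rng.standard_normal((3, 3))        # components A^i_j of a (1,1) field at a point

B = np.einsum('kj,ki->ij', A, g)       # lowering (9.1):  B_{ij} = sum_k A^k_j g_{ki}
C = np.einsum('qj,qi->ij', B, g_inv)   # raising  (9.2):  C^i_j = sum_q B_{qj} g^{qi}

assert np.allclose(C, A)               # the two procedures are mutually inverse
```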
The existence of the index raising and index lowering procedures follows from
the very nature of the space E which is equipped with the scalar product and,
hence, with the metric tensors g and ĝ. Therefore, any tensorial (or pseudotenso-
rial) field of the type (r, s) in such a space can be understood as originated from
some purely covariant field of the type (0, r + s) as a result of raising a part of its
indices. Therein a slightly different way of setting indices is used. Let's consider
a field A of the type (0, 4) as an example. We denote by A_{i_1 i_2 i_3 i_4} its components
in some Cartesian coordinate system. By raising one of the four indices in A one
can get the four fields of the type (1, 3). Their components are denoted as
$$ A^{i_1}{}_{i_2 i_3 i_4}, \qquad A_{i_1}{}^{i_2}{}_{i_3 i_4}, \qquad A_{i_1 i_2}{}^{i_3}{}_{i_4}, \qquad A_{i_1 i_2 i_3}{}^{i_4}. \eqno(9.3) $$
Raising one of the indices in (9.3), we get an empty place underneath it in the
list of lower indices, while the numbering of the indices at that place remains
unbroken. In this way of writing indices, each index has «its fixed position» no
matter what index it is — a lower or an upper index. Therefore, in the writing
below we easily guess the way in which the components of tensors are derived:
$$ A_{i_1}{}^{i_2}{}_{i_3 i_4} = \sum_{k=1}^{3} g^{i_2 k}\, A_{i_1 k\, i_3 i_4}. \eqno(9.4) $$
Despite some advantages of the above form of index setting in (9.3) and (9.4),
it is not commonly admitted. The matter is that it has a number of disadvantages
as well. For example, the writing of the general formulas (1.6), (2.2), (2.5), and some
others becomes huge and inconvenient for perception. In what follows we shall
keep to the previous way of index setting.
§ 10. GRADIENT, DIVERGENCY AND ROTOR.

Definition 10.1. The vector field F in the space E whose components are
calculated by the formula (10.1) is called the gradient of a function f. The
gradient is denoted as F = grad f.
Let X be a vectorial field in E. Let’s consider the scalar product of the vectorial
fields X and grad f. Due to the formula (10.1) such scalar product of two vectors
is reduced to the scalar product of the vector X and the covector ∇f:
$$ (X \,|\, \operatorname{grad} f) = \sum_{k=1}^{3} X^k\, \frac{\partial f}{\partial x^k} = \langle \nabla f \,|\, X \rangle. \eqno(10.2) $$
The quantity (10.2) is a scalar quantity. It does not depend on the choice of
the coordinate system where the components of X and ∇f are given. Another form
of writing the formula (10.2) uses the covariant differentiation along the vector
field X introduced by the formula (5.5) above:
(X | grad f) = ∇X f. (10.3)
By analogy with the formula (10.3), the covariant differential ∇F of an arbitrary
tensorial field F is sometimes called the covariant gradient of the field F.
Let F be a vector field. Then its covariant differential ∇F is an operator field,
i. e. a field of the type (1, 1). Let’s denote by ϕ the contraction of the field ∇F:
$$ \varphi = C(\nabla F) = \sum_{k=1}^{3} \frac{\partial F^k}{\partial x^k}. \eqno(10.4) $$
Definition 10.2. The scalar field ϕ in the space E determined by the formula
(10.4) is called the divergency of a vector field F. It is denoted ϕ = div F.
Apart from the scalar field div F, one can use ∇F in order to build a vector
field. Indeed, let’s consider the quantities
$$ \rho^m = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \sum_{q=1}^{3} g^{mi}\, \omega_{ijk}\, g^{jq}\, \nabla_q F^k, \eqno(10.5) $$
where ωijk are the components of the volume tensor given by the formula (8.1).
Definition 10.3. The vector field ρ in the space E determined by the formula
(10.5) is called the rotor 1 of a vector field F. It is denoted ρ = rot F.
Due to (10.5) the rotor of a vector field F is the contraction of the tensor field
ĝ ⊗ ω ⊗ ĝ ⊗ ∇F with respect to four pairs of indices: rot F = C(ĝ ⊗ ω ⊗ ĝ ⊗ ∇F).
Remark. If ωijk in (10.5) are understood as components of the volume pseu-
dotensor (6.11), then the rotor of a vector field should be understood as a
pseudovectorial field.
Suppose that O, e1, e2, e3 is a rectangular Cartesian coordinate system in E
with orthonormal right-oriented basis e1 , e2 , e3. The Gram matrix of the basis
e_1, e_2, e_3 is the unit matrix. Therefore, we have
$$ g_{ij} = g^{ij} = \delta^i_j = \begin{cases} 1 & \text{for } i = j,\\ 0 & \text{for } i \neq j. \end{cases} $$
In such a coordinate system the formulas for the gradient, the rotor, and the
divergency take the following form:
$$ (\operatorname{grad} f)^i = \frac{\partial f}{\partial x^i}, \eqno(10.6) $$
$$ (\operatorname{rot} F)^i = \sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, \frac{\partial F^k}{\partial x^j}, \eqno(10.7) $$
$$ \operatorname{div} F = \sum_{k=1}^{3} \frac{\partial F^k}{\partial x^k}. \eqno(10.8) $$
The formula (10.7) for the rotor has an elegant representation in the form of the
determinant of a 3 × 3 matrix:
$$ \operatorname{rot} F = \begin{vmatrix} e_1 & e_2 & e_3 \\[4pt] \dfrac{\partial}{\partial x^1} & \dfrac{\partial}{\partial x^2} & \dfrac{\partial}{\partial x^3} \\[4pt] F^1 & F^2 & F^3 \end{vmatrix}. \eqno(10.9) $$
The formula (8.2) for the vector product in right-oriented rectangular Cartesian
coordinate system takes the form of (6.9). It can also be represented in the form
of the formal determinant of a 3 × 3 matrix:
$$ [X, Y] = \begin{vmatrix} e_1 & e_2 & e_3 \\ X^1 & X^2 & X^3 \\ Y^1 & Y^2 & Y^3 \end{vmatrix}. \eqno(10.10) $$
Due to the similarity of (10.9) and (10.10) one can formally represent the operator
of covariant differentiation ∇ as a vector with components ∂/∂x^1, ∂/∂x^2, ∂/∂x^3.
Then the divergency and the rotor are represented as the scalar and vectorial products:
$$ \operatorname{div} F = (\nabla \,|\, F), \qquad \operatorname{rot} F = [\nabla,\, F]. $$
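In a right-oriented rectangular Cartesian coordinate system the formulas (10.6)-(10.8) are easy to code symbolically. The following sketch implements them with sympy on illustrative fields.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = (x1, x2, x3)

def grad(f):                                          # formula (10.6)
    return [sp.diff(f, v) for v in coords]

def div(F):                                           # formula (10.8)
    return sum(sp.diff(F[k], coords[k]) for k in range(3))

def rot(F):                                           # formula (10.7), expanded as in (10.9)
    return [sp.diff(F[2], x2) - sp.diff(F[1], x3),
            sp.diff(F[0], x3) - sp.diff(F[2], x1),
            sp.diff(F[1], x1) - sp.diff(F[0], x2)]

F = grad(x1 * x2 * x3)                 # a gradient field
assert rot(F) == [0, 0, 0]             # its rotor vanishes
assert div(grad(x1**2 + x2**2)) == 4   # div grad of a sample scalar field
```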
Theorem 10.1. For any scalar field ϕ of the smoothness class C 2 the equality
rot grad ϕ = 0 is identically fulfilled.
Proof. Let's choose some right-oriented rectangular Cartesian coordinate system
and then use the formulas (10.6) and (10.7). Let F = rot grad ϕ. Then
$$ F^i = \sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, \frac{\partial^2 \varphi}{\partial x^j\, \partial x^k}. \eqno(10.11) $$
Let’s rename the summation indices in (10.11). The index j is replaced by the
index k and vice versa. Such a swap of indices does not change the value of the
sum in (10.11). Therefore, we have
$$ F^i = \sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ikj}\, \frac{\partial^2 \varphi}{\partial x^k\, \partial x^j} = -\sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, \frac{\partial^2 \varphi}{\partial x^j\, \partial x^k} = -F^i. $$
Here we used the skew-symmetry of the Levi-Civita symbol with respect to the
pair of indices j and k and the symmetry of the second order partial derivatives of
the function ϕ with respect to the same pair of indices:
$$ \frac{\partial^2 \varphi}{\partial x^j\, \partial x^k} = \frac{\partial^2 \varphi}{\partial x^k\, \partial x^j}. \eqno(10.12) $$
For functions of the class C^2 the second order partial derivatives (10.12) do not
depend on the order of differentiation. The equality F^i = −F^i now immediately
yields F^i = 0. The theorem is proved.
Theorem 10.2. For any vector field F of the smoothness class C 2 the equality
div rot F = 0 is identically fulfilled.
Proof. Here, as in the case of the theorem 10.1, we choose a right-oriented
rectangular Cartesian coordinate system, then we use the formulas (10.7) and
(10.8). For the scalar field ϕ = div rot F from these formulas we derive
$$ \varphi = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk}\, \frac{\partial^2 F^k}{\partial x^j\, \partial x^i}. \eqno(10.13) $$
Then, due to the relationship
$$ \frac{\partial^2 F^k}{\partial x^j\, \partial x^i} = \frac{\partial^2 F^k}{\partial x^i\, \partial x^j} $$
and using the skew-symmetry of εijk with respect to indices i and j, from the
formula (10.13) we easily derive ϕ = −ϕ. Hence, ϕ = 0.
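Both identities, theorem 10.1 and theorem 10.2, can be confirmed symbolically for arbitrary smooth fields, since sympy treats mixed partial derivatives as equal, exactly as in (10.12). The field names below are our own.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = (x1, x2, x3)

phi = sp.Function('phi')(*coords)                 # an arbitrary C^2 scalar field
grad_phi = [sp.diff(phi, v) for v in coords]
rot_grad = [sp.diff(grad_phi[2], x2) - sp.diff(grad_phi[1], x3),
            sp.diff(grad_phi[0], x3) - sp.diff(grad_phi[2], x1),
            sp.diff(grad_phi[1], x1) - sp.diff(grad_phi[0], x2)]
assert all(sp.simplify(c) == 0 for c in rot_grad)         # theorem 10.1

F = [sp.Function(name)(*coords) for name in ('F1', 'F2', 'F3')]
rot_F = [sp.diff(F[2], x2) - sp.diff(F[1], x3),
         sp.diff(F[0], x3) - sp.diff(F[2], x1),
         sp.diff(F[1], x1) - sp.diff(F[0], x2)]
div_rot = sum(sp.diff(rot_F[k], coords[k]) for k in range(3))
assert sp.simplify(div_rot) == 0                          # theorem 10.2
```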
Let ϕ be a scalar field of the smoothness class C^2. The quantity div grad ϕ in
the general case is nonzero. It is denoted △ϕ = div grad ϕ. The sign △ denotes the
differential operator of the second order that transforms a scalar field ϕ to another
scalar field div grad ϕ. It is called the Laplace operator or the laplacian. In a
rectangular Cartesian coordinate system it is given by the formula
$$ \triangle = \Bigl(\frac{\partial}{\partial x^1}\Bigr)^{\!2} + \Bigl(\frac{\partial}{\partial x^2}\Bigr)^{\!2} + \Bigl(\frac{\partial}{\partial x^3}\Bigr)^{\!2}. \eqno(10.14) $$
Using the formulas (10.6) and (10.8) one can calculate the Laplace operator in a
skew-angular Cartesian coordinate system:
$$ \triangle = \sum_{i=1}^{3} \sum_{j=1}^{3} g^{ij}\, \frac{\partial^2}{\partial x^i\, \partial x^j}. \eqno(10.15) $$
Using the signs of covariant derivatives ∇_i = ∂/∂x^i we can write the Laplace
operator (10.15) as follows:
$$ \triangle = \sum_{i=1}^{3} \sum_{j=1}^{3} g^{ij}\, \nabla_i \nabla_j. \eqno(10.16) $$
The equality (10.16) differs from (10.15) not only in special notations for the
derivatives. The Laplace operator defined as △ϕ = div grad ϕ can be applied only
to a scalar field ϕ. The formula (10.16) extends it, and now we can apply the
Laplace operator to any twice continuously differentiable tensor field F of any
type (r, s). Due to this formula △F is the result of contracting the tensor product
ĝ ⊗ ∇∇F with respect to two pairs of indices: △F = C(ĝ ⊗ ∇∇F). The resulting
field △F has the same type (r, s) as the original field F. The Laplace operator in
the form of (10.16) is sometimes called the Laplace-Beltrami operator.
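The skew-angular formula (10.15) can be tested against the rectangular one: if the skew basis vectors have rectangular components given by the columns of a matrix S, the Gram matrix is g = S^tr S, and both formulas must give the same laplacian at corresponding points. A sympy sketch follows; the field and the matrix are illustrative.

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')      # rectangular Cartesian coordinates
x1, x2, x3 = sp.symbols('x1 x2 x3')      # skew-angular coordinates
phi = sp.sin(y1) * y2 + y3**2            # a sample scalar field

S = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [2, 0, 1]])               # rectangular components of the skew basis vectors
g = S.T * S                              # Gram matrix of the skew basis
g_inv = g.inv()

x = sp.Matrix([x1, x2, x3])
subs = dict(zip((y1, y2, y3), list(S * x)))   # y = S x relates the coordinates
u = phi.subs(subs)                            # the same field in skew coordinates

lap_skew = sum(g_inv[i, j] * sp.diff(u, x[i], x[j])
               for i in range(3) for j in range(3))          # formula (10.15)
lap_rect = sum(sp.diff(phi, v, 2) for v in (y1, y2, y3))     # formula (10.14)
assert sp.simplify(lap_skew - lap_rect.subs(subs)) == 0
```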
§ 11. POTENTIAL AND VORTICULAR VECTOR FIELDS.

In a rectangular Cartesian coordinate system the potentiality condition rot F = 0
for the vector field F is equivalent to the following three relationships for its components:
$$ \frac{\partial F^1(x^1, x^2, x^3)}{\partial x^2} = \frac{\partial F^2(x^1, x^2, x^3)}{\partial x^1}, \eqno(11.1) $$
$$ \frac{\partial F^2(x^1, x^2, x^3)}{\partial x^3} = \frac{\partial F^3(x^1, x^2, x^3)}{\partial x^2}, \eqno(11.2) $$
$$ \frac{\partial F^3(x^1, x^2, x^3)}{\partial x^1} = \frac{\partial F^1(x^1, x^2, x^3)}{\partial x^3}. \eqno(11.3) $$
The relationships (11.1), (11.2), and (11.3) are easily derived from (10.7) or from
(10.9). Let’s define the function ϕ(x1, x2, x3) as the sum of three integrals:
$$ \varphi(x^1, x^2, x^3) = c + \int\limits_{0}^{x^1} F^1(x^1, 0, 0)\, dx^1 + \int\limits_{0}^{x^2} F^2(x^1, x^2, 0)\, dx^2 + \int\limits_{0}^{x^3} F^3(x^1, x^2, x^3)\, dx^3. \eqno(11.4) $$
Here c is an arbitrary constant. Now we only have to check that the function (11.4) is that very scalar field for which F = grad ϕ.
Let’s differentiate the function ϕ with respect to the variable x3 . The constant
c and the first two integrals in (11.4) do not depend on x3 . Therefore, we have
Zx3
∂ϕ ∂
= F 3(x1 , x2, x3) dx3 = F 3(x1, x2, x3). (11.5)
∂x3 ∂x3
0
Now let's differentiate ϕ with respect to x^2. The constant c and the first integral in (11.4) do not depend on x^2, hence,

$$ \frac{\partial\phi}{\partial x^2}=F^2(x^1,x^2,0)+\int\limits_0^{x^3}\frac{\partial F^3(x^1,x^2,x^3)}{\partial x^2}\,dx^3. \tag{11.6} $$

In order to transform the expression being integrated in (11.6) we use the formula (11.2). This leads to the following result:

$$ \frac{\partial\phi}{\partial x^2}=F^2(x^1,x^2,0)+\int\limits_0^{x^3}\frac{\partial F^2(x^1,x^2,x^3)}{\partial x^3}\,dx^3=F^2(x^1,x^2,0)+F^2(x^1,x^2,x)\,\Big|_{x=0}^{x=x^3}=F^2(x^1,x^2,x^3). \tag{11.7} $$
Copyright © Sharipov R.A., 1996, 2004.
In calculating the derivative ∂ϕ/∂x^1 we use the same tricks as in the case of the other two derivatives ∂ϕ/∂x^3 and ∂ϕ/∂x^2:
$$ \frac{\partial\phi}{\partial x^1}=\frac{\partial}{\partial x^1}\int\limits_0^{x^1}F^1(x^1,0,0)\,dx^1+\frac{\partial}{\partial x^1}\int\limits_0^{x^2}F^2(x^1,x^2,0)\,dx^2+\frac{\partial}{\partial x^1}\int\limits_0^{x^3}F^3(x^1,x^2,x^3)\,dx^3= $$

$$ =F^1(x^1,0,0)+\int\limits_0^{x^2}\frac{\partial F^2(x^1,x^2,0)}{\partial x^1}\,dx^2+\int\limits_0^{x^3}\frac{\partial F^3(x^1,x^2,x^3)}{\partial x^1}\,dx^3. $$
To transform the last two integrals we use the relationships (11.1) and (11.3):

$$ \frac{\partial\phi}{\partial x^1}=F^1(x^1,0,0)+F^1(x^1,x,0)\,\Big|_{x=0}^{x=x^2}+F^1(x^1,x^2,x)\,\Big|_{x=0}^{x=x^3}=F^1(x^1,x^2,x^3). \tag{11.8} $$
The relationships (11.5), (11.7), and (11.8) show that grad ϕ = F for the function
ϕ(x1 , x2, x3) given by the formula (11.4). The theorem is proved.
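The construction (11.4) can be checked symbolically on a sample potential field. In the sketch below the test potential ψ is our arbitrary choice; any C² function would serve equally well:

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')

# Sample potential field: F = grad psi, so F automatically satisfies
# the potentiality conditions (11.1)-(11.3)
psi = x1**2 * x2 + sp.sin(x3) * x1 + x2 * x3
F = [sp.diff(psi, v) for v in (x1, x2, x3)]

# The scalar field (11.4); the integration variable is renamed to t
c = sp.Symbol('c')
phi = (c
       + sp.integrate(F[0].subs({x1: t, x2: 0, x3: 0}), (t, 0, x1))
       + sp.integrate(F[1].subs({x2: t, x3: 0}), (t, 0, x2))
       + sp.integrate(F[2].subs({x3: t}), (t, 0, x3)))

# grad phi must reproduce F, i.e. phi is indeed a potential of F
for Fi, v in zip(F, (x1, x2, x3)):
    assert sp.simplify(sp.diff(phi, v) - Fi) == 0
print('grad phi = F, so (11.4) yields a potential')
```

For this polynomial example ϕ comes out equal to ψ + c up to the constant ψ(0, 0, 0), in agreement with the fact that a potential is defined up to an additive constant.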
Theorem 11.2. Any vorticular vector field F in the space E is the rotor of
some other vector field A, i. e. F = rot A.
Proof. We perform the proof of this theorem in some rectangular Cartesian coordinate system with the orthonormal basis e₁, e₂, e₃. The condition of vorticity for the field F in such a coordinate system is expressed by a single equation:

$$ \frac{\partial F^1(x)}{\partial x^1}+\frac{\partial F^2(x)}{\partial x^2}+\frac{\partial F^3(x)}{\partial x^3}=0. \tag{11.9} $$
Let’s construct the vector field A defining its components in the chosen coordinate
system by the following three formulas:
Zx3 Zx2
1 2 1 2 3 3
A = F (x , x , x ) dx − F 3(x1, x2, 0) dx2,
0 0
Zx3
2
A =− F 1(x1 , x2, x3) dx3 , (11.10)
0
A3 = 0.
Let’s show that the field A with components (11.10) is that very field for which
rot A = F. We shall do it calculating directly the components of the rotor in the
chosen coordinate system. For the first component we have
Zx3
∂A3 ∂A2 ∂
− = F 1(x1, x2, x3) dx3 = F 1(x1 , x2, x3).
∂x2 ∂x3 ∂x3
0
Here we used the rule of differentiation of an integral with variable upper limit. In calculating the second component we take into account that the second integral in the expression for the component A^1 in (11.10) does not depend on x^3:

$$ \frac{\partial A^1}{\partial x^3}-\frac{\partial A^3}{\partial x^1}=\frac{\partial}{\partial x^3}\int\limits_0^{x^3}F^2(x^1,x^2,x^3)\,dx^3=F^2(x^1,x^2,x^3). $$
For the third component we obtain

$$ \frac{\partial A^2}{\partial x^1}-\frac{\partial A^1}{\partial x^2}=-\int\limits_0^{x^3}\left(\frac{\partial F^1(x^1,x^2,x^3)}{\partial x^1}+\frac{\partial F^2(x^1,x^2,x^3)}{\partial x^2}\right)dx^3+\frac{\partial}{\partial x^2}\int\limits_0^{x^2}F^3(x^1,x^2,0)\,dx^2= $$

$$ =\int\limits_0^{x^3}\frac{\partial F^3(x^1,x^2,x^3)}{\partial x^3}\,dx^3+F^3(x^1,x^2,0)=F^3(x^1,x^2,x)\,\Big|_{x=0}^{x=x^3}+F^3(x^1,x^2,0)=F^3(x^1,x^2,x^3). $$
In these calculations we used the relationship (11.9) in order to replace the sum of
two partial derivatives ∂F 1 /∂x1 + ∂F 2 /∂x2 by −∂F 3 /∂x3. Now, bringing together
the results of calculating all three components of the rotor, we see that rot A = F.
Hence, the required field A can indeed be chosen in the form of (11.10).
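The construction (11.10) can be tested in the same way: take a divergence-free polynomial field F (the sample below is our arbitrary choice), build A by the formulas (11.10), and verify rot A = F componentwise:

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')

# A sample vorticular field: div F = 0 (this particular F is an
# assumption made only to illustrate the construction)
F = [x2, x3, x1]
assert sum(sp.diff(F[i], v) for i, v in enumerate((x1, x2, x3))) == 0

# The vector potential (11.10), with the integration variable renamed to t
A1 = (sp.integrate(F[1].subs(x3, t), (t, 0, x3))
      - sp.integrate(F[2].subs({x2: t, x3: 0}), (t, 0, x2)))
A2 = -sp.integrate(F[0].subs(x3, t), (t, 0, x3))
A3 = sp.Integer(0)

# Components of rot A in a rectangular Cartesian coordinate system
rotA = [sp.diff(A3, x2) - sp.diff(A2, x3),
        sp.diff(A1, x3) - sp.diff(A3, x1),
        sp.diff(A2, x1) - sp.diff(A1, x2)]

for Ri, Fi in zip(rotA, F):
    assert sp.simplify(Ri - Fi) == 0
print('rot A = F')
```

For this F one gets A = (x₃²/2 − x₁x₂, −x₂x₃, 0), one of infinitely many vector potentials: any field of the form A + grad ψ works as well.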
CHAPTER III
CURVILINEAR COORDINATES
length of its radius-vector ρ = |\overrightarrow{OA}| and the value of the angle ϕ between the ray OX and the radius-vector of the point A. Certainly, one should also choose a positive (counterclockwise) direction to which the angle ϕ is laid (this is equivalent to choosing a preferable orientation on the plane). Angles laid in the opposite direction are understood as negative angles. The numbers ρ and ϕ are called the polar coordinates of the point A.
Let’s associate some Cartesian coordinate system with the polar coordinates as
shown on Fig. 1.2. We choose the point O as an origin, then direct the abscissa
axis along the ray OX and get the ordinate axis from the abscissa axis rotating it
by 90◦ . Then the Cartesian coordinates of the point A are derived from its polar
coordinates by means of the formulas
x1 = ρ cos(ϕ),
(1.1)
x2 = ρ sin(ϕ).
$$ \tan(\varphi/2)=\frac{x^2}{x^1+\sqrt{(x^1)^2+(x^2)^2}}. $$
However, we prefer the not absolutely exact expression for ϕ from (1.2) since it is
relatively simple.
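Both conversion directions can be sketched numerically. The exact half-angle expression above is what `cartesian_to_polar` below uses; the function names are ours, chosen only for this example:

```python
import math

def polar_to_cartesian(rho, phi):
    # Formula (1.1)
    return rho * math.cos(phi), rho * math.sin(phi)

def cartesian_to_polar(x1, x2):
    # rho is the length of the radius-vector; phi is recovered from the
    # exact half-angle formula tan(phi/2) = x2 / (x1 + sqrt((x1)^2 + (x2)^2)),
    # which fails only on the ray x2 = 0, x1 <= 0 (the denominator vanishes)
    rho = math.hypot(x1, x2)
    phi = 2.0 * math.atan2(x2, x1 + rho)
    return rho, phi

# Round trip away from the singular point O
rho0, phi0 = 2.5, 1.2
x1, x2 = polar_to_cartesian(rho0, phi0)
rho1, phi1 = cartesian_to_polar(x1, x2)
assert abs(rho1 - rho0) < 1e-12 and abs(phi1 - phi0) < 1e-12
print('round trip ok')
```

At the origin O both x^1 and x^2 vanish and ϕ is undefined, in accordance with O being the singular point of the polar coordinate system discussed below.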
Let’s draw the series of equidistant straight lines parallel to the axes on the
map R2 of the polar coordinate system (see Fig. 1.4 below). The mapping (1.1)
takes them to the series of rays and concentric circles on the (x1 , x2) plane.
The straight lines on Fig. 1.4 and the rays and circles on Fig. 1.5 compose the coordinate network of the polar coordinate system. By reducing the intervals between the lines one can obtain a denser coordinate network. This procedure can be repeated infinitely many times, producing denser and denser networks at each step. Ultimately (in the continuum limit), one can think of the coordinate network as maximally dense. Such a network consists of two families of lines: the first family is given by the condition ϕ = const, the second one by the similar condition ρ = const.
§ 1. SOME EXAMPLES OF CURVILINEAR COORDINATE SYSTEMS.
On Fig. 1.4 exactly two coordinate lines pass through each point of the map:
one is from the first family and the other is from the second family. On the
(x^1, x^2) plane this condition is fulfilled at all points except for the origin O, where all coordinate lines of the first family intersect each other. The origin O is the only singular point of the polar coordinate system.
The cylindrical coordinate system in the space E is obtained from the polar coordinates on a plane by adding the third coordinate h. As in the case of the polar coordinate system, we associate some Cartesian coordinate system with the cylindrical coordinate system (see Fig. 1.6). Then

$$ x^1=\rho\,\cos(\varphi),\qquad x^2=\rho\,\sin(\varphi),\qquad x^3=h. \tag{1.3} $$
Coordinate lines of spherical coordinates form three families. The first family is composed of the rays coming out from the point O; the second family is formed by circles that lie in various vertical planes passing through the axis Ox^3; and the third family consists of horizontal circles whose centers are on the axis Ox^3.
Exactly three coordinate lines pass through each regular point of the space E, one
line from each family.
The condition ρ = const specifies a sphere of radius ρ in the space E. The coordinate lines of the second and third families define the network of meridians and parallels on this sphere, exactly the same as is used in geography to define the coordinates on the Earth's surface.
The matrix J of the form (2.2) is called the Jacobi matrix of the mapping u: D → ℝ³ given by the triple of differentiable functions u^1, u^2, u^3 in the domain D. It is obvious that the regularity of the functions u^1, u^2, u^3 at a point is equivalent to the non-degeneracy of the Jacobi matrix at that point: det J ≠ 0.
Theorem 2.1. If continuously differentiable functions u1 , u2, u3 with the do-
main D are regular at a point A, then there exists some neighborhood O(A) of the
point A and some neighborhood O(u(A)) of the point u(A) in the space R3 such
that the following conditions are fulfilled:
(1) the mapping u : O(A) → O(u(A)) is bijective;
(2) the inverse mapping u−1 : O(u(A)) → O(A) is continuously differentiable.
The theorem 2.1 or propositions equivalent to it are usually proved in the course
of mathematical analysis (see [2]). They are known as the theorems on implicit
functions.
Definition 2.2. We say that an ordered triple of continuously differentiable functions u^1, u^2, u^3 with the domain D ⊂ E defines a curvilinear coordinate system in D if it is regular at all points of D and if the mapping u determined by these functions is a bijective mapping from D to some domain U ⊂ ℝ³.
The cylindrical coordinate system is given by the three functions u^1 = ρ(x), u^2 = ϕ(x), and u^3 = h(x) from (1.4), while the spherical coordinate system is given by the functions (1.6). However, the triples of functions (1.4) and (1.6) satisfy the conditions from the definition 2.2 only after reducing somewhat their domains. Upon a proper choice of a domain D for (1.4) and (1.6) the inverse mappings u⁻¹ are given by the formulas (1.3) and (1.5).
Suppose that in a domain D ⊂ E a curvilinear coordinate system u^1, u^2, u^3 is given. Let's choose an auxiliary Cartesian coordinate system in E. Then u^1, u^2, u^3 is a triple of functions defining a map u from D onto some domain U ⊂ ℝ³:

$$ \begin{cases} u^1=u^1(x^1,x^2,x^3),\\ u^2=u^2(x^1,x^2,x^3),\\ u^3=u^3(x^1,x^2,x^3). \end{cases} \tag{2.3} $$
The domain D is called the domain being mapped, the domain U ⊂ ℝ³ is called the map or the chart, while u⁻¹: U → D is called the chart mapping. The chart mapping is given by the following three functions:

$$ \begin{cases} x^1=x^1(u^1,u^2,u^3),\\ x^2=x^2(u^1,u^2,u^3),\\ x^3=x^3(u^1,u^2,u^3). \end{cases} \tag{2.4} $$
Denote by r the radius-vector of the point with Cartesian coordinates x^1, x^2, x^3. Then instead of three scalar functions (2.4) we can use one vectorial function

$$ \mathbf r(u^1,u^2,u^3)=\sum_{q=1}^{3}x^q(u^1,u^2,u^3)\cdot\mathbf e_q. \tag{2.5} $$
Let’s fix some two of three coordinates u1, u2 , u3 and let’s vary the third of them.
Thus we get three families of straight lines within the domain U ⊂ R3 :
1 1 1
1 1
u = t,
u =c ,
u =c ,
u2 = c 2 , u2 = t, u2 = c 2 , (2.6)
3
u = c3 ,
3
u = c3 ,
3
u = t.
Here c1 , c2 , c3 are constants. The straight lines (2.6) form a rectangular coordinate
network within the chart U . Exactly one straight line from each of the families
(2.6) passes through each point of the chart. Substituting (2.6) into (2.5) we map
the rectangular network from U onto a curvilinear network in the domain D ⊂ E.
Such a network is called the coordinate network of a curvilinear coordinate system.
The coordinate network of a curvilinear coordinate system on the domain D
consists of three families of lines. Due to the bijectivity of the mapping u : D → U
exactly three coordinate lines pass through each point of the domain D — one line
from each family. Each coordinate line has its canonical parametrization: t = u1
is the parameter for the lines of the first family, t = u2 is the parameter for the
lines of the second family, and finally, t = u3 is the parameter for the lines of
the third family. At each point of the domain D we have three tangent vectors,
they are tangent to the coordinate lines of the three families passing through that
point. Let’s denote them E1 , E2 , E3 . The vectors E1 , E2 , E3 are obtained by
differentiating the radius-vector r(u1, u2, u3) with respect to the parameters u1,
u^2, u^3 of coordinate lines. Therefore, we can write

$$ \mathbf E_j(u^1,u^2,u^3)=\frac{\partial\mathbf r(u^1,u^2,u^3)}{\partial u^j}. \tag{2.7} $$
Let’s substitute (2.5) into (2.7). The basis vectors e1 , e2, e3 do not depend on the
variables u1 , u2 , u3, hence, we get
3
X ∂xq (u1 , u2, u3)
Ej (u1 , u2, u3) = · eq . (2.8)
q=1
∂uj
§ 2. MOVING FRAME OF A CURVILINEAR COORDINATE SYSTEM.
The formula (2.8) determines the expansion of the vectors E₁, E₂, E₃ in the basis e₁, e₂, e₃. The column-vectors of the coordinates of E₁, E₂, and E₃ can be concatenated into the following matrix:

$$ I=\left\|\begin{matrix} \dfrac{\partial x^1}{\partial u^1} & \dfrac{\partial x^1}{\partial u^2} & \dfrac{\partial x^1}{\partial u^3}\\[2mm] \dfrac{\partial x^2}{\partial u^1} & \dfrac{\partial x^2}{\partial u^2} & \dfrac{\partial x^2}{\partial u^3}\\[2mm] \dfrac{\partial x^3}{\partial u^1} & \dfrac{\partial x^3}{\partial u^2} & \dfrac{\partial x^3}{\partial u^3} \end{matrix}\right\|. \tag{2.9} $$
Comparing (2.9) and (2.2), we see that (2.9) is the Jacobi matrix for the mapping u⁻¹: U → D given by the functions (2.4). Let's substitute (2.4) into (2.3):

$$ u^i(x^1(u^1,u^2,u^3),\,x^2(u^1,u^2,u^3),\,x^3(u^1,u^2,u^3))=u^i. \tag{2.10} $$
The identity (2.10) follows from the fact that the functions (2.3) and (2.4) define two mutually inverse mappings u and u⁻¹. Let's differentiate the identity (2.10) with respect to the variable u^j:

$$ \sum_{q=1}^{3}\frac{\partial u^i(x^1,x^2,x^3)}{\partial x^q}\,\frac{\partial x^q(u^1,u^2,u^3)}{\partial u^j}=\delta^i_j. \tag{2.11} $$
Here we used the chain rule for differentiating the composite function in (2.10). The relationship (2.11) shows that the matrices (2.2) and (2.9) are inverse to each other. More precisely, we have the following relationship:

$$ I=J^{-1}. \tag{2.12} $$

The frame vectors E₁, E₂, E₃ are expanded in the basis e₁, e₂, e₃ of the auxiliary Cartesian coordinate system by means of a transition matrix S:

$$ \mathbf E_i=\sum_{q=1}^{3}S^q_i(u^1,u^2,u^3)\cdot\mathbf e_q. \tag{2.13} $$
The transition matrix S in the formula (2.13) coincides with the Jacobi matrix
(2.9), therefore its components depend on u1, u2 , u3 . These are the natural
variables for the components of S.
The inverse transition from the basis E1 , E2 , E3 to the basis of the Cartesian
coordinate system is given by the inverse matrix T = S −1 . Due to (2.12)
the inverse transition matrix coincides with the Jacobi matrix (2.2). Therefore,
x1 , x2, x3 are the natural variables for the components of the matrix T :
$$ \mathbf e_q=\sum_{i=1}^{3}T^i_q(x^1,x^2,x^3)\cdot\mathbf E_i. \tag{2.14} $$
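For the cylindrical coordinates (1.3) the frame vectors (2.8), the Jacobi matrix (2.9), and the inverse relationship (2.11) can all be checked symbolically. A minimal sympy sketch, using its `jacobian` method for the matrix of partial derivatives:

```python
import sympy as sp

rho, phi, h = sp.symbols('rho phi h', positive=True)

# Chart mapping (2.4) for cylindrical coordinates, taken from (1.3)
x = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi), h])
u = (rho, phi, h)

# Jacobi matrix (2.9): its columns are the frame vectors E_1, E_2, E_3
# from formula (2.8)
I_mat = x.jacobian(u)
E = [I_mat[:, j] for j in range(3)]

# E_2 is tangent to the circles rho = const, h = const; its length is rho
assert sp.simplify(E[1].dot(E[1]) - rho**2) == 0

# The Jacobi matrix (2.2) of the forward mapping, expressed through the
# curvilinear coordinates, must be inverse to (2.9), as (2.11) states
J = I_mat.inv()
assert sp.simplify(J * I_mat - sp.eye(3)) == sp.zeros(3, 3)
print('frame vectors and Jacobi matrices are consistent')
```

The condition `positive=True` on ρ keeps the computation away from the singular axis ρ = 0, where the matrix (2.9) degenerates.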
The mappings u and ũ inverse to the chart mappings are given similarly:

$$ \begin{cases} u^1=u^1(x^1,x^2,x^3),\\ u^2=u^2(x^1,x^2,x^3),\\ u^3=u^3(x^1,x^2,x^3), \end{cases} \qquad \begin{cases} \tilde u^1=\tilde u^1(x^1,x^2,x^3),\\ \tilde u^2=\tilde u^2(x^1,x^2,x^3),\\ \tilde u^3=\tilde u^3(x^1,x^2,x^3). \end{cases} \tag{3.2} $$
Let’s substitute the first set of functions (3.1) into the arguments of the second set
§ 3. CHANGE OF CURVILINEAR COORDINATES. 53
of functions (3.2). Similarly, we substitute the second set of functions (3.1) into
the arguments of the first set of functions in (3.2). As a result we get the functions
$$ \begin{aligned} &\tilde u^1(x^1(u^1,u^2,u^3),\,x^2(u^1,u^2,u^3),\,x^3(u^1,u^2,u^3)),\\ &\tilde u^2(x^1(u^1,u^2,u^3),\,x^2(u^1,u^2,u^3),\,x^3(u^1,u^2,u^3)),\\ &\tilde u^3(x^1(u^1,u^2,u^3),\,x^2(u^1,u^2,u^3),\,x^3(u^1,u^2,u^3)), \end{aligned} \tag{3.3} $$

$$ \begin{aligned} &u^1(x^1(\tilde u^1,\tilde u^2,\tilde u^3),\,x^2(\tilde u^1,\tilde u^2,\tilde u^3),\,x^3(\tilde u^1,\tilde u^2,\tilde u^3)),\\ &u^2(x^1(\tilde u^1,\tilde u^2,\tilde u^3),\,x^2(\tilde u^1,\tilde u^2,\tilde u^3),\,x^3(\tilde u^1,\tilde u^2,\tilde u^3)),\\ &u^3(x^1(\tilde u^1,\tilde u^2,\tilde u^3),\,x^2(\tilde u^1,\tilde u^2,\tilde u^3),\,x^3(\tilde u^1,\tilde u^2,\tilde u^3)) \end{aligned} \tag{3.4} $$
which define the pair of mutually inverse mappings ũ ◦ u−1 and u ◦ ũ−1. For the
sake of brevity we write these sets of functions as follows:
$$ \begin{cases} \tilde u^1=\tilde u^1(u^1,u^2,u^3),\\ \tilde u^2=\tilde u^2(u^1,u^2,u^3),\\ \tilde u^3=\tilde u^3(u^1,u^2,u^3), \end{cases} \qquad \begin{cases} u^1=u^1(\tilde u^1,\tilde u^2,\tilde u^3),\\ u^2=u^2(\tilde u^1,\tilde u^2,\tilde u^3),\\ u^3=u^3(\tilde u^1,\tilde u^2,\tilde u^3). \end{cases} \tag{3.5} $$
The formulas (3.5) express the coordinates of a point from the domain D in some
curvilinear coordinate system through its coordinates in some other coordinate
system. These formulas are called the transformation formulas or the formulas for
changing the curvilinear coordinates.
Each of the two curvilinear coordinate systems has its own moving frame within
the domain D = D1 ∩ D2 . Let’s denote by S and T the transition matrices relating
these two moving frames. Then we can write
$$ \tilde{\mathbf E}_j=\sum_{i=1}^{3}S^i_j\cdot\mathbf E_i, \qquad \mathbf E_i=\sum_{k=1}^{3}T^k_i\cdot\tilde{\mathbf E}_k. \tag{3.6} $$
Theorem 3.1. The components of the transition matrices S and T for the moving frames of two curvilinear coordinate systems in (3.6) are determined by the partial derivatives of the functions (3.5):

$$ S^i_j(\tilde u^1,\tilde u^2,\tilde u^3)=\frac{\partial u^i}{\partial\tilde u^j}, \qquad T^k_i(u^1,u^2,u^3)=\frac{\partial\tilde u^k}{\partial u^i}. \tag{3.7} $$
Proof. We shall prove only the first formula in (3.7). The proof of the second
formula is absolutely analogous to the proof of the first one. Let’s choose some
auxiliary Cartesian coordinate system and then write the formula (2.8) applied to
the frame vectors of the second curvilinear coordinate system:
$$ \tilde{\mathbf E}_j(\tilde u^1,\tilde u^2,\tilde u^3)=\sum_{q=1}^{3}\frac{\partial x^q(\tilde u^1,\tilde u^2,\tilde u^3)}{\partial\tilde u^j}\cdot\mathbf e_q. \tag{3.8} $$
Due to (2.14) and (2.2), the basis vectors e_q are expressed through the frame vectors of the first coordinate system:

$$ \mathbf e_q=\sum_{i=1}^{3}\frac{\partial u^i(x^1,x^2,x^3)}{\partial x^q}\cdot\mathbf E_i. \tag{3.9} $$
Now let’s substitute (3.9) into (3.8). As a result we get the formula relating the
frame vectors of two curvilinear coordinate systems:
$$ \tilde{\mathbf E}_j=\sum_{i=1}^{3}\left(\,\sum_{q=1}^{3}\frac{\partial u^i(x^1,x^2,x^3)}{\partial x^q}\,\frac{\partial x^q(\tilde u^1,\tilde u^2,\tilde u^3)}{\partial\tilde u^j}\right)\cdot\mathbf E_i. \tag{3.10} $$
Comparing (3.10) with (3.6), for the components of the matrix S we get

$$ S^i_j=\sum_{q=1}^{3}\frac{\partial u^i(x^1,x^2,x^3)}{\partial x^q}\,\frac{\partial x^q(\tilde u^1,\tilde u^2,\tilde u^3)}{\partial\tilde u^j}. \tag{3.11} $$
Remember that the Cartesian coordinates x^1, x^2, x^3 in the above formula (3.11) are related to the curvilinear coordinates ũ^1, ũ^2, ũ^3 by means of (3.1). Hence, the sum in the right hand side of (3.11) is the partial derivative of the composite function u^i(x^1(ũ^1,ũ^2,ũ^3), x^2(ũ^1,ũ^2,ũ^3), x^3(ũ^1,ũ^2,ũ^3)) from (3.4):

$$ S^i_j=\frac{\partial u^i}{\partial\tilde u^j}. $$

Note that the functions (3.4), written in the form of (3.5), are exactly the functions relating ũ^1, ũ^2, ũ^3 and u^1, u^2, u^3, and it is their derivatives that enter the formula (3.7). The theorem is proved.
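Theorem 3.1 admits a direct symbolic test. In the sketch below the first system is cylindrical, and the second is an artificial rescaling ũ¹ = (u¹)², ũ² = u², ũ³ = u³; this particular pair is our own assumption, chosen only to make the transition matrices non-constant:

```python
import sympy as sp

# First system: cylindrical u = (rho, phi, h);
# second system: u~ = (rho^2, phi, h), denoted (s, p, t) below
rho, phi, h = sp.symbols('rho phi h', positive=True)
s, p, t = sp.symbols('s p t', positive=True)

r_u = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), h])
r_ut = sp.Matrix([sp.sqrt(s)*sp.cos(p), sp.sqrt(s)*sp.sin(p), t])

E = r_u.jacobian((rho, phi, h))    # moving frame of the first system
Et = r_ut.jacobian((s, p, t))      # moving frame of the second system

# Transition functions (3.5): rho = sqrt(u~^1), phi = u~^2, h = u~^3,
# and the matrix S^i_j = du^i/du~^j from formula (3.7)
u_of_ut = sp.Matrix([sp.sqrt(s), p, t])
S = u_of_ut.jacobian((s, p, t))

# Check the first relationship (3.6): E~_j = sum_i S^i_j E_i,
# after expressing the first frame through the coordinates u~
E_sub = E.subs({rho: sp.sqrt(s), phi: p, h: t})
assert sp.simplify(E_sub * S - Et) == sp.zeros(3, 3)
print('theorem 3.1 confirmed for this pair of coordinate systems')
```

The check is just the chain rule in matrix form: the columns of `E_sub * S` are the sums Σᵢ S^i_j E_i.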
A remark on the orientation. From the definition 2.2 we derive that the
functions (2.3) are continuously differentiable. Due to the theorem 2.1 the func-
tions (2.4) representing the inverse mappings are also continuously differentiable.
Then the components of the matrix S in the formula (2.13) coinciding with the
components of the Jacobi matrix (2.9) are continuous functions within the domain
U . The same is true for the determinant of the matrix S: the determinant
det S(u1 , u2, u3) is a continuous function in the domain U which is nonzero at
all points of this domain. A nonzero continuous real function in a connected set
U cannot take the values of different signs in U . This means that det S > 0 or
det S < 0. This means that the orientation of the triple of vectors forming the
moving frame of a curvilinear coordinate system is the same for all points of a
domain where it is defined. Since the space E is equipped with the preferable
orientation, we can subdivide all curvilinear coordinates in E into right-oriented
and left-oriented coordinate systems.
A remark on the smoothness. The definition 2.2 yields the concept of a
continuously differentiable curvilinear coordinate system. However, the functions
(2.3) could belong to a higher smoothness class C m . In this case we say that we
have a curvilinear coordinate system of the smoothness class C m . The components
of the Jacobi matrix (2.2) for such a coordinate system are the functions of the
class C m−1 . Due to the relationship (2.12) the components of the Jacobi matrix
§ 4. VECTORIAL AND TENSORIAL FIELDS . . . 55
(2.9) belong to the same smoothness class C m−1. Hence, the functions (2.4)
belong to the smoothness class C m .
If we have two curvilinear coordinate systems of the smoothness class C m , then,
according to the above considerations, the transformation functions (3.5) belong
to the class C m , while the components of the transition matrices S and T given
by the formulas (3.7) belong to the smoothness class C m−1 .
The quantities F^i(u^1, u^2, u^3) in such an expansion are naturally called the components of the vector field F in the curvilinear coordinates u^1, u^2, u^3. If we have another curvilinear coordinate system ũ^1, ũ^2, ũ^3 in the domain D, then we have another expansion of the form (4.1):

$$ \mathbf F(\tilde u^1,\tilde u^2,\tilde u^3)=\sum_{i=1}^{3}\tilde F^i(\tilde u^1,\tilde u^2,\tilde u^3)\cdot\tilde{\mathbf E}_i(\tilde u^1,\tilde u^2,\tilde u^3). \tag{4.2} $$
By means of the formulas (3.6) one can easily derive the relationships binding the components of the field F in the expansions (4.1) and (4.2):

$$ F^i(u)=\sum_{j=1}^{3}S^i_j(\tilde u)\,\tilde F^j(\tilde u), \qquad u^i=u^i(\tilde u^1,\tilde u^2,\tilde u^3). \tag{4.3} $$
The relationships (4.3) are naturally interpreted as the generalizations for the
relationships (1.2) from Chapter II for the case of curvilinear coordinates.
Note that Cartesian coordinate systems can be treated as a special case of
curvilinear coordinates. The transition functions ui = ui (ũ1 , ũ2, ũ3) in the case of
a pair of Cartesian coordinate systems are linear, therefore the matrix S calculated
according to the theorem 3.1 in this case is a constant matrix.
Now let F be either a field of covectors, a field of linear operators, or a field
of bilinear forms. In any case the components of the field F at some point are
determined by fixing some basis attached to that point. The vectors of the moving
frame of a curvilinear coordinate system at a point with coordinates u1 , u2 , u3
provide the required basis. The components of the field F determined by this
basis are called the components of the field F in those curvilinear coordinates. The
transformation rules for the components of the fields listed above under a change
of curvilinear coordinates generalize the formulas (1.3), (1.4), and (1.5) from
56 CHAPTER III. CURVILINEAR COORDINATES.
Chapter II. For a covectorial field F the transformation rule for its components
under a change of coordinates looks like
$$ F_i(u)=\sum_{j=1}^{3}T^j_i(u)\,\tilde F_j(\tilde u), \qquad u^i=u^i(\tilde u^1,\tilde u^2,\tilde u^3). \tag{4.4} $$
In the case of a field of bilinear (quadratic) forms the generalization of the formula
(1.5) from Chapter II looks like
$$ F_{ij}(u)=\sum_{p=1}^{3}\sum_{q=1}^{3}T^p_i(u)\,T^q_j(u)\,\tilde F_{pq}(\tilde u), \qquad u^i=u^i(\tilde u^1,\tilde u^2,\tilde u^3). \tag{4.6} $$
Let F be a tensor field of the type (r, s). In contrast to a vectorial field, the value of such a tensorial field at a point has no visual embodiment in the form of an arrowhead segment. Moreover, in the general case there is no visually explicit way of finding the numerical values for the components of such a field in a given basis. However, according to the definition 1.1 from Chapter II, a
tensor is a geometric object that for each basis has an array of components
associated with this basis. Let’s denote by F(u1 , u2, u3) the value of the field F
at the point with coordinates u1, u2, u3. This is a tensor whose components in
the basis E1 (u1, u2, u3), E2 (u1 , u2, u3), E3 (u1 , u2, u3) are called components of the
field F in a given curvilinear coordinate system. The transformation rules for the
components of a tensor field under a change of a coordinate system follow from
the formula (1.6) in Chapter II. For a tensorial field of the type (r, s) it looks like
$$ F^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}(u)=\sum_{\substack{p_1\ldots\,p_r\\ q_1\ldots\,q_s}}S^{i_1}_{p_1}(\tilde u)\ldots S^{i_r}_{p_r}(\tilde u)\;T^{q_1}_{j_1}(u)\ldots T^{q_s}_{j_s}(u)\;\tilde F^{p_1\ldots\,p_r}_{q_1\ldots\,q_s}(\tilde u). \tag{4.7} $$
The formula (4.7) has two important differences as compared to the corresponding formula (1.7) in Chapter II. In the case of curvilinear coordinates
(1) the transition functions u^i(ũ^1, ũ^2, ũ^3) need not be linear functions;
(2) the transition matrices S(ũ) and T(u) are not necessarily constant matrices.
Note that these differences do not affect the algebraic operations with tensorial
fields. The operations of addition, tensor product, contraction, index permutation,
symmetrization, and alternation are implemented by the same formulas as in
§ 5. DIFFERENTIATION OF TENSOR FIELDS . . .
Cartesian coordinates. The differences (1) and (2) reveal themselves only in the operation of covariant differentiation of tensor fields.
Any curvilinear coordinate system is naturally equipped with the metric tensor g. This is a tensor whose components are given by the mutual scalar products of the frame vectors of the given coordinate system:

$$ g_{ij}=(\mathbf E_i(u)\,|\,\mathbf E_j(u)). \tag{4.8} $$

The components of the inverse metric tensor ĝ are obtained by inverting the matrix g. In curvilinear coordinates the quantities g_{ij} and g^{ij} are not necessarily constants any more.
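For instance, for the cylindrical coordinates (1.3) the components (4.8) can be computed directly from the frame vectors. A minimal sympy sketch:

```python
import sympy as sp

rho, phi, h = sp.symbols('rho phi h', positive=True)

# Frame vectors of the cylindrical coordinate system: the columns of the
# Jacobi matrix of the chart mapping, as in formula (2.8)
r = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), h])
E = r.jacobian((rho, phi, h))

# Metric components (4.8): g_ij = (E_i | E_j); the entries of E^T E are
# exactly the pairwise scalar products of the columns
g = sp.simplify(E.T * E)
assert g == sp.diag(1, rho**2, 1)

# Inverse metric: obtained by inverting the matrix g; here it clearly
# depends on the point, unlike in Cartesian coordinates
assert sp.simplify(g.inv() - sp.diag(1, rho**-2, 1)) == sp.zeros(3, 3)
print('g =', g)
```

The nonconstant entry g₂₂ = ρ² is the familiar factor behind arc length ds² = dρ² + ρ² dϕ² + dh² in cylindrical coordinates.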
We already know that the metric tensor g defines the volume pseudotensor ω. As before, in curvilinear coordinates its components are given by the formula (6.11) from Chapter II. Since the space E has the preferable orientation, the volume pseudotensor can be transformed to the volume tensor ω. The formula (8.1) from Chapter II for the components of this tensor remains valid in a curvilinear coordinate system as well.
$$ S^i_j(\tilde x)=\frac{\partial u^i}{\partial\tilde x^j}, \qquad T^k_i(u)=\frac{\partial\tilde x^k}{\partial u^i}. \tag{5.2} $$
Denote by Ã^k(x̃^1, x̃^2, x̃^3) the components of the vector field A in the Cartesian coordinate system x̃^1, x̃^2, x̃^3. Then we get

$$ \tilde A^k=\sum_{p=1}^{3}T^k_p(u)\,A^p(u). $$
$$ \tilde B^k_q=\frac{\partial\tilde A^k}{\partial\tilde x^q}=\sum_{p=1}^{3}\frac{\partial}{\partial\tilde x^q}\Bigl(T^k_p(u)\,A^p(u)\Bigr). \tag{5.3} $$
Let’s apply the Leibniz rule for calculating the partial derivative in (5.3). As a
result we get two sums. Then, substituting these sums into (5.4), we obtain
$$ \nabla_j A^i=\sum_{q=1}^{3}\sum_{p=1}^{3}\left(\,\sum_{k=1}^{3}S^i_k(\tilde x)\,T^k_p(u)\right)T^q_j(u)\,\frac{\partial A^p(u)}{\partial\tilde x^q}+ $$

$$ +\sum_{p=1}^{3}\left(\,\sum_{q=1}^{3}\sum_{k=1}^{3}S^i_k(\tilde x)\,T^q_j(u)\,\frac{\partial T^k_p(u)}{\partial\tilde x^q}\right)A^p(u). $$
Note that the matrices S and T are inverse to each other. Therefore, we can calculate the sums over k and p in the first summand. Moreover, we replace T^q_j(u) by the derivatives ∂x̃^q/∂u^j due to the formula (5.2), and we get

$$ \sum_{q=1}^{3}T^q_j(u)\,\frac{\partial}{\partial\tilde x^q}=\sum_{q=1}^{3}\frac{\partial\tilde x^q}{\partial u^j}\,\frac{\partial}{\partial\tilde x^q}=\frac{\partial}{\partial u^j}. $$
Taking into account all the above arguments, we transform the formula for the covariant derivative ∇_j A^i into the following one:

$$ \nabla_j A^i(u)=\frac{\partial A^i(u)}{\partial u^j}+\sum_{p=1}^{3}\left(\,\sum_{k=1}^{3}S^i_k(\tilde x)\,\frac{\partial T^k_p(u)}{\partial u^j}\right)A^p(u). $$
We introduce a special notation for the sum enclosed in the round brackets in the above formula, denoting it by Γ^i_{jp}:

$$ \Gamma^i_{jp}(u)=\sum_{k=1}^{3}S^i_k(\tilde x)\,\frac{\partial T^k_p(u)}{\partial u^j}. \tag{5.5} $$
Taking into account the notations (5.5), now we can write the rule of covariant differentiation of a vector field in curvilinear coordinates as follows:

$$ \nabla_j A^i=\frac{\partial A^i}{\partial u^j}+\sum_{p=1}^{3}\Gamma^i_{jp}\,A^p. \tag{5.6} $$
The quantities Γijp calculated according to (5.5) are called the connection compo-
nents or the Christoffel symbols. These quantities are some inner characteristics of
a curvilinear coordinate system. This fact is supported by the following lemma.
Lemma 5.1. The connection components Γ^i_{jp} of a curvilinear coordinate system u^1, u^2, u^3 given by the formula (5.5) do not depend on the choice of the auxiliary Cartesian coordinate system x̃^1, x̃^2, x̃^3.

Proof. Let's multiply both sides of (5.5) by E_i and sum over the index i:

$$ \sum_{i=1}^{3}\Gamma^i_{jp}\,\mathbf E_i=\sum_{k=1}^{3}\frac{\partial T^k_p(u)}{\partial u^j}\left(\,\sum_{i=1}^{3}S^i_k(\tilde x)\,\mathbf E_i\right). \tag{5.7} $$
The sum over i in the right hand side of the equality (5.7) can be calculated explicitly due to the first of the following two formulas:

$$ \tilde{\mathbf e}_k=\sum_{i=1}^{3}S^i_k\,\mathbf E_i, \qquad \mathbf E_p=\sum_{k=1}^{3}T^k_p\,\tilde{\mathbf e}_k. \tag{5.8} $$
These formulas (5.8) relate the frame vectors E₁, E₂, E₃ and the basis vectors ẽ₁, ẽ₂, ẽ₃ of the auxiliary Cartesian coordinate system. Now (5.7) is written as

$$ \sum_{i=1}^{3}\Gamma^i_{jp}\,\mathbf E_i=\sum_{k=1}^{3}\frac{\partial T^k_p(u)}{\partial u^j}\,\tilde{\mathbf e}_k=\sum_{k=1}^{3}\frac{\partial}{\partial u^j}\Bigl(T^k_p(u)\,\tilde{\mathbf e}_k\Bigr). $$
The basis vector ẽ_k does not depend on u^1, u^2, u^3. Therefore, it is brought into the brackets under the differentiation with respect to u^j. The sum over k in the right hand side of the above formula is calculated explicitly due to the second formula (5.8). As a result the relationship (5.7) is transformed to the following one:

$$ \frac{\partial\mathbf E_p}{\partial u^j}=\sum_{i=1}^{3}\Gamma^i_{jp}\cdot\mathbf E_i. \tag{5.9} $$
The formula (5.9) expresses the partial derivatives of the frame vectors back through these vectors. It can be understood as another way of calculating the connection components Γ^i_{jp}. This formula comprises nothing related to the auxiliary Cartesian coordinates x̃^1, x̃^2, x̃^3. The vector E_p(u^1, u^2, u^3) is determined
auxiliary Cartesian coordinates x̃1, x̃2 , x̃3. The vector Ep (u1, u2, u3) is determined
by the choice of curvilinear coordinates u1, u2 , u3 in the domain D. It is sufficient
to differentiate this vector with respect to uj and expand the resulting vector in
the basis of the frame vectors E1 , E2 , E3 . Then the coefficients of this expansion
yield the required values for Γijp. It is obvious that these values do not depend on
the choice of the auxiliary Cartesian coordinates x̃1, x̃2 , x̃3 above.
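The recipe just described, differentiate E_p with respect to u^j and expand the result in the frame E₁, E₂, E₃, is easy to carry out symbolically. The sketch below does it for the cylindrical coordinates (1.3); the nonzero values asserted at the end are the classical Christoffel symbols of that system:

```python
import sympy as sp

rho, phi, h = sp.symbols('rho phi h', positive=True)
u = (rho, phi, h)

# Moving frame of the cylindrical coordinate system
r = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), h])
E = r.jacobian(u)            # columns are the frame vectors E_1, E_2, E_3
E_inv = E.inv()

# Formula (5.9): Gamma^i_{jp} are the coefficients of the expansion of
# dE_p/du^j in the frame, i.e. the components of E^{-1} dE_p/du^j
Gamma = [[[sp.simplify((E_inv * sp.diff(E[:, p], u[j]))[i])
           for p in range(3)] for j in range(3)] for i in range(3)]

# Nonzero symbols: Gamma^1_{22} = -rho, Gamma^2_{12} = Gamma^2_{21} = 1/rho
assert sp.simplify(Gamma[0][1][1] + rho) == 0
assert sp.simplify(Gamma[1][0][1] - 1/rho) == 0
assert sp.simplify(Gamma[1][1][0] - 1/rho) == 0

# The computation makes no reference to any auxiliary Cartesian frame
# beyond the chart mapping itself, in accordance with lemma 5.1
print('Christoffel symbols of cylindrical coordinates computed')
```

Note also that Γ^i_{jp} = Γ^i_{pj} for every i in this output, anticipating the symmetry proved in theorem 7.1 below.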
Now let’s proceed with deriving the rule for covariant differentiation of an
arbitrary tensor field A of the type (r, s) in curvilinear coordinates. For this
purpose we need another one expression for the connection components. It is
derived from (5.5). Let’s transform the formula (5.5) as follows:
3 3
X ∂ X ∂Ski (x̃)
Γijp(u) = S i
k (x̃) T k
p (u) − T k
p (u) .
∂uj ∂uj
k=1 k=1
The matrices S and T are inverse to each other. Therefore, upon performing the summation over k in the first term we find that it vanishes. Hence, we get

$$ \Gamma^i_{jp}(u)=-\sum_{k=1}^{3}T^k_p(u)\,\frac{\partial S^i_k(\tilde x)}{\partial u^j}. \tag{5.10} $$
Let A^{i_1...i_r}_{j_1...j_s} be the components of a tensor field A of the type (r, s) in curvilinear coordinates. In order to calculate the components of B = ∇A we do the same maneuver as above. First of all we transform the components of A to some auxiliary Cartesian coordinate system:

$$ \tilde A^{p_1\ldots\,p_r}_{q_1\ldots\,q_s}=\sum_{\substack{v_1\ldots\,v_r\\ w_1\ldots\,w_s}}T^{p_1}_{v_1}\ldots T^{p_r}_{v_r}\;S^{w_1}_{q_1}\ldots S^{w_s}_{q_s}\;A^{v_1\ldots\,v_r}_{w_1\ldots\,w_s}. $$
$$ B^{i_1\ldots\,i_r}_{j_1\ldots\,j_{s+1}}=\sum_{\substack{p_1\ldots\,p_r\\ q_1\ldots\,q_{s+1}}}S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\;T^{q_1}_{j_1}\ldots T^{q_{s+1}}_{j_{s+1}}\,\tilde B^{p_1\ldots\,p_r}_{q_1\ldots\,q_{s+1}}= $$

$$ =\sum_{\substack{p_1\ldots\,p_r\\ q_1\ldots\,q_{s+1}}}\;\sum_{\substack{v_1\ldots\,v_r\\ w_1\ldots\,w_s}}S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\;T^{q_1}_{j_1}\ldots T^{q_{s+1}}_{j_{s+1}}\times\frac{\partial\Bigl(T^{p_1}_{v_1}\ldots T^{p_r}_{v_r}\,S^{w_1}_{q_1}\ldots S^{w_s}_{q_s}\,A^{v_1\ldots\,v_r}_{w_1\ldots\,w_s}\Bigr)}{\partial\tilde x^{q_{s+1}}}. \tag{5.11} $$
Applying the Leibniz rule for differentiating in (5.11), we get three groups of summands. The summands of the first group correspond to differentiating the components of the matrix T, the summands of the second group arise when we differentiate the components of the matrix S in (5.11), and finally, the unique summand of the third group is produced by differentiating A^{v_1...v_r}_{w_1...w_s}. In any one of these summands, if the term T^{p_m}_{v_m} or the term S^{w_n}_{q_n} is not differentiated, then this term is built into a sum that can be evaluated explicitly:

$$ \sum_{p_m=1}^{3}S^{i_m}_{p_m}\,T^{p_m}_{v_m}=\delta^{i_m}_{v_m}, \qquad \sum_{q_n=1}^{3}T^{q_n}_{j_n}\,S^{w_n}_{q_n}=\delta^{w_n}_{j_n}. $$
Therefore, one can evaluate explicitly most of the sums in the formula (5.11). Moreover, we have the following equality:

$$ \sum_{q_{s+1}=1}^{3}T^{q_{s+1}}_{j_{s+1}}\,\frac{\partial}{\partial\tilde x^{q_{s+1}}}=\sum_{q_{s+1}=1}^{3}\frac{\partial\tilde x^{q_{s+1}}}{\partial u^{j_{s+1}}}\,\frac{\partial}{\partial\tilde x^{q_{s+1}}}=\frac{\partial}{\partial u^{j_{s+1}}}. $$
Taking into account all the above facts, we can bring (5.11) to

$$ \nabla_{j_{s+1}}A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}=\sum_{m=1}^{r}\sum_{v_m=1}^{3}\left(\,\sum_{p_m=1}^{3}S^{i_m}_{p_m}\,\frac{\partial T^{p_m}_{v_m}}{\partial u^{j_{s+1}}}\right)A^{i_1\ldots\,v_m\ldots\,i_r}_{j_1\ldots\,j_s}+ $$

$$ +\sum_{n=1}^{s}\sum_{w_n=1}^{3}\left(\,\sum_{q_n=1}^{3}T^{q_n}_{j_n}\,\frac{\partial S^{w_n}_{q_n}}{\partial u^{j_{s+1}}}\right)A^{i_1\ldots\,i_r}_{j_1\ldots\,w_n\ldots\,j_s}+\frac{\partial A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{\partial u^{j_{s+1}}}. $$
Due to the formulas (5.5) and (5.10) one can express the sums enclosed in the round brackets in the above equality through the Christoffel symbols. Ultimately, the formula (5.11) is brought to the following form:

$$ \nabla_{j_{s+1}}A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}=\frac{\partial A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{\partial u^{j_{s+1}}}+\sum_{m=1}^{r}\sum_{v_m=1}^{3}\Gamma^{i_m}_{j_{s+1}v_m}\,A^{i_1\ldots\,v_m\ldots\,i_r}_{j_1\ldots\,j_s}-\sum_{n=1}^{s}\sum_{w_n=1}^{3}\Gamma^{w_n}_{j_{s+1}j_n}\,A^{i_1\ldots\,i_r}_{j_1\ldots\,w_n\ldots\,j_s}. \tag{5.12} $$
The formula (5.12) is the rule for covariant differentiation of a tensorial field A of the type (r, s) in an arbitrary curvilinear coordinate system. This formula can be commented as follows: the covariant derivative ∇_{j_{s+1}} is obtained from the partial derivative ∂/∂u^{j_{s+1}} by adding r + s terms, one for each index in the components of the field A. The terms associated with the upper indices enter with the positive sign, the terms associated with the lower indices enter with the negative sign. In such additional terms each of the upper indices i_m and each of the lower indices j_n is sequentially moved to the Christoffel symbol, while in its place we
of covariant differentiation is always written as the first lower index in Christoffel
symbols. The position of the summation indices vm and wn in Christoffel symbols
is always complementary to their positions in the components of the field A so
that they always form a pair of upper and lower indices. Though the formula
(5.12) is rather huge, we hope that due to the above comments one can easily
remember it and reproduce it in any particular case.
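One simple way to test (5.6), the vectorial case r = 1, s = 0 of (5.12), is to take a field that is constant in Cartesian coordinates: all its covariant derivatives vanish, so the curvilinear formula must reproduce zero through cancellation between the partial derivatives and the Γ-terms. A sympy sketch for cylindrical coordinates:

```python
import sympy as sp

rho, phi, h = sp.symbols('rho phi h', positive=True)
u = (rho, phi, h)

# Frame and Christoffel symbols of the cylindrical system, via (5.9)
r = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), h])
E = r.jacobian(u)
E_inv = E.inv()
Gamma = [[[sp.simplify((E_inv * sp.diff(E[:, p], u[j]))[i])
           for p in range(3)] for j in range(3)] for i in range(3)]

# Cylindrical components of the constant Cartesian field e_1: the
# solution of e_1 = sum_p A^p E_p
A = E_inv * sp.Matrix([1, 0, 0])

# Formula (5.6): nabla_j A^i = dA^i/du^j + sum_p Gamma^i_{jp} A^p;
# for a constant field every component of nabla A must vanish
for i in range(3):
    for j in range(3):
        nabla = sp.diff(A[i], u[j]) + sum(Gamma[i][j][p] * A[p]
                                          for p in range(3))
        assert sp.simplify(nabla) == 0
print('formula (5.6) reproduces nabla A = 0 for a constant field')
```

Here the nonzero partial derivatives of A^i, such as ∂(cos ϕ)/∂ϕ, are cancelled exactly by the Γ-terms, which is the whole point of the correction terms in (5.12).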
$$ \Gamma^k_{ij}=\sum_{m=1}^{3}\sum_{p=1}^{3}\sum_{q=1}^{3}S^k_m\,T^p_i\,T^q_j\,\tilde\Gamma^m_{pq}+\sum_{m=1}^{3}S^k_m\,\frac{\partial T^m_i}{\partial u^j}. \tag{6.1} $$
Here S and T are the transition matrices given by the formulas (3.7).
A remark on the smoothness. The derivatives of the components of T in
(6.1) and the formulas (3.7), where the components of T are defined as the partial
derivatives of the transition functions (3.5), show that the connection components
can be correctly defined only for coordinate systems of the smoothness class not
lower than C 2. The same conclusion follows from the formula (5.5) for Γijp .
Proof. In order to prove the theorem 6.1 we apply the formula (5.9). Let's write it for the frame vectors E₁, E₂, E₃, then apply the formula (3.6) to express E_j through the vectors Ẽ₁, Ẽ₂, and Ẽ₃:

$$ \sum_{k=1}^{3}\Gamma^k_{ij}\,\mathbf E_k=\frac{\partial\mathbf E_j}{\partial u^i}=\sum_{m=1}^{3}\frac{\partial}{\partial u^i}\Bigl(T^m_j\,\tilde{\mathbf E}_m\Bigr). \tag{6.2} $$
Applying the Leibniz rule to the right hand side of (6.2), we get two terms:

$$ \sum_{k=1}^{3}\Gamma^k_{ij}\,\mathbf E_k=\sum_{m=1}^{3}\frac{\partial T^m_j}{\partial u^i}\,\tilde{\mathbf E}_m+\sum_{q=1}^{3}T^q_j\,\frac{\partial\tilde{\mathbf E}_q}{\partial u^i}. \tag{6.3} $$
In the first term in the right hand side of (6.3) we express Ẽ_m through the vectors E₁, E₂, and E₃. In the second term we apply the chain rule and express the derivative with respect to u^i through the derivatives with respect to ũ^1, ũ^2, ũ^3:

$$ \sum_{k=1}^{3}\Gamma^k_{ij}\,\mathbf E_k=\sum_{k=1}^{3}\sum_{m=1}^{3}S^k_m\,\frac{\partial T^m_j}{\partial u^i}\,\mathbf E_k+\sum_{q=1}^{3}\sum_{p=1}^{3}T^q_j\,\frac{\partial\tilde u^p}{\partial u^i}\,\frac{\partial\tilde{\mathbf E}_q}{\partial\tilde u^p}. $$
§ 7. CONCORDANCE OF METRIC AND CONNECTION.
Now let’s replace ∂ ũp /∂ui by Tip relying upon the formulas (3.7) and then apply
the relationship (5.9) once more in the form of
3
∂ Ẽq X
= Γ̃m
pq Ẽm .
∂ ũp m=1
As a result of the above transformations we can write the equality (6.3) as follows:

$$ \sum_{k=1}^{3}\Gamma^k_{ij}\,\mathbf E_k=\sum_{k=1}^{3}\sum_{m=1}^{3}S^k_m\,\frac{\partial T^m_j}{\partial u^i}\,\mathbf E_k+\sum_{q=1}^{3}\sum_{p=1}^{3}\sum_{m=1}^{3}T^p_i\,T^q_j\,\tilde\Gamma^m_{pq}\,\tilde{\mathbf E}_m. $$
Now we need only to express Ẽm through the frame vectors E1 , E2 , E3 and collect
the similar terms in the above formula:
$$
\sum_{k=1}^{3}\Biggl(\Gamma^k_{ij}
-\sum_{m=1}^{3}\sum_{q=1}^{3}\sum_{p=1}^{3}S^k_m\,T^p_i\,T^q_j\,\tilde\Gamma^m_{pq}
-\sum_{m=1}^{3}S^k_m\,\frac{\partial T^m_j}{\partial u^i}\Biggr)\mathbf E_k=0.
$$
Since the frame vectors E1, E2, E3 are linearly independent, the expression
enclosed in round brackets must vanish. As a result we get an equality exactly
equivalent to the relationship (6.1), which we needed to prove.
$$
\nabla_k g_{ij}=0.\tag{7.1}
$$
The relationship (7.1) is known as the concordance condition for a metric and a
connection. Taking into account (5.12), we can rewrite this condition as
$$
\frac{\partial g_{ij}}{\partial u^k}
-\sum_{r=1}^{3}\Gamma^r_{ki}\,g_{rj}
-\sum_{r=1}^{3}\Gamma^r_{kj}\,g_{ir}=0.\tag{7.2}
$$
The formula (7.2) relates the connection components $\Gamma^k_{ij}$ and the components
of the metric tensor $g_{ij}$. Due to this relationship we can express $\Gamma^k_{ij}$ through
the components of the metric tensor provided we remember the following very
important property of the connection components (5.5).

Theorem 7.1. The connection given by the formula (5.5) is a symmetric connection, i.e. $\Gamma^k_{ij}=\Gamma^k_{ji}$.
Proof. From (5.2) and (5.5) for $\Gamma^k_{ij}$ we derive the following expression:
$$
\Gamma^k_{ij}(u)=\sum_{q=1}^{3}S^k_q\,\frac{\partial T^q_j(u)}{\partial u^i}
=\sum_{q=1}^{3}S^k_q\,\frac{\partial^2\tilde x^q}{\partial u^j\,\partial u^i}.\tag{7.3}
$$
CopyRight
c Sharipov R.A., 1996, 2004.
For the functions of the smoothness class C 2 the mixed second order partial
derivatives do not depend on the order of differentiation:
$$
\frac{\partial^2\tilde x^q}{\partial u^j\,\partial u^i}
=\frac{\partial^2\tilde x^q}{\partial u^i\,\partial u^j}.
$$
This fact immediately proves the symmetry of the Christoffel symbols given by
the formula (7.3). Thus, the proof is over.
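As a numerical illustration (our own sketch, not part of the original text), the formula (7.3) can be checked for polar coordinates on the plane, the two-dimensional analog of the above computation. Here x̃¹ = ρ cos ϕ and x̃² = ρ sin ϕ are the transition functions to Cartesian coordinates; only the second derivatives are taken numerically, and the function names below are ours:

```python
import math

def christoffel_polar(rho, phi, h=1e-4):
    """Evaluate formula (7.3) for polar coordinates:
    Gamma^k_ij = sum_q S^k_q * d^2 x~^q / (du^i du^j),
    where x~^1 = rho*cos(phi), x~^2 = rho*sin(phi) are the transition
    functions to Cartesian coordinates.  The second derivatives are
    approximated by central differences with step h."""
    x = lambda r, p: (r * math.cos(p), r * math.sin(p))
    # Jacobi matrix T^q_j = d x~^q / d u^j and its inverse S = T^{-1}
    T = [[math.cos(phi), -rho * math.sin(phi)],
         [math.sin(phi),  rho * math.cos(phi)]]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    S = [[ T[1][1] / det, -T[0][1] / det],
         [-T[1][0] / det,  T[0][0] / det]]
    u = (rho, phi)
    def d2x(q, i, j):
        # central second difference of x~^q with respect to u^i and u^j
        def f(di, dj):
            v = list(u)
            v[i] += di
            v[j] += dj
            return x(*v)[q]
        return (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h * h)
    return [[[sum(S[k][q] * d2x(q, i, j) for q in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

G = christoffel_polar(2.0, 0.7)
print(G[0][1][1])   # analytic value: Gamma^1_22 = -rho = -2
print(G[1][0][1])   # analytic value: Gamma^2_12 = 1/rho = 0.5
```

The symmetry proved in the theorem shows up in the output: G[k][i][j] and G[k][j][i] agree to the accuracy of the finite differences.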
Now, returning back to the formula (7.2) relating Γkij and gij , we introduce the
following notations that simplify the further calculations:
$$
\Gamma_{ijk}=\sum_{r=1}^{3}\Gamma^r_{ij}\,g_{kr}.\tag{7.4}
$$
It is clear that the quantities $\Gamma_{ijk}$ in (7.4) are produced from $\Gamma^k_{ij}$ by means of
the index lowering procedure described in Chapter II. Therefore, conversely, $\Gamma^k_{ij}$
are obtained from $\Gamma_{ijk}$ according to the following formula:
$$
\Gamma^k_{ij}=\sum_{r=1}^{3}g^{kr}\,\Gamma_{ijr}.\tag{7.5}
$$
From the symmetry of $\Gamma^k_{ij}$ it follows that the quantities $\Gamma_{ijk}$ in (7.4) are also
symmetric with respect to the indices i and j, i.e. $\Gamma_{ijk}=\Gamma_{jik}$. Using the
notations (7.4) and the symmetry of the metric tensor, the relationship (7.2) can
be rewritten in the following way:
$$
\frac{\partial g_{ij}}{\partial u^k}-\Gamma_{kij}-\Gamma_{kji}=0.\tag{7.6}
$$
Let’s complete (7.6) with two similar relationships applying two cyclic transposi-
tions of the indices i → j → k → i to the formula (7.6). As a result we obtain
$$
\begin{aligned}
&\frac{\partial g_{ij}}{\partial u^k}-\Gamma_{kij}-\Gamma_{kji}=0,\\
&\frac{\partial g_{jk}}{\partial u^i}-\Gamma_{ijk}-\Gamma_{ikj}=0,\\
&\frac{\partial g_{ki}}{\partial u^j}-\Gamma_{jki}-\Gamma_{jik}=0.
\end{aligned}\tag{7.7}
$$
Let’s add the last two relationships (7.7) and subtract the first one from the sum.
Taking into account the symmetry of Γijk with respect to i and j, we get
$$
\frac{\partial g_{jk}}{\partial u^i}+\frac{\partial g_{ki}}{\partial u^j}
-\frac{\partial g_{ij}}{\partial u^k}-2\,\Gamma_{ijk}=0.
$$
Using this equality, one can easily express Γijk through the components of the
metric tensor. Then one can substitute this expression into (7.5) and derive
$$
\Gamma^k_{ij}=\frac{1}{2}\sum_{r=1}^{3}g^{kr}
\Bigl(\frac{\partial g_{rj}}{\partial u^i}
+\frac{\partial g_{ir}}{\partial u^j}
-\frac{\partial g_{ij}}{\partial u^r}\Bigr).\tag{7.8}
$$
§ 8. PARALLEL TRANSLATION. 65
The relationship (7.8) is another formula for the Christoffel symbols $\Gamma^k_{ij}$; it follows
from the symmetry of $\Gamma^k_{ij}$ and from the concordance condition for the metric
and the connection. It is different from (5.5) and (5.10). The relationship (7.8) has
an important advantage as compared to (5.5): one need not use an auxiliary
Cartesian coordinate system in order to apply it. As compared to (5.9), in (7.8) one
need not deal with the vector-functions Ei(u1, u2, u3). All calculations in (7.8) are
performed within a fixed curvilinear coordinate system provided the components
of the metric tensor in this coordinate system are known.
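This advantage is easy to demonstrate in code. The following sketch (our own; the helper names are not from the book) evaluates (7.8) for the two-dimensional polar metric g = diag(1, ρ²), differentiating the metric by central differences, and reproduces the analytic values Γ¹₂₂ = −ρ and Γ²₁₂ = Γ²₂₁ = 1/ρ:

```python
def christoffel_from_metric(metric, u, h=1e-5):
    """Formula (7.8) in dimension 2:
    Gamma^k_ij = (1/2) sum_r g^{kr} (dg_rj/du^i + dg_ir/du^j - dg_ij/du^r).
    `metric` maps a point [u^1, u^2] to the matrix g_ij; the derivatives
    of the metric are approximated by central differences."""
    def dg(i, j, r):
        up, um = list(u), list(u)
        up[r] += h
        um[r] -= h
        return (metric(up)[i][j] - metric(um)[i][j]) / (2 * h)
    g = metric(list(u))
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[ g[1][1] / det, -g[0][1] / det],    # inverse metric g^{kr}
            [-g[1][0] / det,  g[0][0] / det]]
    return [[[0.5 * sum(ginv[k][r] * (dg(r, j, i) + dg(i, r, j) - dg(i, j, r))
                        for r in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

# polar coordinates u = (rho, phi): the metric is g = diag(1, rho^2)
polar_metric = lambda u: [[1.0, 0.0], [0.0, u[0] ** 2]]
G = christoffel_from_metric(polar_metric, [2.0, 0.7])
print(G[0][1][1])   # analytic value: Gamma^1_22 = -rho = -2
print(G[1][0][1])   # analytic value: Gamma^2_12 = 1/rho = 0.5
```

Note that, as promised, only the metric enters the computation; no auxiliary Cartesian coordinates are used.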
in general case are different. If the points A and B are close to each other,
then the triples of vectors E1(A), E2(A), E3(A) and E1(B), E2(B), E3(B) are
approximately the same. Hence, in this case the components of the vector a in
the expansions (8.1) differ only slightly from each other. This consideration
shows that in curvilinear coordinates the parallel translation should be performed
gradually: one should first bring the point B into coincidence with the point A,
then slowly move the point B toward its ultimate position and record the
components of the vector a in the second expansion (8.1) at each intermediate
position of the point B. The simplest way to implement this plan is to link A
and B with some smooth parametric curve r = r(t), where t ∈ [0, 1]. In a
curvilinear coordinate system a parametric curve is given by three functions
u1(t), u2(t), u3(t) that for each t ∈ [0, 1] yield the coordinates of the
corresponding point on the curve.
Theorem 8.1. For a parametric curve given by three functions u1(t), u2(t),
and u3 (t) in some curvilinear coordinate system the components of the tangent
vector τ (t) in the moving frame of that coordinate system are determined by the
derivatives u̇1(t), u̇2(t), u̇3(t).
Remember that due to the formula (2.7) the partial derivatives in (8.3) coincide
with the frame vectors of the curvilinear coordinate system. Therefore the formula
(8.3) itself can be rewritten as follows:
$$
\boldsymbol\tau(t)=\sum_{j=1}^{3}\dot u^j(t)\cdot
\mathbf E_j\bigl(u^1(t),u^2(t),u^3(t)\bigr).\tag{8.4}
$$
It is easy to see that (8.4) is the expansion of the tangent vector τ (t) in the basis
formed by the frame vectors of the curvilinear coordinate system. The components
of the vector τ (t) in the expansion (8.4) are the derivatives u̇1(t), u̇2(t), u̇3(t). The
theorem is proved.
Let’s apply the procedure of parallel translation to the vector a and translate
this vector to all points of the curve linking the points A and B (see Fig. 8.1).
Then we can write the following expansion for this vector
$$
\mathbf a=\sum_{i=1}^{3}a^i(t)\cdot
\mathbf E_i\bigl(u^1(t),u^2(t),u^3(t)\bigr).\tag{8.5}
$$
This expansion is analogous to (8.4). Let’s differentiate the relationship (8.5) with
respect to the parameter t and take into account that a = const:
$$
0=\frac{d\mathbf a}{dt}
=\sum_{i=1}^{3}\dot a^i\cdot\mathbf E_i
+\sum_{i=1}^{3}\sum_{j=1}^{3}a^i\,\frac{\partial\mathbf E_i}{\partial u^j}\,\dot u^j.
$$
Now let’s use the formula (5.9) in order to differentiate the frame vectors of the
curvilinear coordinate system. As a result we derive
$$
\sum_{i=1}^{3}\Biggl(\dot a^i
+\sum_{j=1}^{3}\sum_{k=1}^{3}\Gamma^i_{jk}\,\dot u^j\,a^k\Biggr)\cdot\mathbf E_i=0.
$$
Since the frame vectors E1, E2, E3 are linearly independent, this yields
$$
\dot a^i+\sum_{j=1}^{3}\sum_{k=1}^{3}\Gamma^i_{jk}\,\dot u^j\,a^k=0.\tag{8.6}
$$
The equation (8.6) is called the differential equation of the parallel translation of
a vector along a curve. This is the system of three linear differential equations of
the first order with respect to the components of the vector a. Actually, in order
to perform the parallel translation of a vector a from the point A to the point B
in curvilinear coordinates one should set the initial data for the components of the
vector a at the point A (i. e. for t = 0) and then solve the Cauchy problem for the
equations (8.6).
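This Cauchy problem is easy to solve numerically. The sketch below (ours; it assumes the standard polar Christoffel symbols Γ¹₂₂ = −ρ, Γ²₁₂ = Γ²₂₁ = 1/ρ) integrates the equations (8.6) with a classical fourth-order Runge-Kutta scheme along the quarter circle ρ = 1, ϕ = (π/2)t in the plane. Since the space is flat, the transported vector is simply a constant Cartesian vector re-expanded in the moving frame, so the exact final components are known in advance:

```python
import math

def gamma_polar(rho):
    # Christoffel symbols of polar coordinates: Gamma^1_22 = -rho,
    # Gamma^2_12 = Gamma^2_21 = 1/rho, all other components vanish
    G = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
    G[0][1][1] = -rho
    G[1][0][1] = G[1][1][0] = 1.0 / rho
    return G

def transport(a, steps=1000):
    """Integrate the equations (8.6) of parallel translation,
    da^i/dt = -sum_{j,k} Gamma^i_jk (du^j/dt) a^k,
    along the quarter circle rho(t) = 1, phi(t) = (pi/2) t, t in [0, 1],
    with a classical 4th-order Runge-Kutta scheme."""
    dphi = math.pi / 2           # du^2/dt; du^1/dt = 0 on this curve
    def rhs(a):
        G = gamma_polar(1.0)
        # only j = 2 contributes, since du^1/dt = 0
        return [-sum(G[i][1][k] * dphi * a[k] for k in range(2))
                for i in range(2)]
    dt = 1.0 / steps
    for _ in range(steps):
        k1 = rhs(a)
        k2 = rhs([a[i] + 0.5 * dt * k1[i] for i in range(2)])
        k3 = rhs([a[i] + 0.5 * dt * k2[i] for i in range(2)])
        k4 = rhs([a[i] + dt * k3[i] for i in range(2)])
        a = [a[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(2)]
    return a

# start at phi = 0 with a = E1, i.e. the Cartesian vector (1, 0);
# at phi = pi/2 the same Cartesian vector has components (0, -1)
a = transport([1.0, 0.0])
print(a)
```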
The procedure of the parallel translation of vectors along curves leads us to the
situation where at each point of a curve in E we have some vector attached to
that point. The same situation arises in considering the vectors τ , n, and b that
form the Frenet frame of a curve in E (see Chapter I). Generalizing this situation
one can consider the set of tensors of the type (r, s) attached to the points of some
curve. Defining such a set of tensors differs from defining a tensorial field in E
since in order to define a tensor field in E one should attach a tensor to each point
of the space, not only to the points of a curve. In the case, where the tensors
of the type (r, s) are defined only at the points of a curve, we say that a tensor
field of the type (r, s) on a curve is given. In order to write the components of
such a tensor field A we can use the moving frame E1 , E2 , E3 of some curvilinear
coordinate system in some neighborhood of the curve. These components form a
set of functions of the scalar parameter t specifying the points of the curve:
$$
A^{i_1\ldots i_r}_{j_1\ldots j_s}=A^{i_1\ldots i_r}_{j_1\ldots j_s}(t).\tag{8.7}
$$
Under a change of curvilinear coordinate system the quantities (8.7) are transformed
according to the standard rule
$$
A^{i_1\ldots i_r}_{j_1\ldots j_s}(t)
=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}(t)\ldots S^{i_r}_{p_r}(t)\;
T^{q_1}_{j_1}(t)\ldots T^{q_s}_{j_s}(t)\;
\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}(t),\tag{8.8}
$$
where S(t) and T(t) are the values of the transition matrices at the points of the
curve. They are given by the following formulas:
$$
S(t)=S\bigl(\tilde u^1(t),\tilde u^2(t),\tilde u^3(t)\bigr),\qquad
T(t)=T\bigl(u^1(t),u^2(t),u^3(t)\bigr).\tag{8.9}
$$
We cannot use the formula (5.12) for differentiating the field A on the curve
since the only argument, which the functions (8.7) depend on, is the parameter t.
Therefore, we need to modify the formula (5.12) as follows:
$$
\nabla_t A^{i_1\ldots i_r}_{j_1\ldots j_s}
=\frac{dA^{i_1\ldots i_r}_{j_1\ldots j_s}}{dt}
+\sum_{m=1}^{r}\sum_{q=1}^{3}\sum_{v_m=1}^{3}
\Gamma^{i_m}_{q\,v_m}\,\dot u^q\,A^{i_1\ldots v_m\ldots i_r}_{j_1\ldots j_s}
-\sum_{n=1}^{s}\sum_{q=1}^{3}\sum_{w_n=1}^{3}
\Gamma^{w_n}_{q\,j_n}\,\dot u^q\,A^{i_1\ldots i_r}_{j_1\ldots w_n\ldots j_s}.\tag{8.10}
$$
The formula (8.10) expresses the rule for covariant differentiation of a tensor
field A with respect to the parameter t along a parametric curve in curvilinear
coordinates u1, u2, u3. Unlike (5.12), the index t beside the nabla sign is not an
additional index. It is written only to indicate the variable t with respect to which
the differentiation in the formula (8.10) is performed.
Theorem 8.2. Under a change of coordinates u1, u2, u3 for other coordinates
ũ1, ũ2, ũ3 the quantities $B^{i_1\ldots i_r}_{j_1\ldots j_s}=\nabla_t A^{i_1\ldots i_r}_{j_1\ldots j_s}$ calculated by means of the formula
(8.10) are transformed according to the rule (8.8) and define a tensor field $B=\nabla_t A$
of the type (r, s) which is called the covariant derivative of the field A with respect
to the parameter t along the curve.
Proof. The proof of this theorem consists in direct calculation. Let's begin with the
first term in (8.10). Let's express $A^{i_1\ldots i_r}_{j_1\ldots j_s}$ through the components of the field A
in the other coordinates ũ1, ũ2, ũ3 by means of (8.8). In calculating $dA^{i_1\ldots i_r}_{j_1\ldots j_s}/dt$
this is equivalent to differentiating both sides of (8.8) with respect to t:
$$
\begin{aligned}
\frac{dA^{i_1\ldots i_r}_{j_1\ldots j_s}}{dt}
&=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}\,
\frac{d\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}}{dt}\;+\\
&+\sum_{m=1}^{r}\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots\dot S^{i_m}_{p_m}\ldots S^{i_r}_{p_r}\,
T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}\,
\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}\;+\\
&+\sum_{n=1}^{s}\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,
T^{q_1}_{j_1}\ldots\dot T^{q_n}_{j_n}\ldots T^{q_s}_{j_s}\,
\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}.
\end{aligned}\tag{8.11}
$$
The derivative $\dot T^{q_n}_{j_n}$ here can be transformed in the following way:
$$
\dot T^{q_n}_{j_n}
=\sum_{k=1}^{3}\sum_{w_n=1}^{3}\dot T^k_{j_n}\,S^{w_n}_k\,T^{q_n}_{w_n}
=\sum_{w_n=1}^{3}\Biggl(\sum_{k=1}^{3}\frac{dT^k_{j_n}}{dt}\,S^{w_n}_k\Biggr)T^{q_n}_{w_n}.
$$
In order to transform further the above formulas for the derivatives $\dot S^{i_m}_{p_m}$ and $\dot T^{q_n}_{j_n}$
we use the second formula in (8.9):
$$
\dot S^{i_m}_{p_m}
=-\sum_{v_m=1}^{3}\Biggl(\sum_{k=1}^{3}\sum_{q=1}^{3}
S^{i_m}_k\,\frac{\partial T^k_{v_m}}{\partial u^q}\,\dot u^q\Biggr)S^{v_m}_{p_m},\tag{8.12}
$$
$$
\dot T^{q_n}_{j_n}
=\sum_{w_n=1}^{3}\Biggl(\sum_{k=1}^{3}\sum_{q=1}^{3}
S^{w_n}_k\,\frac{\partial T^k_{j_n}}{\partial u^q}\,\dot u^q\Biggr)T^{q_n}_{w_n}.\tag{8.13}
$$
Let's substitute (8.12) and (8.13) into (8.11). Then, taking into account the
relationship (8.8), we can perform the summation over $p_1,\ldots,p_r$ and $q_1,\ldots,q_s$
in the second and the third terms in (8.11), thus transforming (8.11) to
$$
\begin{aligned}
\frac{dA^{i_1\ldots i_r}_{j_1\ldots j_s}}{dt}
&=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}\,
\frac{d\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}}{dt}\;-\\
&-\sum_{m=1}^{r}\sum_{q=1}^{3}\sum_{v_m=1}^{3}
\Biggl(\sum_{k=1}^{3}S^{i_m}_k\,\frac{\partial T^k_{v_m}}{\partial u^q}\Biggr)
\dot u^q\,A^{i_1\ldots v_m\ldots i_r}_{j_1\ldots j_s}\;+\\
&+\sum_{n=1}^{s}\sum_{q=1}^{3}\sum_{w_n=1}^{3}
\Biggl(\sum_{k=1}^{3}S^{w_n}_k\,\frac{\partial T^k_{j_n}}{\partial u^q}\Biggr)
\dot u^q\,A^{i_1\ldots i_r}_{j_1\ldots w_n\ldots j_s}.
\end{aligned}\tag{8.14}
$$
The second and the third terms in (8.10) and (8.14) are similar in their structure.
Therefore, one can collect the similar terms upon substituting (8.14) into (8.10).
Collecting these similar terms, we get the following two expressions:
$$
\Gamma^{i_m}_{q\,v_m}-\sum_{k=1}^{3}S^{i_m}_k\,\frac{\partial T^k_{v_m}}{\partial u^q},
\qquad
\Gamma^{w_n}_{q\,j_n}-\sum_{k=1}^{3}S^{w_n}_k\,\frac{\partial T^k_{j_n}}{\partial u^q}.\tag{8.15}
$$
Due to the formula (6.1), each of the expressions (8.15) reduces to a combination
of the connection components $\tilde\Gamma^m_{pq}$ in the other coordinate system.
If we take into account (8.15) when substituting (8.14) into (8.10), then the
equality (8.10) is written in the following form:
$$
\begin{aligned}
\nabla_t A^{i_1\ldots i_r}_{j_1\ldots j_s}
&=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}\,
\frac{d\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}}{dt}\;+\\
&+\sum_{m=1}^{r}\sum_{q=1}^{3}\sum_{v_m=1}^{3}\sum_{p_m=1}^{3}\sum_{p=1}^{3}\sum_{k=1}^{3}
S^{i_m}_{p_m}\,\tilde\Gamma^{p_m}_{p\,k}\,T^p_q\,T^k_{v_m}\,
\dot u^q\,A^{i_1\ldots v_m\ldots i_r}_{j_1\ldots j_s}\;-\\
&-\sum_{n=1}^{s}\sum_{q=1}^{3}\sum_{w_n=1}^{3}\sum_{q_n=1}^{3}\sum_{p=1}^{3}\sum_{k=1}^{3}
S^{w_n}_k\,\tilde\Gamma^k_{p\,q_n}\,T^p_q\,T^{q_n}_{j_n}\,
\dot u^q\,A^{i_1\ldots i_r}_{j_1\ldots w_n\ldots j_s}.
\end{aligned}
$$
Expressing here the components of A through the components of à by means of (8.8)
and taking into account that $\sum_{q=1}^{3}T^p_q\,\dot u^q=\dot{\tilde u}^p$, we bring this equality to the form
$$
\nabla_t A^{i_1\ldots i_r}_{j_1\ldots j_s}
=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}
\Biggl(\frac{d\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}}{dt}
+\sum_{m=1}^{r}\sum_{p=1}^{3}\sum_{v_m=1}^{3}
\tilde\Gamma^{p_m}_{p\,v_m}\,\dot{\tilde u}^p\,
\tilde A^{p_1\ldots v_m\ldots p_r}_{q_1\ldots q_s}
-\sum_{n=1}^{s}\sum_{p=1}^{3}\sum_{w_n=1}^{3}
\tilde\Gamma^{w_n}_{p\,q_n}\,\dot{\tilde u}^p\,
\tilde A^{p_1\ldots p_r}_{q_1\ldots w_n\ldots q_s}\Biggr).\tag{8.16}
$$
Note that the expression enclosed in round brackets in (8.16) is exactly
$\nabla_t\tilde A^{p_1\ldots p_r}_{q_1\ldots q_s}$. Therefore, the formula (8.16) means that the components of the field
$\nabla_t A$ on a curve calculated according to the formula (8.10) obey the transformation
rule (8.8). Thus, the theorem 8.2 is proved.
Now let’s return to the formula (8.6). The left hand side of this formula
coincides with the expression (8.10) for the covariant derivative of the vector field
a with respect to the parameter t. Therefore, the equation of parallel translation
can be written as ∇ta = 0. In this form, the equation of parallel translation can
be easily generalized for the case of an arbitrary tensor A:
$$
\nabla_t A=0.\tag{8.17}
$$
The equation (8.17) cannot be derived directly since the procedure of parallel
translation for arbitrary tensors has no visual representation like Fig. 8.1.
Let's consider a segment of a straight line given parametrically by the functions
u1(t), u2(t), u3(t) in curvilinear coordinates. Let t = s be the natural parameter
on this straight line. Then the tangent vector τ(t) is a vector of unit length at
all points of the line. Its direction is also unchanged. Therefore, its components $\dot u^i$
satisfy the equation of parallel translation. Substituting $a^i=\dot u^i$ into (8.6), we get
$$
\ddot u^i+\sum_{j=1}^{3}\sum_{k=1}^{3}\Gamma^i_{jk}\,\dot u^j\,\dot u^k=0.\tag{8.18}
$$
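Equation (8.18) can be tested numerically on a concrete straight line. The sketch below (our own) writes the line x = s, y = 1 of the plane in polar coordinates, where s is the natural parameter, and evaluates the left hand side of (8.18) by central differences; both components should vanish:

```python
import math

def geodesic_residual(s, h=1e-4):
    """Left hand side of (8.18) for the straight line x = s, y = 1
    written in polar coordinates: rho(s) = sqrt(s^2 + 1),
    phi(s) = atan2(1, s).  Here s is the natural parameter, and the
    derivatives are approximated by central differences.  The polar
    Christoffel symbols Gamma^1_22 = -rho, Gamma^2_12 = Gamma^2_21 = 1/rho
    are substituted explicitly."""
    u = lambda t: (math.hypot(t, 1.0), math.atan2(1.0, t))
    um, u0, up = u(s - h), u(s), u(s + h)
    du = [(up[i] - um[i]) / (2 * h) for i in range(2)]
    d2u = [(up[i] - 2 * u0[i] + um[i]) / h ** 2 for i in range(2)]
    rho = u0[0]
    return [d2u[0] - rho * du[1] * du[1],            # i = 1 component
            d2u[1] + (2.0 / rho) * du[0] * du[1]]    # i = 2 component

res = geodesic_residual(0.7)
print(res)   # both components should vanish up to discretization error
```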
$$
\mathbf E_1=\begin{Vmatrix}\cos\varphi\\ \sin\varphi\end{Vmatrix},\qquad
\mathbf E_2=\begin{Vmatrix}-\rho\sin\varphi\\ \rho\cos\varphi\end{Vmatrix}.\tag{9.1}
$$
The column-vectors (9.1) are composed by the coordinates of the vectors E1 and
E2 in the orthonormal basis. Therefore, we can calculate their scalar products and
thus find the components of direct and inverse metric tensors g and ĝ:
$$
g_{ij}=\begin{Vmatrix}1&0\\ 0&\rho^2\end{Vmatrix},\qquad
g^{ij}=\begin{Vmatrix}1&0\\ 0&\rho^{-2}\end{Vmatrix}.\tag{9.2}
$$
Once the components of g and ĝ are known, we can calculate the Christoffel
symbols. For this purpose we apply the formula (7.8). The only nonzero
components turn out to be
$$
\Gamma^1_{22}=-\rho,\qquad \Gamma^2_{12}=\Gamma^2_{21}=\frac{1}{\rho}.\tag{9.3}
$$
§ 9. SOME CALCULATIONS . . . 71
Let's apply the connection components (9.3) in order to calculate the Laplace
operator △ in polar coordinates. Let ψ be some scalar field: ψ = ψ(ρ, ϕ). Then
$$
\triangle\psi=\sum_{i=1}^{2}\sum_{j=1}^{2}g^{ij}
\Biggl(\frac{\partial^2\psi}{\partial u^i\,\partial u^j}
-\sum_{k=1}^{2}\Gamma^k_{ij}\,\frac{\partial\psi}{\partial u^k}\Biggr).\tag{9.4}
$$
Substituting (9.2) and (9.3) into this formula, we get
$$
\triangle\psi=\frac{\partial^2\psi}{\partial\rho^2}
+\frac{1}{\rho}\,\frac{\partial\psi}{\partial\rho}
+\frac{1}{\rho^2}\,\frac{\partial^2\psi}{\partial\varphi^2}.\tag{9.5}
$$
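Formula (9.5) is easy to verify on simple test functions: ψ = ρ² cos 2ϕ = x² − y² is harmonic, while ψ = ρ² = x² + y² has the constant Laplacian 4. The following check (our own sketch, with the derivatives replaced by central differences) confirms both values:

```python
import math

def laplace_polar(psi, rho, phi, h=1e-4):
    """Evaluate the right hand side of (9.5),
    d^2 psi/d rho^2 + (1/rho) d psi/d rho + (1/rho^2) d^2 psi/d phi^2,
    approximating the derivatives by central differences."""
    d2r = (psi(rho + h, phi) - 2 * psi(rho, phi) + psi(rho - h, phi)) / h ** 2
    dr = (psi(rho + h, phi) - psi(rho - h, phi)) / (2 * h)
    d2p = (psi(rho, phi + h) - 2 * psi(rho, phi) + psi(rho, phi - h)) / h ** 2
    return d2r + dr / rho + d2p / rho ** 2

# psi = rho^2 cos(2 phi) = x^2 - y^2 is harmonic, so its Laplacian is 0
lap1 = laplace_polar(lambda r, p: r ** 2 * math.cos(2 * p), 1.3, 0.4)
# psi = rho^2 = x^2 + y^2 has the constant Laplacian 4
lap2 = laplace_polar(lambda r, p: r ** 2, 1.3, 0.4)
print(lap1, lap2)
```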
Now let's consider the cylindrical coordinate system. For the components of the
metric tensors g and ĝ in this case we have
$$
g_{ij}=\begin{Vmatrix}1&0&0\\ 0&\rho^2&0\\ 0&0&1\end{Vmatrix},\qquad
g^{ij}=\begin{Vmatrix}1&0&0\\ 0&\rho^{-2}&0\\ 0&0&1\end{Vmatrix}.\tag{9.6}
$$
Applying the formula (7.8) to this metric, we find that the nonzero connection
components are the same as in the polar case: $\Gamma^1_{22}=-\rho$ and $\Gamma^2_{12}=\Gamma^2_{21}=1/\rho$;
they constitute the arrays (9.7), (9.8), and (9.9) of the components $\Gamma^1_{ij}$, $\Gamma^2_{ij}$, $\Gamma^3_{ij}$
referred to below.
Let's rewrite the relationship (9.4) for the Laplace operator applied to a scalar
field ψ in the dimension 3:
$$
\triangle\psi=\sum_{i=1}^{3}\sum_{j=1}^{3}g^{ij}
\Biggl(\frac{\partial^2\psi}{\partial u^i\,\partial u^j}
-\sum_{k=1}^{3}\Gamma^k_{ij}\,\frac{\partial\psi}{\partial u^k}\Biggr).\tag{9.10}
$$
Substituting (9.7), (9.8), and (9.9) into the formula (9.10), we get
$$
\triangle\psi=\frac{\partial^2\psi}{\partial\rho^2}
+\frac{1}{\rho}\,\frac{\partial\psi}{\partial\rho}
+\frac{1}{\rho^2}\,\frac{\partial^2\psi}{\partial\varphi^2}
+\frac{\partial^2\psi}{\partial h^2}.\tag{9.11}
$$
Now we derive the formula for the components of the rotor in cylindrical
coordinates. Let A be a vector field and let A1, A2, A3 be its components in
cylindrical coordinates. In order to calculate the components of the field
F = rot A we use the formula (10.5) from Chapter II. This formula comprises the
volume tensor whose components are calculated by formula (8.1) from Chapter II.
The sign factor ξE in this formula is determined by the orientation of a coordinate
system. The cylindrical coordinate system can be either right-oriented or
left-oriented, depending on the orientation of the auxiliary Cartesian coordinate
system x1, x2, x3 which is related to the cylindrical coordinates by means of the
relationships (1.3). For definiteness we assume that the right-oriented cylindrical
coordinates are chosen. Then ξE = 1 and for the components of the rotor
F = rot A we derive
$$
F^m=\sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\sum_{q=1}^{3}
\sqrt{\det g}\;g^{mi}\,\varepsilon_{ijk}\,g^{jq}\,\nabla_q A^k.\tag{9.12}
$$
Taking into account (9.7), (9.8), (9.9), (9.6) and using (9.12), we get
$$
\begin{aligned}
&F^1=\frac{1}{\rho}\,\frac{\partial A^3}{\partial\varphi}
-\rho\,\frac{\partial A^2}{\partial h},\\
&F^2=\frac{1}{\rho}\,\frac{\partial A^1}{\partial h}
-\frac{1}{\rho}\,\frac{\partial A^3}{\partial\rho},\\
&F^3=\rho\,\frac{\partial A^2}{\partial\rho}
-\frac{1}{\rho}\,\frac{\partial A^1}{\partial\varphi}+2\,A^2.
\end{aligned}\tag{9.13}
$$
The formulas (9.13) can be written in form of the determinant
$$
\operatorname{rot}\mathbf A=\frac{1}{\rho}
\begin{vmatrix}
\mathbf E_1&\mathbf E_2&\mathbf E_3\\[4pt]
\dfrac{\partial}{\partial\rho}&\dfrac{\partial}{\partial\varphi}&\dfrac{\partial}{\partial h}\\[4pt]
A^1&\rho^2 A^2&A^3
\end{vmatrix}.\tag{9.14}
$$
Now let's consider the spherical coordinate system. In this case for the
components of the metric tensor g we have
$$
g_{ij}=\begin{Vmatrix}1&0&0\\ 0&\rho^2&0\\ 0&0&\rho^2\sin^2\vartheta\end{Vmatrix}.\tag{9.15}
$$
Then we calculate the connection components by means of (7.8) and write them
in the form of three arrays; the nonzero components are
$$
\Gamma^1_{22}=-\rho,\qquad \Gamma^1_{33}=-\rho\sin^2\vartheta,\tag{9.16}
$$
$$
\Gamma^2_{12}=\Gamma^2_{21}=\frac{1}{\rho},\qquad
\Gamma^2_{33}=-\sin\vartheta\cos\vartheta,\tag{9.17}
$$
$$
\Gamma^3_{13}=\Gamma^3_{31}=\frac{1}{\rho},\qquad
\Gamma^3_{23}=\Gamma^3_{32}=\frac{\cos\vartheta}{\sin\vartheta}.\tag{9.18}
$$
Substituting (9.16), (9.17), and (9.18) into the relationship (9.10), we get
$$
\triangle\psi=\frac{\partial^2\psi}{\partial\rho^2}
+\frac{2}{\rho}\,\frac{\partial\psi}{\partial\rho}
+\frac{1}{\rho^2}\,\frac{\partial^2\psi}{\partial\vartheta^2}
+\frac{\cos\vartheta}{\rho^2\sin\vartheta}\,\frac{\partial\psi}{\partial\vartheta}
+\frac{1}{\rho^2\sin^2\vartheta}\,\frac{\partial^2\psi}{\partial\varphi^2}.\tag{9.19}
$$
In a similar way, for the components of the rotor F = rot A in spherical
coordinates we obtain
$$
\begin{aligned}
&F^1=\sin\vartheta\,\frac{\partial A^3}{\partial\vartheta}
-\frac{1}{\sin\vartheta}\,\frac{\partial A^2}{\partial\varphi}
+2\cos\vartheta\,A^3,\\
&F^2=\frac{1}{\rho^2\sin\vartheta}\,\frac{\partial A^1}{\partial\varphi}
-\sin\vartheta\,\frac{\partial A^3}{\partial\rho}
-\frac{2\sin\vartheta}{\rho}\,A^3,\\
&F^3=\frac{1}{\sin\vartheta}\,\frac{\partial A^2}{\partial\rho}
-\frac{1}{\rho^2\sin\vartheta}\,\frac{\partial A^1}{\partial\vartheta}
+\frac{2}{\rho\sin\vartheta}\,A^2.
\end{aligned}\tag{9.20}
$$
Like (9.13), the formulas (9.20) can be written in form of the determinant:
$$
\operatorname{rot}\mathbf A=\frac{\rho^{-2}}{\sin\vartheta}
\begin{vmatrix}
\mathbf E_1&\mathbf E_2&\mathbf E_3\\[4pt]
\dfrac{\partial}{\partial\rho}&\dfrac{\partial}{\partial\vartheta}&\dfrac{\partial}{\partial\varphi}\\[4pt]
A^1&\rho^2 A^2&\rho^2\sin^2\vartheta\,A^3
\end{vmatrix}.\tag{9.21}
$$
The formulas (9.5), (9.11), and (9.19) for the Laplace operator and the formulas
(9.14) and (9.21) for the rotor are the main goal of the calculations performed just
above in this section. They are often used in applications and can be found in
some reference books for engineering computations.

The matrices g in all of the above coordinate systems are diagonal. Such
coordinate systems are called orthogonal, while the quantities $H_i=\sqrt{g_{ii}}$ are called
the Lame coefficients of orthogonal coordinates. Note that there are no orthonormal
curvilinear coordinate systems: all such systems are necessarily Cartesian. This
fact follows from (7.8) and (5.9).
CHAPTER IV
GEOMETRY OF SURFACES.
§ 1. Parametric surfaces.
Curvilinear coordinates on a surface.
A surface is a two-dimensional spatially extended geometric object. There
are several ways of expressing this two-dimensionality of surfaces quantitatively
(mathematically). In the three-dimensional Euclidean space E the choice
of an arbitrary point implies three degrees of freedom: a point is determined by
three coordinates. In order to decrease this extent of arbitrariness we can bind
the three coordinates of a point by an equation:
$$
F(x^1,x^2,x^3)=0.\tag{1.1}
$$
Then the choice of two coordinates determines the third coordinate of a point.
This means that we can define a surface by means of an equation in some
coordinate system (for the sake of simplicity we can choose a Cartesian coordinate
system). We have already used this method of defining surfaces (see formula (1.2)
in Chapter I) when we defined a curve as an intersection of two surfaces.
Another way of defining a surface is the parametric method. Unlike curves,
surfaces are parameterized by two parameters. Let's denote them u1 and u2:
$$
\mathbf r=\mathbf r(u^1,u^2)=
\begin{Vmatrix}x^1(u^1,u^2)\\ x^2(u^1,u^2)\\ x^3(u^1,u^2)\end{Vmatrix}.\tag{1.2}
$$
The formula (1.2) expresses the radius-vector of the points of a surface in some
Cartesian coordinate system as a function of two parameters u1, u2 . Usually, only
a part of a surface is represented in parametric form. Therefore, considering the
pair of numbers (u1 , u2 ) as a point of R2, we can assume that the point (u1 , u2)
runs over some domain U ⊂ R2 . Let’s denote by D the image of the domain U
under the mapping (1.2). Then D is the domain being mapped, U is the map or
the chart, and (1.2) is the chart mapping: it maps U onto D.
The smoothness class of the surface D is determined by the smoothness class
of the functions x1(u1, u2), x2(u1, u2), and x3(u1, u2) in formula (1.2). In what
follows we shall consider only those surfaces for which these functions are at least
continuously differentiable. Then, differentiating these functions, we can arrange
their partial derivatives into the Jacobi matrix
$$
I=\begin{Vmatrix}
\dfrac{\partial x^1}{\partial u^1}&\dfrac{\partial x^1}{\partial u^2}\\[4pt]
\dfrac{\partial x^2}{\partial u^1}&\dfrac{\partial x^2}{\partial u^2}\\[4pt]
\dfrac{\partial x^3}{\partial u^1}&\dfrac{\partial x^3}{\partial u^2}
\end{Vmatrix}.\tag{1.3}
$$
The Jacobi matrix (1.3) has three minors of the order 2. These are the
determinants of the following 2 × 2 matrices:
$$
\begin{Vmatrix}
\dfrac{\partial x^1}{\partial u^1}&\dfrac{\partial x^1}{\partial u^2}\\[4pt]
\dfrac{\partial x^2}{\partial u^1}&\dfrac{\partial x^2}{\partial u^2}
\end{Vmatrix},\qquad
\begin{Vmatrix}
\dfrac{\partial x^2}{\partial u^1}&\dfrac{\partial x^2}{\partial u^2}\\[4pt]
\dfrac{\partial x^3}{\partial u^1}&\dfrac{\partial x^3}{\partial u^2}
\end{Vmatrix},\qquad
\begin{Vmatrix}
\dfrac{\partial x^3}{\partial u^1}&\dfrac{\partial x^3}{\partial u^2}\\[4pt]
\dfrac{\partial x^1}{\partial u^1}&\dfrac{\partial x^1}{\partial u^2}
\end{Vmatrix}.\tag{1.4}
$$
In the case of regularity of the mapping (1.2), at least one of the determinants (1.4)
is nonzero. By renaming the variables x1, x2, x3, we can always arrange for the
first determinant to be nonzero:
$$
\begin{vmatrix}
\dfrac{\partial x^1}{\partial u^1}&\dfrac{\partial x^1}{\partial u^2}\\[4pt]
\dfrac{\partial x^2}{\partial u^1}&\dfrac{\partial x^2}{\partial u^2}
\end{vmatrix}\neq 0.\tag{1.5}
$$
In this case we consider the first two functions x1(u1, u2) and x2(u1, u2) in (1.2)
as a mapping and write them as follows:
$$
\begin{aligned}&x^1=x^1(u^1,u^2),\\ &x^2=x^2(u^1,u^2).\end{aligned}\tag{1.6}
$$
Due to (1.5) the mapping (1.6) is locally invertible. Upon restricting (1.6) to some
sufficiently small neighborhood of an arbitrary preliminarily chosen point one can
construct two continuously differentiable functions
$$
\begin{aligned}&u^1=u^1(x^1,x^2),\\ &u^2=u^2(x^1,x^2)\end{aligned}\tag{1.7}
$$
that implement the inverse mapping for (1.6). This fact is well known; it is a
version of the theorem on implicit functions (see [2], see also the theorem 2.1
in Chapter III). Let's substitute u1 and u2 from (1.7) into the arguments of the
third function x3(u1, u2) in the formula (1.2). As a result we obtain the function
F(x1, x2) = x3(u1(x1, x2), u2(x1, x2)) such that each regular fragment of a surface
can locally (i.e. in some neighborhood of each its point) be presented as a graph
of a continuously differentiable function of two variables:
$$
x^3=F(x^1,x^2).\tag{1.8}
$$
The conditions u1 = const and u2 = const determine two families
of coordinate lines on the plane of parameters u1, u2. They form the coordinate
network in U. The mapping (1.2) maps it onto the coordinate network on the
surface D (see Fig. 1.1 and Fig. 1.2). Let's consider the vectors E1 and E2 tangent
to the lines of the coordinate network on the surface D:
$$
\mathbf E_i(u^1,u^2)=\frac{\partial\mathbf r(u^1,u^2)}{\partial u^i}.\tag{1.10}
$$
The formula (1.10) defines a pair of tangent vectors E1 and E2 attached to each
point of the surface D.
The vector-function r(u1, u2) which defines the mapping (1.2) can be written in
form of the expansion in the basis of the auxiliary Cartesian coordinate system:
$$
\mathbf r(u^1,u^2)=\sum_{q=1}^{3}x^q(u^1,u^2)\cdot\mathbf e_q.\tag{1.11}
$$
Substituting the expansion (1.11) into (1.10) we can express the tangent vectors
E1 and E2 through the basis vectors e1, e2 , e3 :
$$
\mathbf E_i(u^1,u^2)=\sum_{q=1}^{3}
\frac{\partial x^q(u^1,u^2)}{\partial u^i}\cdot\mathbf e_q.\tag{1.12}
$$
Due to (1.12) the tangent vectors E1 and E2 can be written as the column-vectors
$$
\mathbf E_1=\begin{Vmatrix}
\partial x^1/\partial u^1\\ \partial x^2/\partial u^1\\ \partial x^3/\partial u^1
\end{Vmatrix},\qquad
\mathbf E_2=\begin{Vmatrix}
\partial x^1/\partial u^2\\ \partial x^2/\partial u^2\\ \partial x^3/\partial u^2
\end{Vmatrix}.\tag{1.13}
$$
Note that the column-vectors (1.13) coincide with the columns in the Jacobi
matrix (1.3). However, from the regularity condition (see the definition 1.1) it
follows that the columns of the Jacobi matrix (1.3) are linearly independent. This
consideration proves the following proposition.
Theorem 1.1. The tangent vectors E1 and E2 are linearly independent at each
point of a surface. Therefore, they form the frame of the tangent vector fields in D.
The frame vectors E1 and E2 attached to some point of a surface D define the
tangent plane at this point. Any vector tangent to the surface at this point lies
in the tangent plane, it can be expanded in the basis formed by the vectors E1
and E2 . Let’s consider some arbitrary curve γ lying completely on the surface (see
Fig. 1.1 and Fig. 1.2). In parametric form such a curve is given by two functions
of a parameter t. They define the curve as follows:
$$
\begin{aligned}&u^1=u^1(t),\\ &u^2=u^2(t).\end{aligned}\tag{1.14}
$$
By substituting (1.14) into (1.11) or into (1.2) we find the radius-vector of a point
of the curve in the auxiliary Cartesian coordinate system r(t) = r(u1(t), u2(t)).
Let’s differentiate r(t) with respect to t and find the tangent vector of the curve
given by the above two functions (1.14):
$$
\boldsymbol\tau(t)=\frac{d\mathbf r}{dt}
=\sum_{i=1}^{2}\frac{\partial\mathbf r}{\partial u^i}\cdot\frac{du^i}{dt}.
$$
Taking into account (1.10), we can write this as
$$
\boldsymbol\tau(t)=\sum_{i=1}^{2}\dot u^i\cdot\mathbf E_i.\tag{1.15}
$$
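The expansion (1.15) is easy to verify numerically for a concrete surface and curve. In the sketch below (our own; the surface and the curve are arbitrary choices) the surface is the graph z = x² + y² taken in parametric form, and the curve is u¹ = t, u² = t²; the directly computed tangent vector dr/dt coincides with the right hand side of (1.15):

```python
def r(u1, u2):
    # example surface: the graph z = x^2 + y^2 taken in parametric form
    return (u1, u2, u1 ** 2 + u2 ** 2)

def frame(u1, u2, h=1e-6):
    # tangent frame vectors E_i = dr/du^i (formula (1.10)),
    # computed by central differences
    E1 = [(a - b) / (2 * h) for a, b in zip(r(u1 + h, u2), r(u1 - h, u2))]
    E2 = [(a - b) / (2 * h) for a, b in zip(r(u1, u2 + h), r(u1, u2 - h))]
    return E1, E2

# the curve u^1(t) = t, u^2(t) = t^2 on the surface, at the point t = 0.5
t, h = 0.5, 1e-6
du1, du2 = 1.0, 2 * t                 # derivatives of u^1(t) and u^2(t)
# left hand side: tau = dr/dt computed directly along the curve
tau = [(a - b) / (2 * h)
       for a, b in zip(r(t + h, (t + h) ** 2), r(t - h, (t - h) ** 2))]
# right hand side of (1.15): tau = du^1/dt * E_1 + du^2/dt * E_2
E1, E2 = frame(t, t ** 2)
tau_frame = [du1 * a + du2 * b for a, b in zip(E1, E2)]
print(tau)         # analytic value: (1, 1, 1.5)
print(tau_frame)   # must coincide with tau
```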
The mappings ũ ◦ u−1 and u ◦ ũ−1 in (2.1) are also bijective; they can be
represented by the following pairs of functions:
$$
\begin{aligned}&\tilde u^1=\tilde u^1(u^1,u^2),\\ &\tilde u^2=\tilde u^2(u^1,u^2),\end{aligned}
\qquad\qquad
\begin{aligned}&u^1=u^1(\tilde u^1,\tilde u^2),\\ &u^2=u^2(\tilde u^1,\tilde u^2).\end{aligned}\tag{2.2}
$$
Theorem 2.1. The functions (2.2) representing ũ ◦ u−1 and u ◦ ũ−1 are
continuously differentiable.
Proof. We shall prove the continuous differentiability of the second pair of
functions (2.2). For the first pair the proof is analogous. Let’s choose some point
on the chart U and map it to D. Then we choose a suitable Cartesian coordinate
system in E such that the condition (1.5) is fulfilled and in some neighborhood of
the mapped point there exists the mapping (1.7) inverse for (1.6). The mapping
(1.7) is continuously differentiable.
The other curvilinear coordinate system in D induces the other pair of functions
that plays the same role as the functions (1.6):
$$
\begin{aligned}&x^1=x^1(\tilde u^1,\tilde u^2),\\ &x^2=x^2(\tilde u^1,\tilde u^2).\end{aligned}\tag{2.3}
$$
These are two of three functions that determine the mapping ũ−1 in form of (1.2).
The functions u1 = u1 (ũ1, ũ2) and u2 = u2(ũ1 , ũ2) that determine the mapping
u ◦ ũ−1 in (2.2) are obtained by substituting (2.3) into the arguments of (1.7):
Differentiating the identity $\mathbf r(u^1,u^2)=\mathbf r\bigl(\tilde u^1(u^1,u^2),\tilde u^2(u^1,u^2)\bigr)$, we derive the
analogous relationship inverse to the previous one:
$$
\mathbf E_i=\frac{\partial\mathbf r}{\partial u^i}
=\sum_{k=1}^{2}\frac{\partial\mathbf r}{\partial\tilde u^k}\cdot\frac{\partial\tilde u^k}{\partial u^i}
=\sum_{k=1}^{2}\frac{\partial\tilde u^k}{\partial u^i}\cdot\tilde{\mathbf E}_k.
$$
It is clear that the above relationships describe the direct and inverse transitions
from some tangent frame to another. Let’s write them as
$$
\tilde{\mathbf E}_j=\sum_{i=1}^{2}S^i_j\cdot\mathbf E_i,\qquad
\mathbf E_i=\sum_{k=1}^{2}T^k_i\cdot\tilde{\mathbf E}_k,\tag{2.6}
$$
where the components of the matrices S and T are given by the formulas
$$
S^i_j(\tilde u^1,\tilde u^2)=\frac{\partial u^i}{\partial\tilde u^j},\qquad
T^k_i(u^1,u^2)=\frac{\partial\tilde u^k}{\partial u^i}.\tag{2.7}
$$
From (2.7), we see that the transition matrices S and T are the Jacobi matrices
for the mappings given by the transition functions (2.2). They are non-degenerate
and are inverse to each other.
The transformations (2.2) and the transition matrices S and T related to them
are used in order to construct the theory of tensors and tensor fields analogous to
that which we considered in Chapter II and Chapter III. Tensors and tensor fields
defined through the transformations (2.2) and transition matrices (2.7) are called
inner tensors and inner tensor fields on a surface:
$$
F^{i_1\ldots i_r}_{j_1\ldots j_s}
=\sum_{\substack{p_1\ldots p_r\\ q_1\ldots q_s}}
S^{i_1}_{p_1}\ldots S^{i_r}_{p_r}\,
T^{q_1}_{j_1}\ldots T^{q_s}_{j_s}\,
\tilde F^{p_1\ldots p_r}_{q_1\ldots q_s}.\tag{2.8}
$$
§ 3. The metric tensor and the area tensor.

Let's consider the pairwise scalar products of the tangent frame vectors E1 and E2:
$$
g_{ij}=(\mathbf E_i\,|\,\mathbf E_j).\tag{3.1}
$$
Substituting (2.6) into (3.1), we find that under a change of a coordinate system
the quantities (3.1) are transformed as the components of an inner tensorial field of
the type (0, 2). The tensor g with the components (3.1) is called the metric tensor
of the surface. Note that the components of the metric tensor are determined by
means of the scalar product in the outer space E. Therefore, we say that the
tensor field g is induced by the outer scalar product. For this reason the tensor g
is called the metric tensor of the induced metric.
Symmetric tensors of the type (0, 2) are related to quadratic forms. This fact
yields another title for the tensor g. It is called the first quadratic form of a
surface. Sometimes, for the components of the first quadratic form the special
notations are used: g11 = E, g12 = g21 = F, g22 = G. These notations are
especially popular in the earlier publications on differential geometry:
$$
g_{ij}=\begin{Vmatrix}E&F\\ F&G\end{Vmatrix}.\tag{3.3}
$$
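For a concrete surface the quantities E, F, G are easy to compute. The sketch below (our own; the sphere radius R = 2 and the evaluation point are arbitrary choices) evaluates the first quadratic form of a sphere parameterized by the angles u¹, u², where the expected values are E = R², F = 0, G = R² sin²u¹:

```python
import math

R = 2.0   # sphere radius, an arbitrary choice for this example

def r(u1, u2):
    # sphere parameterized by the angles u^1, u^2
    return (R * math.sin(u1) * math.cos(u2),
            R * math.sin(u1) * math.sin(u2),
            R * math.cos(u1))

def first_form(u1, u2, h=1e-6):
    """E, F, G of the first quadratic form (3.3): g_ij = (E_i | E_j),
    where E_i = dr/du^i are computed by central differences."""
    E1 = [(a - b) / (2 * h) for a, b in zip(r(u1 + h, u2), r(u1 - h, u2))]
    E2 = [(a - b) / (2 * h) for a, b in zip(r(u1, u2 + h), r(u1, u2 - h))]
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return dot(E1, E1), dot(E1, E2), dot(E2, E2)

E, F, G = first_form(0.9, 0.3)
print(E)   # analytic value: R^2 = 4
print(F)   # analytic value: 0
print(G)   # analytic value: R^2 sin^2(u^1)
```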
Since the Gram matrix g is non-degenerate, we can define the inverse matrix
ĝ = g−1. The components of such inverse matrix are denoted by g ij , setting the
indices i and j in the upper position:
$$
\sum_{j=1}^{2}g^{ij}\,g_{jk}=\delta^i_k.\tag{3.4}
$$
For the matrix ĝ the proposition analogous to the theorem 6.1 from Chapter II
is valid. The components of this matrix define an inner tensor field of the type
(2, 0) on a surface, this field is called the inverse metric tensor or the dual metric
tensor. The proof of this proposition is completely analogous to the proof of the
theorem 6.1 in Chapter II. Therefore, here we do not give this proof.
From the symmetry of the matrix g and from the relationships (3.4) it follows
that the components of the inverse matrix ĝ are symmetric. The direct and inverse
metric tensors are used in order to lower and raise indices of tensor fields. These
operations are defined by the formulas analogous to (9.1) and (9.2) in Chapter II:
$$
B^{\,i_1\ldots i_{r-1}}_{\,j_1\ldots j_{s+1}}
=\sum_{k=1}^{2}
A^{\,i_1\ldots i_{m-1}\,k\,i_m\ldots i_{r-1}}_{\,j_1\ldots j_{n-1}\,j_{n+1}\ldots j_{s+1}}\,
g_{k j_n},\qquad
A^{\,i_1\ldots i_{r+1}}_{\,j_1\ldots j_{s-1}}
=\sum_{q=1}^{2}
B^{\,i_1\ldots i_{m-1}\,i_{m+1}\ldots i_{r+1}}_{\,j_1\ldots j_{n-1}\,q\,j_n\ldots j_{s-1}}\,
g^{q\,i_m}.\tag{3.5}
$$
The only difference of the formulas (3.5) from their analogs (9.1) and (9.2) in
Chapter II is that the summation indices k and q run over the range of two
numbers 1 and 2. Due to (3.4) the operations of raising and lowering indices (3.5)
are inverse to each other.
In order to define the area tensor (or the area pseudotensor) we need the
following skew-symmetric 2 × 2 matrix:
$$
d_{ij}=d^{ij}=\begin{Vmatrix}0&1\\ -1&0\end{Vmatrix}.\tag{3.6}
$$
The quantities (3.6) form the two-dimensional analog of the Levi-Civita symbol
(see formula (6.8) in Chapter II). These quantities satisfy the relationship
$$
\sum_{p=1}^{2}\sum_{q=1}^{2}d_{pq}\,M^p_i\,M^q_j=\det M\;d_{ij},\tag{3.7}
$$
where M is an arbitrary square matrix of the size 2 × 2.
Using the quantities dij and the matrix of the metric tensor g in some curvilinear
coordinate system, we construct the following quantities:
$$
\omega_{ij}=\sqrt{\det g}\;d_{ij}.\tag{3.8}
$$
From (3.7) one can derive the following relationship linking the quantities ωij and
ω̃pq defined according to the formula (3.8) in two different coordinate systems:
$$
\omega_{ij}=\operatorname{sign}(\det S)
\sum_{p=1}^{2}\sum_{q=1}^{2}T^p_i\,T^q_j\,\tilde\omega_{pq}.\tag{3.9}
$$
If the surface is equipped with the unitary pseudoscalar field ξD defining its
orientation, then the formula
$$
\omega_{ij}=\xi_D\,\sqrt{\det g}\;d_{ij}\tag{3.10}
$$
defines a tensorial field of the type (0, 2). It is called the area tensor. The formula
(3.10) differs from (3.8) only in the sign factor ξD (compare with the formula (8.1)
in Chapter II). Here one should note that not any surface admits some preferable
orientation globally. The Möbius strip is a well-known example of a non-orientable
surface.
$$
\mathbf n=\frac{[\mathbf E_1,\mathbf E_2]}{|[\mathbf E_1,\mathbf E_2]|}.\tag{4.1}
$$
The vector n determined by the formula (4.1) depends on the choice of a curvi-
linear coordinate system. Therefore, under a change of coordinate system it can
change its direction. Indeed, the relation of the frame vectors E1 , E2 and Ẽ1 , Ẽ2
is given by the formula (2.6). Therefore, we write
$$
[\mathbf E_1,\mathbf E_2]
=(T^1_1\,T^2_2-T^1_2\,T^2_1)\cdot[\tilde{\mathbf E}_1,\tilde{\mathbf E}_2]
=\det T\cdot[\tilde{\mathbf E}_1,\tilde{\mathbf E}_2].
$$
Now we easily derive the transformation rule for the normal vector n:
$$
\mathbf n=(-1)^S\cdot\tilde{\mathbf n}.\tag{4.2}
$$
The sign factor (−1)S = sign(det S) = ±1 here is the same as in the formula (2.8).
Another way of choosing the normal vector is possible if there is a preferable
orientation on a surface. Suppose that this orientation on D is given by the
unitary pseudoscalar field ξD. Then n is given by the formula
$$
\mathbf n=\xi_D\cdot\frac{[\mathbf E_1,\mathbf E_2]}{|[\mathbf E_1,\mathbf E_2]|}.\tag{4.3}
$$
In this case the transformation rule for the normal vector simplifies substantially:
$$
\mathbf n=\tilde{\mathbf n}.\tag{4.4}
$$
Since the vectors E1, E2, and n are linearly independent at each point of the
surface, the derivatives of the frame vectors can be expanded in them:
$$
\frac{\partial\mathbf E_j}{\partial u^i}
=\sum_{k=1}^{2}\Gamma^k_{ij}\cdot\mathbf E_k+b_{ij}\cdot\mathbf n.\tag{4.5}
$$
The derivatives of the unit vector n are perpendicular to this vector (see lemma 3.1
in Chapter I). Hence, we have the equality
$$
\frac{\partial\mathbf n}{\partial u^i}=\sum_{k=1}^{2}c^k_i\cdot\mathbf E_k.\tag{4.6}
$$
Let's consider the scalar product of (4.5) and the vector n. We also consider the
scalar product of (4.6) and the vector Ej. Due to (Ek | n) = 0 and (n | n) = 1 we get
$$
\Bigl(\frac{\partial\mathbf E_j}{\partial u^i}\Bigm|\mathbf n\Bigr)=b_{ij},\tag{4.7}
$$
$$
\Bigl(\frac{\partial\mathbf n}{\partial u^i}\Bigm|\mathbf E_j\Bigr)
=\sum_{k=1}^{2}c^k_i\,g_{kj}.\tag{4.8}
$$
Let's add the left hand sides of the above formulas (4.7) and (4.8). Upon rather
easy calculations we find that the sum is equal to zero:
$$
\Bigl(\frac{\partial\mathbf E_j}{\partial u^i}\Bigm|\mathbf n\Bigr)
+\Bigl(\mathbf E_j\Bigm|\frac{\partial\mathbf n}{\partial u^i}\Bigr)
=\frac{\partial(\mathbf E_j\,|\,\mathbf n)}{\partial u^i}=0.
$$
From this equality we derive the relation of the coefficients bij and cki in (4.5)
and (4.6):
$$
b_{ij}=-\sum_{k=1}^{2}c^k_i\,g_{kj}.
$$
By means of the matrix of the inverse metric tensor ĝ we can invert this
relationship. Let's introduce the following quite natural notation:
$$
b^k_i=\sum_{j=1}^{2}b_{ij}\,g^{jk}.\tag{4.9}
$$
Then the coefficients cki in (4.6) can be expressed through the coefficients bij in
(4.5) by means of the following formula:
$$
c^k_i=-\,b^k_i.\tag{4.10}
$$
Taking into account (4.10), we can rewrite (4.5) and (4.6) as follows:
$$
\begin{aligned}
&\frac{\partial\mathbf E_j}{\partial u^i}
=\sum_{k=1}^{2}\Gamma^k_{ij}\cdot\mathbf E_k+b_{ij}\cdot\mathbf n,\\
&\frac{\partial\mathbf n}{\partial u^i}
=-\sum_{k=1}^{2}b^k_i\cdot\mathbf E_k.
\end{aligned}\tag{4.11}
$$
The expansions (4.5) and (4.6) written in form of (4.11) are called the Veingarten’s
derivational formulas. They determine the dynamics of the moving frame and play
the central role in the theory of surfaces.
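The coefficients b_ij are easy to obtain numerically: taking the scalar product of the first formula (4.11) with n kills the Γ-terms because (E_k | n) = 0, so b_ij = (∂E_j/∂u^i | n). The sketch below (our own; the sphere radius R = 2 and the evaluation point are arbitrary choices) computes the second quadratic form of a sphere; with the normal (4.1) and this parametrization one gets b_ij = −g_ij/R:

```python
import math

R = 2.0   # sphere radius, an arbitrary choice for this example

def r(u1, u2):
    # sphere parameterized by the angles u^1, u^2
    return (R * math.sin(u1) * math.cos(u2),
            R * math.sin(u1) * math.sin(u2),
            R * math.cos(u1))

def second_form(u1, u2, h=1e-4):
    """b_ij = (dE_j/du^i | n): the scalar product of (4.11) with n kills
    the Gamma-terms since (E_k | n) = 0 and (n | n) = 1.  The second
    derivatives of r and the normal (4.1) are computed numerically."""
    def d1(i):
        # first derivative E_i = dr/du^i by central differences
        e = [h if k == i else 0.0 for k in range(2)]
        p, m = r(u1 + e[0], u2 + e[1]), r(u1 - e[0], u2 - e[1])
        return [(a - b) / (2 * h) for a, b in zip(p, m)]
    E1, E2 = d1(0), d1(1)
    cross = [E1[1] * E2[2] - E1[2] * E2[1],
             E1[2] * E2[0] - E1[0] * E2[2],
             E1[0] * E2[1] - E1[1] * E2[0]]
    norm = math.sqrt(sum(c * c for c in cross))
    n = [c / norm for c in cross]        # unit normal, formula (4.1)
    def d2(i, j):
        # second derivative d^2 r / du^i du^j by central differences
        def shift(si, sj):
            e = [0.0, 0.0]
            e[i] += si * h
            e[j] += sj * h
            return r(u1 + e[0], u2 + e[1])
        return [(a - b - c + d) / (4 * h * h)
                for a, b, c, d in zip(shift(1, 1), shift(1, -1),
                                      shift(-1, 1), shift(-1, -1))]
    return [[sum(x * y for x, y in zip(d2(i, j), n)) for j in range(2)]
            for i in range(2)]

b = second_form(0.9, 0.3)
print(b[0][0])   # analytic value: -R = -2
print(b[1][1])   # analytic value: -R sin^2(u^1)
print(b[0][1])   # analytic value: 0
```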
§ 5. Christoffel symbols
and the second quadratic form.
Let’s study the first Veingarten’s derivational formula in two different coordinate
systems u1 , u2 and ũ1, ũ2 on a surface. In the coordinates u1 , u2 it is written as
$$
\frac{\partial\mathbf E_j}{\partial u^i}
=\sum_{k=1}^{2}\Gamma^k_{ij}\cdot\mathbf E_k+b_{ij}\cdot\mathbf n,\tag{5.1}
$$
while in the coordinates ũ1, ũ2 it has the form
$$
\frac{\partial\tilde{\mathbf E}_q}{\partial\tilde u^p}
=\sum_{m=1}^{2}\tilde\Gamma^m_{pq}\cdot\tilde{\mathbf E}_m
+\tilde b_{pq}\cdot\tilde{\mathbf n}.\tag{5.2}
$$
Let's express the vector Ej in the left hand side of the formula (5.1) through the
frame vectors of the second coordinate system. For this purpose we use (2.6):
$$
\frac{\partial\mathbf E_j}{\partial u^i}
=\sum_{q=1}^{2}\frac{\partial\bigl(T^q_j\cdot\tilde{\mathbf E}_q\bigr)}{\partial u^i}
=\sum_{q=1}^{2}\frac{\partial T^q_j}{\partial u^i}\cdot\tilde{\mathbf E}_q
+\sum_{q=1}^{2}T^q_j\cdot\frac{\partial\tilde{\mathbf E}_q}{\partial u^i}.
$$
For the further transformation of the above expression we use the chain rule for
differentiating the composite function:
$$
\frac{\partial\mathbf E_j}{\partial u^i}
=\sum_{m=1}^{2}\frac{\partial T^m_j}{\partial u^i}\cdot\tilde{\mathbf E}_m
+\sum_{q=1}^{2}\sum_{p=1}^{2}T^q_j\cdot\frac{\partial\tilde u^p}{\partial u^i}\cdot
\frac{\partial\tilde{\mathbf E}_q}{\partial\tilde u^p}.
$$
The values of the partial derivatives $\partial\tilde{\mathbf E}_q/\partial\tilde u^p$ are determined by the formula (5.2).
Moreover, we should take into account (2.7) in form of the equality $\partial\tilde u^p/\partial u^i=T^p_i$:
$$
\begin{aligned}
\frac{\partial\mathbf E_j}{\partial u^i}
&=\sum_{m=1}^{2}\frac{\partial T^m_j}{\partial u^i}\cdot\tilde{\mathbf E}_m
+\sum_{q=1}^{2}\sum_{p=1}^{2}\sum_{m=1}^{2}
\bigl(T^q_j\,T^p_i\,\tilde\Gamma^m_{pq}\bigr)\cdot\tilde{\mathbf E}_m
+\sum_{q=1}^{2}\sum_{p=1}^{2}
\bigl(T^q_j\,T^p_i\,\tilde b_{pq}\bigr)\cdot\tilde{\mathbf n}=\\
&=\sum_{m=1}^{2}\sum_{k=1}^{2}
S^k_m\,\frac{\partial T^m_j}{\partial u^i}\cdot\mathbf E_k
+\sum_{q=1}^{2}\sum_{p=1}^{2}\sum_{m=1}^{2}\sum_{k=1}^{2}
\bigl(T^q_j\,T^p_i\,\tilde\Gamma^m_{pq}\,S^k_m\bigr)\cdot\mathbf E_k
+\sum_{q=1}^{2}\sum_{p=1}^{2}
\bigl(T^q_j\,T^p_i\,\tilde b_{pq}\bigr)\cdot\tilde{\mathbf n}.
\end{aligned}
$$
The unit normal vectors n and ñ can differ only in sign: n = ±ñ. Hence, the above expansion for ∂E_j/∂u^i and the expansion (5.1) both are expansions in the same basis E_1, E_2, n. Therefore, we have

Γ^k_{ij} = Σ_{m=1}^{2} S^k_m ∂T^m_j/∂u^i + Σ_{m=1}^{2} Σ_{p=1}^{2} Σ_{q=1}^{2} S^k_m T^p_i T^q_j Γ̃^m_{pq},        (5.3)

b_{ij} = ± Σ_{p=1}^{2} Σ_{q=1}^{2} T^p_i T^q_j b̃_{pq}.        (5.4)
The formulas (5.3) and (5.4) express the transformation rules for the coefficients Γ^k_{ij} and b_{ij} in Veingarten's derivational formulas under a change of curvilinear coordinates on a surface.

In order to fix the choice of the sign in (5.4) one should fix some rule for choosing the unit normal vector. If we choose n according to the formula (4.1), then under a change of coordinate system it obeys the transformation formula (4.2). In this case the equality (5.4) is written as

b_{ij} = (−1)^S Σ_{p=1}^{2} Σ_{q=1}^{2} T^p_i T^q_j b̃_{pq}.        (5.5)
It is easy to see that in this case b_{ij} are the components of an inner pseudotensorial field of the type (0, 2) on a surface.

Otherwise, if we use the formula (4.3) for choosing the normal vector n, then n does not depend on the choice of a curvilinear coordinate system on a surface (see formula (4.4)). In this case b_{ij} are transformed as the components of a tensorial field of the type (0, 2). The formula (5.4) then takes the form

b_{ij} = Σ_{p=1}^{2} Σ_{q=1}^{2} T^p_i T^q_j b̃_{pq}.        (5.6)

Tensors of the type (0, 2) correspond to bilinear and quadratic forms. Pseudotensors of the type (0, 2) have no such interpretation. Despite this fact, the quantities b_{ij} in Veingarten's derivational formulas are called the components of the second quadratic form b of a surface. The following theorem supports this interpretation.
Theorem 5.1. The quantities Γ^k_{ij} and b_{ij} in Veingarten's derivational formulas (4.11) are symmetric with respect to the lower indices i and j.
Proof. In order to prove the theorem we apply the formula (1.10). Let's write this formula in the following way:

E_j(u^1, u^2) = ∂r(u^1, u^2)/∂u^j.        (5.7)

Substituting (5.7) into the first of Veingarten's derivational formulas (4.11), we get

∂²r(u^1, u^2)/∂u^i ∂u^j = Σ_{k=1}^{2} Γ^k_{ij} · E_k + b_{ij} · n.        (5.8)
The values of the mixed partial derivatives do not depend on the order of differentiation. Therefore, the left hand side of (5.8) is a vector that does not change if we transpose the indices i and j. Hence, the coefficients Γ^k_{ij} and b_{ij} of its expansion in the basis E_1, E_2, n do not change under the transposition of the indices i and j. The theorem is proved.
Sometimes, for the components of the matrix of the second quadratic form the notations similar to (3.3) are used:

b_{ij} = ( L  M )
         ( M  N ).        (5.9)
A known fact of linear algebra says that any pair of quadratic forms, one of which is positive definite, can be brought to a diagonal form simultaneously. The quantities obtained upon such diagonalization are called the invariants of a pair of forms. In order to calculate these invariants we consider the following contraction:

b^k_i = Σ_{j=1}^{2} b_{ij} g^{jk}.        (5.10)
The quantities b^k_i enter the second of Veingarten's derivational formulas (4.11). They define a tensor field (or a pseudotensorial field) of the type (1, 1), i. e. an operator field. The operator with the matrix (5.10) is called the Veingarten operator. The matrix of this operator is diagonalized simultaneously with the matrices of the first and the second quadratic forms, and its eigenvalues are exactly the invariants of that pair of forms. Let's denote them by k_1 and k_2.
Definition 5.1. The eigenvalues k_1(u^1, u^2) and k_2(u^1, u^2) of the matrix of the Veingarten operator are called the principal curvatures of a surface at its point with the coordinates u^1, u^2.
From the computational point of view the other two invariants are more convenient. These are the following ones:

H = (k_1 + k_2)/2,        K = k_1 k_2.        (5.11)
The invariants (5.11) can be calculated without knowing the eigenvalues of the matrix b^k_i. It is sufficient to find the trace of the matrix of the Veingarten operator and the determinant of this matrix:

H = (1/2) tr(b^k_i),        K = det(b^k_i).        (5.12)
The quantity H in the formulas (5.11) and (5.12) is called the mean curvature, while the quantity K is called the Gaussian curvature. There are formulas expressing the invariants H and K through the components of the first and the second quadratic forms (3.3) and (5.9):

H = (1/2) (E N + G L − 2 F M)/(E G − F²),        K = (L N − M²)/(E G − F²).        (5.13)
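As a quick numeric illustration of (5.13), the following sketch (the surface, the point, and the radius are an assumed example, not from the text) evaluates H and K for a sphere of radius R = 2 parametrized as r = (R cos u sin v, R sin u sin v, R cos v), at the point u = 0, v = π/2, with the inward unit normal:

```python
# A numeric sketch (assumed example, not from the text): H and K by (5.13)
# for a sphere of radius R = 2 at u = 0, v = pi/2, inward unit normal.
R = 2.0
E, F, G = R**2, 0.0, R**2      # first quadratic form: E = R^2 sin^2 v at v = pi/2
L, M, N = R, 0.0, R            # second quadratic form: L = R sin^2 v at v = pi/2

H = 0.5 * (E * N + G * L - 2 * F * M) / (E * G - F**2)
K = (L * N - M**2) / (E * G - F**2)

print(H, K)   # 0.5 and 0.25, i.e. H = 1/R and K = 1/R^2 > 0: an elliptic point
```

With the outward normal the signs of L, M, N and hence of H flip, while K is unchanged, in agreement with the remark on the sign below.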
Let v(u^1, u^2) and w(u^1, u^2) be the vectors of the basis in which the matrix of the first quadratic form is equal to the unit matrix, while the matrix of the second quadratic form is a diagonal matrix:

v = ( v^1 )        w = ( w^1 )
    ( v^2 ),           ( w^2 ).        (5.14)

Then v and w are the eigenvectors of the Veingarten operator. The vectors (5.14) have their three-dimensional realization in the space E:

v = v^1 · E_1 + v^2 · E_2,        w = w^1 · E_1 + w^2 · E_2.        (5.15)
These are unit vectors lying in the tangent plane and perpendicular to each other. The directions given by the vectors (5.15) are called the principal directions on a surface at the point with coordinates u^1, u^2. If the principal curvatures at this point are not equal to each other, k_1 ≠ k_2, then the principal directions are determined uniquely. Otherwise, if k_1 = k_2, then any two mutually perpendicular directions in the tangent plane can be taken for the principal directions. A point of a surface where the principal curvatures are equal to each other (k_1 = k_2) is called an umbilical point.
A remark on the sign. Remember that depending on how we choose the normal vector the second quadratic form is either a tensorial field or a pseudotensorial field. The same is true for the Veingarten operator. Therefore, in general, the principal curvatures k_1 and k_2 are determined only up to the sign. The mean curvature H is also determined only up to the sign. As for the Gaussian curvature, it has no uncertainty in sign. Moreover, the sign of the Gaussian curvature divides the points of a surface into three subsets: a point where K > 0 is called an elliptic point; a point where K < 0 is called a hyperbolic point; and, finally, a point where K = 0 is called a parabolic point.
§ 6. Covariant differentiation
of inner tensorial fields of a surface.
Let's consider the formula (5.3) and compare it with the formula (6.1) in Chapter III. These two formulas differ only in the ranges over which the indices run. Therefore the quantities Γ^k_{ij}, which appear as coefficients in Veingarten's derivational formulas, define a geometric object on a surface that is called a connection. The connection components Γ^k_{ij} are called the Christoffel symbols.
The main purpose of the Christoffel symbols Γ^k_{ij} is their usage for the covariant differentiation of tensor fields. Let's reproduce here the formula (5.12) from Chapter III for the covariant derivative, modifying it for the two-dimensional case:
∇_{j_{s+1}} A^{i_1 ... i_r}_{j_1 ... j_s} = ∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+1}} +
+ Σ_{m=1}^{r} Σ_{v_m=1}^{2} Γ^{i_m}_{j_{s+1} v_m} A^{i_1 ... v_m ... i_r}_{j_1 ... j_s} − Σ_{n=1}^{s} Σ_{w_n=1}^{2} Γ^{w_n}_{j_{s+1} j_n} A^{i_1 ... i_r}_{j_1 ... w_n ... j_s}.        (6.1)
Theorem 6.1. The formula (6.1) correctly defines the covariant differentiation of inner tensor fields on a surface, transforming a field of the type (r, s) into a field of the type (r, s + 1), if and only if the quantities Γ^k_{ij} obey the transformation rule (5.3) under a change of curvilinear coordinates on a surface.
Proof. Let's begin with proving the necessity. For this purpose we choose some arbitrary vector field A and produce the tensor field B = ∇A of the type (1, 1) by means of the formula (6.1). The correctness of the formula (6.1) means that the components of the field B are transformed according to the formula (2.8). From this condition we should derive the transformation formula (5.3) for the quantities Γ^k_{ij} in (6.1). Let's write the formula (2.8) for the field B = ∇A:

∂A^k/∂u^i + Σ_{j=1}^{2} Γ^k_{ij} A^j = Σ_{m=1}^{2} Σ_{p=1}^{2} S^k_m T^p_i ( ∂Ã^m/∂ũ^p + Σ_{q=1}^{2} Γ̃^m_{pq} Ã^q ).
Then we expand the brackets in the right hand side of this relationship. In the first summand we replace T^p_i by ∂ũ^p/∂u^i according to the formula (2.7) and we express Ã^m through A^j according to the transformation rule for a vector field:

Σ_{p=1}^{2} T^p_i ∂Ã^m/∂ũ^p = Σ_{p=1}^{2} (∂ũ^p/∂u^i) ∂Ã^m/∂ũ^p = ∂Ã^m/∂u^i =
= ∂/∂u^i ( Σ_{j=1}^{2} T^m_j A^j ) = Σ_{j=1}^{2} (∂T^m_j/∂u^i) A^j + Σ_{j=1}^{2} T^m_j ∂A^j/∂u^i.        (6.2)
Taking into account (6.2), we can cancel the partial derivatives in the previous equality and bring it to the following form:

Σ_{j=1}^{2} Γ^k_{ij} A^j = Σ_{j=1}^{2} Σ_{m=1}^{2} S^k_m (∂T^m_j/∂u^i) A^j + Σ_{m=1}^{2} Σ_{p=1}^{2} Σ_{q=1}^{2} S^k_m T^p_i Γ̃^m_{pq} Ã^q.
Now we need only to express Ã^q through A^j applying the transformation rule for the components of a vectorial field and then extract the common factor A^j:

Σ_{j=1}^{2} ( Γ^k_{ij} − Σ_{m=1}^{2} S^k_m ∂T^m_j/∂u^i − Σ_{m=1}^{2} Σ_{p=1}^{2} Σ_{q=1}^{2} S^k_m T^p_i T^q_j Γ̃^m_{pq} ) A^j = 0.
Since A is an arbitrary vector field, each coefficient enclosed in round brackets in the above sum should vanish separately. This condition coincides exactly with the transformation rule (5.3). Thus, the necessity is proved.
Let's proceed with proving the sufficiency. Suppose that the condition (5.3) is fulfilled. Let's choose some tensorial field A of the type (r, s) and prove that the quantities ∇_{j_{s+1}} A^{i_1 ... i_r}_{j_1 ... j_s} determined by the formula (6.1) are transformed as the components of a tensorial field of the type (r, s + 1). Looking at the formula (6.1) we see that it contains the partial derivative ∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+1}} and r + s other terms. Let's denote these terms by A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[m|0] and A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[0|n]. Then

∇_{j_{s+1}} A^{i_1 ... i_r}_{j_1 ... j_s} = ∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+1}} + Σ_{m=1}^{r} A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[m|0] − Σ_{n=1}^{s} A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[0|n].        (6.3)
The tensorial nature of A means that its components are transformed according to the formula (2.8). Therefore, for the first term of (6.3) we get:

∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+1}} = ∂/∂u^{j_{s+1}} Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} Ã^{p_1 ... p_r}_{q_1 ... q_s} =
= Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} Σ_{q_{s+1}=1}^{2} (∂ũ^{q_{s+1}}/∂u^{j_{s+1}}) ∂Ã^{p_1 ... p_r}_{q_1 ... q_s}/∂ũ^{q_{s+1}} +
+ Σ_{m=1}^{r} Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... (∂S^{i_m}_{p_m}/∂u^{j_{s+1}}) ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} Ã^{p_1 ... p_r}_{q_1 ... q_s} +
+ Σ_{n=1}^{s} Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... (∂T^{q_n}_{j_n}/∂u^{j_{s+1}}) ... T^{q_s}_{j_s} Ã^{p_1 ... p_r}_{q_1 ... q_s}.
Here we used the Leibniz rule for differentiating the product of multiple functions and the chain rule in order to express the derivatives with respect to u^{j_{s+1}} through the derivatives with respect to ũ^{q_{s+1}}. For the further transformation of the above formula we replace ∂ũ^{q_{s+1}}/∂u^{j_{s+1}} by T^{q_{s+1}}_{j_{s+1}} according to (2.7) and we use the following identities based on the fact that S and T are mutually inverse matrices:

∂S^{i_m}_{p_m}/∂u^{j_{s+1}} = − Σ_{v_m=1}^{2} Σ_{k=1}^{2} S^{i_m}_{v_m} (∂T^{v_m}_k/∂u^{j_{s+1}}) S^k_{p_m},
∂T^{q_n}_{j_n}/∂u^{j_{s+1}} = Σ_{w_n=1}^{2} Σ_{k=1}^{2} T^{w_n}_{j_n} S^k_{w_n} (∂T^{q_n}_k/∂u^{j_{s+1}}).        (6.4)
As a result the above expression is brought to the form

∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+1}} = Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} × ( Σ_{q_{s+1}=1}^{2} T^{q_{s+1}}_{j_{s+1}} ∂Ã^{p_1 ... p_r}_{q_1 ... q_s}/∂ũ^{q_{s+1}} − Σ_{m=1}^{r} V^m_0 + Σ_{n=1}^{s} W^0_n ),        (6.5)
where the following notations are used for the sake of simplicity:

V^m_0 = Σ_{v_m=1}^{2} Σ_{k=1}^{2} S^k_{v_m} (∂T^{p_m}_k/∂u^{j_{s+1}}) Ã^{p_1 ... v_m ... p_r}_{q_1 ... q_s},
W^0_n = Σ_{w_n=1}^{2} Σ_{k=1}^{2} S^k_{q_n} (∂T^{w_n}_k/∂u^{j_{s+1}}) Ã^{p_1 ... p_r}_{q_1 ... w_n ... q_s}.        (6.6)
Now let's consider the terms A[m|0] in (6.3). According to (6.1), they are given by the formula

A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[m|0] = Σ_{v_m=1}^{2} Γ^{i_m}_{j_{s+1} v_m} A^{i_1 ... v_m ... i_r}_{j_1 ... j_s}.

Applying the transformation rule (2.8) to the components of the field A, we get:

A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[m|0] = Σ_{p_1 ... p_r, q_1 ... q_s} Σ_{v_m=1}^{2} Γ^{i_m}_{j_{s+1} v_m} S^{i_1}_{p_1} ... S^{v_m}_{p_m} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} Ã^{p_1 ... p_r}_{q_1 ... q_s}.        (6.7)
For the further transformation of the above expression we use (5.3) written as

Γ^{i_m}_{j_{s+1} v_m} = Σ_{k=1}^{2} S^{i_m}_k ∂T^k_{v_m}/∂u^{j_{s+1}} + Σ_{k=1}^{2} Σ_{q_{s+1}=1}^{2} Σ_{q=1}^{2} S^{i_m}_k T^{q_{s+1}}_{j_{s+1}} T^q_{v_m} Γ̃^k_{q_{s+1} q}.

Immediately after substituting this expression into (6.7) we perform the cyclic transposition of the summation indices k → p_m → v_m → k. Some sums in the resulting expression are evaluated explicitly if we take into account the fact that the transition matrices S and T are inverse to each other:
A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[m|0] = Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} × ( Σ_{q_{s+1}=1}^{2} T^{q_{s+1}}_{j_{s+1}} Ã^{p_1 ... p_r}_{q_1 ... q_{s+1}}[m|0] + V^m_0 ).        (6.8)
By means of the analogous calculations one can derive the following formula:

A^{i_1 ... i_r}_{j_1 ... j_{s+1}}[0|n] = Σ_{p_1 ... p_r, q_1 ... q_s} S^{i_1}_{p_1} ... S^{i_r}_{p_r} T^{q_1}_{j_1} ... T^{q_s}_{j_s} × ( Σ_{q_{s+1}=1}^{2} T^{q_{s+1}}_{j_{s+1}} Ã^{p_1 ... p_r}_{q_1 ... q_{s+1}}[0|n] + W^0_n ).        (6.9)
Now we substitute (6.5), (6.8), and (6.9) into the formula (6.3). Then the entries of V^m_0 and W^0_n cancel each other. As a residue, upon collecting similar terms, we get the formula expressing the transformation rule (2.8) applied to the components of the field ∇A. The theorem 6.1 is proved.
The theorem 6.1 yields a universal mechanism for constructing the covariant differentiation. It is sufficient to have a connection whose components are transformed according to the formula (5.3). We can compare two connections: the Euclidean connection in the space E constructed by means of the Cartesian coordinates and the connection on a surface whose components are given by Veingarten's derivational formulas. Despite the different origin of these two connections, the covariant derivatives defined by them have many common properties. It is convenient to formulate these properties using covariant derivatives along vector fields. Let X be a vector field on a surface. For any tensor field A of the type (r, s) we define the tensor field B = ∇_X A of the same type (r, s) given by the following formula:
by the following formula:
2
X
Bji11 ... ir
... js = X q ∇q Aij11... ir
... js . (6.10)
q=1
Theorem 6.2. The operation of covariant differentiation of tensor fields along a vector field possesses the following properties:
(1) ∇_X (A + B) = ∇_X A + ∇_X B;
(2) ∇_{X+Y} A = ∇_X A + ∇_Y A;
(3) ∇_{ξ·X} A = ξ · ∇_X A;
(4) ∇_X (A ⊗ B) = ∇_X A ⊗ B + A ⊗ ∇_X B;
(5) ∇_X C(A) = C(∇_X A);
where A and B are arbitrary differentiable tensor fields, while X and Y are arbitrary vector fields and ξ is an arbitrary scalar field.
Looking attentively at the theorem 6.2 and at the formula (6.10), we see that the theorem 6.2 is a copy of the theorem 5.2 from Chapter II, while the formula (6.10) is a two-dimensional analog of the formula (5.5) from the same Chapter II. However, the proof given there covers only the case of the Euclidean connection in the space E. Therefore we need to give another proof.
Proof. Let's choose some arbitrary curvilinear coordinate system on a surface and prove the theorem by means of direct calculations in coordinates. Denote C = A + B, where A and B are two tensorial fields of the type (r, s). Then

C^{i_1 ... i_r}_{j_1 ... j_s} = A^{i_1 ... i_r}_{j_1 ... j_s} + B^{i_1 ... i_r}_{j_1 ... j_s}.

Substituting these components into (6.1), upon rather simple calculations we get

∇_{j_{s+1}} C^{i_1 ... i_r}_{j_1 ... j_s} = ∇_{j_{s+1}} A^{i_1 ... i_r}_{j_1 ... j_s} + ∇_{j_{s+1}} B^{i_1 ... i_r}_{j_1 ... j_s}.
The rest is to multiply both sides of the above equality by X^{j_{s+1}} and perform summation over the index j_{s+1}. Applying (6.10), we derive the formula of the item (1) of the theorem.
Note that the quantities B^{i_1 ... i_r}_{j_1 ... j_s} in the formula (6.10) are obtained as linear combinations of the components of X. The items (2) and (3) of the theorem follow immediately from this fact.
Let's proceed with the item (4). Denote C = A ⊗ B. Then for the components of the tensor field C we have the equality

C^{i_1 ... i_{r+p}}_{j_1 ... j_{s+q}} = A^{i_1 ... i_r}_{j_1 ... j_s} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}}.        (6.11)
Let's substitute the quantities C^{i_1 ... i_{r+p}}_{j_1 ... j_{s+q}} from (6.11) into the formula (6.1) defining the covariant derivative. As a result for D = ∇C we derive

D^{i_1 ... i_{r+p}}_{j_1 ... j_{s+q+1}} = (∂A^{i_1 ... i_r}_{j_1 ... j_s}/∂u^{j_{s+q+1}}) B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}} + A^{i_1 ... i_r}_{j_1 ... j_s} (∂B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}}/∂u^{j_{s+q+1}}) +
+ Σ_{m=1}^{r} Σ_{v_m=1}^{2} Γ^{i_m}_{j_{s+q+1} v_m} A^{i_1 ... v_m ... i_r}_{j_1 ... j_s} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}} +
+ Σ_{m=r+1}^{r+p} Σ_{v_m=1}^{2} A^{i_1 ... i_r}_{j_1 ... j_s} Γ^{i_m}_{j_{s+q+1} v_m} B^{i_{r+1} ... v_m ... i_{r+p}}_{j_{s+1} ... j_{s+q}} −
− Σ_{n=1}^{s} Σ_{w_n=1}^{2} Γ^{w_n}_{j_{s+q+1} j_n} A^{i_1 ... i_r}_{j_1 ... w_n ... j_s} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}} −
− Σ_{n=s+1}^{s+q} Σ_{w_n=1}^{2} A^{i_1 ... i_r}_{j_1 ... j_s} Γ^{w_n}_{j_{s+q+1} j_n} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... w_n ... j_{s+q}}.
Note that upon collecting similar terms the above huge formula can be transformed to the following shorter one:

∇_{j_{s+q+1}} ( A^{i_1 ... i_r}_{j_1 ... j_s} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}} ) = (∇_{j_{s+q+1}} A^{i_1 ... i_r}_{j_1 ... j_s}) B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}} + A^{i_1 ... i_r}_{j_1 ... j_s} (∇_{j_{s+q+1}} B^{i_{r+1} ... i_{r+p}}_{j_{s+1} ... j_{s+q}}).        (6.12)
Now in order to prove the fourth item of the theorem it is sufficient to multiply (6.12) by X^{j_{s+q+1}} and sum up over the index j_{s+q+1}.
Proceeding with the last, fifth item of the theorem, we consider two tensorial fields A and B one of which is the contraction of the other:

B^{i_1 ... i_r}_{j_1 ... j_s} = Σ_{k=1}^{2} A^{i_1 ... i_{p-1} k i_p ... i_r}_{j_1 ... j_{q-1} k j_q ... j_s}.        (6.13)
Substituting (6.13) into the formula (6.1), we get

∇_{j_{s+1}} B^{i_1 ... i_r}_{j_1 ... j_s} = Σ_{k=1}^{2} ∂A^{i_1 ... i_{p-1} k i_p ... i_r}_{j_1 ... j_{q-1} k j_q ... j_s}/∂u^{j_{s+1}} +
+ Σ_{m=1}^{r} Σ_{k=1}^{2} Σ_{v_m=1}^{2} Γ^{i_m}_{j_{s+1} v_m} A^{i_1 ... v_m ... k ... i_r}_{j_1 ... k ... j_s} −        (6.14)
− Σ_{n=1}^{s} Σ_{k=1}^{2} Σ_{w_n=1}^{2} Γ^{w_n}_{j_{s+1} j_n} A^{i_1 ... k ... i_r}_{j_1 ... w_n ... k ... j_s}.
The index v_m in (6.14) can be either to the left of the index k or to the right of it. The same is true for the index w_n. However, the formula (6.14) does not comprise the terms where v_m or w_n replaces the index k. Such terms would have the form

Σ_{k=1}^{2} Σ_{v=1}^{2} Γ^k_{j_{s+1} v} A^{i_1 ... i_{p-1} v i_p ... i_r}_{j_1 ... j_{q-1} k j_q ... j_s},        (6.15)

− Σ_{k=1}^{2} Σ_{w=1}^{2} Γ^w_{j_{s+1} k} A^{i_1 ... i_{p-1} k i_p ... i_r}_{j_1 ... j_{q-1} w j_q ... j_s}.        (6.16)

It is easy to note that (6.15) and (6.16) differ only in sign. Indeed, it is sufficient to rename k to v and w to k in the formula (6.16). Therefore, adding both (6.15) and (6.16) to (6.14) would not break the equality. But upon adding them one can rewrite the equality (6.14) in the following form:
∇_{j_{s+1}} B^{i_1 ... i_r}_{j_1 ... j_s} = Σ_{k=1}^{2} ∇_{j_{s+1}} A^{i_1 ... i_{p-1} k i_p ... i_r}_{j_1 ... j_{q-1} k j_q ... j_s}.        (6.17)

Now in order to complete the proof of the item (5), and thus prove the theorem in whole, it is sufficient to multiply the equality (6.17) by X^{j_{s+1}} and sum up over the index j_{s+1}.
Among the five properties of the covariant derivative listed in the theorem 6.2 the fourth property written as (6.12) is most often used in calculations. Let's rewrite the equality (6.12) with renamed indices in the brief form

∇_c ( A^{i_1 ... i_r}_{j_1 ... j_s} B^{h_1 ... h_p}_{k_1 ... k_q} ) = (∇_c A^{i_1 ... i_r}_{j_1 ... j_s}) B^{h_1 ... h_p}_{k_1 ... k_q} + A^{i_1 ... i_r}_{j_1 ... j_s} (∇_c B^{h_1 ... h_p}_{k_1 ... k_q}).        (6.18)

The formula (6.18) is produced from (6.12) simply by renaming the indices; however, it is more convenient to memorize.

§ 7. Concordance of metric and connection on a surface.

Theorem 7.1. The components of the metric tensor g_{ij} and the connection components Γ^k_{ij} of a surface are related by the equality

∇_k g_{ij} = ∂g_{ij}/∂u^k − Σ_{q=1}^{2} Γ^q_{ki} g_{qj} − Σ_{q=1}^{2} Γ^q_{kj} g_{iq} = 0,        (7.1)

which expresses the concordance condition for the metric and connection.
Proof. Let's consider the first of Veingarten's derivational formulas (4.11) and rewrite it renaming some indices:

∂E_i/∂u^k = Σ_{q=1}^{2} Γ^q_{ki} · E_q + b_{ki} · n.        (7.2)

Let's take the scalar product of both sides of (7.2) with E_j and remember that the vectors E_j and n are perpendicular. The scalar product of E_q and E_j in the right hand side yields the element g_{qj} of the Gram matrix:

(∂E_i/∂u^k | E_j) = Σ_{q=1}^{2} Γ^q_{ki} g_{qj}.        (7.3)
q=1
Now let’s transpose the indices i and j in (7.3) and take into account the symmetry
of the Gram matrix. As a result we obtain
2
X
(Ei | ∂Ej /∂uk ) = Γqkj giq . (7.4)
q=1
Then let's add (7.3) and (7.4) and remember the Leibniz rule as applied to the differentiation of the scalar product in the space E:

∂g_{ij}/∂u^k = (∂E_i/∂u^k | E_j) + (E_i | ∂E_j/∂u^k) = Σ_{q=1}^{2} Γ^q_{ki} g_{qj} + Σ_{q=1}^{2} Γ^q_{kj} g_{iq}.

Now it is easy to see that the equality just obtained coincides essentially with (7.1). The theorem is proved.
As an immediate consequence of the theorems 7.1 and 5.1 we get the following formula for the connection components:

Γ^k_{ij} = (1/2) Σ_{r=1}^{2} g^{kr} ( ∂g_{rj}/∂u^i + ∂g_{ir}/∂u^j − ∂g_{ij}/∂u^r ).        (7.5)

We do not give its proof here since, essentially, it is the same as that of the formula (7.8) in Chapter III.
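The formula (7.5) is straightforward to evaluate by machine. The following sympy sketch computes the Christoffel symbols from a given metric; the sphere metric used here is an assumed test case, not taken from the text:

```python
import sympy as sp

# A sketch (assumed test case): the Christoffel symbols computed from a metric
# by the formula (7.5), here for the sphere metric g = diag(R^2, R^2 sin^2 u1)
# in angular coordinates u1 = theta, u2 = phi.
theta, phi, R = sp.symbols('theta phi R', positive=True)
U = [theta, phi]
g = sp.diag(R**2, R**2 * sp.sin(theta)**2)
ginv = g.inv()

def Gamma(k, i, j):
    """Gamma^k_ij by the formula (7.5); indices are 0-based."""
    return sp.simplify(sum(
        ginv[k, r] * (g[r, j].diff(U[i]) + g[i, r].diff(U[j])
                      - g[i, j].diff(U[r]))
        for r in range(2)) / 2)

print(Gamma(0, 1, 1))   # equals -sin(theta)*cos(theta)
print(Gamma(1, 0, 1))   # equals cos(theta)/sin(theta)
```

All other components either vanish or follow by the symmetry of Γ^k_{ij} in its lower indices (theorem 5.1).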
From the condition ∇_q g_{ij} = 0 one can easily derive that the covariant derivatives of the inverse metric tensor are also equal to zero. For this purpose one should apply the formula (3.4). The covariant derivatives of the identical operator field with the components δ^i_k are equal to zero. Indeed, we have

∇_q δ^i_k = ∂δ^i_k/∂u^q + Σ_{r=1}^{2} Γ^i_{qr} δ^r_k − Σ_{r=1}^{2} Γ^r_{qk} δ^i_r = 0.        (7.6)
Let's differentiate both sides of (3.4) and take into account (7.6):

∇_q ( Σ_{j=1}^{2} g^{ij} g_{jk} ) = Σ_{j=1}^{2} ( ∇_q g^{ij} g_{jk} + g^{ij} ∇_q g_{jk} ) = Σ_{j=1}^{2} ∇_q g^{ij} g_{jk} = ∇_q δ^i_k = 0.        (7.7)
In deriving (7.7) we used the items (4) and (5) from the theorem 6.2. The procedure of lowering the index j by means of contraction with the metric tensor g_{jk} is an invertible operation. Therefore, (7.7) yields ∇_q g^{ij} = 0. Now the concordance condition for the metric and the connection is written as a pair of two relationships

∇g = 0,        ∇ĝ = 0,        (7.8)

which look exactly like the relationships (6.7) in Chapter II for the case of metric tensors in the space E.
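The concordance condition (7.1) can be checked directly for any concrete metric once the connection is produced from it by (7.5). A sympy sketch, with the sphere metric as an assumed test case:

```python
import sympy as sp

# A direct check of the concordance condition (7.1) (the metric below is an
# assumed test case): with Gamma built from g by (7.5), nabla_k g_ij vanishes.
theta, phi, R = sp.symbols('theta phi R', positive=True)
U = [theta, phi]
g = sp.diag(R**2, R**2 * sp.sin(theta)**2)    # sphere metric
ginv = g.inv()

def Gamma(k, i, j):   # the formula (7.5)
    return sum(ginv[k, r] * (g[r, j].diff(U[i]) + g[i, r].diff(U[j])
                             - g[i, j].diff(U[r])) for r in range(2)) / 2

def nabla_g(k, i, j):   # the left hand side of (7.1)
    return sp.simplify(
        g[i, j].diff(U[k])
        - sum(Gamma(q, k, i) * g[q, j] for q in range(2))
        - sum(Gamma(q, k, j) * g[i, q] for q in range(2)))

assert all(nabla_g(k, i, j) == 0
           for k in range(2) for i in range(2) for j in range(2))
print('nabla g = 0')
```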
Another consequence of the theorem 7.1 is that the index raising and the index lowering operations performed by means of contractions with the metric tensors commute with the covariant differentiation:

∇_q ( Σ_{k=1}^{2} g_{ik} A^{... k ...}_{... ...} ) = Σ_{k=1}^{2} g_{ik} ∇_q A^{... k ...}_{... ...},
∇_q ( Σ_{k=1}^{2} g^{ik} A^{... ...}_{... k ...} ) = Σ_{k=1}^{2} g^{ik} ∇_q A^{... ...}_{... k ...}.        (7.9)

The relationships (7.9) are easily derived from (7.8) using the items (4) and (5) in the theorem 6.2.
Theorem 7.2. The covariant differential of the area pseudotensor (3.8) on any surface is equal to zero: ∇ω = 0.
In order to prove this theorem we need two auxiliary propositions which are formulated as the following lemmas.

Lemma 7.1. For any square matrix M whose components are differentiable functions of some parameter x the equality

d(ln det M)/dx = tr(M^{−1} M′)        (7.10)

is fulfilled.

Lemma 7.2. For any square 2 × 2 matrix M the equality

Σ_{q=1}^{2} ( M_{iq} d_{qj} + M_{jq} d_{iq} ) = (tr M) d_{ij}        (7.11)

is fulfilled, where d_{ij} are the components of the skew-symmetric matrix determined by the relationship (3.6).
The proof of these two lemmas 7.1 and 7.2 as well as the proof of the above
formula (3.7) from § 3 can be found in [4].
Let's apply the lemma 7.1 to the matrix of the metric tensor. Let x = u^k. Then we rewrite the relationship (7.10) as follows:

(1/√(det g)) ∂√(det g)/∂u^k = (1/2) Σ_{q=1}^{2} Σ_{p=1}^{2} g^{qp} ∂g_{qp}/∂u^k.        (7.12)
Note that in (7.11) any array of four numbers enumerated with two indices can play the role of the matrix M. Having fixed the index k, one can use the connection components Γ^j_{ki} as such an array. Then we obtain

Σ_{q=1}^{2} ( Γ^q_{ki} d_{qj} + Γ^q_{kj} d_{iq} ) = Σ_{q=1}^{2} Γ^q_{kq} d_{ij}.        (7.13)
Proof of the theorem 7.2. The components of the area pseudotensor ω are determined by the formula (3.8). In order to find the components of the pseudotensor ∇ω we apply the formula (6.1). It yields

∇_k ω_{ij} = (∂√(det g)/∂u^k) d_{ij} − √(det g) Σ_{q=1}^{2} ( Γ^q_{ki} d_{qj} + Γ^q_{kj} d_{iq} ) =
= √(det g) ( (1/√(det g)) (∂√(det g)/∂u^k) d_{ij} − Σ_{q=1}^{2} ( Γ^q_{ki} d_{qj} + Γ^q_{kj} d_{iq} ) ).
For the further transformation of this expression we apply (7.12) and (7.13):

∇_k ω_{ij} = √(det g) ( (1/2) Σ_{q=1}^{2} Σ_{p=1}^{2} g^{qp} ∂g_{qp}/∂u^k − Σ_{q=1}^{2} Γ^q_{kq} ) d_{ij}.        (7.14)

Now let's express Γ^q_{kq} through the components of the metric tensor by means of the formula (7.5). Taking into account the symmetry of g^{pq}, we get

Σ_{q=1}^{2} Γ^q_{kq} = (1/2) Σ_{q=1}^{2} Σ_{p=1}^{2} g^{qp} ( ∂g_{pq}/∂u^k + ∂g_{kp}/∂u^q − ∂g_{kq}/∂u^p ) = (1/2) Σ_{q=1}^{2} Σ_{p=1}^{2} g^{qp} ∂g_{qp}/∂u^k.
Substituting this expression into the formula (7.14), we find that the expression in round brackets vanishes. Hence, ∇_k ω_{ij} = 0. The theorem is proved.
A remark on the sign. The area tensor differs from the area pseudotensor only by the scalar sign factor ξ_D. Therefore, the proposition of the theorem 7.2 is also valid for the area tensor of an arbitrary surface.
A remark on the dimension. For the volume tensor (and for the volume pseudotensor) in the Euclidean space E the analogous proposition holds: ∇ω = 0. Its proof is even simpler than the proof of the theorem 7.2: the components of the field ω in any Cartesian coordinate system in E are constants, hence their derivatives are zero.
§ 8. Curvature tensor.
The covariant derivatives in the Euclidean space E reduce to the partial derivatives in any Cartesian coordinates. Is there such a coordinate system for covariant derivatives on a surface? The answer to this question is related to commutators. Let's choose a vector field X and calculate the tensor field Y of the type (1, 2) with the following components:

Y^k_{ij} = ∇_i ∇_j X^k − ∇_j ∇_i X^k.        (8.1)

In order to calculate the components of the field Y we apply the formula (6.1):
∇_i ∇_j X^k = ∂(∇_j X^k)/∂u^i + Σ_{q=1}^{2} Γ^k_{iq} ∇_j X^q − Σ_{q=1}^{2} Γ^q_{ij} ∇_q X^k,
∇_j ∇_i X^k = ∂(∇_i X^k)/∂u^j + Σ_{q=1}^{2} Γ^k_{jq} ∇_i X^q − Σ_{q=1}^{2} Γ^q_{ji} ∇_q X^k.        (8.2)
Let's subtract the second relationship (8.2) from the first one. Then the last terms in them cancel each other due to the symmetry of Γ^k_{ij}:

Y^k_{ij} = ∂/∂u^i ( ∂X^k/∂u^j + Σ_{r=1}^{2} Γ^k_{jr} X^r ) − ∂/∂u^j ( ∂X^k/∂u^i + Σ_{r=1}^{2} Γ^k_{ir} X^r ) +
+ Σ_{q=1}^{2} Γ^k_{iq} ( ∂X^q/∂u^j + Σ_{r=1}^{2} Γ^q_{jr} X^r ) − Σ_{q=1}^{2} Γ^k_{jq} ( ∂X^q/∂u^i + Σ_{r=1}^{2} Γ^q_{ir} X^r ).        (8.3)
It is important to note that the formula (8.3) does not contain the derivatives of the components of X; they cancel. Let's denote

R^k_{rij} = ∂Γ^k_{jr}/∂u^i − ∂Γ^k_{ir}/∂u^j + Σ_{q=1}^{2} Γ^k_{iq} Γ^q_{jr} − Σ_{q=1}^{2} Γ^k_{jq} Γ^q_{ir}.        (8.4)
The formula (8.3) for the components of the field (8.1) then can be written as

(∇_i ∇_j − ∇_j ∇_i) X^k = Σ_{r=1}^{2} R^k_{rij} X^r.        (8.5)
Let's replace the vector field X by a covector field. Performing similar calculations, in this case we obtain

(∇_i ∇_j − ∇_j ∇_i) X_k = − Σ_{r=1}^{2} R^r_{kij} X_r.        (8.6)
The formulas (8.5) and (8.6) can be generalized for the case of an arbitrary tensor field X of the type (r, s):

(∇_i ∇_j − ∇_j ∇_i) X^{i_1 ... i_r}_{j_1 ... j_s} = Σ_{m=1}^{r} Σ_{v_m=1}^{2} R^{i_m}_{v_m ij} X^{i_1 ... v_m ... i_r}_{j_1 ... j_s} − Σ_{n=1}^{s} Σ_{w_n=1}^{2} R^{w_n}_{j_n ij} X^{i_1 ... i_r}_{j_1 ... w_n ... j_s}.        (8.7)
Comparing (8.5), (8.6), and (8.7), we see that all of them contain the quantities R^k_{rij} given by the formula (8.4).
Theorem 8.1. The quantities R^k_{rij} introduced by the formula (8.4) define a tensor field of the type (1, 3). This tensor field is called the curvature tensor or the Riemann tensor.
The theorem 8.1 can be proved directly on the base of the formula (5.3). However, we give another proof which is simpler.
Lemma 8.1. If for an arbitrary vector field X the quantities

Y^k_{ij} = Σ_{q=1}^{2} R^k_{qij} X^q        (8.8)

form a tensor of the type (1, 2), then the object R itself is a tensor of the type (1, 3).
Proof of the lemma. Let u^1, u^2 and ũ^1, ũ^2 be two curvilinear coordinate systems on a surface. Let's fix some numeric value of the index r (r = 1 or r = 2). Since X is an arbitrary vector field, we can choose it so that its r-th component in the coordinate system u^1, u^2 is equal to unity, while all other components are equal to zero. Then for Y^k_{ij} in this coordinate system we get

Y^k_{ij} = Σ_{q=1}^{2} R^k_{qij} X^q = R^k_{rij}.        (8.9)
For the components of the vector X in the other coordinate system we derive

X̃^m = Σ_{q=1}^{2} T^m_q X^q = T^m_r,

then we apply the formula (8.8) in order to calculate the components of the tensor Y in the second coordinate system:

Ỹ^n_{pq} = Σ_{m=1}^{2} R̃^n_{mpq} X̃^m = Σ_{m=1}^{2} R̃^n_{mpq} T^m_r.        (8.10)
The rest is to relate the quantities Y^k_{ij} from (8.9) and the quantities Ỹ^n_{pq} from (8.10). From the statement of the lemma we know that these quantities are the components of the same tensor in two different coordinate systems. Hence, we get

Y^k_{ij} = Σ_{n=1}^{2} Σ_{p=1}^{2} Σ_{q=1}^{2} S^k_n T^p_i T^q_j Ỹ^n_{pq}.        (8.11)

Substituting (8.9) and (8.10) into (8.11), we obtain

R^k_{rij} = Σ_{n=1}^{2} Σ_{m=1}^{2} Σ_{p=1}^{2} Σ_{q=1}^{2} S^k_n T^m_r T^p_i T^q_j R̃^n_{mpq}.

This formula exactly coincides with the transformation rule for the components of a tensorial field of the type (1, 3) under a change of coordinates. Thus, the lemma is proved.
The theorem 8.1 is an immediate consequence of the lemma 8.1. Indeed, the left hand side of the formula (8.5) defines a tensor of the type (1, 2) for any choice of the vector field X, while the right hand side is the contraction of R and X.
The components of the curvature tensor given by the formula (8.4) are enumerated by three lower indices and one upper index. Upon lowering by means of the metric tensor, the upper index is usually written in the first position:

R_{qrij} = Σ_{k=1}^{2} R^k_{rij} g_{kq}.        (8.12)

The tensor of the type (0, 4) given by the formula (8.12) is denoted by the same letter R. Another tensor is derived from (8.4) by raising the first lower index:

R^{kq}_{ij} = Σ_{r=1}^{2} R^k_{rij} g^{rq}.        (8.13)
The raised lower index is usually written as the second upper index. The tensors of the type (0, 4) and (2, 2) with the components (8.12) and (8.13) are denoted by the same letter R and called the curvature tensors.

Theorem 8.2. The components of the curvature tensor R_{qrij} determined by the formula (8.12) satisfy the following relationships:
(1) R_{qrij} = −R_{qrji};
(2) R_{qrij} = −R_{rqij};
(3) R_{qrij} = R_{ijqr};
(4) R_{qrij} + R_{qijr} + R_{qjri} = 0.

Proof. The item (1) is an immediate consequence of the formula (8.4) itself: its right hand side changes sign under the transposition of the indices i and j. In order to prove the item (2) we apply (8.7) to the metric tensor:

(∇_i ∇_j − ∇_j ∇_i) g_{qr} = − Σ_{k=1}^{2} ( R^k_{qij} g_{kr} + R^k_{rij} g_{qk} ).        (8.14)

Remember that due to the concordance of the metric and connection the covariant derivatives of the metric tensor are equal to zero (see formula (7.1)). Hence, the left hand side of the equality (8.14) is equal to zero, and as a consequence we get the identity from the item (2) of the theorem.
Let's drop for a while the third item of the theorem and prove the fourth item by means of direct calculations on the base of the formula (8.4). Let's write the relationship (8.4) and perform twice the cyclic transposition of the indices r → i → j → r in it:

R^k_{rij} = ∂Γ^k_{jr}/∂u^i − ∂Γ^k_{ir}/∂u^j + Σ_{q=1}^{2} Γ^k_{iq} Γ^q_{jr} − Σ_{q=1}^{2} Γ^k_{jq} Γ^q_{ir},
R^k_{ijr} = ∂Γ^k_{ri}/∂u^j − ∂Γ^k_{ji}/∂u^r + Σ_{q=1}^{2} Γ^k_{jq} Γ^q_{ri} − Σ_{q=1}^{2} Γ^k_{rq} Γ^q_{ji},
R^k_{jri} = ∂Γ^k_{ij}/∂u^r − ∂Γ^k_{rj}/∂u^i + Σ_{q=1}^{2} Γ^k_{rq} Γ^q_{ij} − Σ_{q=1}^{2} Γ^k_{iq} Γ^q_{rj}.

Let's add all the three above equalities and take into account the symmetry of the Christoffel symbols with respect to their lower indices. It is easy to see that the sum in the right hand side is zero. This proves the item (4) of the theorem.
The third item of the theorem follows from the first, the second, and the fourth items. In the left hand side of the equality that we need to prove we have R_{qrij}. The simultaneous transposition of the indices q ↔ r and i ↔ j does not change this quantity, i. e. we have the equality

R_{qrij} = R_{rqji}.        (8.15)

This equality follows from the item (1) and the item (2). Let's apply the item (4) to the quantities in both sides of the equality (8.15):

R_{qrij} = −R_{qijr} − R_{qjri},
R_{rqji} = −R_{rjiq} − R_{riqj}.        (8.16)

Now let's perform the analogous manipulations with the quantity R_{ijqr}:

R_{ijqr} = R_{jirq},        (8.17)

R_{ijqr} = −R_{iqrj} − R_{irjq},
R_{jirq} = −R_{jrqi} − R_{jqir}.        (8.18)

Let's add the equalities (8.16) and subtract from the sum the equalities (8.18). It is easy to verify that due to the items (1) and (2) of the theorem the right hand side of the resulting equality is zero. Then, using (8.15) and (8.17), we get

2 R_{qrij} − 2 R_{ijqr} = 0.

Dividing by 2, we get the identity that we needed to prove. Thus, the theorem is completely proved.
The curvature tensor R given by its components (8.4) has indices on both levels. Therefore, we can consider the contraction

R_{rj} = Σ_{k=1}^{2} R^k_{rkj}.        (8.19)
Using (8.12) and the inverse metric tensor, we can write (8.19) as

R_{rj} = Σ_{i=1}^{2} Σ_{k=1}^{2} g^{ik} R_{irkj}.

From this equality, due to the symmetry of g^{ik} and due to the item (3) of the theorem 8.2, we derive the symmetry of the tensor R_{rj}:

R_{rj} = R_{jr}.        (8.20)

The symmetric tensor of the type (0, 2) with the components (8.19) is called the Ricci tensor. It is denoted by the same letter R as the curvature tensor.
Note that there are two other contractions of the curvature tensor. However, these contractions do not produce new tensors:

Σ_{k=1}^{2} R^k_{krj} = 0,        Σ_{k=1}^{2} R^k_{rik} = −R_{ri}.

Using the Ricci tensor, one can construct a scalar field R by means of the formula

R = Σ_{r=1}^{2} Σ_{j=1}^{2} R_{rj} g^{rj}.        (8.21)
The scalar R(u^1, u^2) defined by the formula (8.21) is called the scalar curvature of a surface at the point with the coordinates u^1, u^2. The scalar curvature is the result of the total contraction of the curvature tensor R given by the formula (8.13):

R = Σ_{i=1}^{2} Σ_{j=1}^{2} R^{ij}_{ij}.        (8.22)

The formula (8.22) is easily derived from (8.21). Any other way of contracting the curvature tensor gives no scalars essentially different from (8.21).
In general, passing from the components of the curvature tensor R^{kr}_{ij} to the scalar curvature, we would lose a substantial part of the information contained in the tensor R: we replace 16 quantities by a single one. However, due to the theorem 8.2, in the two-dimensional case we lose no information at all. Indeed, due to the theorem 8.2 the components of the curvature tensor R^{kr}_{ij} are skew-symmetric both with respect to the upper and to the lower pairs of indices. If k = r or i = j, they vanish. Therefore, the only nonzero components are R^{12}_{12}, R^{21}_{12}, R^{12}_{21}, R^{21}_{21}, and they satisfy the equalities R^{12}_{12} = R^{21}_{21} = −R^{21}_{12} = −R^{12}_{21}. Hence, we get

R = R^{12}_{12} + R^{21}_{21} = 2 R^{12}_{12}.
Now let's consider the tensor D of the type (2, 2) with the components

D^{kr}_{ij} = (R/2) (δ^k_i δ^r_j − δ^k_j δ^r_i).

The tensor D is also skew-symmetric with respect to the upper and lower pairs of indices, and D^{12}_{12} = R^{12}_{12}. Hence, these two tensors coincide: D = R. In coordinates this coincidence is written as

R^{kr}_{ij} = (R/2) (δ^k_i δ^r_j − δ^k_j δ^r_i).        (8.23)
Lowering the second upper index in (8.23) by means of the metric tensor, we get

R^k_{rij} = (R/2) (δ^k_i g_{rj} − δ^k_j g_{ri}).        (8.24)

Contracting (8.24) with respect to the indices k and i, for the Ricci tensor we derive

R_{ij} = (R/2) g_{ij}.        (8.25)

The Ricci tensor of an arbitrary surface is proportional to the metric tensor.
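The whole chain from the metric to the scalar curvature, the formulas (7.5), (8.4), (8.19), and (8.21), can be traced by machine. The following sympy sketch uses the sphere metric g = diag(a², a² sin² θ) as an assumed test case and also illustrates the proportionality (8.25):

```python
import sympy as sp

# A sketch tracing metric -> connection (7.5) -> curvature (8.4)
# -> Ricci (8.19) -> scalar curvature (8.21); the sphere metric
# g = diag(a^2, a^2 sin^2 theta) is an assumed test case.
theta, phi, a = sp.symbols('theta phi a', positive=True)
U = [theta, phi]
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)
ginv = g.inv()

def Gamma(k, i, j):   # the formula (7.5)
    return sum(ginv[k, r] * (g[r, j].diff(U[i]) + g[i, r].diff(U[j])
                             - g[i, j].diff(U[r])) for r in range(2)) / 2

def Riem(k, r, i, j):   # R^k_rij by the formula (8.4)
    expr = Gamma(k, j, r).diff(U[i]) - Gamma(k, i, r).diff(U[j])
    expr += sum(Gamma(k, i, q) * Gamma(q, j, r)
                - Gamma(k, j, q) * Gamma(q, i, r) for q in range(2))
    return sp.simplify(expr)

Ricci = sp.Matrix(2, 2, lambda r, j: sp.simplify(
    sum(Riem(k, r, k, j) for k in range(2))))               # (8.19)
R_scalar = sp.simplify(sum(Ricci[r, j] * ginv[r, j]
                           for r in range(2) for j in range(2)))  # (8.21)

print(R_scalar)                                  # 2/a**2
print(sp.simplify(Ricci - R_scalar / 2 * g))     # zero matrix, as (8.25) claims
```

The positive scalar curvature 2/a² shrinks as the radius a grows, matching the intuition that large spheres are flatter.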
A remark. The curvature tensor determined by the symmetric connection (7.5) possesses another one (fifth) property expressed by the identity
$$\nabla_q R^k_{rij} + \nabla_i R^k_{rjq} + \nabla_j R^k_{rqi} = 0. \tag{8.26}$$
The relationship (8.26) is known as the Bianchi identity. However, in the case of surfaces (in the dimension 2) it appears to be a trivial consequence from the item (1) of the theorem 8.2. Therefore, we do not prove it here.
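The relations (8.24), (8.25), and (8.21) admit a quick numeric sanity check. The sketch below is our own illustration, not part of the textbook; it assumes NumPy, and the variable names are ours. It builds the curvature tensor from a chosen scalar curvature and a random metric, then recovers the Ricci tensor and the scalar curvature by contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
g = A @ A.T + 2 * np.eye(2)          # random symmetric positive-definite metric
g_inv = np.linalg.inv(g)             # components g^{ij}
R_scalar = 1.7                        # an arbitrary scalar curvature value

delta = np.eye(2)
# curvature tensor with one upper index, built from formula (8.24);
# array indices are ordered [k, r, i, j]
Rk = np.zeros((2, 2, 2, 2))
for k in range(2):
    for r in range(2):
        for i in range(2):
            for j in range(2):
                Rk[k, r, i, j] = R_scalar / 2 * (delta[k, i] * g[r, j]
                                                 - delta[k, j] * g[r, i])

# Ricci tensor: contraction over k = i, as in the derivation of (8.25)
ricci = np.einsum('krkj->rj', Rk)
assert np.allclose(ricci, R_scalar / 2 * g)          # formula (8.25)

# the scalar curvature is recovered by the total contraction (8.21)
assert np.isclose(np.einsum('rj,rj->', ricci, g_inv), R_scalar)
```

The two assertions confirm that in dimension 2 the whole curvature tensor carries exactly one scalar worth of information.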
§ 9. Gauss equation and Peterson-Codazzi equation.

The Veingarten's derivational formulas (4.11) can be treated as a system of partial differential equations. Like any such system, it has compatibility conditions arising from the equality of mixed derivatives. For example, consider a system of two equations defining one function $f = f(x, y)$:
$$\frac{\partial f}{\partial x} = a(x, y), \qquad \frac{\partial f}{\partial y} = b(x, y). \tag{9.1}$$
Let's differentiate the first equation (9.1) with respect to $y$ and the second equation with respect to $x$. Then we subtract one from another:
$$\frac{\partial a}{\partial y} - \frac{\partial b}{\partial x} = 0. \tag{9.2}$$
Similarly, one can derive the compatibility conditions for the system of Veingarten's derivational equations (4.11). Let's write the first of them as
$$\frac{\partial\mathbf{E}_k}{\partial u^j} = \sum_{q=1}^{2}\Gamma^q_{jk}\,\mathbf{E}_q + b_{jk}\,\mathbf{n}. \tag{9.3}$$
Then we differentiate (9.3) with respect to $u^i$ and express the derivatives $\partial\mathbf{E}_q/\partial u^i$ and $\partial\mathbf{n}/\partial u^i$ arising therein by means of the derivational formulas (4.11):
$$\frac{\partial^2\mathbf{E}_k}{\partial u^i\,\partial u^j} = \left(\frac{\partial b_{jk}}{\partial u^i} + \sum_{q=1}^{2}\Gamma^q_{jk}\,b_{iq}\right)\mathbf{n} + \sum_{q=1}^{2}\left(\frac{\partial\Gamma^q_{jk}}{\partial u^i} + \sum_{s=1}^{2}\Gamma^s_{jk}\,\Gamma^q_{is} - b_{jk}\,b^q_i\right)\mathbf{E}_q. \tag{9.4}$$
Let’s transpose indices i and j in the formula (9.4). The value of the second order
mixed partial derivative does not depend on the order of differentiation. Therefore,
the value of the left hand side of (9.4) does not change under the transposition of
indices i and j. Let’s subtract from (9.4) the relationship obtained by transposing
the indices. As a result we get
$$\sum_{q=1}^{2}\left(\frac{\partial\Gamma^q_{jk}}{\partial u^i} - \frac{\partial\Gamma^q_{ik}}{\partial u^j} + \sum_{s=1}^{2}\Gamma^s_{jk}\,\Gamma^q_{is} - \sum_{s=1}^{2}\Gamma^s_{ik}\,\Gamma^q_{js} + b_{ik}\,b^q_j - b_{jk}\,b^q_i\right)\mathbf{E}_q\,+$$
$$+\,\left(\frac{\partial b_{jk}}{\partial u^i} + \sum_{q=1}^{2}\Gamma^q_{jk}\,b_{iq} - \frac{\partial b_{ik}}{\partial u^j} - \sum_{q=1}^{2}\Gamma^q_{ik}\,b_{jq}\right)\mathbf{n} = 0.$$
The vectors E1 , E2 , and n composing the moving frame are linearly independent.
Therefore the above equality can be broken into two separate equalities
$$\frac{\partial\Gamma^q_{jk}}{\partial u^i} - \frac{\partial\Gamma^q_{ik}}{\partial u^j} + \sum_{s=1}^{2}\Gamma^s_{jk}\,\Gamma^q_{is} - \sum_{s=1}^{2}\Gamma^s_{ik}\,\Gamma^q_{js} = b_{jk}\,b^q_i - b_{ik}\,b^q_j,$$
$$\frac{\partial b_{jk}}{\partial u^i} - \sum_{q=1}^{2}\Gamma^q_{ik}\,b_{jq} = \frac{\partial b_{ik}}{\partial u^j} - \sum_{q=1}^{2}\Gamma^q_{jk}\,b_{iq}.$$
Note that the left hand side of the first of these relationships coincides with the formula for the components of the curvature tensor (see (8.4)). Therefore, we can rewrite the first relationship as follows:
$$R^q_{kij} = b_{jk}\,b^q_i - b_{ik}\,b^q_j. \tag{9.5}$$
The second relationship can be written in terms of covariant derivatives:
$$\nabla_i b_{jk} = \nabla_j b_{ik}. \tag{9.6}$$
It is easy to verify this fact immediately by transforming (9.6) back to the initial form applying the formula (6.1).
The equations (9.5) and (9.6) are differential consequences of the Veingarten’s
derivational formulas (4.11). The first of them is known as the Gauss equation
and the second one is known as the Peterson-Codazzi equation.
The tensorial Gauss equation (9.5) contains 16 separate equalities. However,
due to the relationship (8.24) not all of them are independent. In order to simplify
(9.5) let’s raise the index k in it. As a result we get
$$\frac{R}{2}\left(\delta^q_i\,\delta^k_j - \delta^q_j\,\delta^k_i\right) = b^q_i\,b^k_j - b^q_j\,b^k_i. \tag{9.7}$$
The expression in the right hand side of (9.7) is skew-symmetric both with respect to the upper and lower pairs of indices, and each index in (9.7) runs over only two values. Therefore the right hand side of the equation (9.7) can be transformed as
$$b^q_i\,b^k_j - b^q_j\,b^k_i = K\left(\delta^q_i\,\delta^k_j - \delta^q_j\,\delta^k_i\right), \tag{9.8}$$
where $K$ is the Gaussian curvature of a surface (see formula (5.12)). The above considerations show that the Gauss equation (9.5) is equivalent to exactly one scalar equation which is written as follows:
R = 2 K. (9.9)
This equation relates the scalar and Gaussian curvatures of a surface. It is also
called the Gauss equation.
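The Gauss equation (9.9) can be illustrated numerically. Below is a small sketch of our own (not from the textbook; NumPy assumed) that evaluates the first and second quadratic forms of a sphere of radius 2 at a sample point, using the explicit spherical parametrization and its analytic derivatives, and checks that $K = \det b/\det g = 1/\rho^2$, hence $R = 2K$:

```python
import numpy as np

rho, u, v = 2.0, 1.0, 0.5   # sample radius and a sample point on the sphere
su, cu, sv, cv = np.sin(u), np.cos(u), np.sin(v), np.cos(v)
r   = rho * np.array([su*cv, su*sv, cu])          # radius vector
E1  = rho * np.array([cu*cv, cu*sv, -su])         # dr/du
E2  = rho * np.array([-su*sv, su*cv, 0.0])        # dr/dv
ruu = -r                                          # d2r/du2
ruv = rho * np.array([-cu*sv, cu*cv, 0.0])        # d2r/dudv
rvv = rho * np.array([-su*cv, -su*sv, 0.0])       # d2r/dv2

N = np.cross(E1, E2)
n = N / np.linalg.norm(N)                         # unit normal vector
g = np.array([[E1 @ E1, E1 @ E2], [E1 @ E2, E2 @ E2]])   # first quadratic form
b = np.array([[ruu @ n, ruv @ n], [ruv @ n, rvv @ n]])   # second quadratic form

K = np.linalg.det(b) / np.linalg.det(g)           # Gaussian curvature
assert abs(K - 1/rho**2) < 1e-12                  # K = 1/rho^2 on a sphere
R_scalar = 2 * K                                  # the Gauss equation (9.9)
assert abs(R_scalar - 0.5) < 1e-12
```

So for the sphere the scalar curvature is constant, in agreement with the Ricci tensor being proportional to the metric.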
CopyRight © Sharipov R.A., 1996, 2004.
CHAPTER V
CURVES ON SURFACES
(compare with the formulas (1.14) from Chapter IV). The inverse mapping $u^{-1}$ is represented by the vector-function
This is the tangent vector of the curve (compare with the formulas (1.15) in
Chapter IV). The formula (1.4) shows that the vector τ lies in the tangent plane
of the surface. This is the consequence of the fact that the curve in whole lies on
the surface.
Under a change of curvilinear coordinates on the surface the derivatives u̇i are
transformed as the components of a tensor of the type (1, 0). They determine the
inner (two-dimensional) representation of the vector $\boldsymbol{\tau}$ in the chart. The formula (1.4) is used to pass from the inner to the outer (three-dimensional) representation of this vector. Our main goal in this chapter is to describe the geometry of curves lying on a surface in terms of their two-dimensional representations in the chart.
The length integral is an important object in the theory of curves, see formula
(2.3) in Chapter I. Substituting (1.4) into this formula, we get
$$L = \int\limits_a^b\sqrt{\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\dot u^j}\;dt. \tag{1.5}$$
The expression under integration in (1.5) is the length of the vector $\boldsymbol{\tau}$ in its inner representation. If $s = s(t)$ is the natural parameter of the curve, then, denoting $du^i = \dot u^i\,dt$, we can write the following formula:
$$ds^2 = \sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,du^i\,du^j. \tag{1.6}$$
The formula (1.6) justifies the name «the first quadratic form» for the metric tensor. Indeed, the square of the length differential $ds^2$ is a quadratic form of the differentials of the coordinate functions in the chart. If $t = s$ is the natural parameter of the curve, then there is the equality
$$\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\dot u^j = 1 \tag{1.7}$$
that expresses the fact that the length of the tangent vector $\boldsymbol{\tau}$ of a curve in the natural parametrization is equal to unity (see § 2 in Chapter I).
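As a small usage sketch of the length integral (1.5) (our own example, assuming NumPy), take the unit sphere with first quadratic form $g = \mathrm{diag}(1, \sin^2 u^1)$ in coordinates $u^1$ (polar angle), $u^2$ (azimuthal angle) and compute two curve lengths by quadrature:

```python
import numpy as np

def trapezoid(f, a, b, n=200001):
    # explicit trapezoid rule, kept inline to avoid version-specific numpy names
    t = np.linspace(a, b, n)
    y = f(t)
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (t[1] - t[0])

def sphere_length(u1, du1, du2, a, b):
    # formula (1.5) with g = diag(1, sin^2 u1) on the unit sphere
    return trapezoid(lambda t: np.sqrt(du1(t)**2
                                       + np.sin(u1(t))**2 * du2(t)**2), a, b)

# equator: u1 = pi/2, u2 = t for t in [0, 2*pi]  ->  length 2*pi
L_eq = sphere_length(lambda t: np.full_like(t, np.pi/2),
                     lambda t: np.zeros_like(t),
                     lambda t: np.ones_like(t), 0.0, 2*np.pi)
assert abs(L_eq - 2*np.pi) < 1e-9

# meridian arc: u1 = t, u2 = const for t in [0, pi/2]  ->  length pi/2
L_mer = sphere_length(lambda t: t,
                      lambda t: np.ones_like(t),
                      lambda t: np.zeros_like(t), 0.0, np.pi/2)
assert abs(L_mer - np.pi/2) < 1e-9
```

Both answers agree with the lengths of the corresponding great-circle arcs in the ambient space, as (1.5) requires.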
§ 2. Geodesic and normal curvatures of a curve.

By $\mathbf{n}_{\text{curv}}$ we denote the unit normal vector of the curve in order to distinguish it from the unit normal vector $\mathbf{n}$ of the surface. In order to calculate the derivatives $\partial\mathbf{E}_i/\partial u^j$ we apply the Veingarten's derivational formulas (4.11):
$$k\cdot\mathbf{n}_{\text{curv}} = \sum_{k=1}^{2}\left(\ddot u^k + \sum_{i=1}^{2}\sum_{j=1}^{2}\Gamma^k_{ji}\,\dot u^i\,\dot u^j\right)\mathbf{E}_k + \left(\sum_{i=1}^{2}\sum_{j=1}^{2} b_{ij}\,\dot u^i\,\dot u^j\right)\mathbf{n}. \tag{2.2}$$
Let’s denote by k norm the coefficient of the vector n in the formula (2.2). This
quantity is called the normal curvature of a curve:
$$k_{\text{norm}} = \sum_{i=1}^{2}\sum_{j=1}^{2} b_{ij}\,\dot u^i\,\dot u^j. \tag{2.3}$$
The vector in the right hand side of (2.4) is a linear combination of the vectors E1
and E2 that compose a basis in the tangent plane. Therefore, this vector lies in
the tangent plane. Its length is called the geodesic curvature of a curve:
$$k_{\text{geod}} = \left|\;\sum_{k=1}^{2}\left(\ddot u^k + \sum_{i=1}^{2}\sum_{j=1}^{2}\Gamma^k_{ji}\,\dot u^i\,\dot u^j\right)\mathbf{E}_k\,\right|. \tag{2.5}$$
Due to the formula (2.5) the geodesic curvature of a curve is always non-negative. If $k_{\text{geod}} \neq 0$, then, taking into account the relationship (2.5), one can define the unit vector $\mathbf{n}_{\text{inner}}$ and rewrite the formula (2.4) as follows:
$$k\cdot\mathbf{n}_{\text{curv}} = k_{\text{norm}}\cdot\mathbf{n} + k_{\text{geod}}\cdot\mathbf{n}_{\text{inner}}. \tag{2.6}$$
The unit vector n inner in the formula (2.6) is called the inner normal vector of a
curve on a surface.
Due to (2.6) the vector $\mathbf{n}_{\text{inner}}$ is a linear combination of the vectors $\mathbf{n}_{\text{curv}}$ and $\mathbf{n}$, which are perpendicular to the unit vector $\boldsymbol{\tau}$ lying in the tangent plane. Hence, $\mathbf{n}_{\text{inner}} \perp \boldsymbol{\tau}$. On the other hand, being a linear combination of the vectors $\mathbf{E}_1$ and $\mathbf{E}_2$, the vector $\mathbf{n}_{\text{inner}}$ itself lies in the tangent plane. Therefore, it is determined up to the sign:
The formula (2.3) determines the value of the normal curvature of a curve in
the natural parametrization t = s. Let’s rewrite it as follows:
$$k_{\text{norm}} = \frac{\displaystyle\sum_{i=1}^{2}\sum_{j=1}^{2} b_{ij}\,\dot u^i\,\dot u^j}{\displaystyle\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\dot u^j}. \tag{2.10}$$
In the natural parametrization the formula (2.10) coincides with (2.3) because
of (1.7). When passing to an arbitrary parametrization all derivatives u̇i are
multiplied by the same factor. Indeed, we have
$$\frac{du^i}{dt} = \frac{du^i}{ds}\,\frac{ds}{dt}. \tag{2.11}$$
But the right hand side of (2.10) is insensitive to such a change of u̇i . Therefore,
(2.10) is a valid formula for the normal curvature in any parametrization.
The formula (2.10) shows that the normal curvature is a very rough characteristic of a curve. It is determined only by the direction of its tangent vector $\boldsymbol{\tau}$ in
the tangent plane. The components of the matrices gij and bij characterize not
the curve, but the point of the surface through which this curve passes.
Let a be some vector tangent to the surface. In curvilinear coordinates u1 , u2
it is given by two numbers a1 , a2 — they are the coefficients in its expansion in
the basis of two frame vectors E1 and E2 . Let’s consider the value of the second
quadratic form of the surface on this vector:
$$b(\mathbf{a}, \mathbf{a}) = \sum_{i=1}^{2}\sum_{j=1}^{2} b_{ij}\,a^i\,a^j. \tag{2.12}$$
Comparing (2.13) and (2.10), we see that asymptotic lines are the lines with zero normal curvature: $k_{\text{norm}} = 0$. At each point of a surface with negative Gaussian curvature $K < 0$ there are two asymptotic directions. Therefore, on such surfaces there are always two families of asymptotic lines, which compose the asymptotic network of such a surface. On any surface of negative Gaussian curvature there exists a curvilinear coordinate system $u^1$, $u^2$ whose coordinate network coincides with the asymptotic network of this surface. However, we shall not prove this fact here.
The curvature lines are defined by analogy with the asymptotic lines. These
are the curves on a surface whose tangent vector lies in a principal direction at
each point (see formulas (5.14) and (5.15) in § 5 of Chapter IV). The curvature
lines do exist on any surface, there are no restrictions for the Gaussian curvature
of a surface in this case.
$$\frac{d\boldsymbol{\tau}}{ds} = k_{\text{norm}}\cdot\mathbf{n}. \tag{2.14}$$
In other words, on a geodesic line the derivative of the unit tangent vector is directed along the unit normal vector of the surface. This is the external description of geodesic lines. The inner description is derived from the formula (2.5):
$$\ddot u^k + \sum_{i=1}^{2}\sum_{j=1}^{2}\Gamma^k_{ji}\,\dot u^i\,\dot u^j = 0. \tag{2.15}$$
The equations (2.15) are the differential equations of geodesic lines in natural
parametrization. One can pass from the natural parametrization to an arbitrary
one by means of the formula (2.11).
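The geodesic equations (2.15) can be integrated numerically. The following sketch is our own illustration (NumPy assumed); it uses the Christoffel symbols of the unit sphere in spherical coordinates $u^1$ (polar), $u^2$ (azimuthal), namely $\Gamma^1_{22} = -\sin u^1 \cos u^1$ and $\Gamma^2_{12} = \Gamma^2_{21} = \cos u^1/\sin u^1$, integrates (2.15) with a Runge-Kutta scheme, and checks that the equator is a geodesic and that the quantity (1.7) is preserved:

```python
import numpy as np

def rhs(y):
    # state y = (u1, u2, u1_dot, u2_dot); equations (2.15) on the unit sphere
    u1, u2, v1, v2 = y
    a1 = np.sin(u1) * np.cos(u1) * v2**2           # u1'' = -Γ^1_22 (u2')^2
    a2 = -2.0 * np.cos(u1) / np.sin(u1) * v1 * v2  # u2'' = -2 Γ^2_12 u1' u2'
    return np.array([v1, v2, a1, a2])

def rk4(y, h, steps):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(y); k2 = rhs(y + h/2*k1)
        k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

# start on the equator moving east with unit speed: the geodesic is the equator
y = rk4(np.array([np.pi/2, 0.0, 0.0, 1.0]), 1e-3, 1000)
assert abs(y[0] - np.pi/2) < 1e-9 and abs(y[1] - 1.0) < 1e-9

# the quantity (1.7), g_ij u̇^i u̇^j, stays equal to 1 along the geodesic
speed2 = y[2]**2 + np.sin(y[0])**2 * y[3]**2
assert abs(speed2 - 1.0) < 1e-9
```

The preserved value of (1.7) reflects the fact that in the natural parametrization a geodesic is traversed with unit speed.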
§ 3. Extremal property of geodesic lines.

Let's consider a deformation of a curve on a surface given in a chart by two functions
$$u^1 = u^1(t, h), \qquad u^2 = u^2(t, h). \tag{3.1}$$
We shall assume that these functions are sufficiently many times differentiable with respect to both their arguments.
For each fixed $h$ in (3.1) we have functions of the parameter $t$; they define a curve on the surface. Changing the parameter $h$, we deform the curve so that in the process of this deformation its points always stay on the surface. The differentiability of the functions (3.1) guarantees that small deformations of the curve correspond to small changes of the parameter $h$.
Let's impose on the functions (3.1) a series of restrictions which are easy to satisfy. Assume that the length of the initial geodesic line is equal to $a$ and let the parameter $t$ run over the segment $[0, a]$. Let
$$u^i(0, h) = u^i(0, 0), \qquad u^i(a, h) = u^i(a, 0) \quad\text{for } i = 1, 2. \tag{3.2}$$
The condition (3.2) means that under a change of the parameter h the initial point
A and the ending point B of the curve do not move.
For the sake of brevity let's denote the partial derivatives of the functions $u^i(t, h)$ with respect to $t$ by a dot. Then the quantities $\dot u^i = \partial u^i/\partial t$ determine the inner representation of the tangent vector to the curve.
Assume that the initial line corresponds to the value $h = 0$ of the parameter $h$. Assume also that for $h = 0$ the parameter $t$ coincides with the natural parameter of the geodesic line. Then for $h = 0$ the functions (3.1) satisfy the equations (1.7) and (2.15) simultaneously. For $h \neq 0$ the parameter $t$ need not coincide with the natural parameter on the deformed curve, and the deformed curve itself need not be a geodesic line.
Let’s calculate the lengths of the deformed curves. It is the function of the
parameter h determined by the length integral of the form (1.5):
$$L(h) = \int\limits_0^a\sqrt{\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\dot u^j}\;dt. \tag{3.3}$$
For $h = 0$ we have $L(0) = a$. The proposition of the theorem 3.1 on the extremity of the length is now formulated as $L(h) = a + O(h^2)$ or, equivalently, as
$$\left.\frac{dL(h)}{dh}\right|_{h=0} = 0. \tag{3.4}$$
Proof of the theorem 3.1. Let’s prove the equality (3.4) for the length
integral (3.3) under the deformations of the curve described just above. Denote by
λ(t, h) the expression under the square root in the formula (3.3). Then by direct
differentiation of (3.3) we obtain
$$\frac{dL(h)}{dh} = \int\limits_0^a\frac{\partial\lambda/\partial h}{2\sqrt{\lambda}}\,dt. \tag{3.5}$$
$$\frac{\partial\lambda}{\partial h} = \frac{\partial}{\partial h}\left(\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\dot u^j\right) = \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\partial u^k}{\partial h}\,\dot u^i\,\dot u^j + \sum_{k=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\frac{\partial(\dot u^i\,\dot u^j)}{\partial\dot u^k}\,\frac{\partial\dot u^k}{\partial h} =$$
$$= \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\partial u^k}{\partial h}\,\dot u^i\,\dot u^j + \sum_{k=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\delta^i_k\,\dot u^j\,\frac{\partial\dot u^k}{\partial h} + \sum_{k=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\delta^j_k\,\frac{\partial\dot u^k}{\partial h}.$$
Due to the Kronecker symbols δki and δkj in the above expression we can perform
explicitly the summation over k in the last two terms. Moreover, due to the
symmetry of gij they are equal to each other:
$$\frac{\partial\lambda}{\partial h} = \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\partial u^k}{\partial h}\,\dot u^i\,\dot u^j + 2\sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,\dot u^i\,\frac{\partial\dot u^j}{\partial h}.$$
Let's substitute this expression for $\partial\lambda/\partial h$ into the integral (3.5). As a result the derivative $dL(h)/dh$ splits into the sum of two integrals, $dL(h)/dh = I_1 + I_2$, where
$$I_1 = \int\limits_0^a\sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\dot u^i\,\dot u^j}{2\sqrt{\lambda}}\,\frac{\partial u^k}{\partial h}\,dt, \tag{3.6}$$
$$I_2 = \sum_{i=1}^{2}\sum_{k=1}^{2}\int\limits_0^a\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\,\frac{\partial\dot u^k}{\partial h}\,dt. \tag{3.7}$$
The integral (3.7) contains the second order mixed partial derivatives of (3.1):
$$\frac{\partial\dot u^k}{\partial h} = \frac{\partial^2 u^k}{\partial h\,\partial t}.$$
Therefore, we can integrate by parts with respect to $t$:
$$\int\limits_0^a\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\,\frac{\partial\dot u^k}{\partial h}\,dt = \left.\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\,\frac{\partial u^k}{\partial h}\right|_0^a - \int\limits_0^a\frac{\partial}{\partial t}\!\left(\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\right)\frac{\partial u^k}{\partial h}\,dt.$$
Let’s differentiate the equalities (3.2) with respect to h. As a result we find that
the derivatives ∂uk /∂h vanish at the ends of the integration segment over t. This
means that non-integral terms in the above formula do vanish. Hence, for the
integral I2 in (3.7) we obtain
$$I_2 = -\sum_{i=1}^{2}\sum_{k=1}^{2}\int\limits_0^a\frac{\partial}{\partial t}\!\left(\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\right)\frac{\partial u^k}{\partial h}\,dt. \tag{3.8}$$
Now let’s add the integrals I1 and I2 from (3.6) and (3.8). As a result for the
derivative dL/dh in (3.5) we derive the following equality:
$$\frac{dL(h)}{dh} = \sum_{i=1}^{2}\sum_{k=1}^{2}\int\limits_0^a\left(\sum_{j=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\dot u^i\,\dot u^j}{2\sqrt{\lambda}} - \frac{\partial}{\partial t}\,\frac{g_{ik}\,\dot u^i}{\sqrt{\lambda}}\right)\frac{\partial u^k}{\partial h}\,dt.$$
In this equality the only derivatives with respect to the parameter h are ∂uk /∂h.
For their values at h = 0 we introduce the following notations:
$$\delta u^k = \left.\frac{\partial u^k}{\partial h}\right|_{h=0}. \tag{3.9}$$
The quantities δuk = δuk (t) in (3.9) are called the variations of the coordinates
on the initial curve. Note that under a change of curvilinear coordinates these
quantities are transformed as the components of a vector (although this fact does
not matter for proving the theorem).
Let’s substitute h = 0 into the above formula for the derivative dL/dh. When
substituted, the quantity λ in the denominators of the fractions becomes equal to
unity: λ(t, 0) = 1. This fact follows from (1.7) since t coincides with the natural
parameter on the initial geodesic line. Then
$$\left.\frac{dL(h)}{dh}\right|_{h=0} = \sum_{i=1}^{2}\sum_{k=1}^{2}\int\limits_0^a\left(\sum_{j=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\dot u^i\,\dot u^j}{2} - \frac{d(g_{ik}\,\dot u^i)}{dt}\right)\delta u^k\,dt.$$
Since the above equality does not depend on h any more, we replace the partial
derivative with respect to t by d/dt. All of the further calculations in the right
hand side are for the geodesic line where t is the natural parameter.
Let’s move the sums over i and k under the integration and let’s calculate the
coefficients of δuk denoting these coefficients by Uk :
$$U_k = \sum_{i=1}^{2}\left(\sum_{j=1}^{2}\frac{\partial g_{ij}}{\partial u^k}\,\frac{\dot u^i\,\dot u^j}{2} - \frac{d(g_{ik}\,\dot u^i)}{dt}\right) = \sum_{i=1}^{2}\sum_{j=1}^{2}\left(\frac{1}{2}\,\frac{\partial g_{ij}}{\partial u^k} - \frac{\partial g_{ik}}{\partial u^j}\right)\dot u^i\,\dot u^j - \sum_{i=1}^{2} g_{ik}\,\ddot u^i. \tag{3.10}$$
Due to the symmetry of u̇i u̇j the second term within round brackets in the
formula (3.10) can be broken into two terms. This yields
$$U_k = \sum_{i=1}^{2}\sum_{j=1}^{2}\frac{1}{2}\left(\frac{\partial g_{ij}}{\partial u^k} - \frac{\partial g_{ik}}{\partial u^j} - \frac{\partial g_{jk}}{\partial u^i}\right)\dot u^i\,\dot u^j - \sum_{i=1}^{2} g_{ik}\,\ddot u^i.$$
Let's raise the index $k$ in $U_k$, i. e. consider the quantities $U^q$ given by the formula
$$U^q = \sum_{k=1}^{2} g^{qk}\,U_k.$$
$$-U^q = \ddot u^q + \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{g^{qk}}{2}\left(\frac{\partial g_{kj}}{\partial u^i} + \frac{\partial g_{ik}}{\partial u^j} - \frac{\partial g_{ij}}{\partial u^k}\right)\dot u^i\,\dot u^j.$$
Let’s compare this formula with the formula (7.5) in Chapter IV that determines
the connection components. As a result we get:
$$-U^q = \ddot u^q + \sum_{i=1}^{2}\sum_{j=1}^{2}\Gamma^q_{ij}\,\dot u^i\,\dot u^j. \tag{3.11}$$
Now it is sufficient to compare (3.11) with the equation of geodesic lines (2.15) and derive $U^q = 0$. The quantities $U_k$ are obtained from $U^q$ by lowering the index:
$$U_k = \sum_{q=1}^{2} g_{kq}\,U^q.$$
Therefore, the quantities Uk are also equal to zero. From this fact we immediately
derive the equality (3.4) which means exactly that the extremity condition for the
geodesic lines is fulfilled. The theorem is proved.
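The statement $L(h) = a + O(h^2)$ can be observed numerically. A minimal sketch of our own (assuming NumPy): deform the equator of the unit sphere by the endpoint-fixing variation $u^1 = \pi/2 + h\sin t$, $u^2 = t$, $t \in [0, \pi]$, which satisfies conditions of the kind (3.2), and evaluate the length integral (3.3) by the trapezoid rule:

```python
import numpy as np

t = np.linspace(0.0, np.pi, 100001)
dt = t[1] - t[0]

def L(h):
    # lambda(t, h) = (h cos t)^2 + sin(pi/2 + h sin t)^2 on the unit sphere,
    # i.e. the expression under the square root in (3.3)
    lam = (h*np.cos(t))**2 + np.sin(np.pi/2 + h*np.sin(t))**2
    integrand = np.sqrt(lam)
    return (integrand.sum() - 0.5*(integrand[0] + integrand[-1])) * dt

assert abs(L(0.0) - np.pi) < 1e-12          # the undeformed equator has length pi

eps = 1e-3
dL = (L(eps) - L(-eps)) / (2*eps)           # central difference for (3.4)
assert abs(dL) < 1e-9                       # dL/dh vanishes at h = 0

assert abs(L(0.1) - np.pi) < 1e-3           # the length change is at least O(h^2)
```

The derivative (3.4) vanishes within quadrature accuracy, while for a non-geodesic base curve a generic variation would change the length already in the first order of $h$.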
§ 4. Inner parallel translation on a surface.

Let $\mathbf{a} = \mathbf{a}(t)$ be a vector tangent to the surface at the points of a parametric curve, whose inner components $a^1(t)$, $a^2(t)$ satisfy the differential equations
$$\dot a^i + \sum_{j=1}^{2}\sum_{k=1}^{2}\Gamma^i_{jk}\,\dot u^j\,a^k = 0. \tag{4.1}$$
The equation (4.1) is called the equation of the inner parallel translation of vectors along curves on a surface.
Suppose that we have a surface on some fragment of which the curvilinear
coordinates u1, u2 and a parametric curve (1.1) are given. Let’s consider some
tangent vector a to the surface at the initial point of the curve, i. e. at t = 0.
The vector $\mathbf{a}$ has the inner representation in the form of two numbers $a^1$, $a^2$, its components. Let's set up the Cauchy problem for the differential equations (4.1) with the following initial data at $t = 0$:
$$a^1(t)\big|_{t=0} = a^1, \qquad a^2(t)\big|_{t=0} = a^2. \tag{4.2}$$
Solving the Cauchy problem (4.2), we get two functions a1 (t) and a2(t) which
determine the vectors a(t) at all points of the curve. The procedure described
just above is called the inner parallel translation of the vector a along a curve on
a surface.
Let’s consider the inner parallel translation of the vector a from the outer point
of view, i. e. as a process in outer (three-dimensional) geometry of the space E
where the surface under consideration is embedded. The relation of inner and
outer representations of tangent vectors of the surface is given by the formula:
$$\mathbf{a} = \sum_{i=1}^{2} a^i\cdot\mathbf{E}_i. \tag{4.3}$$
Let’s differentiate the equality (4.3) with respect to t assuming that a1 and a2
depend on t as solutions of the differential equations (4.1):
$$\frac{d\mathbf{a}}{dt} = \sum_{i=1}^{2}\dot a^i\cdot\mathbf{E}_i + \sum_{i=1}^{2}\sum_{j=1}^{2} a^i\,\frac{\partial\mathbf{E}_i}{\partial u^j}\,\dot u^j. \tag{4.4}$$
Expressing the derivatives $\partial\mathbf{E}_i/\partial u^j$ by means of the derivational formulas (4.11) of Chapter IV, we bring (4.4) to the form
$$\frac{d\mathbf{a}}{dt} = \sum_{i=1}^{2}\left(\dot a^i + \sum_{j=1}^{2}\sum_{k=1}^{2}\Gamma^i_{jk}\,\dot u^j\,a^k\right)\mathbf{E}_i + \left(\sum_{j=1}^{2}\sum_{k=1}^{2} b_{jk}\,\dot u^j\,a^k\right)\mathbf{n}.$$
Since the functions $a^i(t)$ satisfy the differential equations (4.1), the coefficients of the vectors $\mathbf{E}_i$ in this formula do vanish:
$$\frac{d\mathbf{a}}{dt} = \left(\sum_{j=1}^{2}\sum_{k=1}^{2} b_{jk}\,\dot u^j\,a^k\right)\mathbf{n}. \tag{4.5}$$
The coefficient of the normal vector $\mathbf{n}$ in the formula (4.5) is determined by the second quadratic form of the surface. It is the value of the corresponding symmetric bilinear form on the pair of vectors $\mathbf{a}$ and $\boldsymbol{\tau}$. Therefore, the formula (4.5) can be rewritten in a vectorial form as follows:
$$\frac{d\mathbf{a}}{dt} = b(\boldsymbol{\tau}, \mathbf{a})\cdot\mathbf{n}. \tag{4.6}$$
The vectorial equation (4.6) is called the outer equation of the inner parallel
translation on surfaces.
The operation of parallel translation can be generalized to the case of inner
tensors of the arbitrary type (r, s). For this purpose we have introduced the
operation of covariant differentiation of tensorial function with respect to the
parameter t on curves (see formula (8.10) in Chapter III). Here is the two-
dimensional version of this formula:
$$\nabla_t A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \frac{dA^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{dt} + \sum_{m=1}^{r}\sum_{q=1}^{2}\sum_{v_m=1}^{2}\Gamma^{i_m}_{q\,v_m}\,\dot u^q\,A^{i_1\ldots\,v_m\ldots\,i_r}_{j_1\ldots\,j_s} - \sum_{n=1}^{s}\sum_{q=1}^{2}\sum_{w_n=1}^{2}\Gamma^{w_n}_{q\,j_n}\,\dot u^q\,A^{i_1\ldots\,i_r}_{j_1\ldots\,w_n\ldots\,j_s}. \tag{4.7}$$
In terms of the covariant derivative (4.7) the equation of the inner parallel
translation for the tensorial field A is written as
∇t A = 0. (4.8)
The consistency of defining the inner parallel translation by means of the equation (4.8) follows from the two-dimensional analog of the theorem 8.2 from Chapter III.
Theorem 4.1. For any inner tensorial function $A(t)$ determined at the points of a parametric curve on some surface the quantities $B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \nabla_t A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}$ calculated according to the formula (4.7) define a tensorial function $B(t) = \nabla_t A$ of the same type $(r, s)$ as the original function $A(t)$.
The proof of this theorem almost literally coincides with the proof of the
theorem 8.2 in Chapter III. Therefore, we do not give it here.
The covariant differentiation ∇t defined by the formula (4.7) possesses a series
of properties similar to those of the covariant differentiation along a vector field
∇X (see formula (6.10) and theorem 6.2 in Chapter IV).
Theorem 4.2. The operation of covariant differentiation of tensor-valued func-
tions with respect to the parameter t along a curve on a surface possesses the
following properties:
(1) ∇t (A + B) = ∇t A + ∇tB;
(2) ∇t (A ⊗ B) = ∇t A ⊗ B + A ⊗ ∇t B;
(3) ∇t C(A) = C(∇tA).
Proof. Let’s choose some curvilinear coordinate system and prove the theorem
by means of direct calculations in coordinates. Let C = A + B. Then for the
components of the tensor-valued function C(t) we have
$$C^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} + B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}.$$
Substituting these components into (4.7), for the covariant derivative $\nabla_t C$ we get
$$\nabla_t C^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \nabla_t A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} + \nabla_t B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s},$$
which proves the first item of the theorem.
Now let $C = A \otimes B$. Then in coordinates
$$C^{i_1\ldots\,i_{r+p}}_{j_1\ldots\,j_{s+q}} = A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}}. \tag{4.9}$$
Let's substitute the quantities $C^{i_1\ldots\,i_{r+p}}_{j_1\ldots\,j_{s+q}}$ from (4.9) into the formula (4.7) for the covariant derivative. As a result for the components of $\nabla_t C$ we derive
$$\nabla_t C^{i_1\ldots\,i_{r+p}}_{j_1\ldots\,j_{s+q}} = \frac{dA^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{dt}\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}} + A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\,\frac{dB^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}}}{dt}\,+$$
$$+\sum_{m=1}^{r}\sum_{q=1}^{2}\sum_{v_m=1}^{2}\Gamma^{i_m}_{q\,v_m}\,\dot u^q\,A^{i_1\ldots\,v_m\ldots\,i_r}_{j_1\ldots\,j_s}\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}} + \sum_{m=r+1}^{r+p}\sum_{q=1}^{2}\sum_{v_m=1}^{2} A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\,\Gamma^{i_m}_{q\,v_m}\,\dot u^q\,B^{i_{r+1}\ldots\,v_m\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}}\,-$$
$$-\sum_{n=1}^{s}\sum_{q=1}^{2}\sum_{w_n=1}^{2}\Gamma^{w_n}_{q\,j_n}\,\dot u^q\,A^{i_1\ldots\,i_r}_{j_1\ldots\,w_n\ldots\,j_s}\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}} - \sum_{n=s+1}^{s+q}\sum_{q=1}^{2}\sum_{w_n=1}^{2} A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\,\Gamma^{w_n}_{q\,j_n}\,\dot u^q\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,w_n\ldots\,j_{s+q}}.$$
Note that upon collecting the similar terms the above huge formula can be
transformed to the following one:
$$\nabla_t\!\left(A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\,B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}}\right) = \nabla_t A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\cdot B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}} + A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\cdot\nabla_t B^{i_{r+1}\ldots\,i_{r+p}}_{j_{s+1}\ldots\,j_{s+q}}. \tag{4.10}$$
Now it is easy to see that the formula (4.10) proves the second item of the theorem.
Let’s choose two tensor-valued functions A(t) and B(t) one of which is the
contraction of another. In coordinates this fact looks like
$$B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \sum_{k=1}^{2} A^{i_1\ldots\,i_{p-1}\,k\,i_p\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,k\,j_q\ldots\,j_s}. \tag{4.11}$$
Let's substitute (4.11) into the formula (4.7). For $\nabla_t B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}$ we derive
$$\nabla_t B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \sum_{k=1}^{2}\frac{dA^{i_1\ldots\,i_{p-1}\,k\,i_p\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,k\,j_q\ldots\,j_s}}{dt}\,+$$
$$+\sum_{m=1}^{r}\sum_{k=1}^{2}\sum_{q=1}^{2}\sum_{v_m=1}^{2}\Gamma^{i_m}_{q\,v_m}\,\dot u^q\,A^{i_1\ldots\,v_m\ldots\,k\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,k\,j_q\ldots\,j_s}\,-$$
$$-\sum_{n=1}^{s}\sum_{k=1}^{2}\sum_{q=1}^{2}\sum_{w_n=1}^{2}\Gamma^{w_n}_{q\,j_n}\,\dot u^q\,A^{i_1\ldots\,i_{p-1}\,k\,i_p\ldots\,i_r}_{j_1\ldots\,w_n\ldots\,k\ldots\,j_s}. \tag{4.12}$$
In the formula (4.12) the index vm sequentially occupies the positions to the left
of the index k and to the right of it. The same is true for the index wn. However,
the formula (4.12) has no terms where vm or wn replaces the index k. Such terms,
provided they would be present, according to (4.7), would have the form
$$\sum_{k=1}^{2}\sum_{q=1}^{2}\sum_{v=1}^{2}\Gamma^k_{q\,v}\,\dot u^q\,A^{i_1\ldots\,i_{p-1}\,v\,i_p\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,k\,j_q\ldots\,j_s}, \tag{4.13}$$
$$-\sum_{k=1}^{2}\sum_{q=1}^{2}\sum_{w=1}^{2}\Gamma^w_{q\,k}\,\dot u^q\,A^{i_1\ldots\,i_{p-1}\,k\,i_p\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,w\,j_q\ldots\,j_s}. \tag{4.14}$$
It is easy to note that (4.13) and (4.14) differ only in sign. Indeed, it is sufficient
to rename k to v and w to k in the formula (4.14). If we add simultaneously
(4.13) and (4.14) to (4.12), their contributions cancel each other thus keeping the
equality valid. Therefore, (4.12) can be written as
$$\nabla_t B^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \sum_{k=1}^{2}\nabla_t A^{i_1\ldots\,i_{p-1}\,k\,i_p\ldots\,i_r}_{j_1\ldots\,j_{q-1}\,k\,j_q\ldots\,j_s}. \tag{4.15}$$
The relationship (4.15) proves the third item of the theorem and completes the
proof in whole.
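Item (3) of the theorem 4.2 can also be checked directly in components. The sketch below is our own illustration (assuming NumPy; the connection components and the data are arbitrary numbers): it applies the formula (4.7) to a $(1,1)$-tensor and verifies that contraction commutes with $\nabla_t$:

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma = rng.normal(size=(2, 2, 2))        # Γ^i_{q v}, array indices [i, q, v]
udot = rng.normal(size=2)                 # components u̇^q of the tangent vector
A = rng.normal(size=(2, 2))               # components A^i_j at the current point
Adot = rng.normal(size=(2, 2))            # components dA^i_j/dt

# formula (4.7) specialized to a (1,1)-tensor
nabla_A = (Adot
           + np.einsum('iqv,q,vj->ij', Gamma, udot, A)
           - np.einsum('wqj,q,iw->ij', Gamma, udot, A))

# contraction first, then covariant derivative (for a scalar it is just d/dt) ...
lhs = np.trace(Adot)
# ... versus covariant derivative first, then contraction: the Γ-terms cancel
rhs = np.trace(nabla_A)
assert abs(lhs - rhs) < 1e-12
```

The cancellation of the two Γ-terms under the trace is exactly the mechanism by which (4.13) and (4.14) annihilate each other in the proof above.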
Under a reparametrization of a curve a new parameter t̃ should be a strictly
monotonic function of the old parameter t (see details in § 2 of Chapter I). Under
such a reparametrization ∇t̃ and ∇t are related to each other by the formula
$$\nabla_t A = \frac{d\tilde t(t)}{dt}\cdot\nabla_{\tilde t}A \tag{4.16}$$
for any tensor-valued function A on a curve. This relationship is a simple
consequence from (4.7) and from the chain rule for differentiating a composite
function. It is an analog of the item (3) in the theorem 6.2 of Chapter IV.
Let A be a tensor field of the type (r, s) on a surface. This means that at each
point of the surface some tensor of the type (r, s) is given. If we mark only those
points of the surface which belong to some curve, we get a tensor-valued function
A(t) on that curve. In coordinates this is written as
$$A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}(t) = A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}\bigl(u^1(t),\,u^2(t)\bigr). \tag{4.17}$$
The function $A(t)$ constructed in this way is called the restriction of a tensor field $A$ to a curve. The specific feature of the restrictions of tensor fields to curves expressed by the formula (4.17) reveals itself when we differentiate them:
$$\frac{dA^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{dt} = \sum_{q=1}^{2}\frac{\partial A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}}{\partial u^q}\,\dot u^q. \tag{4.18}$$
Substituting (4.18) into the formula (4.7), we can extract the common factor u̇q in
the sum over q. Upon extracting this common factor we find
$$\nabla_t A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s} = \sum_{q=1}^{2}\dot u^q\,\nabla_q A^{i_1\ldots\,i_r}_{j_1\ldots\,j_s}. \tag{4.19}$$
The formula (4.19) means that the covariant derivative of the restriction of a
tensor field A to a curve is the contraction of the covariant differential ∇A with
the tangent vector of the curve.
Assume that ∇A = 0. Then due to (4.19) the restriction of the field A to any
curve is a tensor-valued function satisfying the equation of the parallel translation
(4.8). The values of such a field A at various points are related to each other by
parallel translation along any curve connecting these points.
Let’s perform the parallel translation of the vectors a and b along the curve
solving the equation (4.8) and using the components of a and b as initial data in
Cauchy problems. As a result we get two vector-valued functions a(t) and b(t) on
the curve. Let’s consider the function ψ(t) equal to their scalar product:
$$\psi(t) = \bigl(\mathbf{a}(t)\,|\,\mathbf{b}(t)\bigr) = \sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}(t)\,a^i(t)\,b^j(t). \tag{4.20}$$
According to the formula (4.7) the covariant derivative ∇tψ coincides with the
regular derivative. Therefore, we have
$$\frac{d\psi}{dt} = \nabla_t\psi = \sum_{i=1}^{2}\sum_{j=1}^{2}\Bigl(\nabla_t g_{ij}\,a^i\,b^j + g_{ij}\,\nabla_t a^i\,b^j + g_{ij}\,a^i\,\nabla_t b^j\Bigr).$$
Here we used the items (2) and (3) of the theorem 4.2. But $\nabla_t a^i = 0$ and $\nabla_t b^j = 0$ since $\mathbf{a}(t)$ and $\mathbf{b}(t)$ are obtained as a result of parallel translation of
the vectors a and b. Moreover, ∇t gij = 0 due to autoparallelism of the metric
tensor. For the scalar function ψ(t) defined by (4.20) this yields dψ/dt = 0 and
ψ(t) = (a | b) = const. As a result of these considerations we have proved the
following theorem.
Theorem 4.3. The operation of inner parallel translation of vectors along curves preserves the scalar product of vectors.
Preserving the scalar product, the operation of inner parallel translation preserves the length of vectors and the angles between them.
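The theorem 4.3 can be observed numerically by integrating the equations (4.1) around a closed curve. The sketch below is our own example (NumPy assumed): it transports a unit tangent vector around the latitude circle $u^1 = \pi/3$ of the unit sphere. The length of the vector is preserved, but since the latitude circle is not a geodesic, the vector comes back rotated (by $-2\pi\cos u^1$, a half turn here):

```python
import numpy as np

# along the latitude u1 = theta0, u2 = t, we have u̇ = (0, 1), so the equations
# (4.1) on the unit sphere reduce to
#   ȧ1 =  sin(theta0) cos(theta0) a2       (from Γ^1_22 = -sin u1 cos u1)
#   ȧ2 = -(cos(theta0)/sin(theta0)) a1     (from Γ^2_21 =  cos u1 / sin u1)
theta0 = np.pi / 3
s, c = np.sin(theta0), np.cos(theta0)

def rhs(a):
    return np.array([s * c * a[1], -(c / s) * a[0]])

a = np.array([1.0, 0.0])          # unit vector: g(a, a) = (a1)^2 + s^2 (a2)^2 = 1
n = 20000
h = 2 * np.pi / n
for _ in range(n):                # RK4 around the full loop t in [0, 2*pi]
    k1 = rhs(a); k2 = rhs(a + h/2*k1); k3 = rhs(a + h/2*k2); k4 = rhs(a + h*k3)
    a = a + h/6*(k1 + 2*k2 + 2*k3 + k4)

# theorem 4.3: the length of the vector is preserved ...
assert abs(a[0]**2 + s**2 * a[1]**2 - 1.0) < 1e-8
# ... but for theta0 = pi/3 the holonomy rotation is -2*pi*cos(pi/3) = -pi,
# so the transported vector comes back reversed
assert abs(a[0] + 1.0) < 1e-6 and abs(a[1]) < 1e-6
```

This rotation of the transported vector is precisely the effect measured in the next section by the Gauss-Bonnet theorem.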
From the autoparallelism of metric tensors g and ĝ we derive the following
formulas analogous to the formulas (7.9) in Chapter IV:
$$\nabla_t\left(\sum_{k=1}^{2} g_{ik}\,A^{\ldots\,k\,\ldots}_{\ldots\,\ldots}\right) = \sum_{k=1}^{2} g_{ik}\,\nabla_t A^{\ldots\,k\,\ldots}_{\ldots\,\ldots},$$
$$\nabla_t\left(\sum_{k=1}^{2} g^{ik}\,A^{\ldots\,\ldots}_{\ldots\,k\,\ldots}\right) = \sum_{k=1}^{2} g^{ik}\,\nabla_t A^{\ldots\,\ldots}_{\ldots\,k\,\ldots}. \tag{4.21}$$
§ 5. Integration on surfaces. Green's formula.

Let $\Omega$ be a domain on the coordinate plane $u^1$, $u^2$ bounded by a closed contour $\gamma$. For two smooth functions $P(u^1, u^2)$ and $Q(u^1, u^2)$ we have
$$\oint\limits_\gamma P\,du^1 + Q\,du^2 = \iint\limits_\Omega\left(\frac{\partial Q}{\partial u^1} - \frac{\partial P}{\partial u^2}\right)du^1\,du^2. \tag{5.1}$$
The identity (5.1) is known as Green's formula (see [2]). The equality (5.1) is an equality for a plane. We need its generalization for the case of an arbitrary surface in the space E. In such a generalization the coordinate plane $u^1$, $u^2$ or some part of it plays the role of a chart, while the real geometric domain and its boundary contour should be placed on a surface. Therefore, the integrals in both parts of Green's formula should be transformed so that one can easily write them for any curvilinear coordinates on a surface and their values should not depend on a particular choice of such coordinate system.
Let’s begin with the integral in the left hand side of (5.1). Such integrals are
called path integrals of the second kind. Let’s rename P to v1 and Q to v2. Then
the integral in the left hand side of (5.1) is written as
$$I = \oint\limits_\gamma\sum_{i=1}^{2} v_i(u^1, u^2)\,du^i. \tag{5.2}$$
If the contour $\gamma$ is parametrized by a parameter $t$ running over the segment $[a, b]$, then
$$I = \pm\int\limits_a^b\left(\sum_{i=1}^{2} v_i\,\dot u^i\right)dt. \tag{5.3}$$
This formula reducing the integral of the second kind to the regular integral over the segment $[a, b]$ on the real axis can be taken for the definition of the integral (5.2). The sign is chosen according to the direction of the contour on Fig. 5.1. If $a < b$ and if, when $t$ changes from $a$ to $b$, the corresponding point on the contour moves along the arrow, we choose plus in (5.3). Otherwise, we choose minus.
Changing the variable t̃ = ϕ(t) in the integral (5.3) and choosing the proper
sign upon reparametrization of the contour, one can verify that the value of this
integral does not depend on the choice of the parametrization on the contour.
Now let’s change the curvilinear coordinate system on the surface. The
derivatives u̇i in the integral (5.3) under a change of curvilinear coordinates on the
surface are transformed as follows:
$$\dot u^i = \frac{du^i}{dt} = \sum_{j=1}^{2}\frac{\partial u^i}{\partial\tilde u^j}\,\frac{d\tilde u^j}{dt} = \sum_{j=1}^{2} S^i_j\,\dot{\tilde u}^j. \tag{5.4}$$
Substituting (5.4) into the formula (5.3), for the integral $I$ we derive:
$$I = \pm\int\limits_a^b\sum_{j=1}^{2}\left(\sum_{i=1}^{2} S^i_j\,v_i\right)\dot{\tilde u}^j\,dt. \tag{5.5}$$
Now let’s write the relationship (5.3) in coordinates ũ1, ũ2. For this purpose we
rename ui to ũi and vi to ṽi in the formula (5.3):
$$I = \pm\int\limits_a^b\left(\sum_{i=1}^{2}\tilde v_i\,\dot{\tilde u}^i\right)dt. \tag{5.6}$$
Comparing the formulas (5.5) and (5.6), we see that these formulas are similar in their structure. For the numeric values of the integrals (5.3) and (5.6) to be always equal (irrespective of the form of the contour $\gamma$ and its parametrization) the quantities $v_i$ and $\tilde v_i$ should be related as follows:
$$\tilde v_j = \sum_{i=1}^{2} S^i_j\,v_i, \qquad v_i = \sum_{j=1}^{2} T^j_i\,\tilde v_j.$$
These formulas represent the transformation rule for the components of a covectorial field. Thus, we conclude that any path integral of the second kind on a surface (5.2) is given by some inner covectorial field on this surface.
Now let's proceed with the integral in the right hand side of Green's formula (5.1). Setting aside for a while the particular integral in this formula, let's consider the following double integral:
$$I = \iint\limits_\Omega F\,du^1\,du^2. \tag{5.7}$$
The Jacobi matrix (5.9) coincides with the transition matrix S (see formula (2.7)
in Chapter IV). Therefore, the function F being integrated in the formula (5.7)
should obey the transformation rule
$$\tilde F = |\det S|\,F. \tag{5.10}$$
A quantity obeying the transformation rule (5.10) is not a scalar. In order to extract a scalar we write $F$ as $F = f\,\sqrt{\det g}$, which brings the integral (5.7) to the form
$$I = \iint\limits_\Omega f\,\sqrt{\det g}\;du^1\,du^2, \tag{5.11}$$
where $\det g$ is the determinant of the first quadratic form. In this case the quantity $f$ in the formula (5.11) is a scalar. This fact follows from the equality $\det g = (\det T)^2\,\det\tilde g$ that represents the transformation rule for the determinant of the metric tensor under a change of coordinate system.
Returning back to the integral in the right hand side of (5.1), we transform it
to the form (5.11). For this purpose we use the above notations P = v1 , Q = v2 ,
and remember that v1 and v2 are the components of the covectorial field. Then
$$\frac{\partial Q}{\partial u^1} - \frac{\partial P}{\partial u^2} = \frac{\partial v_2}{\partial u^1} - \frac{\partial v_1}{\partial u^2}. \tag{5.12}$$
The right hand side of (5.12) can be represented in form of the contraction with
the unit skew-symmetric matrix dij (see formula (3.6) in Chapter IV):
$$\frac{\partial v_2}{\partial u^1} - \frac{\partial v_1}{\partial u^2} = \sum_{i=1}^{2}\sum_{j=1}^{2} d^{ij}\,\frac{\partial v_j}{\partial u^i} = \sum_{i=1}^{2}\frac{\partial}{\partial u^i}\left(\sum_{j=1}^{2} d^{ij}\,v_j\right). \tag{5.13}$$
Note that the quantities dij with lower indices enter the formula for the area
tensor ω (see (3.7) in Chapter IV). Let’s raise the indices of the area tensor by
means of the inverse metric tensor:
$$\omega^{ij} = \sum_{p=1}^{2}\sum_{q=1}^{2} g^{ip}\,g^{jq}\,\omega_{pq} = \xi_D\,\sqrt{\det g}\,\sum_{p=1}^{2}\sum_{q=1}^{2} g^{ip}\,g^{jq}\,d_{pq}.$$
Applying the formula (3.7) from Chapter IV, we can calculate the components of the area tensor $\omega^{ij}$ in the explicit form:
$$\omega^{ij} = \frac{\xi_D}{\sqrt{\det g}}\,d^{ij}. \tag{5.14}$$
The formula (5.14) expresses $\omega^{ij}$ through $d^{ij}$. Now we use (5.14) in order to express $d^{ij}$ in the formula (5.13) back through the components of the area tensor. Since $\xi_D^2 = 1$, this yields
$$\frac{\partial v_2}{\partial u^1} - \frac{\partial v_1}{\partial u^2} = \sum_{i=1}^{2}\frac{\partial}{\partial u^i}\left(\xi_D\,\sqrt{\det g}\,\sum_{j=1}^{2}\omega^{ij}\,v_j\right).$$
Let's introduce the notations
$$y^i = \sum_{j=1}^{2}\omega^{ij}\,v_j. \tag{5.15}$$
Taking into account (5.15), the formula (5.13) can be written as follows:
$$\frac{\partial v_2}{\partial u^1} - \frac{\partial v_1}{\partial u^2} = \xi_D\sum_{i=1}^{2}\frac{\partial\bigl(\sqrt{\det g}\;y^i\bigr)}{\partial u^i} = \xi_D\,\sqrt{\det g}\,\sum_{i=1}^{2}\left(\frac{\partial y^i}{\partial u^i} + \frac{1}{2}\,\frac{\partial\ln\det g}{\partial u^i}\,y^i\right). \tag{5.16}$$
The logarithmic derivative for the determinant of the metric tensor is calculated
by means of the lemma 7.1 from Chapter IV. However, we need not repeat these
calculations here, since this derivative is already calculated (see (7.12) and the
proof of the theorem 7.2 in Chapter IV):
$$\frac{\partial\ln\det g}{\partial u^i} = \sum_{p=1}^{2}\sum_{q=1}^{2} g^{pq}\,\frac{\partial g_{pq}}{\partial u^i} = 2\sum_{q=1}^{2}\Gamma^q_{iq}. \tag{5.17}$$
In this formula one easily recognizes the contraction of the covariant differential of
the vector field y. Indeed, we have
$$\frac{\partial v_2}{\partial u^1} - \frac{\partial v_1}{\partial u^2} = \xi_D\,\sqrt{\det g}\,\sum_{i=1}^{2}\nabla_i\,y^i. \tag{5.18}$$
Using the formula (5.18), the notations (5.15), and the autoparallelism condition
for the area tensor ∇q ωij = 0, we can write the Green’s formula as
$$\oint\limits_\gamma\sum_{i=1}^{2} v_i\,du^i = \xi_D\iint\limits_\Omega\left(\sum_{i=1}^{2}\sum_{j=1}^{2}\omega^{ij}\,\nabla_i v_j\right)\sqrt{\det g}\;du^1\,du^2. \tag{5.19}$$
The sign factor ξD in (5.19) should be especially commented. The condition that
the domain Ω should lie to the left of the contour γ when moving along the arrow
is not invariant under an arbitrary change of coordinates u1, u2 by ũ1, ũ2. Indeed,
if we set ũ1 = −u1 and ũ2 = u2 , we would have the mirror image of the domain
Ω and the contour γ shown on Fig. 5.1. This means that the direction should be
assigned to the geometric contour γ lying on the surface, not to its image in a
chart. Then the sign factor ξD in (5.19) can be omitted.
The choice of the direction on a geometric contour outlining a domain on a
surface is closely related to the choice of the normal vector on that surface. The
normal vector n should be chosen so that when observing from the end of the
vector n and moving in the direction of the arrow along the contour γ the domain
Ω should lie to the left of the contour. The choice of the normal vector n defines
the orientation of the surface thus defining the unit pseudoscalar field ξD .
§ 6. Gauss-Bonnet theorem.
Let’s consider again the process of inner parallel translation of tangent vectors
along curves on surfaces. The equation (4.6) shows that from the outer (three-
dimensional) point of view this parallel translation differs substantially from the
regular parallel translation: the vectors being translated do not remain parallel
to the fixed direction in the space — they change. However, their lengths are
preserved, and, if we translate several vectors along the same curve, the angles
between vectors are preserved (see theorem 4.3).
From the above description, we see that in the process of parallel translation,
apart from the motion of the attachment point along the curve, the rotation of
the vectors about the normal vector n occurs. Therefore, we have the natural problem: how to measure the angle of this rotation? We consider this problem just below.
Suppose that we have a surface equipped with the orientation. This means
that the orientation field ξD and the area tensor ω are defined (see formula (3.10)
in Chapter IV). We already know that ξD fixes one of the two possible normal
vectors n at each point of the surface (see formula (4.3) in Chapter IV).
Theorem 6.1. The inner tensor field Θ of the type (1, 1) with the components
$$
\theta^{i}_{j}=\sum_{k=1}^{2}\omega_{jk}\,g^{ki}
\tag{6.1}
$$
performs the counterclockwise rotation of tangent vectors by the angle π/2 about the normal vector n.

Proof. Let a be a vector tangent to the surface at some point and let n be the unit normal vector at that point. Then the rotated vector b = Θ(a) is produced by the vectorial product:
$$
b=[n,\,a].
\tag{6.2}
$$
Let's substitute the expression given by the formula (4.3) from Chapter IV for the vector n into (6.2). Then let's expand the vector a in the basis E1, E2:
$$
a=a^1\cdot E_1+a^2\cdot E_2.
\tag{6.3}
$$
Upon substituting (6.3) into (6.2) we get
$$
b=\xi_D\sum_{j=1}^{2}\frac{[[E_1,E_2],\,E_j]}{|[E_1,E_2]|}\cdot a^j.
\tag{6.4}
$$
In order to calculate the denominator in the formula (6.4) we use the well-known formula from analytical geometry (see [4]):
$$
|[E_1,E_2]|^2=\det\begin{pmatrix}(E_1\,|\,E_1)&(E_1\,|\,E_2)\\ (E_2\,|\,E_1)&(E_2\,|\,E_2)\end{pmatrix}=\det g.
$$
As for the numerator in the formula (6.4), here we use the no less known formula for the double vectorial product:
$$
[[E_1,E_2],\,E_j]=(E_1\,|\,E_j)\cdot E_2-(E_2\,|\,E_j)\cdot E_1
=g_{1j}\cdot E_2-g_{2j}\cdot E_1.
$$
Taking into account these two formulas, we can write (6.4) as follows:
$$
b=\xi_D\sum_{j=1}^{2}\frac{g_{1j}\cdot E_2-g_{2j}\cdot E_1}{\sqrt{\det g}}\cdot a^j.
\tag{6.5}
$$
Using the components of the area tensor (5.14), now we can rewrite (6.5) in a more compact and substantially more elegant form:
$$
b=\sum_{i=1}^{2}\Bigl(\,\sum_{j=1}^{2}\sum_{k=1}^{2}\omega^{ki}\,g_{kj}\,a^j\Bigr)\cdot E_i.
$$
From this formula it is easy to extract the formula (6.1) for the components of the
linear operator Θ relating b and a. The theorem is proved.
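The properties stated in the theorem can be confirmed numerically: for any sample metric, the matrix θ^i_j built by (6.1) squares to −1 and preserves lengths and orthogonality measured by g. A sketch with a metric of our own choosing:

```python
# The operator field Theta from (6.1): theta^i_j = sum_k omega_{jk} g^{ki}.
# For any metric it should act as the counterclockwise rotation by 90 degrees
# in the tangent plane: Theta^2 = -1, and Theta preserves lengths measured
# by g.  The sample metric below is our own choice.
import math

g = [[3.0, 0.4], [0.4, 2.0]]
D = g[0][0] * g[1][1] - g[0][1] * g[1][0]
g_inv = [[ g[1][1] / D, -g[0][1] / D],
         [-g[1][0] / D,  g[0][0] / D]]
omega = [[0.0, math.sqrt(D)], [-math.sqrt(D), 0.0]]   # omega_{jk} with xi_D = 1

# theta[i][j] = theta^i_j
theta = [[sum(omega[j][k] * g_inv[k][i] for k in range(2))
          for j in range(2)] for i in range(2)]

# Theta^2 = -identity
for i in range(2):
    for j in range(2):
        tij = sum(theta[i][k] * theta[k][j] for k in range(2))
        assert abs(tij - (-1.0 if i == j else 0.0)) < 1e-12

# |Theta(a)| = |a| and (Theta(a) | a) = 0 in the metric g
a = [0.8, -1.3]
b = [sum(theta[i][j] * a[j] for j in range(2)) for i in range(2)]
dot = lambda u, v: sum(g[i][j] * u[i] * v[j] for i in range(2) for j in range(2))
assert abs(dot(b, b) - dot(a, a)) < 1e-12 and abs(dot(a, b)) < 1e-12
print("Theta is a rotation by 90 degrees")
```

In matrix language the check states that Θ has trace 0 and determinant 1, which together with Θ² = −1 characterizes a rotation by a right angle.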
The operator field Θ is the contraction of the tensor product of two fields ω
and g. The autoparallelism of the latter ones means that Θ is also an autoparallel
field, i. e. ∇Θ = 0.
We use the autoparallelism of Θ in the following way. Let’s choose some
parametric curve γ on a surface and perform the parallel translation of some unit
vector a along this curve. As a result we get the vector-valued function a(t) on the
curve satisfying the equation of parallel translation ∇ta = 0 (see formula (4.8)).
Then we define the vector-function b(t) on the curve as follows:
$$
b(t)=\Theta(a(t)).
\tag{6.6}
$$
From (6.6) we derive ∇_t b = (∇_t Θ)(a) + Θ(∇_t a) = 0. This means that the
function (6.6) also satisfies the equation of parallel translation. It follows from
the autoparallelism of Θ and from the items (2) and (3) in the theorem 4.2. The
vector-functions a(t) and b(t) determine two mutually perpendicular unit vectors
at each point of the curve. There are the following obvious relationships for them:
$$
\Theta(a)=b,\qquad \Theta(b)=-a.
\tag{6.7}
$$
Let’s remember for the further use that a(t) and b(t) are obtained by parallel
translation of the vectors a(0) and b(0) along the curve from its initial point.
Now let’s consider some inner vector field x on the surface (it is tangent to the
surface in the outer representation). If the field vectors x(u1 , u2) are nonzero at
each point of the surface, they can be normalized to the unit length: x → x/|x|.
Therefore, we shall assume x to be a field of unit vectors: |x| = 1. At the points
of the curve γ this field can be expanded in the basis of the vectors a and b:
$$
x=\cos(\varphi)\cdot a+\sin(\varphi)\cdot b.
\tag{6.8}
$$
The function ϕ(t) determines the angle between the vector a and the field vector x
measured from a to x in the counterclockwise direction. The change of ϕ describes
the rotation of the vectors during their parallel translation along the curve.
Let’s apply the covariant differentiation ∇t to the relationship (6.8) and take
into account that both vectors a and b satisfy the equation of parallel translation:
Here we used the fact that the covariant derivative ∇t for the scalar coincides with
the regular derivative with respect to t. In particular, we have ∇_t ϕ = ϕ̇. Now we apply the operator Θ to both sides of (6.8) and take into account (6.7):
$$
\Theta(x)=\cos(\varphi)\cdot b-\sin(\varphi)\cdot a.
\tag{6.10}
$$
Now we calculate the scalar product of Θ(x) from (6.10) and ∇_t x from (6.9). Remembering that a and b are two mutually perpendicular unit vectors, we get
$$
\bigl(\Theta(x)\,\bigl|\,\nabla_t x\bigr)=\dot\varphi.
\tag{6.11}
$$
Let's write the equality (6.11) in coordinate form. The vector-function x(t) on the curve is the restriction of the vector field x, therefore, the covariant derivative ∇_t x is the contraction of the covariant differential ∇x with the tangent vector of the curve (see formula (4.19)). Hence, we have
$$
\dot\varphi=\sum_{q=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2} x^i\,\omega_{ij}\,\nabla_q x^j\,\dot u^q.
\tag{6.12}
$$
Here in deriving (6.12) from (6.11) we used the formula (6.1) for the components
of the operator field Θ.
Let’s discuss the role of the field x in the construction described just above.
The vector field x is chosen as a reference mark relative to which the rotation
angle of the vector a is measured. This way of measuring the angle is relative.
Changing the field x, we would change the value of the angle ϕ. We have to admit
this inevitable fact since tangent planes to the surface at different points are not
parallel to each other and we have no preferable direction relative to which we
could measure the angles on all of them.
There is a case where we can exclude the above uncertainty of the angle. Let’s
consider a closed parametric contour γ on the surface. Let [0, 1] be the range over
which the parameter t runs on such a contour. Then x(0) and x(1) do coincide, since they represent the same field vector at the point with the coordinates u1(0), u2(0).
Unlike x(t), the function a(t) is not the restriction of a vector field to a curve γ.
Therefore, the vectors a(0) and a(1) can be different. This is an important feature of the inner parallel translation that distinguishes it from the parallel translation in the Euclidean space E.
In the case of a closed contour γ the difference ϕ(1) − ϕ(0) characterizes the angle by which the vector a turns as a result of the parallel translation along the
contour. Note that measuring the angle from x to a is opposite to measuring it
from a to x in the formula (6.8). Therefore, taking for positive the angle measured
from x in the counterclockwise direction, we should take for the increment of the
angle gained during the parallel translation along γ the following quantity:
$$
\triangle\varphi=\varphi(0)-\varphi(1)=-\int\limits_{0}^{1}\dot\varphi\,dt.
$$
Substituting (6.12) into this formula, we get
$$
\triangle\varphi=-\int\limits_{0}^{1}\Bigl(\,\sum_{q=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2}x^i\,\omega_{ij}\,\nabla_q x^j\,\dot u^q\Bigr)\,dt.
\tag{6.13}
$$
Comparing (6.13) with (5.3), we see that (6.13) can now be written in the form of a path integral of the second kind:
$$
\triangle\varphi=-\oint\limits_{\gamma}\sum_{q=1}^{2}\sum_{i=1}^{2}\sum_{j=1}^{2}x^i\,\omega_{ij}\,\nabla_q x^j\,du^q.
\tag{6.14}
$$
Assume that the contour γ outlines some connected and simply connected fragment Ω on the surface. Then for this fragment Ω we can apply to (6.14) the Green's formula written in the form of (5.19):
$$
\triangle\varphi=-\xi_D\iint\limits_{\Omega}\sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{p=1}^{2}\sum_{q=1}^{2}
\omega^{ij}\,\nabla_i\bigl(x^p\,\omega_{pq}\,\nabla_j x^q\bigr)\,\sqrt{\det g}\;du^1du^2.
$$
If the direction of the contour is in agreement with the orientation of the surface, then the sign factor ξ_D can be omitted:
$$
\triangle\varphi=-\iint\limits_{\Omega}\sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{p=1}^{2}\sum_{q=1}^{2}
\Bigl(x^p\,\omega^{ij}\,\omega_{pq}\,\nabla_i\nabla_j x^q
+\nabla_i x^p\,\omega^{ij}\,\omega_{pq}\,\nabla_j x^q\Bigr)\sqrt{\det g}\;du^1du^2.
\tag{6.15}
$$
Let's show that the term ∇_i x^p ω^{ij} ω_{pq} ∇_j x^q in (6.15) yields zero contribution to the value of the integral. This feature is specific to the two-dimensional case, where we have the following relationship:
$$
\omega^{ij}\,\omega_{pq}=\delta^{i}_{p}\,\delta^{j}_{q}-\delta^{i}_{q}\,\delta^{j}_{p}.
\tag{6.16}
$$
The proof of the formula (6.16) is analogous to the proof of the formula (8.23) in Chapter IV. It is based on the skew-symmetry of d^{ij} and d_{pq}.
Let's complement the inner vector field x of the surface with the other inner vector field y = Θ(x). The vectors x and y form a pair of mutually perpendicular unit vectors in the tangent plane. For their components we have
$$
\sum_{q=1}^{2}x_q\,x^q=1,\qquad
x_i=\sum_{k=1}^{2}g_{ik}\,x^k,\qquad
y_i=\sum_{k=1}^{2}g_{ik}\,y^k,
\tag{6.17}
$$
$$
\sum_{q=1}^{2}\nabla_k x_q\,x^q=0,\qquad
y^q=\sum_{p=1}^{2}\omega^{pq}\,x_p,\qquad
y_i=\sum_{j=1}^{2}\omega_{ji}\,x^j.
\tag{6.18}
$$
The first relationship (6.17) expresses the fact that |x| = 1, the other two relationships (6.17) determine the covariant components x_i and y_i of x and y. The first relationship (6.18) is obtained by differentiating the first relationship (6.17), the second and the third relationships (6.18) express the vectorial relationship y = Θ(x).
Let's multiply (6.16) by ∇_k x^q x_j x^p and then sum up over q, p, and j taking into account the relationships (6.17) and (6.18):
$$
\nabla_k x^i=\Bigl(\,\sum_{q=1}^{2}y_q\,\nabla_k x^q\Bigr)\,y^i=z_k\,y^i.
\tag{6.19}
$$
Here z_k denotes the sum enclosed in the round brackets in (6.19).
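The relationship (6.16) itself is easy to verify by direct enumeration of the indices. The factors ξ_D and √(det g) cancel between ω^{ij} and ω_{pq}, so the identity reduces to the analogous identity for d^{ij} d_{pq}; the short check below runs through a sample metric anyway:

```python
# Check of the two-dimensional identity (6.16):
# omega^{ij} * omega_{pq} = delta^i_p delta^j_q - delta^i_q delta^j_p.
# The metric below is our own sample; any positive metric works, since
# sqrt(det g) cancels between the upper and lower components.
import math

g = [[1.7, -0.5], [-0.5, 2.2]]
D = g[0][0] * g[1][1] - g[0][1] * g[1][0]
d = [[0.0, 1.0], [-1.0, 0.0]]
omega_lo = [[math.sqrt(D) * d[p][q] for q in range(2)] for p in range(2)]
omega_hi = [[d[i][j] / math.sqrt(D) for j in range(2)] for i in range(2)]

delta = lambda i, j: 1.0 if i == j else 0.0
for i in range(2):
    for j in range(2):
        for p in range(2):
            for q in range(2):
                lhs = omega_hi[i][j] * omega_lo[p][q]
                rhs = delta(i, p) * delta(j, q) - delta(i, q) * delta(j, p)
                assert abs(lhs - rhs) < 1e-12
print("identity (6.16) confirmed")
```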
Now we apply the relationship (8.5) from Chapter IV to the field x. Moreover, we take into account the formulas (8.24) and (9.9) from Chapter IV:
$$
\triangle\varphi=\iint\limits_{\Omega}\sum_{i=1}^{2}\sum_{j=1}^{2} K\,g_{ij}\,x^i\,x^j\,\sqrt{\det g}\;du^1du^2.
$$
Remember that the vector field x was chosen to be of the unit length from the very beginning. Therefore, upon summing over the indices i and j, only the Gaussian curvature remains under the integral:
$$
\triangle\varphi=\iint\limits_{\Omega}K\,\sqrt{\det g}\;du^1du^2.
\tag{6.20}
$$
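The formula (6.20) can be illustrated numerically. On the unit sphere (K = 1) we parallel translate a tangent vector along the latitude circle θ = θ₀ by integrating the parallel-translation equation, and compare the resulting holonomy angle with the integral of K over the spherical cap, which equals 2π(1 − cos θ₀). The coordinates, contour, and integrator below are our own setup, not taken from the book:

```python
# Parallel translation along the latitude circle theta = theta0 of the unit
# sphere, using da^k/dt + Gamma^k_{ij} du^i/dt a^j = 0, compared with the
# area integral of K = 1 over the spherical cap: 2*pi*(1 - cos(theta0)).
import math

theta0 = 1.0                       # latitude of the contour (our choice)
omega = 2 * math.pi                # d(phi)/dt for t in [0, 1]

def rhs(a):
    # nonzero Christoffel symbols of g = diag(1, sin^2 theta):
    # Gamma^theta_{phi phi} = -sin*cos,  Gamma^phi_{theta phi} = cot(theta)
    s, c = math.sin(theta0), math.cos(theta0)
    return (s * c * omega * a[1], -(c / s) * omega * a[0])

# fourth-order Runge-Kutta integration of the parallel translation equation
a = (1.0, 0.0)                     # initial vector a(0) = E_theta
N = 20000
h = 1.0 / N
for _ in range(N):
    k1 = rhs(a)
    k2 = rhs((a[0] + h / 2 * k1[0], a[1] + h / 2 * k1[1]))
    k3 = rhs((a[0] + h / 2 * k2[0], a[1] + h / 2 * k2[1]))
    k4 = rhs((a[0] + h * k3[0], a[1] + h * k3[1]))
    a = (a[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
         a[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# angle between a(1) and a(0) in the metric g = diag(1, sin^2 theta0)
g22 = math.sin(theta0) ** 2
cos_angle = a[0] / math.sqrt(a[0] ** 2 + g22 * a[1] ** 2)
angle = math.acos(max(-1.0, min(1.0, cos_angle)))

cap_integral = 2 * math.pi * (1 - math.cos(theta0))   # integral of K over cap
assert abs(angle - cap_integral) < 1e-4
print("holonomy angle:", angle, "area integral:", cap_integral)
```

In orthonormal components the translated vector rotates at the constant rate 2π cos θ₀, so the deficit against a full turn is exactly the cap integral; the numeric check confirms this to high accuracy.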
Now let's consider some surface on which a connected and simply connected domain Ω outlined by a piecewise continuously differentiable contour γ is given (see Fig. 6.1). In other words, we have a polygon with curvilinear sides on the surface. The Green's formula (5.1) is applicable to a piecewise continuously differentiable contour, therefore, the formula (6.20) is valid in this case. The parallel translation of the vector a along a piecewise continuously differentiable contour is performed step by step. The result of translating the vector a along a side of the curvilinear polygon γ is used as the initial data for the equations of parallel translation on the succeeding side. Hence, ϕ(t) is a continuous function, though its derivative can be discontinuous at the corners of the polygon.

Let's introduce the natural parametrization t = s on the sides of the polygon γ. Then we have the unit tangent vector τ on them. The vector-function τ(t) is continuous on the sides, except for the corners, where τ(t) abruptly turns to the angles △ψ1, △ψ2, ..., △ψn (see Fig. 6.1). Denote by ψ(t) the angle between the vector τ(t) and the vector a(t) being parallel translated along γ. We measure this angle from a to τ taking for positive the counterclockwise direction. The function ψ(t) is a continuously differentiable function on γ except for the corners. At these points it has jump discontinuities with the jumps △ψ1, △ψ2, ..., △ψn.
Let's calculate the derivative of the function ψ(t) out of its discontinuity points. Applying the considerations associated with the expansions (6.8) and (6.9) to the vector τ(t), for such derivative we find:
$$
\dot\psi=\bigl(\Theta(\tau)\,\bigl|\,\nabla_t\tau\bigr).
\tag{6.21}
$$
Then let's calculate the components of the vector ∇_t τ in the inner representation of the surface (i. e. in the basis of the frame vectors E1 and E2):
$$
\nabla_t\tau^k=\ddot u^k+\sum_{i=1}^{2}\sum_{j=1}^{2}\Gamma^{k}_{ij}\,\dot u^i\,\dot u^j.
\tag{6.22}
$$
Keeping in mind that t = s is the natural parameter on the sides of the polygon γ, we compare (6.22) with the formula (2.5) for the geodesic curvature and with the formula (2.4). As a result we get the equality
$$
\nabla_t\tau=k_{\mathrm{geod}}\cdot n_{\mathrm{inner}}.
\tag{6.23}
$$
But n_inner is a unit vector in the tangent plane perpendicular to the vector τ. The same is true for the vector Θ(τ) in the scalar product (6.21). Hence, the unit vectors n_inner and Θ(τ) are collinear. Let's denote by ε(t) the sign factor equal to the scalar product of these vectors:
$$
\varepsilon(t)=\bigl(\Theta(\tau)\,\bigl|\,n_{\mathrm{inner}}\bigr)=\pm 1.
\tag{6.24}
$$
Then, combining (6.21), (6.23), and (6.24), we obtain
$$
\dot\psi=\varepsilon\,k_{\mathrm{geod}}.
\tag{6.25}
$$
Let's find the increment of the function ψ(t) gained as a result of a round trip along the whole contour. It is composed of two parts: the integral of (6.25) and the sum of the jumps at the corners of the polygon γ:
$$
\triangle\psi=\oint\limits_{\gamma}\varepsilon\,k_{\mathrm{geod}}\,ds+\sum_{i=1}^{n}\triangle\psi_i.
\tag{6.26}
$$
Upon completing the round trip along the contour γ the tangent vector τ returns to its initial position. Hence, the total angle of its rotation, composed of the increments △ϕ and △ψ, should be an integer multiple of the round angle:
$$
\triangle\varphi+\triangle\psi=2\pi\,r.
\tag{6.27}
$$
Actually, the value of the number r in the formula (6.27) is equal to unity. Let's prove this fact by means of the following considerations: we perform a continuous deformation of the surface shown in Fig. 6.1, flattening it to a plane, then we continuously deform the contour γ to a circle. During such a continuous deformation the left hand side of the equality (6.27) changes continuously, while the right hand side can change only in discrete jumps. Therefore, under the above continuous deformation of the surface and the contour both sides of (6.27) do not change at all. On a circle the total angle of rotation of the unit tangent vector is calculated explicitly; it is equal to 2π. Hence, r = 1. We take into account this circumstance when substituting (6.20) and (6.26) into the formula (6.27):
$$
\iint\limits_{\Omega}K\,\sqrt{\det g}\;du^1du^2
+\oint\limits_{\gamma}\varepsilon\,k_{\mathrm{geod}}\,ds
+\sum_{i=1}^{n}\triangle\psi_i=2\pi.
\tag{6.28}
$$
The formula (6.28) is the content of the following theorem, which is known as the Gauss-Bonnet theorem.

Theorem 6.2. The sum of the integral of the Gaussian curvature over a domain Ω outlined by a curvilinear polygon γ on a surface, the integral of the geodesic curvature (taken with the sign factor ε) along the sides of γ, and the external angles at its corners is equal to 2π.

The case of a geodesic triangle on a surface of the constant Gaussian curvature K is especially simple. In this case k_geod = 0 on the sides, while the jumps are expressed through the inner angles of the triangle: △ψ_i = π − α_i. Then (6.28) yields
$$
\alpha_1+\alpha_2+\alpha_3=\pi+K\,S,
$$
where K S is the product of the Gaussian curvature of the surface and the area of the triangle.
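A concrete instance of this relation is the octant of the unit sphere, a geodesic triangle with three right angles:

```python
# The octant of the unit sphere is a geodesic triangle with three right
# angles: its sides lie on great circles, K = 1, and its area is one eighth
# of 4*pi.  This instantiates the relation alpha1 + alpha2 + alpha3 = pi + K*S.
import math

K = 1.0                      # Gaussian curvature of the unit sphere
S = 4 * math.pi / 8          # area of one octant
angles = [math.pi / 2] * 3   # three right angles

assert abs(sum(angles) - (math.pi + K * S)) < 1e-12
print("sum of angles:", sum(angles), "= pi + K*S =", math.pi + K * S)
```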
A philosophic remark. By measuring the sum of angles of some sufficiently big triangle we can decide whether our world is flat or curved. This is not a joke. The idea of a curved space is generally accepted in the modern views on the structure of the world.
REFERENCES.
AUXILIARY REFERENCES.