
CM111A Calculus I

Compact Lecture Notes


ACC Coolen
Department of Mathematics, King's College London
Version of Sept 2011
Contents

1 Introduction  5
  1.1 A bit of history ...  5
    1.1.1 Birth of modern science and of calculus
          Stage I, 1500–1630: from speculation to science ...  5
    1.1.2 Birth of modern science and of calculus
          Stage II, 1630–1680: science is written in the language of mathematics!  8
    1.1.3 Birth of modern science and of calculus
          Stage III, around 1680: how to speak the language of mathematics!  9
  1.2 Style of the course  12
  1.3 Revision of some elementary mathematics  13
    1.3.1 Numbers  13
    1.3.2 Powers of real numbers  15
    1.3.3 Solving quadratic equations  16
    1.3.4 Functions, inverse functions, and graphs  16
    1.3.5 Exponential function, logarithm, laws for logarithms  18
    1.3.6 Trigonometric functions  20

2 Proof by induction  22

3 Complex numbers  25
  3.1 Introduction and definition  25
  3.2 Elementary properties of complex numbers  26
  3.3 Absolute value and division  27
  3.4 The complex plane (Argand diagram)  28
    3.4.1 Complex numbers as points in a plane  28
    3.4.2 Polar coordinates  29
    3.4.3 The exponential form of numbers on the unit circle  31
  3.5 Complex numbers in exponential notation  33
    3.5.1 Definition and general properties  33
    3.5.2 Multiplication and division in exponential notation  34
    3.5.3 The argument of a complex number  35
  3.6 De Moivre's Theorem  37
    3.6.1 Statement and proof  37
    3.6.2 Applications  38
  3.7 Complex equations  39

4 Trigonometric and hyperbolic functions  41
  4.1 Definitions of trigonometric functions  41
    4.1.1 Definition of sine and cosine  41
    4.1.2 Elementary values  44
    4.1.3 Related functions  44
    4.1.4 Inverse trigonometric functions  45
  4.2 Elementary properties of trigonometric functions  48
    4.2.1 Symmetry properties  48
    4.2.2 Addition formulae  50
    4.2.3 Applications of addition formulae  51
    4.2.4 The tan(θ/2) formulae  53
  4.3 Definitions of hyperbolic functions  54
    4.3.1 Definition of hyperbolic sine and hyperbolic cosine  54
    4.3.2 General properties and special values  55
    4.3.3 Connection with trigonometric functions  57
    4.3.4 Applications of connection with trigonometric functions  57
    4.3.5 Inverse hyperbolic functions  58

5 Functions, limits and differentiation  62
  5.1 Introduction  62
    5.1.1 Rate of change, tangent of a curve  62
    5.1.2 Finding tangents and velocities: why we need limits  63
  5.2 The limit  66
    5.2.1 Left and right limits  66
    5.2.2 Asymptotics: limits involving infinity  67
    5.2.3 When left/right limits exist and are identical  68
    5.2.4 Rules for limits of composite expressions  69
    5.2.5 Examples  70
  5.3 Differentiation  72
    5.3.1 Derivatives of functions  72
    5.3.2 Rules for derivatives of composite expressions  73
    5.3.3 Derivatives of implicit functions  76
    5.3.4 Applications of derivatives: sketching graphs  79

6 Integration  80
  6.1 Introduction  80
    6.1.1 Area under a curve  80
    6.1.2 Examples of integrals calculated via staircases  83
    6.1.3 Fundamental theorems of calculus: integration vs differentiation  88
    6.1.4 Indefinite and definite integrals, and other conventions  90
  6.2 Techniques of integration  91
    6.2.1 List of elementary integrals and general methods for reduction  92
    6.2.2 Examples: integration by substitution  94
    6.2.3 Examples: integration by parts  96
    6.2.4 Further tricks: recursion formulae  98
    6.2.5 Further tricks: differentiation with respect to a parameter  100
    6.2.6 Further tricks: partial fractions  102
  6.3 Some simple applications  106
    6.3.1 Calculation of surface areas  106
    6.3.2 Calculation of volumes of revolution  107
    6.3.3 Calculation of the length of curves  109

7 Taylor's theorem and series  112
  7.1 Introduction to series and questions of convergence  112
    7.1.1 Series: notation and elementary properties  112
    7.1.2 Series: convergence criteria  113
    7.1.3 Power series: notation and elementary properties  114
  7.2 Taylor's theorem  117
    7.2.1 Expression for the coefficients of power series  117
    7.2.2 Taylor series around x = 0  119
    7.2.3 Taylor series around x = a  120
  7.3 Examples  121
    7.3.1 Series expansions for standard functions  121
    7.3.2 Indirect methods for finding Taylor series  122
  7.4 L'Hôpital's rule  124

8 Exercises  125
1. Introduction
1.1. A bit of history ...
1.1.1. Birth of modern science and of calculus
Stage I, 1500–1630: from speculation to science ...

Ptolemy of Alexandria, 2nd century AD:
  style of the ancient Greeks: no experiments, just logical thought and elegance
  published the Almagest (summary of astronomy, based on 500 years of Greek
  astronomical and cosmological thinking)
  earth is the centre of the universe
  complicated model of spheres carrying the heavenly bodies, moving themselves in circles

Nicolaus Copernicus, 1473–1543:
  problems with the motion of the moon ...
  published De Revolutionibus: sun-centred universe, with the moon orbiting around the earth
  Catholic Church: put De Revolutionibus on the Index of banned books
  (it stayed on the Index until 1835!)
Tycho Brahe, 1546–1601:
  The genius observer ...
  First systematic and comprehensive measurement of the trajectories of the moon,
  the planets, the comets, and the stars,
  over many years and with unrivaled precision!
  Compiled huge amounts of data
  Did not himself believe Copernicus' ideas ...
  (lost his nose in a duel while a student in 1566)

Johannes Kepler, 1571–1630:
  The genius in analyzing data ...
  Believed Copernicus, but could not observe anything himself (poor eyesight ...)
  Developed further models of a sun-centred universe, with spheres within spheres
  Became Brahe's assistant in 1599, and discovered quantitative laws
  based on Tycho Brahe's data
  published Astronomia Nova in 1609, Harmonice Mundi in 1619,
  and the Epitome of Copernican Astronomy (3 volumes) 1618–1621
Kepler's First Law (1605):
the orbit of each planet is an ellipse, with the sun at one of the two foci
Kepler's Second Law (1602):
a line joining the sun to an orbiting planet sweeps out equal areas in equal times
Kepler's Third Law (1618):
the square of a planet's orbital period is proportional to the cube of its distance to the sun
1.1.2. Birth of modern science and of calculus
Stage II, 1630–1680: science is written in the language of mathematics!

Galileo Galilei, 1564–1642:
  the wrangler, loved arguments ...
  the first real scientist:
  (i) state a hypothesis,
  (ii) devise an experiment to test it,
  (iii) carry out the experiment,
  (iv) accept or reject the hypothesis
  always worried about money (sisters' dowries ...)
  worked on inventions to get rich (thermometer, calculator)
  interested in the movement of objects
  constructed an improved telescope in 1609: new observations all supported Copernicus ...
  published the Dialogue on the Two Chief World Systems in 1632
  (Salviati vs Simplicio, with Sagredo as impartial commentator)
  it was suggested that Pope Urban VIII was the simpleton ...
  1633: show trial by the Inquisition; Galileo (69, and fearing torture):
  "I abjure, curse and detest my errors"
  wrote the Discourses and Mathematical Demonstrations Concerning Two New Sciences
  (the first modern scientific textbook), smuggled out of Italy and published in 1638

René Descartes, 1596–1650:
  1637: Discours de la Méthode pour bien conduire
  la raison et chercher la Vérité dans les Sciences
  invented Cartesian coordinates:
  each position in space represented by three numbers
  introduced the letters x, y, z to denote
  unknown quantities in mathematical problems
  published the Principia Philosophiae (1644)
1.1.3. Birth of modern science and of calculus
Stage III, around 1680: how to speak the language of mathematics!

(i) state a hypothesis,
(ii) devise an experiment to test it,
(iii) carry out the experiment,
(iv) accept or reject the hypothesis

The problem in making this work in practice:
to test hypotheses on forces and movements of objects, one needs to be
able to calculate the trajectories that would be caused by the assumed forces ...

1673, Christiaan Huygens: the outward force on an object
in a circular orbit of radius R is proportional to 1/R^2

1674, Robert Hooke: an object that feels no force
will move along a straight line
(Newton's first law of motion ...)

1684, at the Royal Society ...
Edmond Halley, Christopher Wren, Robert Hooke
hypothesis: the sun attracts planets at distance R with a force proportional to 1/R^2
Is it possible to derive the observed motion
of the planets from this inverse square law?
1684, somewhat later ... Halley visits Isaac Newton.
According to Newton's friend De Moivre:
"Dr Halley came to visit him in Cambridge, after they had been some time together the
Dr asked him what he thought the Curve would be that would be described by the Planets
supposing the force of attraction towards the Sun to be reciprocal to the square of their
distance from it. Sir Isaac replied immediately it would be an Ellipsis, the Dr struck with
joy & amazement asked him how he knew it, why saith he, I have calculated it, whereupon
Dr Halley asked him for the calculation without any further delay, Sr Isaac looked among
his papers, but could not find it, but he promised him to renew it, & then send it him."
The two parents of Calculus:

Isaac Newton, 1642–1727:
  developed mechanics, calculus, and the theory of light before the age of 30 ...
  then spent 20 years of his life on alchemy ...
  1687: published the Philosophiae Naturalis Principia Mathematica
  1704: published the Opticks
  brilliant, but an obsessive and nasty piece of work ...
  a great re-writer of history (in his own favour ...)
  e.g. Hooke
  (no references in the Principia or the Opticks!
  "... by standing on the shoulders of Giants ...",
  the move of the Royal Society and a missing portrait)
  or Leibniz
  (the "independent" commission of the Royal Society)

Gottfried Leibniz, 1646–1716:
  invented calculus independently of Newton
  (although slightly later)
  Leibniz' notation is more transparent;
  it is in fact what we use today!
Newton and his successors established the principles and the mode of work
for all quantitative sciences (physics, biology, economics, etc.):

science: no longer descriptive, but aimed at finding the (usually mathematical) laws
underlying the observed phenomena
one's degree of understanding of an area of science is measured by the extent to which one
can predict new phenomena from the discovered laws

Galileo's principles define the procedure for finding the underlying laws.
They now (i.e. after Newton and Leibniz) take the form:
(i) state a hypothesis,
(ii) devise an experiment to test it,
(iii) calculate the predicted outcome of the experiment from the hypothesis,
(iv) carry out the experiment,
(v) accept or reject the hypothesis
When there are several distinct hypotheses that are all consistent with the available data:
select the simplest hypothesis (Occam's Razor)

Side effects of the scientific revolution:
the industrial revolution
a mechanistic view of the universe: nature is governed by differential equations
(i) the solution depends only on initial conditions
(ii) no free will
(iii) no divine intervention required to keep the world going ...
of course that all changed around 1920 ...
1.2. Style of the course
Description of contents:
Calculus: the related areas of differentiation, integration, sequences and series, which are
united in their reliance on the idea of limits.
Additional topics: complex numbers and trigonometric functions. This simplifies
the discussion of the classical functions of calculus (i.e. trigonometric, hyperbolic,
exponential, logarithmic) and their relations.

Relation between calculus and analysis:
Calculus:
intuitive and operational ideas, no emphasis on strict step-by-step logical derivation
e.g. derivative as limit of a ratio, integral as limit of a sum,
initially (Newton, Leibniz) without a rigorous definition of limit.
Analysis:
logical, rigorous proofs of the intuitive ideas of calculus.
stage 1 (calculus): find a method to crack the problem
stage 2 (analysis): determine carefully why and when the method works
(this order of developing maths continues today: see e.g. path integrals in particle physics)

Rationale behind this division of work:
It is hard to prove a theorem without being already familiar with the unproven (but strongly
believed in) result.
It is hard to understand the need for the rigorous style of analysis until one has sufficient
experience with calculus to realize the need to prove theorems, and to appreciate the
beauty and elegance of such a logical formal treatment.
Too much initial attention to the details of proofs while learning a subject often conceals
the relative simplicity of the result.

Variation in approaches to calculus:
there are different but mathematically equivalent routes via which to develop calculus,
e.g. alternative definitions of trigonometric functions:
(i) introduce series,
(ii) define sin and cos as power series,
(iii) prove the properties of sin and cos through study of the series
since all are equivalent, we will jump between definitions, depending on the problem
you should, however, never be satisfied by results without any derivation; we will try to
give as many (full or partial) proofs as feasible within the time constraints of the course
1.3. Revision of some elementary mathematics
1.3.1. Numbers
First we start with some terminology and definitions:

definition: set: an unordered collection of objects (elements)
  e.g. S = {a, b, c, d}
  no ordering: {a, b, c, d} = {b, a, d, c} etc.
  set membership: a ∈ S (a belongs to the set S)

definition: ∅ = { } (the empty set)
definition: ℕ = {1, 2, 3, . . .} (natural numbers)
definition: ℤ = {. . . , −3, −2, −1, 0, 1, 2, 3, . . .} (integer numbers)
definition: ℤ⁺ = {1, 2, 3, . . .} = ℕ (positive integers)
definition: ℤ⁻ = {−1, −2, −3, . . .} (negative integers, China 1500 BC)
definition: ℚ = the set of all numbers of the form p/q with p, q ∈ ℤ, q ≠ 0 (rational numbers)
  e.g. 1/2 ∈ ℚ but 1/2 ∉ ℤ;  27/3 ∈ ℚ and 27/3 ∈ ℤ
definition: ℝ = the set of all rational and irrational numbers (real numbers)
  e.g. π ∈ ℝ, π ∉ ℚ;  √2 ∈ ℝ, √2 ∉ ℚ;  √64 ∈ ℝ, √64 ∈ ℕ

definition: subsets of sets: A ⊆ B if and only if every a ∈ A obeys a ∈ B
note: ℕ ⊆ ℤ ⊆ ℚ ⊆ ℝ

Logical symbols: ∧ (AND), ∨ (OR)
  S_1 ∧ S_2 : both statements S_1 and S_2 are true
  S_1 ∨ S_2 : either S_1 is true, or S_2 is true, or both are true
Logical consequences: S_1 ⇒ S_2 means: if statement S_1 is true then also statement S_2 is true
Every element x ∈ ℝ can be represented by a point on the number line:

[Figure: the number line, with the integer points −4, . . . , 4 and the point 3/2 marked]

The set ℝ is an ordered set. Let x, y ∈ ℝ; the ordering symbols are defined as:
  x < y : x is smaller than y, i.e. x lies to the left of y on the number line
  x > y : x is larger than y, i.e. x lies to the right of y on the number line
  x ≤ y : x is smaller than or equal to y
  x ≥ y : x is larger than or equal to y

Note (should be obvious):
  x < y ∧ x > y : no such x, y (i.e. no such x, y ∈ ℝ exist)
  x < y ∧ x ≥ y : no such x, y (i.e. no such x, y ∈ ℝ exist)
  x > y ∧ x ≤ y : no such x, y (i.e. no such x, y ∈ ℝ exist)
  x ≤ y ∧ x ≥ y ⇒ x = y

Interval: a segment of the number line
Closed interval: line segment that includes both end points
  [a, b] = {x ∈ ℝ | a ≤ x ≤ b}
Open interval: line segment that includes neither end point
  (a, b) = {x ∈ ℝ | a < x < b}
Semi-open (or semi-closed) interval: exactly one end point is included
  [a, b) = {x ∈ ℝ | a ≤ x < b}   or   (a, b] = {x ∈ ℝ | a < x ≤ b}

Unions and intersections:
  union of A and B : A ∪ B = {x | x ∈ A ∨ x ∈ B}
  intersection of A and B : A ∩ B = {x | x ∈ A ∧ x ∈ B}
Example:
  (−5, 4) ∪ [2, 5] = (−5, 5]       (−5, 4) ∩ [2, 5] = [2, 4)
1.3.2. Powers of real numbers
In an expression of the form x^n we call x the base and n the power.

First define natural powers of real numbers. Let n ∈ ℕ and x ∈ ℝ, x ≠ 0:
  definition: x^0 = 1
  definition: x^n = x·x· . . . ·x (n-fold product, for n > 0)

Generalize to integer powers, by giving a definition for negative powers. Let n ∈ ℕ, n > 0:
  definition: x^(−n) = 1/(x·x· . . . ·x) (n-fold product in the denominator)

Three basic laws of manipulation; let x ∈ ℝ and n, m ∈ ℤ:
  first law:   x^m · x^n = x^(m+n)
  second law:  x^m / x^n = x^(m−n)
  third law:   (x^m)^n = x^(mn)
(i) prove these laws from the definitions, by checking the different cases for the signs of m and n
(ii) note that the second law can be derived from the first and the third

Generalize to fractional powers of positive real numbers x ∈ ℝ⁺, by giving a definition for
powers of the form 1/n with n ∈ ℕ, n > 0:
  definition: x^(1/n) = ⁿ√x (the n-th root of x)
Here ⁿ√x is the number y ∈ ℝ⁺ with the property that y^n = x.
Verify that the above laws of manipulation still hold in the case of fractional powers.
For example, let q, ℓ, n ∈ ℕ with n > 0:
  a^(ℓ/n) = (ⁿ√a)^ℓ       a^(q+ℓ/n) = a^q · a^(ℓ/n) = a^q · (ⁿ√a)^ℓ

Generalize to real powers of positive real numbers x ∈ ℝ⁺. Each real number y ∈ ℝ can
be approximated to arbitrary accuracy by fractions n/m, where m ∈ ℕ⁺ and n ∈ ℤ. One then
defines x^y by substituting for y this fraction approximation. Let m ∈ ℕ and n ∈ ℤ:
  if n/m is the best approximation of y by a fraction with denominator m,
  then x^(n/m) = (ᵐ√x)^n is our associated approximation of x^y.
Negative real powers are again defined via x^(−y) = 1/x^y, and our laws of manipulation still hold!
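The three laws above are easy to sanity-check numerically; the sketch below uses arbitrarily chosen base and exponents, and is illustrative only, not a proof:

```python
# numerical sanity check of the three power laws (arbitrary test values, not a proof)
x, m, n = 1.7, 5, -3

assert abs(x**m * x**n - x**(m + n)) < 1e-9    # first law:  x^m * x^n = x^(m+n)
assert abs(x**m / x**n - x**(m - n)) < 1e-9    # second law: x^m / x^n = x^(m-n)
assert abs((x**m)**n - x**(m * n)) < 1e-9      # third law:  (x^m)^n = x^(mn)

# fractional powers: x^(1/n) is the n-th root, so (x^(1/n))^n recovers x
a = 64.0
assert abs((a ** (1 / 3)) ** 3 - a) < 1e-9
print("all power laws verified")
```

Note the tolerances: floating-point powers are only computed approximately, so the checks compare up to a small error rather than exactly.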
1.3.3. Solving quadratic equations
Quadratic equations are equations of the following form (or can be reduced to this form), where
x is the unknown quantity to be determined, and a, b, c ∈ ℝ (the coefficients) are given:
  a x^2 + b x + c = 0
Assume a ≠ 0, otherwise the equation reduces to a linear one.
Note: solutions x ∈ ℝ do not always exist!
Methods for solution:

solution by factorization: find d, f, g, h ∈ ℝ such that
  a x^2 + b x + c = (d x + f)(g x + h)
(not always possible!)
The new problem involves two linear expressions: (d x + f)(g x + h) = 0.
Solutions: x = −f/d and x = −h/g

solution by completing the square: find d, f ∈ ℝ such that
  a x^2 + b x + c = a[(x + d)^2 − f]
(always possible!)
The new problem is solved using the square root: (x + d)^2 = f, so x + d = ±√f, so x = −d ± √f.
No solution x ∈ ℝ exists if f < 0.

solution via the general formula:
  x = (−b ± √(b^2 − 4ac)) / 2a
Solutions x ∈ ℝ exist only if b^2 ≥ 4ac.

Try all three methods on the following quadratic equations:
  x^2 + 3x − 10 = 0       6x^2 + 5x = 4       x^2 − 8x = 0       x^2 = 4x − 5
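As a quick illustration of the general formula, here is a small sketch (the helper name `real_roots` is ours, not from the notes):

```python
import math

def real_roots(a, b, c):
    """Solve a*x^2 + b*x + c = 0 over the reals via the general formula."""
    assert a != 0, "for a = 0 the equation reduces to a linear one"
    disc = b * b - 4 * a * c          # discriminant b^2 - 4ac
    if disc < 0:
        return []                     # no solutions x in R
    s = math.sqrt(disc)
    return [(-b + s) / (2 * a), (-b - s) / (2 * a)]

print(real_roots(1, 3, -10))   # x^2 + 3x - 10 = 0  ->  [2.0, -5.0]
print(real_roots(1, -4, 5))    # x^2 = 4x - 5 has no real solutions  ->  []
```

The last practice equation above, x^2 = 4x − 5, is exactly the case b^2 < 4ac where the real-root list comes back empty.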
1.3.4. Functions, inverse functions, and graphs
A function f is a rule (or recipe) that assigns a unique output number f(x) to each input
number x. The set of input values x for which the function is dened is called the domain of f.
The full set of output values that the function can generate when we choose values of x from
its domain is called the range of f. Domain D and range R are indicated in our notation via
f : D R.
Example 1:
let the recipe of a function f be: take any input x ∈ ℝ, add 2 to this input number
We write
  f : ℝ → ℝ       f(x) = x + 2

Example 2:
let the recipe of a function g be: take any input x ∈ [−1, 1] and square it, then subtract 7
We write
  g : [−1, 1] → [−7, −6]       g(x) = x^2 − 7

Note:
a recipe is allowed to take different forms on different intervals, e.g.

Example 3:
  f : [0, ∞) → [0, 12] ∪ (14, 16) ∪ {9}
  f(x) = 3x       for x ∈ [0, 4]
       = 2x + 6   for x ∈ (4, 5)
       = 9        for x ≥ 5
definition:
The inverse f⁻¹ of a function f : D → R is defined by the following:
  f⁻¹ : R → D       f⁻¹(f(x)) = x for all x ∈ D
In words: f⁻¹ restores the original number x after the action of the function f.

Claim:
f⁻¹ cannot exist if there exist two different numbers x_1, x_2 ∈ D with f(x_1) = f(x_2).
Intuitively: how would f⁻¹ select from the two candidates x_1, x_2 which one to restore?
Formal proof: call f(x_1) = y and substitute x_1 and x_2 into the above definition of f⁻¹; one
then finds the simultaneous requirements
  f⁻¹(f(x_1)) = x_1  ⇒  f⁻¹(y) = x_1
  f⁻¹(f(x_2)) = x_2  ⇒  f⁻¹(y) = x_2
The assumption of the existence of f⁻¹ would thus lead to x_1 = x_2, in contradiction with the
starting point x_1 ≠ x_2. Hence f⁻¹ cannot exist.

definition:
A function f : D → R is invertible if and only if f(x_1) ≠ f(x_2) for any two values x_1, x_2 ∈ D
with x_1 ≠ x_2.
How to find the inverse of a given function f?
(i) write y = f(x)
(ii) transpose this formula, to make x the subject (i.e. obtain x = some recipe in y)
(iii) interchange x and y
(iv) result: y = f⁻¹(x); then verify ...
Work out the inverse functions for
  x ∈ ℝ, f(x) = x + 4       x ∈ ℝ, f(x) = 6x + 4       x ∈ ℝ⁺, f(x) = √x
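For instance, for the second exercise f(x) = 6x + 4, steps (i)–(ii) give x = (y − 4)/6, and interchanging the variables yields f⁻¹(x) = (x − 4)/6. A small sketch verifying the defining property f⁻¹(f(x)) = x:

```python
def f(x):
    return 6 * x + 4

def f_inv(x):          # obtained via steps (i)-(iv): y = 6x + 4  =>  x = (y - 4)/6
    return (x - 4) / 6

# verify the defining property f_inv(f(x)) = x on a few sample inputs
for x in [-3.0, 0.0, 2.5, 10.0]:
    assert abs(f_inv(f(x)) - x) < 1e-12
print("f_inv undoes f")
```

Step (iv), "then verify", is exactly what the loop does here.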
1.3.5. Exponential function, logarithm, laws for logarithms
definition:
the exponential function is f(x) = e^x, where e is a special irrational number
(exponential growth)
Properties:
(i) e^x > 0 for all x ∈ ℝ
(ii) e^x increases monotonically,
(iii) from the value 0 as x → −∞, via e^0 = 1 at x = 0, to unbounded growth as x → ∞

Define n! = n(n−1)(n−2)· . . . ·3·2·1 (n factorial).
Equivalent expressions for e = 2.71828182 . . . (more about this later):
  e = 1 + 1/1! + 1/2! + 1/3! + 1/4! + . . .       e = lim_{n→∞} (1 + 1/n)^n

Similarly, exponential decay is described by f(x) = e^(−x) = 1/e^x
(same shape of curve, just exchange x → −x)

Let a ∈ ℝ⁺:
definition:
the logarithmic function to the base a, written as log_a(x), is the inverse of the function f(x) = a^x
in words: log_a(y) gives the power to which I must raise a to get y
definition:
the natural logarithm is defined as the logarithmic function to the base e, i.e. ln(x) = log_e(x)
in words: ln(y) gives the power to which I must raise e to get y

Properties (direct consequences of the concept of inverse):
  a^(log_a(y)) = y       log_a(a^x) = x       e^(ln(y)) = y       ln(e^x) = x
Manipulation identities for logarithms:
  switching base:        log_a(x) = log_b(x) / log_b(a)                (1)
  products, fractions:   log_a(x·y) = log_a(x) + log_a(y)              (2)
                         log_a(x/y) = log_a(x) − log_a(y)              (3)
  powers:                log_a(x^y) = y·log_a(x)                       (4)

Proof of (1):
Strategy: we prove the equivalent statement log_a(x)·log_b(a) = log_b(x),
using the manipulation identities for powers.
We show that the left-hand side (LHS) of the latter equation has the property b^LHS = x:
  b^LHS = b^(log_a(x)·log_b(a)) = (b^(log_b(a)))^(log_a(x)) = a^(log_a(x)) = x
By the definition of log_b(x) this implies that LHS = log_b(x), which is exactly the right-hand
side. This completes the proof.

Proof of (2):
Strategy: we use the manipulation identities for powers.
We show that the right-hand side (RHS) of (2) has the property a^RHS = x·y:
  a^RHS = a^(log_a(x)+log_a(y)) = a^(log_a(x)) · a^(log_a(y)) = x·y
By the definition of the logarithm this implies that RHS = log_a(x·y), which is exactly the
left-hand side of (2). This completes the proof.

Proof of (3):
Strategy: we use the manipulation identities for powers.
We show that the right-hand side (RHS) of (3) has the property a^RHS = x/y:
  a^RHS = a^(log_a(x)−log_a(y)) = a^(log_a(x)) · a^(−log_a(y)) = x/y
By the definition of the logarithm this implies that RHS = log_a(x/y), which is exactly the
left-hand side of (3). This completes the proof.

Proof of (4):
Strategy: we use the various manipulation identities for powers.
We show that the right-hand side (RHS) of (4) has the property a^RHS = x^y:
  a^RHS = a^(y·log_a(x)) = (a^(log_a(x)))^y = x^y
By the definition of the logarithm this implies that RHS = log_a(x^y), which is exactly the
left-hand side of (4). This completes the proof.
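The four identities can also be spot-checked numerically with `math.log`, which accepts the base as a second argument (the test values below are arbitrary):

```python
import math

a, b, x, y = 2.0, 10.0, 7.5, 3.2      # arbitrary test values; bases a, b > 0 and != 1

log_a = lambda t: math.log(t, a)      # logarithm to base a
log_b = lambda t: math.log(t, b)      # logarithm to base b

assert abs(log_a(x) - log_b(x) / log_b(a)) < 1e-12        # (1) switching base
assert abs(log_a(x * y) - (log_a(x) + log_a(y))) < 1e-12  # (2) products
assert abs(log_a(x / y) - (log_a(x) - log_a(y))) < 1e-12  # (3) fractions
assert abs(log_a(x ** y) - y * log_a(x)) < 1e-12          # (4) powers
print("identities (1)-(4) verified")
```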
[Figure: graphs of e^x and ln(x), plotted for x between −5 and 5]
1.3.6. Trigonometric functions
Many equivalent definitions are possible (more follows in this course).

definition:
Consider rotations around the origin. 1 radian is the magnitude of a rotation angle such that
it cuts a segment of the unit circle of length 1.
Consequence: going round once implies an angle of 2π, i.e. 2π radians = 360°
(since the circumference of a radius-R circle equals 2πR)

definition:
Consider a half-line with its end-point in the origin. Choose it initially to lie along the
positive x-axis, and then rotate it anti-clockwise around the origin; call the rotation angle θ
(measured in radians). Find the coordinates (X, Y) of the point where the half-line intersects
the unit circle: now call cos(θ) = X, sin(θ) = Y.

definition: tan(θ) = sin(θ)/cos(θ)
[Figure: graphs of sin(x), cos(x) and tan(x), plotted against x/π]
Consequences:
(i) cos^2(x) + sin^2(x) = 1 for all x ∈ ℝ
(definition of the unit circle!)
(ii) cos(x) and sin(x) are periodic, with period 2π
(since a 2π rotation gives a complete turn)
(iii) special values of sin(x) and cos(x) follow immediately,
e.g. for x = 0, π/4, π/2, 3π/4, . . .
(iv) more general than the definition in terms of ratios of sides in triangles
(the latter do come out for x ∈ [0, π/2]; here we have a definition for any value of x)
(v) zero points:
sin(x) = 0 for x = nπ with n ∈ ℤ,   cos(x) = 0 for x = π/2 + nπ with n ∈ ℤ
(vi) tan(x) = 0 for x = nπ with n ∈ ℤ; tan(x) doesn't exist for x = π/2 + nπ with n ∈ ℤ
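Consequences (i), (ii) and (v) are easy to spot-check numerically with Python's math module (up to floating-point rounding):

```python
import math

# (i) unit circle and (ii) periodicity, at a few arbitrary sample points
for x in [0.0, 0.3, math.pi / 4, 2.0, -5.7]:
    assert abs(math.cos(x) ** 2 + math.sin(x) ** 2 - 1) < 1e-12
    assert abs(math.sin(x + 2 * math.pi) - math.sin(x)) < 1e-12

# (v) zero points of sin and cos
for n in [-2, -1, 0, 1, 2]:
    assert abs(math.sin(n * math.pi)) < 1e-12
    assert abs(math.cos(math.pi / 2 + n * math.pi)) < 1e-12
print("trigonometric consequences verified")
```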
2. Proof by induction
Induction is a method that allows us to prove an infinite number of statements by proving just
two statements. The basic ideas:
Let a set S ⊆ ℕ have the following properties:
(i) 1 ∈ S, (ii) if n ∈ S then also n + 1 ∈ S
This construction generates all natural numbers: S = ℕ
If we prove
(i) that a statement is true for n = 1 (the basis)
(ii) that if it is true for a given n, it must also be true for n + 1 (the induction step)
then we will have proven that the statement is true for all n ∈ ℕ
definition: the summation symbol Σ
  Σ_{k=m}^{n} a_k = a_m + a_{m+1} + a_{m+2} + . . . + a_{n−1} + a_n       (n, m ∈ ℤ, n ≥ m)

definition: binomial coefficients
  C(n,k) = n! / (k!(n−k)!)       with the convention 0! = 1
(in print, C(n,k) is usually written as "n over k" in a tall-bracket notation)
meaning:
the number of distinct ways to select k elements from a set of n elements
(where permutations are not counted separately)
Example 1:
Let n ∈ ℕ. Use induction to prove that
  Σ_{k=1}^{n} k = n(n+1)/2

Proof:
We subtract the two sides and define A_n = Σ_{k=1}^{n} k − n(n+1)/2.
We must now prove that A_n = 0 for all n ∈ ℕ.
(i) the basis:
for n = 1 one has A_1 = Σ_{k=1}^{1} k − (1/2)·1·2 = 1 − 1 = 0 (so the claim is true for n = 1)
(ii) the induction step:
now suppose that A_n = 0 for some n ∈ ℕ, i.e. Σ_{k=1}^{n} k = n(n+1)/2.
It follows that
  A_{n+1} = Σ_{k=1}^{n+1} k − (n+1)(n+2)/2
          = Σ_{k=1}^{n} k + (n+1) − (n+1)(n+2)/2       (next use A_n = 0 !)
          = n(n+1)/2 + (n+1) − (n+1)(n+2)/2
          = (n+1)·[n/2 + 1 − (n+2)/2] = 0
We have shown: if A_n = 0 then also A_{n+1} = 0. Knowing already that A_1 = 0 (the basis), this
completes the proof that A_n = 0 for all n ∈ ℕ.
Example 2:
use induction to prove Newton's binomial formula for all integer n ≥ 0:
  (a + b)^n = Σ_{k=0}^{n} C(n,k) a^(n−k) b^k
where C(n,k) is the binomial coefficient defined above.

Proof:
We subtract the two sides and define A_n = (a + b)^n − Σ_{k=0}^{n} C(n,k) a^(n−k) b^k.
We must now prove that A_n = 0 for all integer n ≥ 0.
(i) the basis:
for n = 0 one has A_0 = (a + b)^0 − Σ_{k=0}^{0} C(0,k) a^(0−k) b^k = 1 − C(0,0) = 0,
so the claim is true for n = 0
(ii) the induction step:
now suppose that A_n = 0 for some integer n ≥ 0, i.e. (a + b)^n = Σ_{k=0}^{n} C(n,k) a^(n−k) b^k.
It follows that
  A_{n+1} = (a + b)^(n+1) − Σ_{k=0}^{n+1} C(n+1,k) a^(n+1−k) b^k
          = (a + b)(a + b)^n − Σ_{k=0}^{n+1} C(n+1,k) a^(n+1−k) b^k       (next use A_n = 0)
          = (a + b) Σ_{k=0}^{n} C(n,k) a^(n−k) b^k − Σ_{k=0}^{n+1} C(n+1,k) a^(n+1−k) b^k
          = Σ_{k=0}^{n} C(n,k) a^(n+1−k) b^k + Σ_{k=0}^{n} C(n,k) a^(n−k) b^(k+1)
            − Σ_{k=0}^{n+1} C(n+1,k) a^(n+1−k) b^k
In the middle term we substitute k = ℓ − 1, so ℓ = 1, . . . , n + 1. Then we separate the terms
with b^0 and with b^(n+1) from the rest. This gives
  A_{n+1} = Σ_{k=0}^{n} C(n,k) a^(n+1−k) b^k + Σ_{ℓ=1}^{n+1} C(n,ℓ−1) a^(n+1−ℓ) b^ℓ
            − Σ_{k=0}^{n+1} C(n+1,k) a^(n+1−k) b^k
          = C(n,0) a^(n+1) b^0 − C(n+1,0) a^(n+1) b^0
            + C(n,n) a^0 b^(n+1) − C(n+1,n+1) a^0 b^(n+1)
            + Σ_{k=1}^{n} a^(n+1−k) b^k [ C(n,k) + C(n,k−1) − C(n+1,k) ]
          = Σ_{k=1}^{n} a^(n+1−k) b^k [ C(n,k) + C(n,k−1) − C(n+1,k) ]
(the first four terms cancel pairwise, since C(n,0) = C(n+1,0) = 1 and C(n,n) = C(n+1,n+1) = 1)
Finally we must work out the combinatorial terms, for k ∈ {1, . . . , n}:
  C(n,k) + C(n,k−1) − C(n+1,k)
    = n!/(k!(n−k)!) + n!/((k−1)!(n−k+1)!) − (n+1)!/(k!(n+1−k)!)
    = [n!/((k−1)!(n−k)!)] · [ 1/k + 1/(n−k+1) − (n+1)/(k(n+1−k)) ]
    = [n!/((k−1)!(n−k)!)] · [ (n+1−k) + k − (n+1) ] / [ k(n+1−k) ] = 0
Insertion into our previous intermediate result gives A_{n+1} = 0.
Thus we have shown: if A_n = 0 then also A_{n+1} = 0. Knowing already that A_0 = 0 (the basis),
this completes the proof that A_n = 0 for all integer n ≥ 0.
Example 3:
Let x ∈ ℝ, x ≠ 1 and n ∈ ℕ. Use induction to prove the following (geometric series):
  Σ_{k=0}^{n−1} x^k = (1 − x^n)/(1 − x)

Proof:
We subtract the two sides and define A_n = Σ_{k=0}^{n−1} x^k − (1 − x^n)/(1 − x).
We must now prove that A_n = 0 for all n ∈ ℕ.
(i) the basis:
for n = 1 one has A_1 = Σ_{k=0}^{0} x^k − (1 − x)/(1 − x) = 1 − 1 = 0, so the claim is true for n = 1
(ii) the induction step:
now suppose that A_n = 0 for some n ∈ ℕ, i.e. Σ_{k=0}^{n−1} x^k = (1 − x^n)/(1 − x). It follows that
  A_{n+1} = Σ_{k=0}^{n} x^k − (1 − x^(n+1))/(1 − x)
          = Σ_{k=0}^{n−1} x^k + x^n − (1 − x^(n+1))/(1 − x)       (next use A_n = 0)
          = (1 − x^n)/(1 − x) + x^n − (1 − x^(n+1))/(1 − x)
          = [ 1 − x^n + (1 − x)x^n − (1 − x^(n+1)) ] / (1 − x) = 0
Thus we have shown: if A_n = 0 then also A_{n+1} = 0. Knowing already that A_1 = 0 (the basis),
this completes the proof that A_n = 0 for all n ∈ ℕ.
Exercises:
Similarly, prove the following statements for $n \in \mathbb{N}$ by induction:
$\sum_{k=1}^{n} k^2 = \tfrac{1}{6}\, n(n+1)(2n+1)$
$\sum_{k=1}^{n} k^3 = \tfrac{1}{4}\, n^2 (n+1)^2$
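Before attempting the induction proofs, the two exercise identities can be checked numerically for small $n$; again this is evidence, not proof. A minimal sketch in Python:

```python
# Spot-check of the two exercise identities for n = 1..50 (exact integer arithmetic).
def sum_sq(n):
    return sum(k**2 for k in range(1, n + 1))

def sum_cube(n):
    return sum(k**3 for k in range(1, n + 1))

# Multiply through by 6 and 4 respectively, to stay in integer arithmetic.
ok_sq = all(6 * sum_sq(n) == n * (n + 1) * (2 * n + 1) for n in range(1, 51))
ok_cube = all(4 * sum_cube(n) == n**2 * (n + 1)**2 for n in range(1, 51))
```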
3. Complex numbers
3.1. Introduction and definition

definition:
The number $i$ is defined as a solution of the equation $z^2 + 1 = 0$
(note: this eqn had no solutions $z \in \mathbb{R}$)

definition:
The set $\mathbb{C}$ of complex numbers consists of all expressions of the form $a + ib$ with $a, b \in \mathbb{R}$:
$\mathbb{C} = \{a + ib \mid a, b \in \mathbb{R}\}$

definition:
Addition and multiplication of numbers in $\mathbb{C}$ are defined as follows. Let $a, b, c, d \in \mathbb{R}$:
$(a + bi) + (c + di) = (a + c) + (b + d)i$
$(a + bi)(c + di) = (ac - bd) + (ad + bc)i$
(i.e. calculate as if with real numbers, and put $i^2 = -1$)

note:
$i^2 = -1$ can alternatively be taken as a consequence of the multiplication definition

definition: let $a, b \in \mathbb{R}$
The real part of $z = a + ib$ is defined as $\mathrm{Re}(z) = a$
The imaginary part of $z = a + ib$ is defined as $\mathrm{Im}(z) = b$

definition: let $a, b \in \mathbb{R}$
The complex conjugate of a complex number $z = a + ib$ is defined as $\bar{z} = a - ib$
(i.e. obtained from $z$ by replacing $i$ by $-i$)
$z$ and $\bar{z}$ are called a complex conjugate pair

Notes:
(i) also $z = -i$ is a solution of $z^2 + 1 = 0$
proof: $z^2 + 1 = (-i)^2 + 1 = (-1)^2 i^2 + 1 = i^2 + 1 = 0$
(ii) sometimes $i$ is written as $i = \sqrt{-1}$
(iii) sometimes $\bar{z}$ is written as $z^*$
(iv) unlike $\mathbb{R}$, it is impossible to order the numbers of $\mathbb{C}$ in terms of larger and smaller
(just postulate $i < 0$ or $i > 0$ and see what happens ...)
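Python's built-in `complex` type implements exactly the addition and multiplication rules above, with `1j` playing the role of $i$; a short sketch confirming the rules on one example pair:

```python
# Complex arithmetic in Python follows the definitions: component-wise addition,
# and (a+bi)(c+di) = (ac - bd) + (ad + bc)i.
a, b, c, d = 2.0, 3.0, -1.0, 4.0
z = complex(a, b)   # a + bi
w = complex(c, d)   # c + di

s = z + w           # should be (a+c) + (b+d)i
p = z * w           # should be (ac - bd) + (ad + bc)i
i_squared = (1j) ** 2   # should be -1
```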
3.2. Elementary properties of complex numbers

Every quadratic equation $az^2 + bz + c = 0$ can be solved in $\mathbb{C}$, giving the solutions
$z_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
(where for $b^2 - 4ac < 0$ one has $\sqrt{b^2 - 4ac} = \sqrt{-(4ac - b^2)} = i\sqrt{4ac - b^2}$)
proof:
$a(z - z_+)(z - z_-) = a\left[z^2 - z(z_+ + z_-) + z_+ z_-\right]$
$= az^2 - az\left[\frac{-b + \sqrt{b^2-4ac}}{2a} + \frac{-b - \sqrt{b^2-4ac}}{2a}\right] + a\left[\frac{-b + \sqrt{b^2-4ac}}{2a}\right]\left[\frac{-b - \sqrt{b^2-4ac}}{2a}\right]$
$= az^2 - \tfrac{1}{2} z\,(-2b) + \frac{1}{4a}\left[-b + \sqrt{b^2-4ac}\,\right]\left[-b - \sqrt{b^2-4ac}\,\right]$
$= az^2 + bz + \frac{1}{4a}\left[b^2 - \left(\sqrt{b^2-4ac}\right)^2\right] = az^2 + bz + \frac{1}{4a}\left[b^2 - b^2 + 4ac\right] = az^2 + bz + c$
It now follows immediately that $az^2 + bz + c = 0$ for $z = z_{\pm}$. This completes the proof.
Every $n$-th order polynomial with real-valued or complex coefficients,
$P(z) = z^n + a_{n-1} z^{n-1} + \ldots + a_1 z + a_0$, can be factorized into linear factors to give
$P(z) = (z - z_1)(z - z_2) \cdots (z - z_{n-1})(z - z_n)$
with $n$ complex numbers $z_1, \ldots, z_n$ (the zeros or roots of the polynomial)
(the proof will not be given here)

For all $z, w \in \mathbb{C}$: $\overline{z + w} = \bar{z} + \bar{w}$ (proof in tutorials)
For all $z, w \in \mathbb{C}$: $\overline{zw} = \bar{z}\,\bar{w}$ (proof in tutorials)
For every $z \in \mathbb{C}$: $z\bar{z} \in \mathbb{R}$, with $z\bar{z} \ge 0$ and where $z\bar{z} = 0$ if and only if $z = 0$
(proof in tutorials)
If $z$ is a root of a polynomial $P(z)$ with real coefficients, then also $\bar{z}$ will be a root
proof:
We know that $P(z) = 0$, i.e. $z^n + a_{n-1} z^{n-1} + \ldots + a_1 z + a_0 = 0$.
Now, since $\overline{z + w} = \bar{z} + \bar{w}$ and $\overline{zw} = \bar{z}\,\bar{w}$, also
$0 = \overline{z^n + a_{n-1} z^{n-1} + \ldots + a_1 z + a_0}$
$0 = \overline{z^n} + \overline{a_{n-1} z^{n-1}} + \ldots + \overline{a_1 z} + \overline{a_0}$
$0 = \bar{z}^{\,n} + a_{n-1} \bar{z}^{\,n-1} + \ldots + a_1 \bar{z} + a_0$
Thus $P(\bar{z}) = 0$. This completes the proof.
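The quadratic formula and the conjugate-root property can both be seen at work numerically. In the sketch below, `cmath.sqrt` returns a complex square root, so a negative discriminant needs no special treatment:

```python
# Roots of a z^2 + b z + c = 0 via the quadratic formula, in the complex plane.
import cmath

def quadratic_roots(a, b, c):
    disc = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles b^2 - 4ac < 0
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# z^2 + 2z + 5 = 0 has discriminant -16, so the roots are -1 +/- 2i:
# a complex conjugate pair, as expected for real coefficients.
zp, zm = quadratic_roots(1, 2, 5)
```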
3.3. Absolute value and division

definition:
The absolute value (or modulus) of a complex number is defined as $|z| = \sqrt{z\bar{z}}$

Consequences:

If $z = a + ib$ with $a = \mathrm{Re}(z)$ and $b = \mathrm{Im}(z)$, then $|z| = \sqrt{a^2 + b^2}$
proof:
$|z| = \sqrt{z\bar{z}} = \sqrt{(a+ib)(a-ib)} = \sqrt{a^2 - (ib)^2} = \sqrt{a^2 + b^2}$

$\overline{(\bar{z})} = z$
proof: let $z = a + ib$,
$\overline{(\bar{z})} = \overline{(a - ib)} = \overline{a + i(-b)} = a - i(-b) = a + ib = z$

$|\mathrm{Re}(z)| \le |z|$ and $|\mathrm{Im}(z)| \le |z|$
(proof in tutorials)

$|zw| = |z||w|$, $|\bar{z}| = |z|$
(proof in tutorials)

The above definition of $|z|$ obeys the triangular inequality $|z + w| \le |z| + |w|$
proof:
$|z+w|^2 = (z+w)\overline{(z+w)} = (z+w)(\bar{z} + \bar{w}) = z\bar{z} + z\bar{w} + w\bar{z} + w\bar{w}$
$= |z|^2 + 2\,\mathrm{Re}(z\bar{w}) + |w|^2$   (due to $u + \bar{u} = 2\,\mathrm{Re}(u)$, applied to $u = z\bar{w}$)
$\le |z|^2 + 2|z\bar{w}| + |w|^2$   (due to $|\mathrm{Re}(u)| \le |u|$)
$= |z|^2 + 2|z||w| + |w|^2$   (due to $|zw| = |z||w|$ and $|\bar{w}| = |w|$)
$= (|z| + |w|)^2$
Taking the square root of both sides then completes the proof.

The property $z\bar{z} \in \mathbb{R}$ makes it easy to work out the ratio of two complex numbers, and to write it in the standard form $v = \mathrm{Re}(v) + i\,\mathrm{Im}(v)$. Method: multiply numerator and denominator by the complex conjugate of the denominator.
Let $z = a + ib$ and $w = c + id$, with $a, b, c, d \in \mathbb{R}$ and $w \ne 0$:
$\frac{a+ib}{c+id} = \frac{a+ib}{c+id} \cdot \frac{c-id}{c-id} = \frac{(a+ib)(c-id)}{|c+id|^2} = \frac{ac + ibc - iad + bd}{c^2 + d^2} = \frac{ac+bd}{c^2+d^2} + i\,\frac{bc-ad}{c^2+d^2}$

In particular: $1/i = -i$, $1/z = \bar{z}/|z|^2$
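The conjugate-multiplication method for division can be implemented directly and compared against Python's built-in complex division; a minimal sketch:

```python
# Division (a+ib)/(c+id) by multiplying numerator and denominator with c - id.
def divide(a, b, c, d):
    """Returns (a+ib)/(c+id) in standard form, per the formula in the notes."""
    denom = c * c + d * d               # |c + id|^2
    return complex((a * c + b * d) / denom, (b * c - a * d) / denom)

q = divide(1.0, 2.0, 3.0, -4.0)         # (1 + 2i) / (3 - 4i)
builtin = complex(1, 2) / complex(3, -4)
one_over_i = 1 / 1j                      # should be -i
```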
3.4. The complex plane (Argand diagram)
3.4.1. Complex numbers as points in a plane

underlying idea:
there is a one-to-one correspondence between complex numbers $z \in \mathbb{C}$ and points in the ordinary plane $\mathbb{R}^2$, namely
$z = a + ib \in \mathbb{C} \;\leftrightarrow\; (a, b) \in \mathbb{R}^2$
where $a$ is taken as the x-coordinate and $b$ is taken as the y-coordinate.

one-to-one:
(i) with every $z \in \mathbb{C}$ corresponds exactly one point $(a, b)$ in the plane
(ii) with every point $(a, b)$ in the plane corresponds exactly one $z \in \mathbb{C}$

examples:
$1 = 1 + 0i \leftrightarrow (1, 0)$
$i = 0 + 1i \leftrightarrow (0, 1)$
$-1 = -1 + 0i \leftrightarrow (-1, 0)$
$-i = 0 - 1i \leftrightarrow (0, -1)$
$2 + 3i \leftrightarrow (2, 3)$
$1 - i\sqrt{2} \leftrightarrow (1, -\sqrt{2})$
$-1 + 2i \leftrightarrow (-1, 2)$
$-3 - \tfrac{3}{2} i \leftrightarrow (-3, -\tfrac{3}{2})$
$-2 + \pi i \leftrightarrow (-2, \pi)$

definition:
The complex plane (or Argand diagram) is defined as the plane in which complex numbers $z = a + ib$ (with $a, b \in \mathbb{R}$) are represented by points with coordinates $(a, b) = (\mathrm{Re}(z), \mathrm{Im}(z))$. The horizontal axis is called the real axis, and the vertical axis the imaginary axis.

notes:
(i) The real axis is the set of all real numbers
(as it contains all $z = a + ib \in \mathbb{C}$ for which $b = 0$, i.e. those of the form $z = a$)
(ii) The imaginary axis is the set of all purely imaginary numbers
(it contains all $z = a + ib \in \mathbb{C}$ for which $a = 0$, i.e. those of the form $z = ib$)
[Figure: Argand diagram with horizontal axis $\mathrm{Re}(z)$ and vertical axis $\mathrm{Im}(z)$, showing the points $2+3i$, $1-i\sqrt{2}$, $-1+2i$, $-3-\tfrac{3}{2}i$ and $-2+\pi i$.]
3.4.2. Polar coordinates

Recall the definition of the trigonometric functions:
(i) each point on the unit circle around the origin, with Cartesian coordinates $(x, y)$ such that $x^2 + y^2 = 1$, can be written in the form $(x, y) = (\cos(\theta), \sin(\theta))$
(ii) here $\theta$ denotes the angle with the x-axis of the half-line through the origin and $(x, y)$

This can be generalized easily to any circle around the origin:
(i) each point on the circle of radius $r$ around the origin, with Cartesian coordinates $(x, y)$ such that $x^2 + y^2 = r^2$, can be written in the form $(x, y) = (r\cos(\theta), r\sin(\theta))$
(ii) here $\theta$ denotes the angle with the x-axis of the half-line through the origin and $(x, y)$
[Figure: the points $(\sqrt{3}, 1) = 2(\cos(\pi/6), \sin(\pi/6))$ and $(-\tfrac{3}{4}\sqrt{2}, \tfrac{3}{4}\sqrt{2}) = \tfrac{3}{2}(\cos(3\pi/4), \sin(3\pi/4))$ in the $(x, y)$-plane, with the corresponding radii drawn from the origin.]
underlying idea:
We can represent each point in the plane with Cartesian coordinates $(x, y)$ alternatively by so-called polar coordinates $(r, \theta)$. The same is then also true for each complex number $z \in \mathbb{C}$.

examples (polar coordinates $(r, \theta)$, Cartesian coordinates $(x, y) = (r\cos(\theta), r\sin(\theta))$, complex number $z = r\cos\theta + ir\sin\theta$):
$(r, \theta) = (1, 0)$: $(x, y) = (1, 0)$, $z = 1 + 0i = 1$
$(r, \theta) = (1, \pi/2)$: $(x, y) = (0, 1)$, $z = 0 + 1i = i$
$(r, \theta) = (1, \pi)$: $(x, y) = (-1, 0)$, $z = -1 + 0i = -1$
$(r, \theta) = (1, -\pi/2)$: $(x, y) = (0, -1)$, $z = 0 - 1i = -i$
$(r, \theta) = (\tfrac{3}{2}, \tfrac{3\pi}{4})$: $(x, y) = (-\tfrac{3}{4}\sqrt{2}, \tfrac{3}{4}\sqrt{2})$, $z = -\tfrac{3}{4}\sqrt{2} + \tfrac{3}{4}\sqrt{2}\, i$
$(r, \theta) = (2, \pi/6)$: $(x, y) = (\sqrt{3}, 1)$, $z = \sqrt{3} + 1i = \sqrt{3} + i$

Notes:
(i) each complex number can thus be written as $z = r(\cos(\theta) + i\sin(\theta))$, with $r \ge 0$ and $\theta \in \mathbb{R}$
(ii) due to the periodicity of $\sin(\theta)$ and $\cos(\theta)$: for all $n \in \mathbb{N}$ also $r(\cos(\theta + 2\pi n) + i\sin(\theta + 2\pi n)) = z$
(iii) if $z = r(\cos(\theta) + i\sin(\theta))$ then $r = |z|$
proof:
$|z|^2 = z\bar{z} = r^2(\cos(\theta) + i\sin(\theta))(\cos(\theta) - i\sin(\theta)) = r^2(\cos^2(\theta) - i^2\sin^2(\theta)) = r^2$
3.4.3. The exponential form of numbers on the unit circle

definition:
The unit circle in the complex plane is the set $\{z \in \mathbb{C} \mid \mathrm{Re}^2(z) + \mathrm{Im}^2(z) = 1\}$.
Alternatively, using polar coordinates: $\{z \in \mathbb{C} \mid z = \cos(\theta) + i\sin(\theta) \text{ for some } \theta \in \mathbb{R}\}$.

We now proceed, for numbers on the unit circle, to one of the main statements in complex number theory. It impacts on all complex numbers. Whether it is a definition or a theorem depends on one's starting point. We have so far only defined exponential functions with real arguments, so here it has the status of an extension of the definition of the exponential function:

definition:
$e^{i\theta} = \cos(\theta) + i\sin(\theta)$

rationale:
Let us show why this is the natural extension of the exponential function. Define $f(\theta) = \cos(\theta) + i\sin(\theta)$; then (recall the definition of the derivative from school, and remember that $\frac{d}{d\theta}\cos(\theta) = -\sin(\theta)$ and $\frac{d}{d\theta}\sin(\theta) = \cos(\theta)$):
$\frac{d}{d\theta} f(\theta) = \frac{d}{d\theta}\cos(\theta) + i\,\frac{d}{d\theta}\sin(\theta) = -\sin(\theta) + i\cos(\theta) = i\left[\cos(\theta) + i\sin(\theta)\right] = i f(\theta)$,   $f(0) = 1$
Whereas for a real number $b$ we would have had, with $f(\theta) = e^{b\theta}$:
$\frac{d}{d\theta} f(\theta) = \frac{d}{d\theta} e^{b\theta} = b\, e^{b\theta} = b f(\theta)$,   $f(0) = 1$
We see that the two situations (real exponentials versus complex exponentials) connect if and only if we choose $b = i$, i.e. if we define $f(\theta) = \cos(\theta) + i\sin(\theta) = e^{i\theta}$.
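Euler's identity $e^{i\theta} = \cos(\theta) + i\sin(\theta)$ can be confirmed numerically at many angles; the sketch below compares `cmath.exp`, which implements the complex exponential, against the cosine/sine form:

```python
# Numerical check of e^{i*theta} = cos(theta) + i sin(theta).
import cmath
import math

angles = [k * 0.1 for k in range(-60, 61)]   # theta in [-6, 6]
max_err = max(abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t)))
              for t in angles)
```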
Some examples (each a point on the unit circle in the complex plane):
$z = e^{i \cdot 0} = \cos(0) + i\sin(0) = 1 + 0i = 1$
$z = e^{i\pi/2} = \cos(\tfrac{\pi}{2}) + i\sin(\tfrac{\pi}{2}) = 0 + 1i = i$
$z = e^{i\pi} = \cos(\pi) + i\sin(\pi) = -1 + 0i = -1$
$z = e^{3i\pi/2} = \cos(\tfrac{3\pi}{2}) + i\sin(\tfrac{3\pi}{2}) = 0 - 1i = -i$
$z = e^{2i\pi/3} = \cos(\tfrac{2\pi}{3}) + i\sin(\tfrac{2\pi}{3}) = -\tfrac{1}{2} + \tfrac{1}{2}\sqrt{3}\, i$
$z = e^{i\pi/4} = \cos(\tfrac{\pi}{4}) + i\sin(\tfrac{\pi}{4}) = \tfrac{1}{2}\sqrt{2} + \tfrac{1}{2}\sqrt{2}\, i$
3.5. Complex numbers in exponential notation
3.5.1. Definition and general properties

Upon combining the polar coordinate representation $z = r\cos(\theta) + ir\sin(\theta)$ with the new identity $e^{i\theta} = \cos(\theta) + i\sin(\theta)$:

claim:
Each complex number can be written in the form $z = r e^{i\theta}$, with $r, \theta \in \mathbb{R}$ and $r \ge 0$.

Notes:
(i) $|z| = r$
proof: $|z| = \sqrt{z\bar{z}} = \sqrt{r e^{i\theta} \cdot r e^{-i\theta}} = \sqrt{r^2} = r$
(ii) $\mathrm{Re}(z) = \mathrm{Re}(r\cos(\theta) + ir\sin(\theta)) = r\cos(\theta)$
$\mathrm{Im}(z) = \mathrm{Im}(r\cos(\theta) + ir\sin(\theta)) = r\sin(\theta)$
(iii) $\bar{z} = r e^{-i\theta}$
proof: $\bar{z} = \overline{r\cos(\theta) + ir\sin(\theta)} = r\cos(\theta) - ir\sin(\theta) = r e^{-i\theta}$
(iv) $1/z = \frac{1}{r} e^{-i\theta}$
proof: multiply numerator and denominator by $\bar{z}$:
$\frac{1}{z} = \frac{1}{r e^{i\theta}} = \frac{1}{r e^{i\theta}} \cdot \frac{r e^{-i\theta}}{r e^{-i\theta}} = \frac{r e^{-i\theta}}{r^2} = \frac{1}{r}\, e^{-i\theta}$
(v) $1/\bar{z} = \frac{1}{r} e^{i\theta}$
proof: multiply numerator and denominator by $z$:
$\frac{1}{\bar{z}} = \frac{1}{r e^{-i\theta}} = \frac{1}{r e^{-i\theta}} \cdot \frac{r e^{i\theta}}{r e^{i\theta}} = \frac{r e^{i\theta}}{r^2} = \frac{1}{r}\, e^{i\theta}$
(vi) The angle $\theta$ in $z = r e^{i\theta}$ is as yet not uniquely defined: changing $\theta \to \theta + 2\pi n$ with $n \in \mathbb{Z}$ leaves one with the same number $z$
proof:
$r e^{i(\theta + 2\pi n)} = r e^{i\theta} e^{2\pi n i} = r e^{i\theta}\left(\cos(2\pi n) + i\sin(2\pi n)\right) = r e^{i\theta} \cdot 1 = r e^{i\theta}$
3.5.2. Multiplication and division in exponential notation

claim:
Multiplication of two complex numbers $z = r e^{i\theta}$ and $w = \rho e^{i\phi}$, with real $r, \rho \ge 0$ and real $\theta, \phi$, implies
(i) multiplication of the absolute values, and
(ii) addition of the arguments
proof:
$zw = r e^{i\theta} \cdot \rho e^{i\phi} = r\rho\, e^{i\theta + i\phi} = r\rho\, e^{i(\theta + \phi)}$

claim:
Division of two complex numbers $z = r e^{i\theta}$ and $w = \rho e^{i\phi}$, with real $r, \rho \ge 0$ and real $\theta, \phi$, implies
(i) division of the absolute values, and
(ii) subtraction of the arguments
proof:
$z/w = \frac{r e^{i\theta}}{\rho e^{i\phi}} = \frac{r}{\rho} \cdot \frac{e^{i\theta}}{e^{i\phi}} = \frac{r}{\rho}\, e^{i(\theta - \phi)}$

Examples:
$\frac{1}{2 e^{i\pi/4}} = \frac{1}{2}\, e^{-i\pi/4}$
$3 e^{3i\pi/2} \cdot \frac{1}{2}\, e^{-i\pi/2} = \frac{3}{2}\, e^{i\pi}$
$\frac{6 e^{i\pi/6}}{3 e^{i\pi/4}} = 2\, e^{i\pi(1/6 - 1/4)} = 2\, e^{-i\pi/12}$
$i \cdot r e^{i\theta} = e^{i\pi/2} \cdot r e^{i\theta} = r\, e^{i(\theta + \pi/2)}$

In a nutshell:
adding or subtracting complex numbers is easier in standard notation $z = a + ib$;
multiplying or dividing is easier in exponential notation $z = r e^{i\theta}$
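The two claims above (moduli multiply/divide, arguments add/subtract) can be verified with `cmath`; `cmath.rect(r, theta)` builds $r e^{i\theta}$ and `cmath.phase` returns the argument in $(-\pi, \pi]$:

```python
# Multiplication and division in exponential notation, checked numerically.
import cmath

z = cmath.rect(2.0, 0.3)    # z = 2 e^{0.3 i}
w = cmath.rect(1.5, 1.1)    # w = 1.5 e^{1.1 i}

prod, quot = z * w, z / w
prod_mod, prod_arg = abs(prod), cmath.phase(prod)   # expect 3.0 and 1.4
quot_mod, quot_arg = abs(quot), cmath.phase(quot)   # expect 4/3 and -0.8
```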
3.5.3. The argument of a complex number

motivation:
The angle $\theta$ in $z = r e^{i\theta}$ is not unique; we could add multiples of $2\pi$.
This also makes it impossible to define $\ln(z)$ in $\mathbb{C}$ as the inverse of $e^z$
(see the conditions for an inverse: demand $e^z \ne e^{z'}$ if $z \ne z'$)

definition:
The argument $\arg(z)$ of a complex number $z$ is the angle $\theta$ such that
(i) $z = r e^{i\theta}$ with $r, \theta \in \mathbb{R}$ and $r \ge 0$
(ii) $-\pi < \theta \le \pi$

Notes:
One always has, by construction: $z = |z|\, e^{i\,\arg(z)}$
The new condition that $\theta \in (-\pi, \pi]$ removes the previous ambiguity, leaving only one unique angle $\theta = \arg(z)$ to represent $z$

definition:
The natural logarithm $\ln(z)$ of a complex number $z \in \mathbb{C}$ can now be defined as follows:
$\ln(z) = \ln\left(|z|\, e^{i\,\arg(z)}\right) = \ln(|z|) + \ln\left(e^{i\,\arg(z)}\right) = \ln(|z|) + i\,\arg(z)$

A common task is to write a complex number from standard into exponential form, i.e. to find $r = |z|$ and $\arg(z)$ when $z = a + ib$ is given.

Three-step method for finding $|z|$ and $\arg(z)$ when $z$ is given:
(i) calculate $|z| = r$ using $r^2 = z\bar{z}$
(ii) calculate $e^{i\theta}$ using $e^{i\theta} = z/r$, and find all solutions $\theta$ using $e^{i\theta} = \cos(\theta) + i\sin(\theta)$ (draw a diagram)
(iii) determine which solution obeys $-\pi < \theta \le \pi$: this must be $\arg(z)$
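The three-step method can be condensed into code. `math.atan2(b, a)` already returns the angle of the point $(a, b)$ in the correct quadrant and in the interval $[-\pi, \pi]$, so it does the work of steps (ii) and (iii) at once; this is a sketch, not the only way to organize the computation:

```python
# The three-step method: r from r^2 = a^2 + b^2, then the unique angle arg(z).
import math

def modulus_and_arg(a, b):
    r = math.sqrt(a * a + b * b)   # step (i): r^2 = z * conj(z) = a^2 + b^2
    theta = math.atan2(b, a)       # steps (ii)+(iii): quadrant-aware angle
    return r, theta

r, theta = modulus_and_arg(-3.0, 3.0)   # z = -3 + 3i, as in the example below
```

For $z = -3 + 3i$ this reproduces $r = 3\sqrt{2}$ and $\arg(z) = 3\pi/4$.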
Examples:

$z = -i$:
$r^2 = z\bar{z} = (-i)(i) = -i^2 = 1$, hence $r = \sqrt{1} = 1$
$e^{i\theta} = z/r = -i/1 = -i$, hence
$\cos(\theta) + i\sin(\theta) = -i$: $\cos(\theta) = 0$ and $\sin(\theta) = -1$
solutions: $\theta = -\pi/2 + 2\pi n$ ($n \in \mathbb{Z}$)
$\theta \in (-\pi, \pi]$: $\theta = -\pi/2$ (i.e. $n = 0$), thus $\arg(z) = -\pi/2$

$z = -3 + 3i$:
$r^2 = (-3 + 3i)(-3 - 3i) = 3^2 + 3^2 = 18$, hence $r = \sqrt{18} = 3\sqrt{2}$
$e^{i\theta} = z/r = \frac{1}{3\sqrt{2}}(-3 + 3i) = -\frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}}$, hence
$\cos(\theta) = -\frac{1}{\sqrt{2}}$, $\sin(\theta) = \frac{1}{\sqrt{2}}$
solutions: $\theta = 3\pi/4 + 2\pi n$ ($n \in \mathbb{Z}$)
$\theta \in (-\pi, \pi]$: $\theta = 3\pi/4$ (i.e. $n = 0$), thus $\arg(z) = 3\pi/4$

$z = -\sqrt{3} - i$:
$r^2 = (-\sqrt{3} - i)(-\sqrt{3} + i) = 3 + 1 = 4$, hence $r = \sqrt{4} = 2$
$e^{i\theta} = z/r = -\frac{1}{2}\sqrt{3} - \frac{1}{2} i$, hence
$\cos(\theta) = -\frac{\sqrt{3}}{2}$, $\sin(\theta) = -\frac{1}{2}$
solutions: $\theta = 7\pi/6 + 2\pi n$ ($n \in \mathbb{Z}$)
$\theta \in (-\pi, \pi]$: $\theta = -5\pi/6$ (i.e. $n = -1$), thus $\arg(z) = -5\pi/6$

$z = -e^{2i\pi/3}$:
$r^2 = (-e^{2i\pi/3})(-e^{-2i\pi/3}) = 1$, hence $r = \sqrt{1} = 1$
$e^{i\theta} = z/r = -e^{2i\pi/3} = -\cos(\tfrac{2\pi}{3}) - i\sin(\tfrac{2\pi}{3}) = \frac{1}{2} - \frac{i\sqrt{3}}{2}$, hence
$\cos(\theta) = \frac{1}{2}$, $\sin(\theta) = -\frac{\sqrt{3}}{2}$
solutions: $\theta = -\pi/3 + 2\pi n$ ($n \in \mathbb{Z}$)
$\theta \in (-\pi, \pi]$: $\theta = -\pi/3$ (i.e. $n = 0$), thus $\arg(z) = -\pi/3$

common pitfalls (see the last example):
If $z = -e^{2i\pi/3}$, this does not imply that $r = |z| = -1$ and $\arg(z) = 2\pi/3$;
note that always $|z| \ge 0$.
If you arrive at $e^{i\theta} = -e^{2i\pi/3}$, this does not imply that there has been a mistake;
note that $-1 = e^{i\pi}$, so we may write $-e^{2i\pi/3} = e^{i\pi} e^{2i\pi/3} = e^{5i\pi/3}$.
3.6. De Moivre's Theorem
3.6.1. Statement and proof

theorem:
For all $\theta \in \mathbb{R}$ and $n \in \mathbb{N}$:
$\left[\cos(\theta) + i\sin(\theta)\right]^n = \cos(n\theta) + i\sin(n\theta)$

proof 1: (via induction)
We will use $\cos(a+b) = \cos(a)\cos(b) - \sin(a)\sin(b)$ and $\sin(a+b) = \sin(a)\cos(b) + \cos(a)\sin(b)$.
Define $A_n = \left[\cos(\theta) + i\sin(\theta)\right]^n - \cos(n\theta) - i\sin(n\theta)$.
We have to prove that $A_n = 0$ for all $n \in \mathbb{N}$.
(i) Induction basis: $A_1 = \cos(\theta) + i\sin(\theta) - \cos(\theta) - i\sin(\theta) = 0$, so the claim is true for $n = 1$
(ii) Induction step. We assume that $A_n = 0$ for some $n \in \mathbb{N}$. Now
$A_{n+1} = \left[\cos(\theta) + i\sin(\theta)\right]^{n+1} - \cos(n\theta + \theta) - i\sin(n\theta + \theta)$   (use $A_n = 0$:)
$= \left[\cos(\theta) + i\sin(\theta)\right]\left[\cos(n\theta) + i\sin(n\theta)\right] - \cos(n\theta + \theta) - i\sin(n\theta + \theta)$
$= \cos(\theta)\cos(n\theta) - \sin(\theta)\sin(n\theta) + i\sin(\theta)\cos(n\theta) + i\cos(\theta)\sin(n\theta)$
$\quad - \left[\cos(n\theta)\cos(\theta) - \sin(n\theta)\sin(\theta)\right] - i\left[\sin(n\theta)\cos(\theta) + \cos(n\theta)\sin(\theta)\right] = 0$
If $A_n = 0$ for some $n$, then also $A_{n+1} = 0$. Hence, in combination with the basis, we have now shown that $A_n = 0$ for all $n \in \mathbb{N}$. This completes the proof.

proof 2:
$\left[\cos(\theta) + i\sin(\theta)\right]^n = (e^{i\theta})^n = e^{in\theta} = \cos(n\theta) + i\sin(n\theta)$
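De Moivre's theorem is easy to spot-check numerically for a range of $n$ and $\theta$; again an illustration rather than a proof:

```python
# Numerical check of [cos(t) + i sin(t)]^n = cos(n t) + i sin(n t).
import math

def lhs(theta, n):
    return complex(math.cos(theta), math.sin(theta)) ** n

def rhs(theta, n):
    return complex(math.cos(n * theta), math.sin(n * theta))

max_err = max(abs(lhs(t, n) - rhs(t, n))
              for n in range(1, 10)
              for t in (0.1, 0.7, 2.5, -1.3))
```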
3.6.2. Applications

First application:
easy derivation of identities for trigonometric functions of multiple angles

$n = 2$:
$\cos(2\theta) + i\sin(2\theta) = \left[\cos(\theta) + i\sin(\theta)\right]^2 = \cos^2(\theta) - \sin^2(\theta) + 2i\sin(\theta)\cos(\theta)$
Real and imaginary parts on both sides must be equal, so
$\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)$
$\sin(2\theta) = 2\sin(\theta)\cos(\theta)$

$n = 3$:
$\cos(3\theta) + i\sin(3\theta) = \left[\cos(\theta) + i\sin(\theta)\right]^3 = \left[\cos^2(\theta) - \sin^2(\theta) + 2i\sin(\theta)\cos(\theta)\right]\left(\cos(\theta) + i\sin(\theta)\right)$
$= \cos^3(\theta) - \cos(\theta)\sin^2(\theta) + 2i\sin(\theta)\cos^2(\theta) + i\left[\cos^2(\theta)\sin(\theta) - \sin^3(\theta) + 2i\cos(\theta)\sin^2(\theta)\right]$
$= \cos^3(\theta) - 3\cos(\theta)\sin^2(\theta) + 3i\sin(\theta)\cos^2(\theta) - i\sin^3(\theta)$
Real and imaginary parts on both sides must be equal, so
$\cos(3\theta) = \cos^3(\theta) - 3\cos(\theta)\sin^2(\theta)$
$\sin(3\theta) = 3\sin(\theta)\cos^2(\theta) - \sin^3(\theta)$
Using $\sin^2(\theta) + \cos^2(\theta) = 1$, this can also be written as
$\cos(3\theta) = \cos(\theta)\left[4\cos^2(\theta) - 3\right]$
$\sin(3\theta) = \sin(\theta)\left[4\cos^2(\theta) - 1\right]$
Second application:
finding the roots of unity, i.e. the $n$ solutions of $z^n = 1$ (integer $n$)

We know that $|z| = 1$, due to $1 = |z^n| = |z|^n$, so we may put $z = e^{i\theta}$.
For any integer $m = 0, 1, 2, \ldots$ we may write $1 = \cos(2\pi m) + i\sin(2\pi m)$.
Our equation then becomes $(e^{i\theta})^n = \cos(2\pi m) + i\sin(2\pi m)$, or $e^{in\theta} = e^{2\pi m i}$.
It follows that for every integer $m$ we have a solution $\theta = 2\pi m/n$, i.e. a complex root $z = e^{2\pi i m/n}$.
Note, finally: for $m \ge n$ we will generate solutions already found earlier.
Hence: the $n$ solutions of $z^n = 1$ are given by
$z = e^{2\pi i m/n}$ for $m = 0, 1, 2, \ldots, n-1$
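The recipe $z = e^{2\pi i m/n}$ translates directly into code; for $n = 6$, for instance, the six roots include $1$ (at $m = 0$) and $-1$ (at $m = 3$):

```python
# The n solutions of z^n = 1, computed as z = e^{2*pi*i*m/n} for m = 0..n-1.
import cmath
import math

def roots_of_unity(n):
    return [cmath.exp(2j * math.pi * m / n) for m in range(n)]

roots = roots_of_unity(6)
max_err = max(abs(z**6 - 1) for z in roots)   # every root must satisfy z^6 = 1
```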
For example:
[Figure: the solutions of $z^n = 1$ for $n = 2, 3, 4, 5, 6, 7$, drawn as $n$ equally spaced points on the unit circle in the complex plane, with axes $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$.]
3.7. Complex equations

Complex equations are equations that can be reduced to the general form $F(z, \bar{z}) = 0$, where $F$ denotes some function of $z$ and $\bar{z}$. Note: $F$ can involve $|z|$, since $|z|^2 = z\bar{z}$.
Solving a complex equation means finding all $z \in \mathbb{C}$ with the property $F(z, \bar{z}) = 0$.

We have already encountered examples:
(i) $z\bar{z} - 1 = 0$:
here the solution set is the unit circle in the complex plane (i.e. infinitely many solutions)
(ii) $z^n - 1 = 0$:
here the solution set is a discrete set of $n$ points (see the previous section)

Note:
the solution sets in the complex plane of complex equations can be more diverse than those of real equations, or than the previous examples.

Lines in the complex plane:
these are found as solutions of linear equations, i.e.
$uz + v\bar{z} + w = 0$ with $u, v, w \in \mathbb{C}$ (check this)

Discrete sets of points that are not arranged around the origin:
[Figure: the solution sets of $(z - 2 - i)^6 - 2 = 0$ and of $z^5 - 2(1+i)z^4 + (1+i)z^3 + (i-1)z^2 - (2+i)z + i + 3 = 0$ in the complex plane.]

Ellipses:
these are solutions of equations of the type
$|z - u| + |z - w| - R = 0$
with $u, w \in \mathbb{C}$ and $R \in \mathbb{R}^+$
($u$ and $w$ will be the foci of the ellipse)
[Figure: the ellipse $|z - 1| + |z + 1| - 3 = 0$.]

But also unions of isolated points and curves occur:
[Figure: the solution set of $(z\bar{z} - 1)(z^3 - 2iz^2 - 2(2i+1)z - 20 - 8i) = 0$, a circle together with isolated points.]

And more ...
4. Trigonometric and hyperbolic functions
4.1. Definitions of trigonometric functions
4.1.1. Definition of sine and cosine

We need only define $\sin(\theta)$ and $\cos(\theta)$, since all other trigonometric functions are simply combinations of these elementary two. There are different but mathematically equivalent options:

Option I. geometric definition:
Consider a half-line with its one end-point in the origin. Choose it first to lie along the positive x-axis, then rotate it anti-clockwise around the origin; call the rotation angle $\theta$ (in radians). Find the coordinates $(X(\theta), Y(\theta))$ of the point where the half-line intersects the unit circle. Now define
$\cos(\theta) = X(\theta)$,   $\sin(\theta) = Y(\theta)$
[Figure: the unit circle with a half-line at angle $\theta$, indicating $\cos(\theta)$, $\sin(\theta)$ and $\tan(\theta)$.]

Option II. definition via differential equations:
We could also define the trigonometric functions as the solutions of the following equations, with specific initial values:
$\frac{d}{d\theta}\sin(\theta) = \cos(\theta)$,   $\frac{d}{d\theta}\cos(\theta) = -\sin(\theta)$,   $\cos(0) = 1$, $\sin(0) = 0$

Option III. analytic definition:
Here we start by defining the function $e^z$ for any $z \in \mathbb{C}$ via a series, and subsequently define the trigonometric functions via
$\cos(z) = \tfrac{1}{2}\left(e^{iz} + e^{-iz}\right)$,   $\sin(z) = \tfrac{1}{2i}\left(e^{iz} - e^{-iz}\right)$
(note: this will also generalize trigonometric functions to complex arguments!)
The exponential function takes the form
$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!} = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \frac{z^4}{4!} + \ldots$
(note: one defines $0^0 = 1$). We turn later in detail to existence and convergence questions for infinite series. For now we assume (rightly, as will turn out) that this infinite sum is always finite and well-behaved.

One then finds
$\sin(z) = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{(2k+1)!} = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \ldots$
$\cos(z) = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k}}{(2k)!} = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \ldots$
proof:
Let us start with $\sin(z)$:
$\sin(z) = \frac{1}{2i}\sum_{n=0}^{\infty} \frac{(iz)^n}{n!} - \frac{1}{2i}\sum_{n=0}^{\infty} \frac{(-iz)^n}{n!} = \frac{1}{i}\sum_{n=0}^{\infty} \frac{i^n z^n}{n!} \cdot \frac{1}{2}\left(1 - (-1)^n\right)$
Note:
(i) that $\frac{1}{2}(1 - (-1)^n) = 0$ for even $n$, and $\frac{1}{2}(1 - (-1)^n) = 1$ for odd $n$;
hence in the sum we only retain the terms with odd $n$
(ii) that the odd values of $n$ can be written as $n = 2k+1$, with $k = 0, 1, 2, 3, \ldots$
$\sin(z) = \frac{1}{i}\sum_{n\ \mathrm{odd}} \frac{i^n z^n}{n!} = \frac{1}{i}\sum_{k=0}^{\infty} \frac{i^{2k+1} z^{2k+1}}{(2k+1)!} = \sum_{k=0}^{\infty} \frac{(i^2)^k z^{2k+1}}{(2k+1)!} = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{(2k+1)!}$

Next we turn to $\cos(z)$:
$\cos(z) = \frac{1}{2}\sum_{n=0}^{\infty} \frac{(iz)^n}{n!} + \frac{1}{2}\sum_{n=0}^{\infty} \frac{(-iz)^n}{n!} = \sum_{n=0}^{\infty} \frac{i^n z^n}{n!} \cdot \frac{1}{2}\left(1 + (-1)^n\right)$
Note:
(i) that $\frac{1}{2}(1 + (-1)^n) = 1$ for even $n$, and $\frac{1}{2}(1 + (-1)^n) = 0$ for odd $n$;
hence in the sum we only retain the terms with even $n$
(ii) that the even values of $n$ can be written as $n = 2k$, with $k = 0, 1, 2, 3, \ldots$
$\cos(z) = \sum_{n\ \mathrm{even}} \frac{i^n z^n}{n!} = \sum_{k=0}^{\infty} \frac{i^{2k} z^{2k}}{(2k)!} = \sum_{k=0}^{\infty} \frac{(i^2)^k z^{2k}}{(2k)!} = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k}}{(2k)!}$
Figure 1. Building $\sin(x)$ as a power series, by taking more and more terms in the summation. Dashed: $\sin(x)$. Solid: $f(x) = \sum_{k=0}^{N} (-1)^k x^{2k+1}/(2k+1)!$ for different choices of $N$ (values of $N$ are indicated in italics). As $N$ increases, $f(x)$ starts to resemble $\sin(x)$ more and more, but $f(x)$ is only fully identical to $\sin(x)$ for $N \to \infty$.

Figure 2. Building $\cos(x)$ as a power series, by taking more and more terms in the summation. Dashed: $\cos(x)$. Solid: $f(x) = \sum_{k=0}^{N} (-1)^k x^{2k}/(2k)!$ for different choices of $N$ (values of $N$ are indicated in italics). As $N$ increases, $f(x)$ starts to resemble $\cos(x)$ more and more, but $f(x)$ is only fully identical to $\cos(x)$ for $N \to \infty$.
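The convergence shown in Figure 1 can be reproduced numerically: the partial sums $f_N(x)$ of the sine series approach $\sin(x)$ as $N$ grows. A minimal sketch:

```python
# Partial sums of the sine series, as in Figure 1: f_N(x) -> sin(x) as N grows.
import math

def sin_partial(x, N):
    """Sum of the first N+1 terms of the power series for sin(x)."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(N + 1))

x = 2.0
errors = [abs(sin_partial(x, N) - math.sin(x)) for N in range(8)]
```

At $x = 2$ the error drops below $10^{-8}$ already at $N = 7$.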
4.1.2. Elementary values

These can all be extracted from either (i) suitably chosen triangles, and/or (ii) projections of special points on the unit circle onto the $(x, y)$-axes, or (iii) simple transformations to reduce to one of the previous two classes:

$\theta = 0$: inspect projections of the point on the unit circle,
$\cos(0) = 1$, $\sin(0) = 0$
$\theta = \pi/4$: inspect projections of the point on the unit circle, use $\sin^2(\theta) + \cos^2(\theta) = 1$,
$\cos(\pi/4) = \sin(\pi/4) = 1/\sqrt{2}$
$\theta = \pi/6$: cut a triangle with three equal sides in half and pick a suitable corner,
$\sin(\pi/6) = \tfrac{1}{2}$, $\cos(\pi/6) = \tfrac{1}{2}\sqrt{3}$
$\theta = \pi/3$: cut a triangle with three equal sides in half and pick a suitable corner,
$\sin(\pi/3) = \tfrac{1}{2}\sqrt{3}$, $\cos(\pi/3) = 1/2$
$\theta = \pi/2$: inspect projections of the point on the unit circle,
$\cos(\pi/2) = 0$, $\sin(\pi/2) = 1$
4.1.3. Related functions

These are just short-hands for frequently occurring combinations of sine and cosine:

tangent: $\tan(\theta) = \sin(\theta)/\cos(\theta)$
(i) not defined for $\theta = \pi/2 + n\pi$ with $n \in \mathbb{Z}$, where $\cos(\theta) = 0$
(ii) $\tan(\theta + \pi) = \tan(\theta)$ for all $\theta \in \mathbb{R}$,
since $\sin(\theta + \pi) = -\sin(\theta)$ and $\cos(\theta + \pi) = -\cos(\theta)$

cotangent: $\cot(\theta) = \cos(\theta)/\sin(\theta)$
(i) not defined for $\theta = n\pi$ with $n \in \mathbb{Z}$, where $\sin(\theta) = 0$
(ii) $\cot(\theta + \pi) = \cot(\theta)$ for all $\theta \in \mathbb{R}$,
since $\sin(\theta + \pi) = -\sin(\theta)$ and $\cos(\theta + \pi) = -\cos(\theta)$

secant: $\sec(\theta) = 1/\cos(\theta)$
(i) not defined for $\theta = \pi/2 + n\pi$ with $n \in \mathbb{Z}$, where $\cos(\theta) = 0$
(ii) $\sec(\theta + 2\pi) = \sec(\theta)$ for all $\theta \in \mathbb{R}$,
since $\cos(\theta + 2\pi) = \cos(\theta)$

cosecant: $\mathrm{cosec}(\theta) = 1/\sin(\theta)$
(i) not defined for $\theta = n\pi$ with $n \in \mathbb{Z}$, where $\sin(\theta) = 0$
(ii) $\mathrm{cosec}(\theta + 2\pi) = \mathrm{cosec}(\theta)$ for all $\theta \in \mathbb{R}$,
since $\sin(\theta + 2\pi) = \sin(\theta)$
4.1.4. Inverse trigonometric functions

Recall our definitions:
The inverse $f^{-1}$ of a function $f : D \to R$ is defined by:
$f^{-1} : R \to D$
$f^{-1}(f(x)) = x$ for all $x \in D$
$f(f^{-1}(x)) = x$ for all $x \in R$
A function $f : D \to R$ is invertible if and only if:
$f(x_1) \ne f(x_2)$ for any two $x_1, x_2 \in D$ with $x_1 \ne x_2$

Inspect the graphs of the trigonometric functions:
(i) problem:
there are many $\theta_1, \theta_2 \in \mathbb{R}$ with $\theta_1 \ne \theta_2$ such that $\sin(\theta_1) = \sin(\theta_2)$ ...
(the same is true for cosine and tangent)
(ii) hence:
one can only define inverse trigonometric functions by limiting their domains to sets where any two distinct angles will give different function values!

definition of the inverse of $\sin(\theta)$: $\arcsin(x)$
We need an interval $D$ that satisfies
(i) $\sin(\theta_1) \ne \sin(\theta_2)$ for all $\theta_1, \theta_2 \in D$ with $\theta_1 \ne \theta_2$
(ii) the range corresponding to $D$ covers all possible values of sine, $R = [-1, 1]$
Answer: $D = [-\pi/2, \pi/2]$, so
$\arcsin : [-1, 1] \to [-\pi/2, \pi/2]$
$\arcsin(\sin(\theta)) = \theta$ for all $\theta \in [-\pi/2, \pi/2]$
$\sin(\arcsin(x)) = x$ for all $x \in [-1, 1]$
$\arcsin(x)$ in words:
gives the angle $\theta \in [-\pi/2, \pi/2]$ such that $\sin(\theta) = x$
[Figure: graphs of $\sin(x)$ and $\arcsin(x)$; the graph of $\arcsin(x)$ is the reflection of that of $\sin(x)$ in the line $y = x$.]

Special values:
$\arcsin(0) = 0$, $\arcsin(1/\sqrt{2}) = \pi/4$, $\arcsin(\tfrac{1}{2}) = \pi/6$, $\arcsin(\tfrac{1}{2}\sqrt{3}) = \pi/3$, $\arcsin(1) = \pi/2$
negative values via: $\arcsin(-x) = -\arcsin(x)$
definition of the inverse of $\cos(\theta)$: $\arccos(x)$
We need an interval $D$ that satisfies
(i) $\cos(\theta_1) \ne \cos(\theta_2)$ for all $\theta_1, \theta_2 \in D$ with $\theta_1 \ne \theta_2$
(ii) the range corresponding to $D$ covers all possible values of cosine: $R = [-1, 1]$
Answer: $D = [0, \pi]$, so
$\arccos : [-1, 1] \to [0, \pi]$
$\arccos(\cos(\theta)) = \theta$ for all $\theta \in [0, \pi]$
$\cos(\arccos(x)) = x$ for all $x \in [-1, 1]$
$\arccos(x)$ in words:
gives the angle $\theta \in [0, \pi]$ such that $\cos(\theta) = x$

[Figure: graphs of $\cos(x)$ and $\arccos(x)$.]

Special values:
$\arccos(0) = \pi/2$, $\arccos(\tfrac{1}{2}) = \pi/3$, $\arccos(1/\sqrt{2}) = \pi/4$, $\arccos(\tfrac{1}{2}\sqrt{3}) = \pi/6$, $\arccos(1) = 0$
negative values of $x$ via: $\arccos(-x) = \pi - \arccos(x)$
definition of the inverse of $\tan(\theta)$: $\arctan(x)$
We need an interval $D$ that satisfies
(i) $\tan(\theta_1) \ne \tan(\theta_2)$ for all $\theta_1, \theta_2 \in D$ with $\theta_1 \ne \theta_2$
(ii) the range corresponding to $D$ covers all possible values of tangent, $R = \mathbb{R}$
Answer: $D = (-\pi/2, \pi/2)$, so
$\arctan : \mathbb{R} \to (-\pi/2, \pi/2)$
$\arctan(\tan(\theta)) = \theta$ for all $\theta \in (-\pi/2, \pi/2)$
$\tan(\arctan(x)) = x$ for all $x \in \mathbb{R}$
$\arctan(x)$ in words:
gives the angle $\theta \in (-\pi/2, \pi/2)$ such that $\tan(\theta) = x$

[Figure: graphs of $\tan(x)$ and $\arctan(x)$, with horizontal asymptotes of $\arctan(x)$ at $\pm\pi/2$.]

Special values:
$\arctan(0) = 0$, $\arctan(1/\sqrt{3}) = \pi/6$, $\arctan(1) = \pi/4$, $\arctan(\sqrt{3}) = \pi/3$
negative values via: $\arctan(-x) = -\arctan(x)$
4.2. Elementary properties of trigonometric functions
4.2.1. Symmetry properties

The unit circle is invariant under:
(i) reflection in the y-axis, i.e. $(x, y) \to (-x, y)$
(ii) reflection in the x-axis, i.e. $(x, y) \to (x, -y)$
(iii) reflection in the origin, i.e. $(x, y) \to (-x, -y)$
(iv) reflection in the line $x = y$, i.e. $(x, y) \to (y, x)$

Symmetries have implications for the values of the trigonometric functions (note the geometric definition of sine and cosine):

reflection in the x-axis, i.e. $\theta \to -\theta$: we see that
$\sin(-\theta) = -\sin(\theta)$,   $\cos(-\theta) = \cos(\theta)$

reflection in the y-axis, i.e. $\theta \to \pi - \theta$: we see that
$\sin(\pi - \theta) = \sin(\theta)$,   $\cos(\pi - \theta) = -\cos(\theta)$

reflection in the origin, i.e. $\theta \to \theta + \pi$: we see that
$\sin(\theta + \pi) = -\sin(\theta)$,   $\cos(\theta + \pi) = -\cos(\theta)$

reflection in the line $x = y$, i.e. $\theta \to \pi/2 - \theta$: we see that
$\sin(\pi/2 - \theta) = \cos(\theta)$,   $\cos(\pi/2 - \theta) = \sin(\theta)$
4.2.2. Addition formulae

Trigonometric functions of sums or differences of angles:

claim: for all $\theta, \phi \in \mathbb{R}$ one has
$\sin(\theta + \phi) = \sin(\theta)\cos(\phi) + \cos(\theta)\sin(\phi)$
$\cos(\theta + \phi) = \cos(\theta)\cos(\phi) - \sin(\theta)\sin(\phi)$

proofs:
subtract left- and right-hand sides of the two identities, and show that the result is zero;
use the definitions in terms of exponentials:

$\mathrm{LHS}_1 - \mathrm{RHS}_1 = \sin(\theta + \phi) - \sin(\theta)\cos(\phi) - \cos(\theta)\sin(\phi)$
$= \frac{1}{2i}\left(e^{i(\theta+\phi)} - e^{-i(\theta+\phi)}\right) - \frac{1}{4i}\left(e^{i\theta} - e^{-i\theta}\right)\left(e^{i\phi} + e^{-i\phi}\right) - \frac{1}{4i}\left(e^{i\theta} + e^{-i\theta}\right)\left(e^{i\phi} - e^{-i\phi}\right)$
$= \frac{1}{2i}\, e^{i(\theta+\phi)} - \frac{1}{2i}\, e^{-i(\theta+\phi)} - \frac{1}{4i}\left[e^{i(\theta+\phi)} + e^{i(\theta-\phi)} - e^{-i(\theta-\phi)} - e^{-i(\theta+\phi)}\right] - \frac{1}{4i}\left[e^{i(\theta+\phi)} - e^{i(\theta-\phi)} + e^{-i(\theta-\phi)} - e^{-i(\theta+\phi)}\right]$
$= \frac{1}{2i}\, e^{i(\theta+\phi)} - \frac{1}{2i}\, e^{-i(\theta+\phi)} - \frac{1}{4i}\left[2e^{i(\theta+\phi)} - 2e^{-i(\theta+\phi)}\right] = 0$

$\mathrm{LHS}_2 - \mathrm{RHS}_2 = \cos(\theta + \phi) - \cos(\theta)\cos(\phi) + \sin(\theta)\sin(\phi)$
$= \frac{1}{2}\left(e^{i(\theta+\phi)} + e^{-i(\theta+\phi)}\right) - \frac{1}{4}\left(e^{i\theta} + e^{-i\theta}\right)\left(e^{i\phi} + e^{-i\phi}\right) + \frac{1}{4}\left(e^{i\theta} - e^{-i\theta}\right)\left(e^{i\phi} - e^{-i\phi}\right)$
$= \frac{1}{2}\, e^{i(\theta+\phi)} + \frac{1}{2}\, e^{-i(\theta+\phi)} - \frac{1}{4}\left[e^{i(\theta+\phi)} + e^{i(\theta-\phi)} + e^{-i(\theta-\phi)} + e^{-i(\theta+\phi)}\right] + \frac{1}{4}\left[e^{i(\theta+\phi)} - e^{i(\theta-\phi)} - e^{-i(\theta-\phi)} + e^{-i(\theta+\phi)}\right]$
$= \frac{1}{2}\, e^{i(\theta+\phi)} + \frac{1}{2}\, e^{-i(\theta+\phi)} - \frac{1}{4}\left[2e^{i(\theta+\phi)} + 2e^{-i(\theta+\phi)}\right] = 0$

This completes the proofs.

From the formulae for sine and cosine follow also:
(don't memorize, but derive when needed!)
$\tan(\theta + \phi) = \frac{\sin(\theta+\phi)}{\cos(\theta+\phi)} = \frac{\sin(\theta)\cos(\phi) + \cos(\theta)\sin(\phi)}{\cos(\theta)\cos(\phi) - \sin(\theta)\sin(\phi)} = \frac{\sin(\theta)/\cos(\theta) + \sin(\phi)/\cos(\phi)}{1 - \sin(\theta)\sin(\phi)/\cos(\theta)\cos(\phi)} = \frac{\tan(\theta) + \tan(\phi)}{1 - \tan(\theta)\tan(\phi)}$
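The addition formulae are easy to spot-check numerically at a few angle pairs; this is an illustration of the identities, not a replacement for the proof above:

```python
# Numerical spot-check of the addition formulae for sine and cosine.
import math

pairs = [(0.3, 1.1), (-0.7, 2.4), (1.57, -2.0), (0.0, 0.5)]
sin_err = max(abs(math.sin(t + p)
                  - (math.sin(t) * math.cos(p) + math.cos(t) * math.sin(p)))
              for t, p in pairs)
cos_err = max(abs(math.cos(t + p)
                  - (math.cos(t) * math.cos(p) - math.sin(t) * math.sin(p)))
              for t, p in pairs)
```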
observation:
we rely on the property $e^{z+w} = e^z e^w$; it is interesting to prove this property from the series representation of $e^z$ alone!
Use Newton's binomial formula and $\left(\sum_m a_m\right)\left(\sum_n b_n\right) = \sum_{n,m} a_m b_n$:
$e^{z+w} = \sum_{n=0}^{\infty} \frac{(z+w)^n}{n!} = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} z^k w^{n-k}$
Inspect these summations closer: we have ultimately $n = 0, 1, 2, \ldots$ and $k = 0, 1, 2, \ldots$, but we restrict ourselves to those combinations $(n, k)$ with $k \le n$. Hence we may also write:
$e^{z+w} = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \frac{1}{n!} \binom{n}{k} z^k w^{n-k} = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \frac{1}{n!} \cdot \frac{n!}{k!(n-k)!}\, z^k w^{n-k} = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \frac{1}{k!(n-k)!}\, z^k w^{n-k}$
Finally, switch from the index $n$ to $\ell = n - k$, so $\ell = 0, 1, 2, \ldots$:
$e^{z+w} = \sum_{k=0}^{\infty} \sum_{\ell=0}^{\infty} \frac{1}{k!} \cdot \frac{1}{\ell!}\, z^k w^{\ell} = \left(\sum_{k=0}^{\infty} \frac{z^k}{k!}\right)\left(\sum_{\ell=0}^{\infty} \frac{w^{\ell}}{\ell!}\right) = e^z \cdot e^w$
4.2.3. Applications of addition formulae

writing products of trigonometric functions as sums:
$\cos(\theta)\cos(\phi) = \tfrac{1}{2}\left[\cos(\theta + \phi) + \cos(\theta - \phi)\right]$
$\sin(\theta)\sin(\phi) = \tfrac{1}{2}\left[\cos(\theta - \phi) - \cos(\theta + \phi)\right]$
$\sin(\theta)\cos(\phi) = \tfrac{1}{2}\left[\sin(\theta + \phi) + \sin(\theta - \phi)\right]$
proofs: trivial;
just insert the appropriate addition formulae in the right-hand sides

recovering formulae for double angles:
just choose $\phi = \theta$ in the addition formulae
$\sin(2\theta) = 2\sin(\theta)\cos(\theta)$
$\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)$
$\tan(2\theta) = 2\tan(\theta)/\left[1 - \tan^2(\theta)\right]$
half-angle formulae:

$\cos(\theta) + \cos(\phi) = \cos\!\left(\tfrac{1}{2}(\theta+\phi) + \tfrac{1}{2}(\theta-\phi)\right) + \cos\!\left(\tfrac{1}{2}(\theta+\phi) - \tfrac{1}{2}(\theta-\phi)\right)$
$= \left[\cos(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) - \sin(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right] + \left[\cos(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) + \sin(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right]$
$= 2\cos(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi))$

$\sin(\theta) + \sin(\phi) = \sin\!\left(\tfrac{1}{2}(\theta+\phi) + \tfrac{1}{2}(\theta-\phi)\right) + \sin\!\left(\tfrac{1}{2}(\theta+\phi) - \tfrac{1}{2}(\theta-\phi)\right)$
$= \left[\sin(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) + \cos(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right] + \left[\sin(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) - \cos(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right]$
$= 2\sin(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi))$

$\cos(\theta) - \cos(\phi) = \cos\!\left(\tfrac{1}{2}(\theta+\phi) + \tfrac{1}{2}(\theta-\phi)\right) - \cos\!\left(\tfrac{1}{2}(\theta+\phi) - \tfrac{1}{2}(\theta-\phi)\right)$
$= \left[\cos(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) - \sin(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right] - \left[\cos(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) + \sin(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right]$
$= -2\sin(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))$

$\sin(\theta) - \sin(\phi) = \sin\!\left(\tfrac{1}{2}(\theta+\phi) + \tfrac{1}{2}(\theta-\phi)\right) - \sin\!\left(\tfrac{1}{2}(\theta+\phi) - \tfrac{1}{2}(\theta-\phi)\right)$
$= \left[\sin(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) + \cos(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right] - \left[\sin(\tfrac{1}{2}(\theta+\phi))\cos(\tfrac{1}{2}(\theta-\phi)) - \cos(\tfrac{1}{2}(\theta+\phi))\sin(\tfrac{1}{2}(\theta-\phi))\right]$
$= 2\sin(\tfrac{1}{2}(\theta-\phi))\cos(\tfrac{1}{2}(\theta+\phi))$
Claim: one can always write linear combinations $a\cos(\theta) + b\sin(\theta)$ in the form $c\sin(\theta + \phi)$, with some suitable $c, \phi \in \mathbb{R}$ and with $c \ge 0$
(let's disregard the trivial case $a = b = 0$)

Proof & construction in three steps:

(i) First note that
$c\sin(\theta + \phi) = c\sin(\theta)\cos(\phi) + c\cos(\theta)\sin(\phi)$
Hence we seek $c$ and $\phi$ such that
$a/c = \sin(\phi)$,   $b/c = \cos(\phi)$
Use $\sin^2(\phi) + \cos^2(\phi) = 1$: $a^2 + b^2 = c^2$, so $c = \sqrt{a^2 + b^2}$

(ii) Next: find $\phi$ from
$a/c = \sin(\phi)$,   $b/c = \cos(\phi)$
hence $\tan(\phi) = a/b$, so we find $\phi = \arctan(a/b) + n\pi$ with $n \in \mathbb{Z}$
(note: $\arctan(a/b)$ must be in $(-\pi/2, \pi/2)$, but there is no reason why $\phi$ should be there!)

(iii) Finally: determine $n$ from $a/c = \sin(\phi)$ and $b/c = \cos(\phi)$.
Just check the quadrant of the solution $\phi$ in the plane, by inspecting signs.
Note: $\arctan(a/b) \in (-\pi/2, \pi/2)$ is always in quadrant 1 or 4:
$b/c = 0$: $\arctan(a/b)$ doesn't exist; here $\cos(\phi) = 0$, so $\phi = \pi/2 + 2\pi n$ ($n \in \mathbb{Z}$) if $a > 0$, and $\phi = 3\pi/2 + 2\pi n$ ($n \in \mathbb{Z}$) if $a < 0$
$b/c > 0$: quadrant 1 or 4, so $\phi = \arctan(a/b) + 2\pi n$ ($n \in \mathbb{Z}$)
$b/c < 0$: quadrant 2 or 3, so $\phi = \arctan(a/b) + \pi + 2\pi n$ ($n \in \mathbb{Z}$)
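The three-step construction can be packaged into a short function. `math.atan2(a, b)` selects the correct quadrant automatically, which is exactly the case analysis of step (iii) done by hand; this is a sketch of one possible implementation:

```python
# Write a*cos(t) + b*sin(t) as c*sin(t + phi), with c >= 0.
import math

def as_c_sin(a, b):
    c = math.hypot(a, b)     # step (i): c = sqrt(a^2 + b^2)
    phi = math.atan2(a, b)   # steps (ii)+(iii): sin(phi) = a/c, cos(phi) = b/c
    return c, phi

a, b = 1.0, -1.0
c, phi = as_c_sin(a, b)
# Verify the rewriting at many angles t.
max_err = max(abs(a * math.cos(t) + b * math.sin(t) - c * math.sin(t + phi))
              for t in [k * 0.05 for k in range(-100, 101)])
```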
4.2.4. The $\tan(\theta/2)$ formulae

Objective:
write all trigonometric functions in terms of $t = \tan(\tfrac{1}{2}\theta)$
(useful later in integrals)

$\tan(\theta) = \frac{2\tan(\tfrac{1}{2}\theta)}{1 - \tan^2(\tfrac{1}{2}\theta)} = \frac{2t}{1 - t^2}$

$\cos(\theta) = \cos^2(\tfrac{1}{2}\theta) - \sin^2(\tfrac{1}{2}\theta) = 2\cos^2(\tfrac{1}{2}\theta) - 1 = \frac{2}{\cos^{-2}(\tfrac{1}{2}\theta)} - 1$
$= \frac{2}{\left[\sin^2(\tfrac{1}{2}\theta) + \cos^2(\tfrac{1}{2}\theta)\right]/\cos^2(\tfrac{1}{2}\theta)} - 1 = \frac{2}{\tan^2(\tfrac{1}{2}\theta) + 1} - 1 = \frac{2}{1 + t^2} - \frac{1 + t^2}{1 + t^2} = \frac{1 - t^2}{1 + t^2}$

$\sin(\theta) = \cos(\theta)\tan(\theta) = \frac{1 - t^2}{1 + t^2} \cdot \frac{2t}{1 - t^2} = \frac{2t}{1 + t^2}$
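The $t = \tan(\theta/2)$ substitution formulae can be checked numerically at many angles (away from the points where $\tan(\theta/2)$ is undefined); a minimal sketch:

```python
# Check of cos(theta) = (1 - t^2)/(1 + t^2) and sin(theta) = 2t/(1 + t^2),
# with t = tan(theta/2).
import math

def via_t(theta):
    t = math.tan(theta / 2)
    return (1 - t * t) / (1 + t * t), 2 * t / (1 + t * t)   # (cos, sin)

angles = [k * 0.1 for k in range(-30, 31)]   # theta/2 stays well away from pi/2
max_err = max(max(abs(via_t(th)[0] - math.cos(th)),
                  abs(via_t(th)[1] - math.sin(th)))
              for th in angles)
```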
4.3. Definitions of hyperbolic functions
4.3.1. Definition of hyperbolic sine and hyperbolic cosine

Option I. definition via differential equations:
We can define the hyperbolic sine $\sinh(z)$ and hyperbolic cosine $\cosh(z)$ as the solutions of the following equations, with specific initial values:
$\frac{d}{dz}\sinh(z) = \cosh(z)$,   $\frac{d}{dz}\cosh(z) = \sinh(z)$,   $\cosh(0) = 1$, $\sinh(0) = 0$
(note: the difference with the previous eqns defining sine and cosine is only a minus sign in the second eqn!)

[Figure: graphs of $\sinh(x)$, $\cosh(x)$ and $\tanh(x)$.]

Option II. direct analytic definition:
Having already defined $e^z$ by the series $e^z = \sum_{n=0}^{\infty} z^n/n!$, we define the hyperbolic functions via
$\cosh(z) = \tfrac{1}{2}\left(e^z + e^{-z}\right)$
$\sinh(z) = \tfrac{1}{2}\left(e^z - e^{-z}\right)$
(this also generalizes hyperbolic functions to complex arguments)

Related functions:
hyperbolic tangent: $\tanh(z) = \sinh(z)/\cosh(z)$
hyperbolic cotangent: $\coth(z) = \cosh(z)/\sinh(z)$
hyperbolic secant: $\mathrm{sech}(z) = 1/\cosh(z)$
hyperbolic cosecant: $\mathrm{cosech}(z) = 1/\sinh(z)$
55
4.3.2. General properties and special values

Properties involving both sinh and cosh:
One immediately confirms from the direct analytic definition:

    (d/dz) sinh(z) = cosh(z)        (d/dz) cosh(z) = sinh(z)

For any z ∈ |C: cosh²(z) − sinh²(z) = 1
proof:
    cosh²(z) − sinh²(z) = [½(e^z + e^{−z})]² − [½(e^z − e^{−z})]²
      = ¼(e^{2z} + 2 + e^{−2z}) − ¼(e^{2z} − 2 + e^{−2z})
      = ¼(e^{2z} + 2 + e^{−2z} − e^{2z} + 2 − e^{−2z}) = 1

Consequence:
if for z ∈ IR the two are regarded as coordinates (X, Y) in a plane,
i.e. X = cosh(z) and Y = sinh(z), then the possible points (X, Y)
define the branches of a hyperbola X² − Y² = 1 (hence the name!)
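A minimal numerical sketch (an addition, not from the notes): the exponential definitions reproduce Python's built-in cosh/sinh, and the identity cosh² − sinh² = 1 holds, here also checked for a complex argument:

```python
import math, cmath

# The exponential definitions agree with math.cosh/math.sinh,
# and cosh^2(z) - sinh^2(z) = 1, including for complex z.
for z in [0.0, 0.7, -2.3]:
    assert math.isclose(math.cosh(z), 0.5*(math.exp(z) + math.exp(-z)))
    assert math.isclose(math.cosh(z)**2 - math.sinh(z)**2, 1.0)

z = 0.4 + 1.1j
assert cmath.isclose(cmath.cosh(z)**2 - cmath.sinh(z)**2, 1.0)
print("cosh^2 - sinh^2 = 1 verified")
```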
Properties of sinh:
sinh(−z) = −sinh(z)
proof: follows directly from the definition,
    sinh(−z) = ½(e^{−z} − e^z) = −½(e^z − e^{−z}) = −sinh(z)
for z ∈ IR: sinh(z) increases monotonically
proof: differentiate the analytic definition, using (d/dz) e^{az} = a e^{az}:
    (d/dz) sinh(z) = cosh(z) = ½(e^z + e^{−z}) > 0
consider z ∈ IR:
    as z → −∞: e^z → 0 and e^{−z} = 1/e^z → ∞, hence: sinh(z) = ½(e^z − e^{−z}) → −∞
    at z = 0: sinh(0) = ½(e^0 − e^0) = 0
    as z → ∞: e^z → ∞ and e^{−z} = 1/e^z → 0, hence: sinh(z) = ½(e^z − e^{−z}) → ∞
Properties of cosh:
cosh(−z) = cosh(z)
proof: follows directly from the definition,
    cosh(−z) = ½(e^{−z} + e^z) = ½(e^z + e^{−z}) = cosh(z)
for z ∈ IR⁺: cosh(z) increases monotonically
for z ∈ IR⁻: cosh(z) decreases monotonically
proof: differentiate the analytic definition, using (d/dz) e^{az} = a e^{az}:
    (d/dz) cosh(z) = sinh(z), which is  > 0 for z > 0,  = 0 for z = 0,  < 0 for z < 0
consider z ∈ IR:
    as z → −∞: e^z → 0 and e^{−z} = 1/e^z → ∞, hence: cosh(z) = ½(e^z + e^{−z}) → ∞
    at z = 0: cosh(0) = ½(e^0 + e^0) = 1
    as z → ∞: e^z → ∞ and e^{−z} = 1/e^z → 0, hence: cosh(z) = ½(e^z + e^{−z}) → ∞
Properties of tanh:
tanh(−z) = −tanh(z)
proof: follows directly from the definition,
    tanh(−z) = sinh(−z)/cosh(−z) = −sinh(z)/cosh(z) = −tanh(z)
for z ∈ IR: tanh(z) increases monotonically
proof: differentiate the definition,
using (d/dz) sinh(z) = cosh(z) and (d/dz) cosh(z) = sinh(z):
    (d/dz) tanh(z) = (d/dz) [sinh(z)/cosh(z)]
      = [cosh(z) (d/dz) sinh(z) − sinh(z) (d/dz) cosh(z)] / cosh²(z)
      = [cosh²(z) − sinh²(z)] / cosh²(z) = 1/cosh²(z) > 0
consider z ∈ IR, and rewrite tanh(z) in two distinct ways:
    tanh(z) = sinh(z)/cosh(z) = (e^z − e^{−z})/(e^z + e^{−z})
      = (1 − e^{−2z})/(1 + e^{−2z}), so tanh(z) → 1 if z → ∞
      = (e^{2z} − 1)/(e^{2z} + 1), so tanh(z) → −1 if z → −∞
    and tanh(0) = 0
4.3.3. Connection with trigonometric functions
There is more than just similarity between trigonometric and hyperbolic functions:
when defined for complex numbers they can be expressed in terms of each other!
Let x ∈ IR:
trigonometric functions are hyperbolic functions of imaginary arguments:
    sin(x) = −i sinh(ix)        cos(x) = cosh(ix)       tan(x) = −i tanh(ix)
proofs: just write the RHS in terms of exponentials ...
hyperbolic functions are trigonometric functions of imaginary arguments:
    sinh(x) = −i sin(ix)        cosh(x) = cos(ix)       tanh(x) = −i tan(ix)
proofs: just write the RHS in terms of exponentials ...
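The cross-relations can be spot-checked numerically (an addition to the notes) with Python's complex math module:

```python
import cmath, math

# Check the trigonometric <-> hyperbolic relations at a few real arguments.
for x in [0.3, 1.2, -0.8]:
    assert cmath.isclose(math.sin(x), -1j * cmath.sinh(1j * x))
    assert cmath.isclose(math.cos(x), cmath.cosh(1j * x))
    assert cmath.isclose(math.sinh(x), -1j * cmath.sin(1j * x))
    assert cmath.isclose(math.cosh(x), cmath.cos(1j * x))
print("trigonometric <-> hyperbolic relations verified")
```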
4.3.4. Applications of connection with trigonometric functions
All previous identities for trigonometric functions
whose derivation did not rely on the argument being real (e.g. addition formulae)
translate via the above into identities for hyperbolic functions!
Addition formulae:
    sin(α + β) = sin(α) cos(β) + cos(α) sin(β)
    cos(α + β) = cos(α) cos(β) − sin(α) sin(β)
    tan(α + β) = [tan(α) + tan(β)] / [1 − tan(α) tan(β)]
give:
    sinh(α + β) = −i sin(iα) cos(iβ) − i cos(iα) sin(iβ)
                = sinh(α) cosh(β) + cosh(α) sinh(β)
    cosh(α + β) = cos(iα) cos(iβ) − sin(iα) sin(iβ)
                = cosh(α) cosh(β) + sinh(α) sinh(β)
    tanh(α + β) = [−i tan(iα) − i tan(iβ)] / [1 − tan(iα) tan(iβ)]
                = [tanh(α) + tanh(β)] / [1 + tanh(α) tanh(β)]
Formulae for double angles:
    sin(2α) = 2 sin(α) cos(α)
    cos(2α) = cos²(α) − sin²(α)
    tan(2α) = 2 tan(α)/[1 − tan²(α)]
give:
    sinh(2α) = −2i sin(iα) cos(iα) = 2 sinh(α) cosh(α)
    cosh(2α) = cos²(iα) − sin²(iα) = cosh²(α) + sinh²(α)
    tanh(2α) = −2i tan(iα)/[1 − tan²(iα)] = 2 tanh(α)/[1 + tanh²(α)]
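A quick numerical sanity check of the hyperbolic addition and double-angle formulae (this check is an addition, not part of the notes):

```python
import math

# Verify the hyperbolic addition formulae at one sample pair (a, b),
# and the cosh double-angle formula.
a, b = 0.6, -1.1
assert math.isclose(math.sinh(a + b),
                    math.sinh(a)*math.cosh(b) + math.cosh(a)*math.sinh(b))
assert math.isclose(math.cosh(a + b),
                    math.cosh(a)*math.cosh(b) + math.sinh(a)*math.sinh(b))
assert math.isclose(math.tanh(a + b),
                    (math.tanh(a) + math.tanh(b)) / (1 + math.tanh(a)*math.tanh(b)))
assert math.isclose(math.cosh(2*a), math.cosh(a)**2 + math.sinh(a)**2)
print("hyperbolic addition formulae verified")
```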
4.3.5. Inverse hyperbolic functions

[figure: graphs of sinh(x), cosh(x) and tanh(x) for x ∈ [−5, 5]]

Recall our definitions:
The inverse f⁻¹ of a function f : D → R is defined by:
    f⁻¹ : R → D
    f⁻¹(f(x)) = x for all x ∈ D
    f(f⁻¹(x)) = x for all x ∈ R
A function f : D → R is invertible if and only if:
    f(x₁) ≠ f(x₂) for any two x₁, x₂ ∈ D with x₁ ≠ x₂
Inspect the graphs of the hyperbolic functions:
(i) there are generally two x₁, x₂ ∈ IR with x₁ ≠ x₂ such that cosh(x₁) = cosh(x₂),
namely x₁ = −x₂
(in contrast to sinh : IR → IR and tanh : IR → (−1, 1), which are invertible)
(ii) hence: we must limit the domain of cosh(x) to a set where
any two distinct arguments will give different function values
definition of the inverse of sinh(x): arcsinh(y)
    arcsinh : IR → IR
    arcsinh(sinh(x)) = x for all x ∈ IR
    sinh(arcsinh(y)) = y for all y ∈ IR
arcsinh(y) in words: gives the value x ∈ IR such that sinh(x) = y
In fact: we can produce a simple formula:
y = sinh(x) :  y = ½(e^x − e^{−x})  ⇔  e^x − e^{−x} − 2y = 0  ⇔  e^{2x} − 2y e^x − 1 = 0
    ⇔  (e^x)² − 2y(e^x) − 1 = 0  ⇔  e^x = ½(2y ± √(4y² + 4)) = y ± √(y² + 1)
since e^x > 0:  e^x = y + √(y² + 1), so x = ln(y + √(y² + 1))
hence:
    arcsinh(y) = ln(y + √(y² + 1)) for all y ∈ IR

[figure: graphs of sinh(x) and arcsinh(x) for x ∈ [−4, 4]]
definition of the inverse of tanh(x): arctanh(y)
    arctanh : (−1, 1) → IR
    arctanh(tanh(x)) = x for all x ∈ IR
    tanh(arctanh(y)) = y for all y ∈ (−1, 1)
arctanh(y) in words: gives the value x ∈ IR such that tanh(x) = y
In fact: we can produce a simple formula:
y = tanh(x) :  y = (e^x − e^{−x})/(e^x + e^{−x})  ⇔  y(e^x + e^{−x}) = e^x − e^{−x}
    ⇔  y(e^{2x} + 1) = e^{2x} − 1  ⇔  1 + y = e^{2x}(1 − y)  ⇔  e^{2x} = (1 + y)/(1 − y)
    ⇔  2x = ln[(1 + y)/(1 − y)]
hence:
    arctanh(y) = ½ ln[(1 + y)/(1 − y)] for all y ∈ (−1, 1)

[figure: graphs of tanh(x) and arctanh(x) for x ∈ [−4, 4]]
definition of the inverse of cosh(x): arccosh(y)
we need an interval D that satisfies
(i) cosh(x₁) ≠ cosh(x₂) for all x₁, x₂ ∈ D with x₁ ≠ x₂
(ii) the range corresponding to D covers all possible values of cosh, R = [1, ∞)
Answer: D = [0, ∞), so
    arccosh : [1, ∞) → [0, ∞)
    arccosh(cosh(x)) = x for all x ∈ [0, ∞)
    cosh(arccosh(y)) = y for all y ∈ [1, ∞)
arccosh(y) in words: gives the value x ∈ [0, ∞) such that cosh(x) = y
In fact: we can produce a simple formula:
y = cosh(x) :  y = ½(e^x + e^{−x})  ⇔  e^x + e^{−x} − 2y = 0  ⇔  e^{2x} − 2y e^x + 1 = 0
    ⇔  (e^x)² − 2y(e^x) + 1 = 0  ⇔  e^x = ½(2y ± √(4y² − 4)) = y ± √(y² − 1)
since x ∈ [0, ∞): e^x ≥ 1, and hence e^x = y + √(y² − 1), so x = ln(y + √(y² − 1))
hence:
    arccosh(y) = ln(y + √(y² − 1)) for all y ∈ [1, ∞)

[figure: graphs of cosh(x) and arccosh(x) for x ∈ [−4, 4]]
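The three logarithmic formulas can be checked numerically (a sketch added here, not in the original notes) by verifying that they invert the corresponding hyperbolic functions:

```python
import math

# The closed-form logarithmic expressions undo sinh, tanh and cosh
# (for cosh we must restrict to x >= 0, matching the chosen domain).
arcsinh = lambda y: math.log(y + math.sqrt(y*y + 1))
arctanh = lambda y: 0.5 * math.log((1 + y) / (1 - y))
arccosh = lambda y: math.log(y + math.sqrt(y*y - 1))

for x in [0.0, 0.5, 2.0]:
    assert math.isclose(arcsinh(math.sinh(x)), x, abs_tol=1e-12)
    assert math.isclose(arctanh(math.tanh(x)), x, abs_tol=1e-12)
    assert math.isclose(arccosh(math.cosh(x)), x, abs_tol=1e-12)  # needs x >= 0
print("inverse hyperbolic formulas verified")
```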
5. Functions, limits and differentiation

5.1. Introduction

5.1.1. Rate of change, tangent of a curve
Limits resolved two long-standing problems:
(i) mechanics: how to define and find the instantaneous rate of change of a quantity
(ii) geometry: how to define and find the tangent to arbitrary curves in arbitrary points
Consider a time-dependent quantity x(t), where t ∈ IR denotes time
(e.g. the position of a particle moving along a line)
rate of change over an interval t ∈ [t₁, t₂]: the average velocity v during the interval

    v = (change in x)/(time taken) = [x(t₂) − x(t₁)] / (t₂ − t₁)

observation in the (t, x) graph:
v is the slope of the line through the points (t₁, x(t₁)) and (t₂, x(t₂))

[figure: graph of x(t), with the chord through (t₁, x(t₁)) and (t₂, x(t₂)) of slope v,
and the tangent at t₁]

instantaneous speed at time t₁:
the value of v when t₂ → t₁
result: the tangent to the curve x(t) at the point t = t₁
problems (i) (mechanics) and (ii) (geometry) are the same!
So far: only ideas, definitions and pictures ...
Calculus: find formulas for the instantaneous velocities (or tangents),
when the curves x(t) are given

note:
not all curves are written as x(t) or f(x) or y(x) ...
(liberate yourself from name conventions!)
5.1.2. Finding tangents and velocities — why we need limits
The calculation would seem obvious, e.g.
Fermat's calculation of the instantaneous slope of a function f(x) at the value x:
work out a formula for the average slope during the interval [x, x + h],

    slope = [f(x + h) − f(x)] / h

then put h = 0 in the result.
Often this simple recipe works ...
f(x) = ax + b:
    slope = [f(x + h) − f(x)]/h = ([a(x + h) + b] − [ax + b])/h = ah/h = a
    Put h = 0: slope = a
f(x) = ax² + bx + c:
    slope = [f(x + h) − f(x)]/h = ([a(x + h)² + b(x + h) + c] − [ax² + bx + c])/h
          = [a(x² + 2xh + h²) + bx + bh + c − ax² − bx − c]/h
          = [a(2xh + h²) + bh]/h = 2ax + ah + b
    Put h = 0: slope = 2ax + b
f(x) = ax^n, n ∈ IN⁺: use the binomial formula
(writing C(n,k) = n!/(k!(n−k)!) for the binomial coefficient),
    slope = [f(x + h) − f(x)]/h = [a(x + h)^n − ax^n]/h
          = (a/h) [Σ_{k=0}^{n} C(n,k) x^{n−k} h^k − x^n]
          = (a/h) Σ_{k=1}^{n} C(n,k) x^{n−k} h^k
          = a Σ_{k=1}^{n} C(n,k) x^{n−k} h^{k−1}
          = a Σ_{ℓ=0}^{n−1} C(n,ℓ+1) x^{n−1−ℓ} h^ℓ
    Put h = 0: slope = a C(n,1) x^{n−1} = a [n!/(1!(n−1)!)] x^{n−1} = a n x^{n−1}
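Fermat's recipe for the power example can be sketched numerically (an illustrative addition): the average slope over [x, x + h] approaches a·n·x^{n−1} as h shrinks.

```python
# Average slope of f(x) = a*x**n over [x, x+h]; it approaches a*n*x**(n-1).
a, n, x = 3.0, 4, 2.0

def avg_slope(h):
    return (a*(x + h)**n - a*x**n) / h

exact = a * n * x**(n - 1)  # = 3 * 4 * 2^3 = 96
for h in [1e-2, 1e-4, 1e-6]:
    print(h, avg_slope(h), abs(avg_slope(h) - exact))
assert abs(avg_slope(1e-6) - exact) < 1e-3
```

The printed error shrinks roughly linearly with h, as the surviving term a·C(n,2)x^{n−2}h predicts.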
But equally often it doesn't ...
f(x) = a^x:
    slope = [f(x + h) − f(x)]/h = (a^{x+h} − a^x)/h = a^x [(a^h − 1)/h]
    Putting h = 0 gives: slope = a^x ((a^0 − 1)/0) = a^x (0/0) ... ??
f(x) = sin(x):
    slope = [f(x + h) − f(x)]/h = [sin(x + h) − sin(x)]/h
          = [sin(x) cos(h) + cos(x) sin(h) − sin(x)]/h
          = sin(x) [(cos(h) − 1)/h] + cos(x) [sin(h)/h]
    Putting h = 0 gives: slope = sin(x)(0/0) + cos(x)(0/0) ... ??
f(x) = cos(x):
    slope = [f(x + h) − f(x)]/h = [cos(x + h) − cos(x)]/h
          = [cos(x) cos(h) − sin(x) sin(h) − cos(x)]/h
          = cos(x) [(cos(h) − 1)/h] − sin(x) [sin(h)/h]
    Putting h = 0 gives: slope = cos(x)(0/0) − sin(x)(0/0) ... ??
The problem:
One cannot set h = 0 in expressions like (a^h − 1)/h or (cos(h) − 1)/h or sin(h)/h
The solution: (Newton, Leibniz)
The correct thing to do is to take h smaller and smaller, and investigate whether
the quantity [f(x + h) − f(x)]/h then approaches a well-defined value for h → 0.
If so: that value will be the slope at the point x, to be called the derivative of f(x) at x
Notes:
(i) Newton & Leibniz had the concept, the intuitive definition of the limit,
but the exact mathematical definition of the limit was due to Cauchy, much later ...
(ii) notation for the derivative:

    f'(x) = df/dx = lim_{h→0} [f(x + h) − f(x)]/h
We saw:
(i) derivatives of functions that are sums of powers are easy to find
(ii) ergo: another important use of power series!

    e^x = Σ_{n≥0} x^n/n! = 1 + x + ½x² + ...
    sin(x) = Σ_{n≥0} (−1)^n x^{2n+1}/(2n+1)! = x − (1/6)x³ + (1/120)x⁵ + ...
    cos(x) = Σ_{n≥0} (−1)^n x^{2n}/(2n)! = 1 − ½x² + (1/24)x⁴ + ...

Remember the stumbling blocks on the previous page:

    lim_{h→0} (1/h)(a^h − 1) = lim_{h→0} (1/h)[e^{ln(a^h)} − 1] = lim_{h→0} (1/h)[e^{h ln a} − 1]
      = lim_{h→0} (1/h)[1 + h ln a + ½(h ln a)² + ... − 1]
      = lim_{h→0} [ln a + ½h(ln a)² + ...] = ln a

    lim_{h→0} (1/h)(cos(h) − 1) = lim_{h→0} (1/h)[1 − ½h² + (1/24)h⁴ + ... − 1]
      = lim_{h→0} [−½h + (1/24)h³ + ...] = 0

    lim_{h→0} (1/h) sin(h) = lim_{h→0} (1/h)[h − (1/6)h³ + (1/120)h⁵ + ...]
      = lim_{h→0} [1 − (1/6)h² + (1/120)h⁴ + ...] = 1

Hence

    (d/dx) a^x = a^x lim_{h→0} (1/h)(a^h − 1) = a^x ln(a)
    (d/dx) sin(x) = sin(x) lim_{h→0} (1/h)(cos(h) − 1) + cos(x) lim_{h→0} (1/h) sin(h) = cos(x)
    (d/dx) cos(x) = cos(x) lim_{h→0} (1/h)(cos(h) − 1) − sin(x) lim_{h→0} (1/h) sin(h) = −sin(x)
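The three limits above can be approximated numerically (a quick check added here, not part of the notes) by evaluating the difference quotients at a small h:

```python
import math

# Numerical approximation of the three limits resolved via power series.
h = 1e-6
a = 2.0
assert math.isclose((a**h - 1) / h, math.log(a), rel_tol=1e-5)   # -> ln(a)
assert abs((math.cos(h) - 1) / h) < 1e-5                          # -> 0
assert math.isclose(math.sin(h) / h, 1.0, rel_tol=1e-5)           # -> 1
print("ln(2) =", math.log(2.0), "approx:", (2.0**1e-6 - 1) / 1e-6)
```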
5.2. The limit
The limit in words:
the output value f(x) to which a function tends (if at all)
as x approaches a specific input value x₀
Some simple abbreviations and symbols:
    ∃ : there exists      ∀ : for all      ⇔ : if and only if

5.2.1. Left and right limits
The proper definition ...
The right limit: approach x₀ from the right,
notation: lim_{x→x₀⁺} f(x) or lim_{x↓x₀} f(x)
definition:
    lim_{x↓x₀} f(x) = L  ⇔  (∀ε > 0)(∃δ > 0) : |f(x) − L| < ε whenever x ∈ (x₀, x₀ + δ)
in words:
lim_{x↓x₀} f(x) = L ⇔ for all ε > 0 there exists a δ > 0 such that
|f(x) − L| < ε whenever x ∈ (x₀, x₀ + δ)
in sloppy words:
"one may get f(x) as close as one wishes to the value L
simply by lowering x sufficiently close to x₀"
The left limit: approach x₀ from the left,
notation: lim_{x→x₀⁻} f(x) or lim_{x↑x₀} f(x)
definition:
    lim_{x↑x₀} f(x) = L  ⇔  (∀ε > 0)(∃δ > 0) : |f(x) − L| < ε whenever x ∈ (x₀ − δ, x₀)
in words:
lim_{x↑x₀} f(x) = L ⇔ for all ε > 0 there exists a δ > 0 such that
|f(x) − L| < ε whenever x ∈ (x₀ − δ, x₀)
in sloppy words:
"one may get f(x) as close as one wishes to the value L
simply by raising x sufficiently close to x₀"
Notes:
(i) these limits need not always exist
(ii) if they do, then lim_{x↓x₀} f(x) might be different from lim_{x↑x₀} f(x)
(iii) lim_{x↓x₀} f(x) and lim_{x↑x₀} f(x) could exist even if f(x₀) does not exist
Examples (draw pictures!):
f(x) = 1/x : neither lim_{x↓0} f(x) nor lim_{x↑0} f(x) exist, and f(0) does not exist
f(x) = 1/x : lim_{x↓1} f(x) = lim_{x↑1} f(x) = f(1) = 1
f(x) = tanh(1/x) : lim_{x↓0} f(x) = 1, lim_{x↑0} f(x) = −1, f(0) does not exist
f(x) = x/|x| for x ≠ 0, f(0) = 0 : lim_{x↓0} f(x) = 1, lim_{x↑0} f(x) = −1, f(0) = 0
f(x) = x for x > 0, π for x = 0, x⁻¹ for x < 0 :
    lim_{x↓0} f(x) = 0, lim_{x↑0} f(x) does not exist, f(0) = π
5.2.2. Asymptotics — limits involving infinity
Approach ∞ (always from the left!),
notation: lim_{x→∞} f(x)
definition:
    lim_{x→∞} f(x) = L  ⇔  (∀ε > 0)(∃X > 0) : |f(x) − L| < ε whenever x > X
in sloppy words:
"one may get f(x) as close as one wishes to the value L
simply by making x larger and larger"
Approach −∞ (always from the right!),
notation: lim_{x→−∞} f(x)
definition:
    lim_{x→−∞} f(x) = L  ⇔  (∀ε > 0)(∃X < 0) : |f(x) − L| < ε whenever x < X
in sloppy words:
"one may get f(x) as close as one wishes to the value L
simply by making x smaller and smaller"
5.2.3. When left/right limits exist and are identical
Approach x₀ from either side,
notation: lim_{x→x₀} f(x)
definition (version 1):
    lim_{x→x₀} f(x) = L  ⇔  lim_{x↓x₀} f(x) = lim_{x↑x₀} f(x) = L
definition (version 2):
    lim_{x→x₀} f(x) = L  ⇔  (∀ε > 0)(∃δ > 0) : |f(x) − L| < ε whenever |x − x₀| < δ
in sloppy words:
"one may get f(x) as close as one wishes to the value L
simply by taking x sufficiently close to x₀, from either side"
the concept of continuity of a function
definition:
A function f is continuous at the point x₀ if
lim_{x→x₀} f(x) exists and lim_{x→x₀} f(x) = f(x₀)
continuous functions are those for which the graph can always be drawn
without lifting one's pen from the paper
Mathematical expressions may involve multiple limits,
notation conventions:
    lim_{x→x₀} lim_{y→y₀} f(x, y) = lim_{x→x₀} [ lim_{y→y₀} f(x, y) ]
    lim_{y→y₀} lim_{x→x₀} f(x, y) = lim_{y→y₀} [ lim_{x→x₀} f(x, y) ]
Note: the order in which limits are taken matters! (jargon: limits do not commute)
e.g.
f(x, y) = x² − y e^{x−y} :
    lim_{x→0} lim_{y→1} f(x, y) = lim_{x→0} (x² − e^{x−1}) = −1/e
    lim_{y→1} lim_{x→0} f(x, y) = lim_{y→1} (−y e^{−y}) = −1/e
f(x, y) = (2x − y)/(x + 3y) :
    lim_{x→0} lim_{y→0} f(x, y) = lim_{x→0} (2x/x) = 2
    lim_{y→0} lim_{x→0} f(x, y) = lim_{y→0} (−y/3y) = −1/3
f(x, y) = 1 + tanh(x + y) :
    lim_{x→∞} lim_{y→−∞} f(x, y) = lim_{x→∞} (1 − 1) = 0
    lim_{y→−∞} lim_{x→∞} f(x, y) = lim_{y→−∞} (1 + 1) = 2
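The non-commuting iterated limits of the middle example can be illustrated numerically (an addition to the notes), by sending the inner variable towards 0 first:

```python
# Iterated limits of f(x,y) = (2x - y)/(x + 3y), approximated numerically.
f = lambda x, y: (2*x - y) / (x + 3*y)

eps = 1e-9
# lim_{x->0} lim_{y->0} f : send y -> 0 first (at fixed small x), then x -> 0
inner_y_first = f(1e-3, eps)      # ~ f(x, 0) = 2 for small x
# lim_{y->0} lim_{x->0} f : send x -> 0 first (at fixed small y), then y -> 0
inner_x_first = f(eps, 1e-3)      # ~ f(0, y) = -1/3 for small y
print(inner_y_first, inner_x_first)
assert abs(inner_y_first - 2.0) < 1e-4
assert abs(inner_x_first + 1/3) < 1e-4
```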
5.2.4. Rules for limits of composite expressions
(i) If lim_{x→x₀} f(x) = a and lim_{x→x₀} g(x) = b:
    lim_{x→x₀} [f(x) + g(x)] = a + b
    lim_{x→x₀} [f(x)g(x)] = ab
    if b ≠ 0 : lim_{x→x₀} [f(x)/g(x)] = a/b
    if b = 0 and a ≠ 0 : lim_{x→x₀} [f(x)/g(x)] does not exist
(ii) If lim_{x→x₀} f(x) = a, lim_{y→a} g(y) = b, and g(a) = b:
    lim_{x→x₀} g(f(x)) = b
(iii) determine limits via "pinching" or "sandwiching":
Let there be a δ > 0 such that
    (∀x, |x − x₀| < δ) : f(x) ≤ g(x) ≤ h(x)
    lim_{x→x₀} f(x) = lim_{x→x₀} h(x) = a
then
    lim_{x→x₀} g(x) = a
5.2.5. Examples
(i) lim_{x→0} x sin(x⁻¹) = 0

[figure: graph of x sin(x⁻¹) for x ∈ [−0.5, 0.5], squeezed between the lines ±x]

proof: via sandwiching,
since always sin(...) ∈ [−1, 1],
    (∀x ∈ IR) : −|x| ≤ x sin(x⁻¹) ≤ |x|
Clearly: lim_{x→0} |x| = 0 and lim_{x→0} (−|x|) = 0,
hence lim_{x→0} x sin(x⁻¹) = 0
(ii) lim_{x→0} x⁻¹ tan(x) = 1
proof:
use lim_{x→0} x⁻¹ sin(x) = 1 and lim_{x→0} cos(x) = cos(0) = 1:
    x⁻¹ tan(x) = sin(x)/[x cos(x)] = [sin(x)/x] [1/cos(x)]
Hence
    lim_{x→0} x⁻¹ tan(x) = [lim_{x→0} sin(x)/x] [lim_{x→0} 1/cos(x)] = 1 · 1 = 1
(iii) lim_{x→0} x⁻¹ ln(1 + x) = 1
proof:
use a suitable substitution, e.g. x = e^y − 1 with y → 0,
as well as lim_{x→0} x⁻¹(e^x − 1) = 1:
    x⁻¹ ln(1 + x) = (e^y − 1)⁻¹ ln(e^y) = [(e^y − 1)/y]⁻¹
Hence
    lim_{x→0} x⁻¹ ln(1 + x) = [lim_{y→0} (e^y − 1)/y]⁻¹ = 1⁻¹ = 1
(iv) lim_{x→∞} x ln(1 + 1/x) = 1
proof:
substitute x = 1/y with y ↓ 0, and convert into limit (iii):
    x ln(1 + 1/x) = y⁻¹ ln(1 + y)
Hence
    lim_{x→∞} x ln(1 + 1/x) = lim_{y↓0} y⁻¹ ln(1 + y) = 1
(v) lim_{x↓0} x ln(x) = 0
proof:
more tricky: substitute x = e^{−y} with y → ∞ and think ...
    x ln(x) = −y e^{−y}
Proof of lim_{y→∞} y e^{−y} = 0, in three steps without using power series:
(a) claim: e^z > z for all z > 0.
proof: consider f(z) = e^z − z for z ≥ 0: f(0) = 1, and
(d/dz) f(z) = e^z − 1 > 0 for z > 0,
so f increases monotonically for z > 0, starting at f(0) = 1.
Thus e^z > z for all z > 0.
(b) now choose z = ½y: e^{y/2} > y/2, so also e^y > (y/2)² = ¼y²,
equivalently: y e^{−y} < y/(¼y²) = 4/y
(c) since also y e^{−y} ≥ 0 for y > 0, we can proceed by sandwiching:
    ∀y > 0 : 0 ≤ y e^{−y} ≤ 4/y
Since lim_{y→∞} (4/y) = 0, we have proven lim_{y→∞} y e^{−y} = 0,
and hence also the original statement.
Alternative version of the proof that lim_{y→∞} y e^{−y} = 0, using power series:
(a) since e^y = Σ_{n≥0} y^n/n!, one has e^y > y^ℓ/ℓ! for any ℓ ∈ IN and any y > 0
(b) hence we know for y > 0 that 0 ≤ y e^{−y} < ℓ! y^{1−ℓ}
(c) take any ℓ > 1 and proceed by sandwiching ...
(vi) lim_{x→∞} x⁻¹ ln(x) = 0
proof:
substitute x = 1/y with y ↓ 0, and convert into limit (v):
    lim_{x→∞} x⁻¹ ln(x) = −lim_{y↓0} y ln(y) = 0
This completes the proof.
(vii) lim_{x→∞} x^a e^{−x} = 0 for any a ∈ IR
proof:
for a ≤ 0 the claim is trivial, so we concentrate on a > 0;
we generalize the ideas used in proving (v).
Proof without using power series:
(a) we know that e^z > z for all z > 0 (as demonstrated under (v))
(b) now choose z = x/2a: e^{x/2a} > x/2a, so also e^x > (x/2a)^{2a} = (2a)^{−2a} x^{2a},
equivalently: x^a e^{−x} < x^a/[(2a)^{−2a} x^{2a}] = (2a)^{2a} x^{−a}
(c) we can proceed by sandwiching:
    ∀x > 0 : 0 ≤ x^a e^{−x} ≤ (2a)^{2a} x^{−a}
Since lim_{x→∞} x^{−a} = 0, we have proven lim_{x→∞} x^a e^{−x} = 0.
This completes the proof.
Alternative version using power series:
(a) since e^x = Σ_{n≥0} x^n/n!, one has e^x > x^ℓ/ℓ! for any ℓ ∈ IN and any x > 0
(b) hence we know for x > 0 that 0 ≤ x^a e^{−x} < ℓ! x^{a−ℓ}
(c) take any ℓ > a and proceed by sandwiching ...
5.3. Differentiation

5.3.1. Derivatives of functions
Recall the definition of the derivative of a continuous function f
(notation: df/dx, or f'(x), or dy/dx with y = f(x)):

    f'(x) = lim_{h→0} [f(x + h) − f(x)]/h

calculation of f'(x) from scratch:
first simplify (f(x + h) − f(x))/h if possible;
if putting h = 0 in the formula is allowed (Fermat),
then the result will be f'(x). Done.
If putting h = 0 is not allowed, we must determine the limit:
(i) decompose your expression into parts that have known limits,
then use the rules for limits of composite expressions,
or
(ii) substitute suitable power series for the difficult parts,
and simplify further until you can put h = 0
Note:
the formulas are used when you prove that a limit takes a value L
(they do not help in finding the candidate L in the first place ...)
Elementary derivatives found earlier:

    (d/dx) sin(x) = cos(x)        (d/dx) cos(x) = −sin(x)
    (d/dx) a^x = a^x ln(a)        (d/dx) x^n = n x^{n−1}   (n ∈ ZZ⁺)

Further examples:
(d/dx) ln(x) = 1/x for x ∈ IR⁺
proof:
    (d/dx) ln(x) = lim_{h→0} [ln(x + h) − ln(x)]/h
      = lim_{h→0} [ln(x(1 + h/x)) − ln(x)]/h
      = lim_{h→0} [ln(x) + ln(1 + h/x) − ln(x)]/h
      = lim_{h→0} ln(1 + h/x)/h          (substitute y = h/x)
      = lim_{y→0} ln(1 + y)/(xy)
      = (1/x) lim_{y→0} ln(1 + y)/y = 1/x
using limit (iii) of section 5.2.5
(d/dx) x^a = a x^{a−1} for a ∈ IR, x ≠ 0
(so far only proven for a ∈ ZZ⁺)
proof:
    (d/dx) x^a = lim_{h→0} [(x + h)^a − x^a]/h = lim_{h→0} [x^a(1 + h/x)^a − x^a]/h
      = x^a lim_{h→0} [(1 + h/x)^a − 1]/h
      = x^a lim_{h→0} h⁻¹ [e^{a ln(1 + h/x)} − 1]          (substitute y = h/x)
      = x^a lim_{y→0} (xy)⁻¹ [e^{a ln(1 + y)} − 1]
      = a x^{a−1} lim_{y→0} (ay)⁻¹ [e^{ay [y⁻¹ ln(1 + y)]} − 1]
      = a x^{a−1} lim_{z→0} z⁻¹ [e^{z [lim_{y→0} y⁻¹ ln(1 + y)]} − 1]
      = a x^{a−1} lim_{z→0} z⁻¹ [e^z − 1] = a x^{a−1}
5.3.2. Rules for derivatives of composite expressions
Objective: efficiency
(we don't want to calculate derivatives always from scratch, via the limit)
(i) The sum rule:
    y = f(x) + g(x) :   dy/dx = f'(x) + g'(x)
proof:
    (d/dx)[f(x) + g(x)] = lim_{h→0} [f(x + h) + g(x + h) − f(x) − g(x)]/h
      = lim_{h→0} [f(x + h) − f(x)]/h + lim_{h→0} [g(x + h) − g(x)]/h
      = f'(x) + g'(x)
(ii) The product rule:
    y = f(x)g(x) :   dy/dx = f'(x)g(x) + f(x)g'(x)
proof:
    (d/dx)[f(x)g(x)] = lim_{h→0} [f(x + h)g(x + h) − f(x)g(x)]/h
      = lim_{h→0} {[f(x + h) − f(x)]g(x + h) + [g(x + h) − g(x)]f(x)}/h
      = lim_{h→0} g(x + h) [f(x + h) − f(x)]/h + lim_{h→0} f(x) [g(x + h) − g(x)]/h
      = g(x)f'(x) + f(x)g'(x)
(iii) The chain rule:
    y = f(g(x)) :   dy/dx = f'(g(x)) g'(x)
proof:
    (d/dx) f(g(x)) = lim_{h→0} [f(g(x + h)) − f(g(x))]/h
      = lim_{h→0} {[f(g(x + h)) − f(g(x))]/[g(x + h) − g(x)]} · {[g(x + h) − g(x)]/h}
      = {lim_{h→0} [f(g(x + h)) − f(g(x))]/[g(x + h) − g(x)]} · {lim_{h→0} [g(x + h) − g(x)]/h}
      = g'(x) lim_{h→0} [f(g(x + h)) − f(g(x))]/[g(x + h) − g(x)]
        (substitute η = g(x + h) − g(x))
      = g'(x) lim_{η→0} [f(g(x) + η) − f(g(x))]/η = g'(x) f'(g(x))
(there are some subtleties with this version; we leave those to analysis)
(iv) The quotient rule:
    y = f(x)/g(x) :   dy/dx = [f'(x)g(x) − f(x)g'(x)] / [g(x)]²
proof:
write f(x)g⁻¹(x), use the product rule, the chain rule, and (d/dx) x⁻¹ = −x⁻²:
    (d/dx)[f(x)g⁻¹(x)] = f'(x)g⁻¹(x) + f(x) (d/dx)[g⁻¹(x)]
      = f'(x)g⁻¹(x) + f(x)g'(x)·(−1/g²(x))
      = [f'(x)g(x) − f(x)g'(x)] / g²(x)
Examples:
    (d/dx)[x² sin(x)] = x² [(d/dx) sin(x)] + sin(x) [(d/dx) x²] = x² cos(x) + 2x sin(x)
    (d/dx) f(ax) = a f'(ax)
    (d/dx) e^{x^n} = e^{x^n} [(d/dx) x^n] = n x^{n−1} e^{x^n}
    (d/dx) ln(cosh(x)) = cosh'(x) · [1/cosh(x)] = sinh(x)/cosh(x) = tanh(x)
    (d/dx) x^x = (d/dx) e^{ln(x^x)} = (d/dx) e^{x ln(x)} = e^{x ln(x)} (d/dx)[x ln(x)]
      = e^{x ln(x)} [1 · ln(x) + x (d/dx) ln(x)] = e^{x ln(x)} [ln(x) + x/x] = x^x [1 + ln(x)]
    (d/dx) f(g(h(x))) = f'(g(h(x))) (d/dx) g(h(x))
      = f'(g(h(x))) g'(h(x)) (d/dx) h(x)
      = f'(g(h(x))) g'(h(x)) h'(x)
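The x^x example can be cross-checked numerically (an illustrative addition) against a central difference quotient:

```python
import math

# Check d/dx x^x = x^x (1 + ln x) via a central difference quotient.
def f(x):
    return x**x

x, h = 1.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2*h)
exact = x**x * (1 + math.log(x))
print(numeric, exact)
assert math.isclose(numeric, exact, rel_tol=1e-6)
```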
5.3.3. Derivatives of implicit functions
The above methods apply only when we can write an
explicit formula for the function f(x) we wish to differentiate.
But: some functions are defined by a property, without a formula ...
A. functions defined as solutions of equations for points in the plane
function y(x): solution of an equation of the type F(x, y) = 0
(for a domain D where each x ∈ D is associated with only one y)
How to find y'(x)?
method:
(i) differentiate the equation, using the chain rule
(ii) then try to solve the result for dy/dx
Example 1:

[figure: the curve y + sin(y) − x = 0 in the (x, y) plane]

F(x, y) = y + sin(y) − x,
so y(x) is the solution of y + sin(y) − x = 0
(i) differentiate F(x, y):
    dy/dx + cos(y) dy/dx − 1 = 0
(ii) solve for dy/dx:
    (dy/dx)(1 + cos(y)) = 1
    dy/dx = 1/(1 + cos(y))
Example 2:

[figure: the curve y − tanh(xy) = 0 in the (x, y) plane]

F(x, y) = y − tanh(xy),
so y(x) is the solution of y − tanh(xy) = 0
(i) differentiate F(x, y):
    dy/dx − tanh'(xy) (d/dx)(xy) = 0
    dy/dx − [1/cosh²(xy)] [x dy/dx + y] = 0
(ii) solve for dy/dx:
    (dy/dx)[1 − x/cosh²(xy)] − y/cosh²(xy) = 0
    dy/dx = y/[cosh²(xy) − x]
B. functions defined as the inverse of another given function
function f⁻¹(x): solution of the equation f⁻¹(f(x)) = x for all x, with f given
Suppose we have no formula for f⁻¹(x), e.g. arcsin(x).
How to find (d/dx) f⁻¹(x)?
method:
(i) differentiate the equivalent equation f(f⁻¹(x)) = x, using the chain rule
(ii) then solve the result for (d/dx) f⁻¹(x)
This can be done generally:
    (d/dx) f(f⁻¹(x)) = 1
    [(d/dx) f⁻¹(x)] f'(f⁻¹(x)) = 1
so
    (d/dx) f⁻¹(x) = 1/f'(f⁻¹(x))
Example 1:
let f(x) = e^x, so f⁻¹(x) = ln(x) (with x > 0):
    (d/dx) ln(x) = 1/f'(ln(x)) = 1/e^{ln(x)} = 1/x
Example 2:
let f(x) = sin(x), so f⁻¹(x) = arcsin(x) (with x ∈ [−π/2, π/2]):
    (d/dx) arcsin(x) = 1/f'(arcsin(x)) = 1/cos(arcsin(x))
      = 1/√(1 − sin²(arcsin(x))) = 1/√(1 − x²)
Example 3:
let f(x) = tan(x), so f⁻¹(x) = arctan(x):
    (d/dx) arctan(x) = 1/f'(arctan(x)) = cos²(arctan(x))
      = cos²(arctan(x)) / [sin²(arctan(x)) + cos²(arctan(x))]
      = 1/[tan²(arctan(x)) + 1] = 1/(1 + x²)
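Examples 2 and 3 can be verified numerically (a check added here, not part of the notes) by comparing central difference quotients of the built-in inverse functions with the closed forms derived above:

```python
import math

# d/dx f^{-1}(x) = 1 / f'(f^{-1}(x)), checked for arcsin and arctan
# against 1/sqrt(1 - x^2) and 1/(1 + x^2).
def num_deriv(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2*h)

x = 0.4
assert math.isclose(num_deriv(math.asin, x), 1/math.sqrt(1 - x*x), rel_tol=1e-8)
assert math.isclose(num_deriv(math.atan, x), 1/(1 + x*x), rel_tol=1e-8)
print("derivatives of arcsin and arctan verified")
```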
C. functions defined parametrically
Given two functions x(t) and y(t), varying t traces out a curve in the x-y plane.
This curve defines a function y(x) implicitly
(for those t ∈ IR where each x is associated with only one y).
How to find y'(x)?
method:
(i) calculate x'(t) = dx/dt and y'(t) = dy/dt, then work out
    dy/dx = (dy/dt)/(dx/dt)
(ii) if possible, use the formulas for x(t) and y(t) to eliminate t from your result
Example 1:

[figure: the curve traced out in the (x, y) plane by the parametrization below]

for t ∈ IR:
    x(t) = t + cos(t)
    y(t) = ln(cosh(sin(t)))
(i) differentiate x(t) and y(t):
    x'(t) = 1 − sin(t)
    y'(t) = cos(t) tanh(sin(t))
so
    dy/dx = cos(t) tanh(sin(t)) / [1 − sin(t)]
(ii) cannot simplify further
Example 2:

[figure: the curve traced out in the (x, y) plane by the parametrization below]

for t ∈ IR:
    x(t) = e^t
    y(t) = tan(t)
(i) differentiate x(t) and y(t):
    x'(t) = e^t
    y'(t) = 1/cos²(t)
so
    dy/dx = e^{−t}/cos²(t)
(ii) simplify by removing t:
    dy/dx = e^{−t} [sin²(t) + cos²(t)]/cos²(t) = [tan²(t) + 1] e^{−t} = (1 + y²)/x
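Example 2 can be checked numerically at a sample parameter value (an addition to the notes): the ratio (dy/dt)/(dx/dt) agrees with the eliminated form (1 + y²)/x.

```python
import math

# dy/dx = (dy/dt)/(dx/dt) for x = e^t, y = tan(t),
# compared with the eliminated form (1 + y^2)/x.
t = 0.7
x, y = math.exp(t), math.tan(t)
dxdt, dydt = math.exp(t), 1/math.cos(t)**2
assert math.isclose(dydt/dxdt, (1 + y*y)/x, rel_tol=1e-12)
print("parametric derivative formula verified at t = 0.7")
```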
Finally ...
Are we at all allowed to put dy/dx = (dy/dt)/(dx/dt)?
(even if these derivatives exist)
Proof 1:
(i) Assume that y can indeed be written as a function (as yet unknown) of x: y(t) = f(x(t)).
We aim to calculate dy/dx = f'(x)
(ii) Apply the chain rule to y(t) = f(x(t)):
    y'(t) = f'(x(t)) x'(t)
    y'(t)/x'(t) = f'(x(t))
Hence f'(x) = (dy/dt)/(dx/dt), as claimed.
Proof 2:
Via the original definition:
    x'(t) = lim_{h→0} h⁻¹[x(t + h) − x(t)]
    y'(t) = lim_{h→0} h⁻¹[y(t + h) − y(t)]
so
    y'(t)/x'(t) = lim_{h→0} [y(t + h) − y(t)]/[x(t + h) − x(t)]
In the limit we switch from h to the new variable z = x(t + h) − x(t).
Since h → 0, also z → 0.
If we write y(t) = f(x(t)) for some unknown function f,
then y(t + h) = f(x(t + h)) = f(x(t) + z):
    lim_{h→0} [y(t + h) − y(t)]/[x(t + h) − x(t)] = lim_{z→0} [f(x(t) + z) − f(x(t))]/z = f'(x(t))
Hence
    y'(t)/x'(t) = f'(x(t))
5.3.4. Applications of the derivative: sketching graphs
standard procedure for sketching the graph of a function f(x)
(a reminder — this should be secondary school knowledge)
(i) Determine the values of x for which f is defined, and those for which it isn't
(ii) Find all stationary points of f (i.e. those x where f'(x) = 0, with zero slope)
(iii) Determine the nature of the stationary points (local minimum, local maximum, or neither)
(iv) Find the points where f(x) = 0, and calculate f'(x) at these points
(v) State whether f is even, f(−x) = f(x) for all x, or odd, f(−x) = −f(x) for all x, or neither
(vi) Sketch the graph of f over an appropriate range of x
6. Integration

6.1. Introduction

6.1.1. Area under a curve
We define the integral (including notation) in terms of areas:
definition:
The integral ∫_a^b f(x) dx is the total area in the (x, y) plane between the graph of y = f(x) and
the x-axis, counted positively for f(x) > 0 (area above the x-axis) and negatively for f(x) < 0
(area below the x-axis).
How to calculate ∫_a^b f(x) dx?
The area is easy to calculate
for functions that are built
of steps only ("staircases"):
Denote the jump points as x_ℓ,
with ℓ ∈ ZZ and x₁ < x₂ < x₃ < ...
If x ∈ [x_ℓ, x_{ℓ+1}) : f(x) = f_ℓ

[figure: a staircase function f(x), with one step on the interval [x_ℓ, x_{ℓ+1})]

Contribution to the integral
from the interval [x_ℓ, x_{ℓ+1}):
the area of the rectangle, f_ℓ (x_{ℓ+1} − x_ℓ)
(also the sign is correct!)
Total integral:
add up the contributions of
all intervals between a and b
(let x₁ = a, x_L = b):

    ∫_a^b f(x) dx = Σ_{ℓ=1}^{L−1} f_ℓ (x_{ℓ+1} − x_ℓ)
Notes:
(i) the previous formula is exact only for staircases
(ii) but we can approximate most functions to arbitrary accuracy
using staircases with smaller and smaller steps ... (limits!!)
What if f(x) is not a staircase?
Find the integral ∫_a^b f(x) dx
as the limit of the integral
for a suitable staircase,
where all steps go to zero.
Not as simple as it sounds:
(i) many staircases are possible ...
(ii) what is a "suitable" staircase?
(iii) the result ought not to depend on
your choice of staircase!
(this is where analysis gets involved)

[figure: a continuous f(x) approximated by a staircase]

The formal definition of the integral (analysis) therefore involves:
(i) consider all possible staircases where each step touches the curve of f(x)
(ii) find the area underneath each staircase in the limit where the width of the widest step goes to zero
(iii) check whether all these limits for different staircases are identical
Result in a nutshell:
If f(x) is a continuous function on [a, b], then the integral ∫_a^b f(x) dx exists.
If the integral ∫_a^b f(x) dx exists, it will be equal to the limit of any staircase approximation,
where each step touches the curve of f(x), as the width of the widest step goes to zero.
Consequence:
to calculate the integral of a continuous function
we can choose the most convenient staircase approximation
and then find its limit
Direct route (not relying on analysis results): the sandwich method using staircases
Step (i):
build staircase functions f±(x) such that
    f₋(x) ≤ f(x) ≤ f₊(x) for all x ∈ [a, b]
e.g.
    x ∈ [x_ℓ, x_{ℓ+1}) :
    f₊(x) = f₊ℓ = max_{x ∈ [x_ℓ, x_{ℓ+1})} f(x)
    f₋(x) = f₋ℓ = min_{x ∈ [x_ℓ, x_{ℓ+1})} f(x)
(let x₁ = a, x_L = b)

[figure: the lower staircase f₋(x), the function f(x), and the upper staircase f₊(x)]

we are now sure that:
    ∫_a^b f₋(x) dx ≤ ∫_a^b f(x) dx ≤ ∫_a^b f₊(x) dx
i.e.
    Σ_{ℓ=1}^{L−1} f₋ℓ (x_{ℓ+1} − x_ℓ) ≤ ∫_a^b f(x) dx ≤ Σ_{ℓ=1}^{L−1} f₊ℓ (x_{ℓ+1} − x_ℓ)
Step (ii):
take the limit where x_{ℓ+1} − x_ℓ → 0 for all ℓ,
in the two bounding areas
A₋ = Σ_{ℓ=1}^{L−1} f₋ℓ (x_{ℓ+1} − x_ℓ) and A₊ = Σ_{ℓ=1}^{L−1} f₊ℓ (x_{ℓ+1} − x_ℓ):
    lim_{step widths → 0} A₋ ≤ ∫_a^b f(x) dx ≤ lim_{step widths → 0} A₊
Step (iii):
Conclusion:
    if lim_{step widths → 0} A₋ = lim_{step widths → 0} A₊ = A, then: ∫_a^b f(x) dx = A
6.1.2. Examples of integrals calculated via staircases
Example 1:
A = ∫_0^b cos(x) dx, with b ≤ π
(so on [0, b]: cos(x) decreases monotonically, i.e. if x' > x then cos(x') < cos(x))
Method: sandwich with staircases
use tutorial exercise 39 (let m ∈ ZZ): for θ ≠ 2mπ:

    Σ_{k=0}^{n} cos(kθ) = [1 − cos((n+1)θ) − cos(θ) + cos(nθ)] / [2 − 2 cos(θ)]

Step (i):
Build staircase functions f±(x) such that
    f₋(x) ≤ cos(x) ≤ f₊(x) for all x ∈ [0, b]
e.g.
    x ∈ [x_ℓ, x_{ℓ+1}) :
    f₊(x) = f₊ℓ = max_{x ∈ [x_ℓ, x_{ℓ+1})} cos(x) = cos(x_ℓ)
    f₋(x) = f₋ℓ = min_{x ∈ [x_ℓ, x_{ℓ+1})} cos(x) = cos(x_{ℓ+1})
Choose steps of equal size, with x₁ = 0 and x_L = b:
    x_ℓ = (ℓ−1)h, with h = b/(L−1) :  x₁ = 0, x₂ = h, x₃ = 2h, ... x_L = (L−1)h = b
Upper bound to A:
    A₊ = Σ_{ℓ=1}^{L−1} f₊ℓ (x_{ℓ+1} − x_ℓ) = h Σ_{ℓ=1}^{L−1} cos(x_ℓ) = h Σ_{ℓ=1}^{L−1} cos((ℓ−1)h)
       = h Σ_{k=0}^{L−2} cos(kh) = h [1 − cos((L−1)h) − cos(h) + cos((L−2)h)] / [2 − 2 cos(h)]
eliminate L:
       = h [1 − cos(b) − cos(h) + cos(b − h)] / [2 − 2 cos(h)]
       = h [1 − cos(b) − cos(h) + cos(b) cos(h) + sin(b) sin(h)] / [2 − 2 cos(h)]
       = h {[1 − cos(b)][1 − cos(h)] + sin(b) sin(h)} / [2 − 2 cos(h)]
       = ½h [1 − cos(b)] + ½ sin(b) · h sin(h)/[1 − cos(h)]
Lower bound to A:
    A₋ = Σ_{ℓ=1}^{L−1} f₋ℓ (x_{ℓ+1} − x_ℓ) = h Σ_{ℓ=1}^{L−1} cos(x_{ℓ+1}) = h Σ_{ℓ=1}^{L−1} cos(ℓh)
       = h Σ_{k=2}^{L} cos((k−1)h) = h [Σ_{k=1}^{L−1} cos((k−1)h) + cos((L−1)h) − cos(0)]
       = A₊ + h cos((L−1)h) − h = A₊ + h[cos(b) − 1]
       = h[cos(b) − 1] + ½h[1 − cos(b)] + ½ sin(b) · h sin(h)/[1 − cos(h)]
We now know that A₋ ≤ ∫_0^b cos(x) dx ≤ A₊.
Step (ii):
take the limit h → 0 in the bounds A₊ and A₋:
    lim_{h→0} A₊ = lim_{h→0} {½h[1 − cos(b)] + ½ sin(b) · h sin(h)/[1 − cos(h)]}
      = lim_{h→0} ½ sin(b) · h sin(h)/[1 − cos(h)]
      = sin(b) lim_{h→0} ½ h sin(h)/[1 − cos²(h/2) + sin²(h/2)]
      = sin(b) lim_{h→0} h sin(h)/[4 sin²(h/2)]
      = sin(b) lim_{h→0} [sin(h)/h] · [(h/2)²/sin²(h/2)]
      = sin(b) [lim_{h→0} sin(h)/h] · [lim_{h→0} (h/2)/sin(h/2)]² = sin(b)
    lim_{h→0} A₋ = lim_{h→0} {A₊ + h[cos(b) − 1]} = lim_{h→0} A₊ = sin(b)
We conclude:
    lim_{h→0} A₋ ≤ A ≤ lim_{h→0} A₊  ⇒  sin(b) ≤ A ≤ sin(b)  ⇒  A = sin(b)
Example 2:
A = ∫_0^b cos(x) dx, with arbitrary b > 0
(cos(x) need no longer decrease monotonically, so the sandwich method becomes messy)
Method: use the most convenient approximating staircase
(i) cos(x) is a continuous function on any interval
(ii) both f₋(x) and f₊(x) in example 1 are suitable approximating staircases to cos(x)
(each step in each staircase touches the curve, and all step sizes go to zero)
Hence:
    A = lim_{h→0} A₋ = lim_{h→0} A₊ = sin(b)
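The staircase sandwich of example 1 can be sketched numerically (an addition to the notes): with equal steps on [0, b], b ≤ π, the upper and lower staircase sums both approach sin(b).

```python
import math

# Staircase sandwich for A = integral of cos(x) from 0 to b, with b <= pi.
# cos is decreasing on [0, b], so the left endpoint gives the max on each step
# (upper staircase) and the right endpoint gives the min (lower staircase).
b, L = 1.0, 100001                     # L grid points, h = b/(L-1)
h = b / (L - 1)
xs = [k * h for k in range(L - 1)]     # left endpoints of the L-1 intervals
A_plus = h * sum(math.cos(x) for x in xs)        # upper staircase sum
A_minus = h * sum(math.cos(x + h) for x in xs)   # lower staircase sum
print(A_minus, math.sin(b), A_plus)
assert A_minus <= math.sin(b) <= A_plus
assert abs(A_plus - math.sin(b)) < 1e-4
```

Shrinking h tightens both bounds onto sin(b), exactly as the limit argument above shows.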
Example 3:
A = ∫_0^b sin(x) dx, with b ≤ π/2
(so on [0, b]: sin(x) increases monotonically, i.e. if x' > x then sin(x') > sin(x))
Method: sandwich with staircases
we use tutorial exercise 39 (let ℓ ∈ ZZ): for θ ≠ 2ℓπ:

    Σ_{k=0}^{n} sin(kθ) = [−sin((n+1)θ) + sin(θ) + sin(nθ)] / [2 − 2 cos(θ)]

Step (i):
Build staircase functions f±(x) such that
    f₋(x) ≤ sin(x) ≤ f₊(x) for all x ∈ [0, b]
e.g.
    x ∈ [x_ℓ, x_{ℓ+1}) :
    f₊(x) = f₊ℓ = max_{x ∈ [x_ℓ, x_{ℓ+1})} sin(x) = sin(x_{ℓ+1})
    f₋(x) = f₋ℓ = min_{x ∈ [x_ℓ, x_{ℓ+1})} sin(x) = sin(x_ℓ)
Choose steps of equal size, with x₁ = 0 and x_L = b:
    x_ℓ = (ℓ−1)h, with h = b/(L−1) :  x₁ = 0, x₂ = h, x₃ = 2h, ... x_L = (L−1)h = b
Upper bound to A:
    A₊ = Σ_{ℓ=1}^{L−1} f₊ℓ (x_{ℓ+1} − x_ℓ) = h Σ_{ℓ=1}^{L−1} sin(x_{ℓ+1}) = h Σ_{ℓ=1}^{L−1} sin(ℓh)
       = h Σ_{k=0}^{L−1} sin(kh) = h [−sin(Lh) + sin(h) + sin((L−1)h)] / [2 − 2 cos(h)]
eliminate L:
       = h [−sin(b + h) + sin(h) + sin(b)] / [2 − 2 cos(h)]
       = h [−sin(b) cos(h) − cos(b) sin(h) + sin(h) + sin(b)] / [2 − 2 cos(h)]
       = h {[1 − cos(b)] sin(h) + [1 − cos(h)] sin(b)} / [2 − 2 cos(h)]
       = ½h sin(b) + ½[1 − cos(b)] · h sin(h)/[1 − cos(h)]
Lower bound to A:
    A₋ = Σ_{ℓ=1}^{L−1} f₋ℓ (x_{ℓ+1} − x_ℓ) = h Σ_{ℓ=1}^{L−1} sin(x_ℓ) = h Σ_{ℓ=1}^{L−1} sin((ℓ−1)h)
       = h Σ_{k=0}^{L−2} sin(kh) = h Σ_{k=1}^{L−1} sin(kh) − h sin((L−1)h) = A₊ − h sin(b)
We now know that A₋ ≤ ∫_0^b sin(x) dx ≤ A₊.
Step (ii):
take the limit h → 0 in the bounds A₊ and A₋:
    lim_{h→0} A₊ = lim_{h→0} {½h sin(b) + ½[1 − cos(b)] · h sin(h)/[1 − cos(h)]}
      = lim_{h→0} ½[1 − cos(b)] · h sin(h)/[1 − cos(h)]
      = [1 − cos(b)] lim_{h→0} ½ h sin(h)/[1 − cos²(h/2) + sin²(h/2)]
      = [1 − cos(b)] lim_{h→0} h sin(h)/[4 sin²(h/2)]
      = [1 − cos(b)] lim_{h→0} [sin(h)/h] · [(h/2)²/sin²(h/2)]
      = [1 − cos(b)] [lim_{h→0} sin(h)/h] · [lim_{h→0} (h/2)/sin(h/2)]² = 1 − cos(b)
    lim_{h→0} A₋ = lim_{h→0} {A₊ − h sin(b)} = lim_{h→0} A₊ = 1 − cos(b)
We conclude:
    lim_{h→0} A₋ ≤ A ≤ lim_{h→0} A₊  ⇒  1 − cos(b) ≤ A ≤ 1 − cos(b)  ⇒  A = 1 − cos(b)
Example 4:

    A = ∫_0^b sin(x) dx,   with arbitrary b

(sin(x) need no longer increase monotonically, so the sandwich method becomes more messy)

Method: use the most convenient approximating staircase

(i) sin(x) is a continuous function on any interval
(ii) both f_−(x) and f_+(x) in example 3 are suitable approximating staircases to sin(x)
    (each step in each staircase touches the curve, and all step sizes go to zero)

Hence:

    A = lim_{h→0} A_− = lim_{h→0} A_+ = 1 − cos(b)
Example 5:

    A = ∫_a^b x^n dx,   with 0 < a < b and n ∈ ℤ⁺

(so we know the function increases monotonically on [a, b])

Method: sandwich with staircases

We use tutorial exercise 39 (let z ∈ ℝ, z ≠ 1):

    Σ_{k=0}^{n} z^k = [ 1 − z^{n+1} ] / [ 1 − z ]
Step (i):
Build staircase functions f_±(x) such that

    f_−(x) ≤ x^n ≤ f_+(x)   for all x ∈ [a, b]

e.g. for x ∈ [x_ℓ, x_{ℓ+1}):

    f_+(x) = f_{+ℓ} = max_{x∈[x_ℓ,x_{ℓ+1})} x^n = x_{ℓ+1}^n
    f_−(x) = f_{−ℓ} = min_{x∈[x_ℓ,x_{ℓ+1})} x^n = x_ℓ^n

Choose steps of non-equal size, in a so-called geometric progression,
with x_1 = a and x_L = b:

    x_ℓ = a h^{ℓ−1},  with b = a h^{L−1} :   x_1 = a, x_2 = ah, x_3 = ah², . . . , x_L = a h^{L−1} = b

Note:
(i)   h > 1 (since we require x_{ℓ+1} > x_ℓ always)
(ii)  ln(b/a) = (L−1) ln(h), so L − 1 = [ln(b) − ln(a)]/ln(h)
(iii) here x_{ℓ+1} − x_ℓ = a h^ℓ − a h^{ℓ−1} = a h^{ℓ−1}(h − 1)
(iv)  largest step size: x_L − x_{L−1} = a h^{L−2}(h − 1) = a h^{L−1}(1 − h^{−1}) = b(1 − h^{−1})
(v)   limit of zero step sizes: h → 1
Upper bound to A:

    A_+ = Σ_{ℓ=1}^{L−1} f_{+ℓ} (x_{ℓ+1} − x_ℓ) = Σ_{ℓ=1}^{L−1} x_{ℓ+1}^n · a h^{ℓ−1}(h − 1) = a(h − 1) Σ_{ℓ=1}^{L−1} [a h^ℓ]^n h^{ℓ−1}

        = a^{n+1}(h − 1) Σ_{ℓ=1}^{L−1} h^{ℓ(n+1)−1} = a^{n+1}(1 − h^{−1}) Σ_{ℓ=1}^{L−1} [h^{n+1}]^ℓ

        = a^{n+1}(1 − h^{−1}) Σ_{k=0}^{L−2} [h^{n+1}]^{k+1} = a^{n+1} h^n (h − 1) Σ_{k=0}^{L−2} [h^{n+1}]^k

        = a^{n+1} h^n (h − 1) [ 1 − h^{(n+1)(L−1)} ] / [ 1 − h^{n+1} ]

    eliminate L :   = a^{n+1} h^n (h − 1) [ 1 − (b/a)^{n+1} ] / [ 1 − h^{n+1} ]

Lower bound to A:

    A_− = Σ_{ℓ=1}^{L−1} f_{−ℓ} (x_{ℓ+1} − x_ℓ) = Σ_{ℓ=1}^{L−1} x_ℓ^n · a h^{ℓ−1}(h − 1) = a(h − 1) Σ_{ℓ=1}^{L−1} [a h^{ℓ−1}]^n h^{ℓ−1} = h^{−n} A_+

We now know that A_− ≤ ∫_a^b x^n dx ≤ A_+.
Step (ii):
take the limit h → 1 in the bounds A_+ and A_−

    lim_{h→1} A_+ = lim_{h→1} { a^{n+1} h^n (h − 1) [ 1 − (b/a)^{n+1} ] / [ 1 − h^{n+1} ] }

        = [ a^{n+1} − b^{n+1} ] lim_{h→1} h^n (h − 1) / [ 1 − h^{n+1} ]

        = [ b^{n+1} − a^{n+1} ] { lim_{h→1} h^n } { lim_{h→1} (h − 1)/(h^{n+1} − 1) }

    substitute h = e^y with y → 0 :

        = [ b^{n+1} − a^{n+1} ] lim_{y→0} (e^y − 1)/(e^{(n+1)y} − 1)

        = [ b^{n+1} − a^{n+1} ] lim_{y→0} { [(e^y − 1)/y] · [(n+1)y/(e^{(n+1)y} − 1)] · 1/(n+1) }

        = [ b^{n+1} − a^{n+1} ]/(n+1) · { lim_{y→0} (e^y − 1)/y } { lim_{y→0} (n+1)y/(e^{(n+1)y} − 1) }

        = [ b^{n+1} − a^{n+1} ]/(n+1)

    lim_{h→1} A_− = lim_{h→1} { h^{−n} A_+ } = lim_{h→1} A_+ = [ b^{n+1} − a^{n+1} ]/(n+1)

We conclude

    lim_{h→1} A_− ≤ A ≤ lim_{h→1} A_+   ⇒   [ b^{n+1} − a^{n+1} ]/(n+1) ≤ A ≤ [ b^{n+1} − a^{n+1} ]/(n+1)

hence A = [ b^{n+1} − a^{n+1} ]/(n+1).
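The closed-form bounds derived above can be checked numerically for a concrete choice of a, b and n (the values below are arbitrary, not from the notes): as h → 1 both A_− and A_+ should squeeze the exact value (b^{n+1} − a^{n+1})/(n+1).

```python
# a, b, n and the step ratio h are free choices in this sketch
a, b, n = 1.0, 2.0, 3

def bounds(h):
    """Closed forms derived above:
    A_+ = a^{n+1} h^n (h-1) (1-(b/a)^{n+1}) / (1-h^{n+1}),  A_- = h^{-n} A_+."""
    A_plus = a**(n + 1) * h**n * (h - 1) * (1 - (b / a)**(n + 1)) / (1 - h**(n + 1))
    return A_plus / h**n, A_plus

exact = (b**(n + 1) - a**(n + 1)) / (n + 1)
lo, hi = bounds(1.0001)  # h close to 1, i.e. small step sizes
```

With h = 1.0001 the two bounds already agree with the exact answer 3.75 to about three decimal places.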
6.1.3. Fundamental theorems of calculus: integration vs differentiation

Any sane person would like to find a more efficient way to calculate integrals
than via the (tedious) construction and analysis of bounding staircases ...
Furthermore: we need explicit expressions for the relevant sums to do it ...
Help is at hand:

First fundamental theorem of calculus:
If a function f : [a, b] → ℝ is continuous on the interval [a, b],
and F : [a, b] → ℝ is another function such that
F′(x) = f(x) for all x ∈ [a, b], then

    ∫_a^b f(x) dx = F(b) − F(a)

Second fundamental theorem of calculus:
If a function f : [a, b] → ℝ is continuous on the interval [a, b],
and F : [a, b] → ℝ is another function defined by

    F(x) = ∫_a^x f(t) dt   for all x ∈ [a, b]

then F′(x) = f(x) for all x ∈ [a, b].

Consequence: a new way to calculate integrals!
We need only find a function F(x) such that F′(x) = f(x).
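Theorem I can be illustrated numerically (a sketch of my own, not part of the notes): a plain Riemann sum for ∫_0^b sin(x)dx should match F(b) − F(0) for the primitive F(x) = −cos(x).

```python
import math

def midpoint(f, a, b, N=100000):
    """Plain midpoint-rule approximation of the definite integral of f over [a, b]."""
    h = (b - a) / N
    return sum(f(a + (i + 0.5) * h) for i in range(N)) * h

# Theorem I with F(x) = -cos(x): F'(x) = sin(x), so the integral equals 1 - cos(b)
b = 2.0
numeric = midpoint(math.sin, 0.0, b)
via_theorem = 1 - math.cos(b)
```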
But where do these theorems come from?

Sketch of proofs:

Theorem II:
If a function f : [a, b] → ℝ is continuous on the interval [a, b],
and F : [a, b] → ℝ is another function defined by

    F(x) = ∫_a^x f(t) dt   for all x ∈ [a, b]

then F′(x) = f(x) for all x ∈ [a, b].

Sketch of proof:
(for differentiable f, for simplicity; the proof can be done more generally!)
Let C = max_{t∈[a,b]} |f′(t)| (the maximum slope in the absolute sense):

    dF(x)/dx = lim_{h→0} [ F(x + h) − F(x) ]/h = lim_{h→0} (1/h) { ∫_a^{x+h} f(t) dt − ∫_a^x f(t) dt }

             = lim_{h→0} (1/h) ∫_x^{x+h} f(t) dt

Easy to bound:
on t ∈ [x, x + h] one has f(x) − Ch ≤ f(t) ≤ f(x) + Ch, so

    ∫_x^{x+h} [f(x) − Ch] dt ≤ ∫_x^{x+h} f(t) dt ≤ ∫_x^{x+h} [f(x) + Ch] dt

    h[f(x) − Ch] ≤ ∫_x^{x+h} f(t) dt ≤ h[f(x) + Ch]

    lim_{h→0} [f(x) − Ch] ≤ lim_{h→0} (1/h) ∫_x^{x+h} f(t) dt ≤ lim_{h→0} [f(x) + Ch]

Hence

    F′(x) = lim_{h→0} (1/h) ∫_x^{x+h} f(t) dt = f(x)

Why was this a sketch rather than a formal proof?
(i) the restriction to differentiable f can be lifted
(ii) we should also treat the limit h → 0 from below in dF(x)/dx
Theorem I:
If a function f : [a, b] → ℝ is continuous on the interval [a, b],
and F : [a, b] → ℝ is another function such that
F′(x) = f(x) for all x ∈ [a, b], then

    ∫_a^b f(x) dx = F(b) − F(a)

Proof:
in three steps:

(i) preparation:
Since f is continuous on [a, b] the integral ∫_a^x f(t) dt exists for x ∈ [a, b].
We define G(x) = ∫_a^x f(t) dt.
Properties of G:

    from Theorem II : G′(x) = f(x) for all x ∈ [a, b]
    from definition : G(a) = 0

Clearly (using the definition of G!):  ∫_a^b f(t) dt = G(b)
We are given another function F with F′(x) = f(x) for all x ∈ [a, b].

(ii) We next show that now G(x) = F(x) − F(a) for all x ∈ [a, b].
How to show this? Call the difference between F and G: H(x) = F(x) − G(x).
It follows that

    (d/dx) H(x) = f(x) − f(x) = 0   for all x ∈ [a, b]

Hence the function H(x) has zero slope on [a, b],
i.e. its graph is horizontal, so H(x) = H(a) for all x ∈ [a, b].
Consequence: F(x) − G(x) = F(a) − G(a) for all x ∈ [a, b].
Consequence (since G(a) = 0): G(x) = F(x) − F(a) for all x ∈ [a, b].

(iii) Combine the previous two intermediate results:

    ∫_a^b f(t) dt = G(b) = F(b) − F(a)
6.1.4. Indefinite and definite integrals, and other conventions

Some final notation conventions and terminology:

Definite integral:   A = ∫_a^b f(x) dx
(the object we worked with so far, always a number)
definition: see earlier

Indefinite integral:   F(x) = ∫ f(x) dx
(i.e. without any boundary values indicated, always a function)
definition: any function F such that F′(x) = f(x)
Not uniquely defined: the solutions F differ by a constant, since (d/dx) C = 0
Other names: primitive of f, or anti-derivative of f

Doing an integral via Theorem I requires finding the primitive F(x) of f(x).
Often we emphasize this intermediate step, to show how the integral was done
and allow the reader to verify that F′(x) = f(x), by writing

    ∫_a^b f(x) dx = [F(x)]_a^b = F(b) − F(a)

So far we defined ∫_a^b f(x) dx for a ≤ b.
Generalization to a > b?
Logical choice: via Theorem I
(which will then hold for any a, b ∈ ℝ)

    a > b :   define   ∫_a^b f(x) dx = F(b) − F(a) = −(F(a) − F(b)) = −∫_b^a f(x) dx
6.2. Techniques of integration

Integration now more or less boils down to this:
when given a function f(x), find a function F(x) (the primitive) such that F′(x) = f(x).

Integration is therefore an art rather than a science:
in contrast to differentiation, where you just follow (carefully) a set of clear rules,
integration mostly relies on a mixture of skill, experience, memory and intuition.

The basic strategy is divide and conquer:
manipulate (break up, simplify) the integral until it has been reduced to
expressions of which you know (i.e. remember) the primitives.

So to be a successful integrator you need to
(i) know and practice the formal manipulation rules and other tricks
(ii) memorize a list of basic primitives
(iii) be creative

We must now
(i) agree on the list of elementary integrals that we will consider known
(ii) describe the tools for manipulation and simplification (most based on rules for
differentiation) that we can use to reduce our problem to elementary integrals
6.2.1. List of elementary integrals and general methods for reduction

Ten elementary integrals:

    f(x)                 F(x)
    -----------------    -------------------
    x^a  (a ≠ −1)        x^{a+1}/(a+1)
    x^{−1}               ln|x|
    ln(x)                x ln(x) − x
    e^x                  e^x
    cos(x)               sin(x)
    sin(x)               −cos(x)
    1/√(1 − x²)          arcsin(x)
    1/(1 + x²)           arctan(x)
    1/√(x² − 1)          arccosh(x)
    1/√(x² + 1)          arcsinh(x)
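Each table entry can be verified by differentiating F and comparing with f. A small sketch (my own, not from the notes) does this numerically with central differences for a few of the pairs, at a point where all of them are defined:

```python
import math

# (f, F) pairs from the table, checked at a point where all are defined
pairs = [
    (lambda x: x**3, lambda x: x**4 / 4),
    (lambda x: 1 / x, lambda x: math.log(abs(x))),
    (math.exp, math.exp),
    (math.cos, math.sin),
    (lambda x: 1 / (1 + x * x), math.atan),
    (lambda x: 1 / math.sqrt(x * x + 1), math.asinh),
]

def deriv_error(f, F, x, eps=1e-6):
    """|central-difference derivative of F minus f| at the point x."""
    return abs((F(x + eps) - F(x - eps)) / (2 * eps) - f(x))

errors = [deriv_error(f, F, 0.7) for f, F in pairs]
```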
Four general integration rules

let F(x) = ∫ f(x) dx, G(x) = ∫ g(x) dx, etc.

    (i)   Linearity :                    ∫ c f(x) dx = c F(x)

    (ii)  Sum rule :                     ∫ [ f(x) + g(x) ] dx = F(x) + G(x)

    (iii) Integration by parts :         ∫ f(x) G(x) dx = F(x) G(x) − ∫ g(x) F(x) dx

    (iv)  Integration by substitution :  ∫ f(x) dx = ∫ f(x(t)) (dx/dt) dt

proofs:

    (i)   consequence of:  (d/dx) [ c F(x) ] = c F′(x)
    (ii)  consequence of:  (d/dx) [ F(x) + G(x) ] = F′(x) + G′(x)
    (iii) consequence of:  (d/dx) [ F(x) G(x) ] = F′(x) G(x) + G′(x) F(x)
    (iv)  consequence of:  (d/dt) F(x(t)) = x′(t) F′(x(t))
Note:
(iii) and (iv) are usually given as identities for definite integrals:

    Integration by parts :         ∫_a^b f(x) G(x) dx = [ F(x) G(x) ]_a^b − ∫_a^b g(x) F(x) dx

    Integration by substitution :  ∫_a^b f(x) dx = ∫_{t(a)}^{t(b)} f(x(t)) (dx/dt) dt
Let us inspect the validity of (iv) more carefully:
Since ∫_a^b f(x) dx = F(b) − F(a), one knows generally that

    (d/db) ∫_a^b f(x) dx = F′(b) = f(b)

Subtract the left-hand and the right-hand side in the claimed equality,

    I(b) = LHS − RHS = ∫_a^b f(x) dx − ∫_{t(a)}^{t(b)} f(x(t)) (dx/dt) dt

Then calculate dI/db, via the chain rule:

    dI/db = (d/db) ∫_a^b f(x) dx − (d/db) ∫_{t(a)}^{t(b)} f(x(t)) x′(t) dt

          = f(b) − t′(b) (d/dt(b)) ∫_{t(a)}^{t(b)} f(x(t)) x′(t) dt

          = f(b) − t′(b) [ f(x(t)) x′(t) ]_{t=t(b)}

          = f(b) − f(b) t′(b) x′(t(b)) = f(b) − f(b) [ (dt/dx)(dx/dt) ]_{x=b} = 0

Thus I(b) is independent of b. Hence we may choose b = a to calculate it, i.e.

    LHS − RHS = I(a) = ∫_a^a f(x) dx − ∫_{t(a)}^{t(a)} f(x(t)) (dx/dt) dt = 0 − 0 = 0

This completes the proof that LHS = RHS for all a and b,
i.e. the claimed identity (iv) is indeed true.

Warning:
changing variables via substitution must involve a one-to-one transformation
(i.e. a unique t for every x, and vice versa)

Example: suppose we are not careful.
Clearly ∫_{−1}^{1} dx = 2. Now do the integral via substitution of y = x², with dy/dx = 2x:

    ∫_{−1}^{1} dx = ∫_{y=1}^{y=1} dy/(2x(y)) = 0  . . .
6.2.2. Examples: integration by substitution

• ∫ sin^m(x) cos(x) dx :
substitute t = sin(x), so dt/dx = cos(x) i.e. dt = cos(x) dx

    ∫ sin^m(x) cos(x) dx = ∫ t^m dt = t^{m+1}/(m+1) = sin^{m+1}(x)/(m+1)

• ∫ e^{x²} x dx :
substitute t = x², so dt/dx = 2x i.e. dt = 2x dx

    ∫ e^{x²} x dx = (1/2) ∫ e^t dt = (1/2) e^t = (1/2) e^{x²}

• ∫ (1 − x²)^{−1/2} dx :
try to use the relation cos²(θ) + sin²(θ) = 1;
substitute x = sin(θ) with θ ∈ [−π/2, π/2], so dx/dθ = cos(θ) i.e. dx = cos(θ) dθ

    ∫ dx/√(1 − x²) = ∫ cos(θ) dθ/√(1 − sin²(θ)) = ∫ cos(θ) dθ/cos(θ) = ∫ dθ = θ = arcsin(x)

• ∫ (1 + x²)^{−1/2} dx :
try to use the relation cosh²(θ) − sinh²(θ) = 1;
substitute x = sinh(θ), so dx/dθ = cosh(θ) i.e. dx = cosh(θ) dθ

    ∫ dx/√(1 + x²) = ∫ cosh(θ) dθ/√(1 + sinh²(θ)) = ∫ cosh(θ) dθ/cosh(θ) = ∫ dθ = θ = arcsinh(x)

• ∫ (1 + x²)^{−1} dx :
try to use the relation cos²(θ) + sin²(θ) = 1;
substitute x = tan(θ) with θ ∈ (−π/2, π/2), so dx/dθ = cos^{−2}(θ) i.e. dx = cos^{−2}(θ) dθ

    ∫ dx/(1 + x²) = ∫ cos^{−2}(θ) dθ/(1 + tan²(θ)) = ∫ dθ/(cos²(θ) + sin²(θ)) = ∫ dθ = θ = arctan(x)

• ∫ (1 − x²)^{−1} dx :
try to use the relation cosh²(θ) − sinh²(θ) = 1;
substitute x = tanh(θ), so dx/dθ = cosh^{−2}(θ) i.e. dx = cosh^{−2}(θ) dθ

    ∫ dx/(1 − x²) = ∫ cosh^{−2}(θ) dθ/(1 − tanh²(θ)) = ∫ dθ/(cosh²(θ) − sinh²(θ)) = ∫ dθ = θ = arctanh(x)
• ∫_2^3 (x² + 2x)^{−1/2} dx :
first convert to a form similar to the above, using x² + 2x = (x + 1)² − 1

    ∫_2^3 dx/√(x² + 2x) = ∫_2^3 dx/√((x + 1)² − 1)

try to use the relation cosh²(θ) − sinh²(θ) = 1;
substitute x + 1 = cosh(θ), so dx/dθ = sinh(θ) i.e. dx = sinh(θ) dθ

    ∫_2^3 dx/√(x² + 2x) = ∫_{arccosh(3)}^{arccosh(4)} sinh(θ) dθ/√(cosh²(θ) − 1)

        = ∫_{arccosh(3)}^{arccosh(4)} sinh(θ) dθ/sinh(θ) = ∫_{arccosh(3)}^{arccosh(4)} dθ

        = [ θ ]_{arccosh(3)}^{arccosh(4)} = arccosh(4) − arccosh(3)

• ∫_{−3/2}^{−1/2} (−x² − 2x)^{−1/2} dx :
first convert to a form similar to the above, using −x² − 2x = 1 − (x + 1)²

    ∫_{−3/2}^{−1/2} dx/√(−x² − 2x) = ∫_{−3/2}^{−1/2} dx/√(1 − (x + 1)²)

try to use the relation cos²(θ) + sin²(θ) = 1;
substitute x + 1 = sin(θ) with θ ∈ [−π/2, π/2], so dx/dθ = cos(θ) i.e. dx = cos(θ) dθ

    ∫_{−3/2}^{−1/2} dx/√(−x² − 2x) = ∫_{arcsin(−1/2)}^{arcsin(1/2)} cos(θ) dθ/√(1 − sin²(θ))

        = ∫_{arcsin(−1/2)}^{arcsin(1/2)} cos(θ) dθ/cos(θ) = ∫_{arcsin(−1/2)}^{arcsin(1/2)} dθ

        = [ θ ]_{arcsin(−1/2)}^{arcsin(1/2)} = arcsin(1/2) − arcsin(−1/2) = π/6 + π/6 = π/3

• ∫ (6 + 4x − 2x²)^{−1/2} dx :
first convert to a form similar to the above, using

    6 + 4x − 2x² = −2(x² − 2x − 3) = −2((x − 1)² − 4) = 8 [ 1 − ((1/2)x − 1/2)² ]

    ∫ dx/√(6 + 4x − 2x²) = (1/(2√2)) ∫ dx/√(1 − ((1/2)x − 1/2)²)

try to use the relation cos²(θ) + sin²(θ) = 1;
substitute (1/2)x − 1/2 = sin(θ) with θ ∈ [−π/2, π/2], so (1/2) dx/dθ = cos(θ) i.e. dx = 2 cos(θ) dθ

    ∫ dx/√(6 + 4x − 2x²) = (1/(2√2)) ∫ 2 cos(θ) dθ/√(1 − sin²(θ)) = (1/√2) ∫ cos(θ) dθ/cos(θ)

        = θ/√2 = (1/√2) arcsin((1/2)x − 1/2)
• ∫ tan(x) dx :
write tan(x) = sin(x)/cos(x) and substitute cos(x) = t, so dt/dx = −sin(x) i.e. dt = −sin(x) dx

    ∫ tan(x) dx = ∫ sin(x) dx/cos(x) = −∫ dt/t = −ln|t| = −ln|cos(x)|

• ∫ sin^{−1}(x) dx  (i.e. ∫ dx/sin(x)) :
first convert into a form similar to earlier examples, using sin(x) = 2 sin(x/2) cos(x/2).
Then substitute cos(x/2) = t, so dt/dx = −(1/2) sin(x/2) i.e. dt = −(1/2) sin(x/2) dx:

    ∫ dx/sin(x) = ∫ dx/[ 2 sin(x/2) cos(x/2) ] = −∫ dt/[ sin²(x/2) t ] = −∫ dt/[ t(1 − cos²(x/2)) ] = −∫ dt/[ t(1 − t²) ]

substitute u = t^{−2}, so du/dt = −2t^{−3} i.e. dt = −(1/2) t³ du

    ∫ dx/sin(x) = −∫ dt/[ t(1 − t²) ] = (1/2) ∫ t³ du/[ t(1 − t²) ] = (1/2) ∫ du/[ u(1 − u^{−1}) ] = (1/2) ∫ du/(u − 1)

        = (1/2) ln|u − 1| = ln| t^{−2} − 1 |^{1/2} = ln| cos^{−2}(x/2) − 1 |^{1/2}

        = ln| (1 − cos²(x/2))/cos²(x/2) |^{1/2} = ln|tan(x/2)|
6.2.3. Examples: integration by parts

This method works when the function to be integrated is the product of two factors, one of which
does not get more complicated upon repeated differentiation (e.g. trigonometric or exponential
functions), with the other becoming simpler upon repeated differentiation (e.g. powers):

• ∫ arcsin(x) dx :
first substitute x = sin(θ), so dx/dθ = cos(θ) i.e. dx = cos(θ) dθ, then integrate by parts

    ∫ arcsin(x) dx = ∫ arcsin(sin(θ)) cos(θ) dθ = ∫ θ cos(θ) dθ

        = θ sin(θ) − ∫ sin(θ) (dθ/dθ) dθ = θ sin(θ) − ∫ sin(θ) dθ

        = θ sin(θ) + cos(θ) = x arcsin(x) + √(1 − x²)

• ∫ x² cos(2x) dx :
integrate by parts twice, to get rid of the annoying factor x²

    ∫ x² cos(2x) dx = (1/2) sin(2x) x² − ∫ (1/2) sin(2x) [ (d/dx) x² ] dx

        = (1/2) sin(2x) x² − ∫ x sin(2x) dx

        = (1/2) sin(2x) x² − { −(1/2) cos(2x) x + ∫ (1/2) cos(2x) [ (d/dx) x ] dx }

        = (1/2) sin(2x) x² + (1/2) cos(2x) x − ∫ (1/2) cos(2x) dx

        = (1/2) sin(2x) x² + (1/2) cos(2x) x − (1/4) sin(2x)

• ∫ x e^{−x} dx :
integrate by parts to eliminate the factor x:

    ∫ x e^{−x} dx = −e^{−x} x − ∫ (−e^{−x}) [ (d/dx) x ] dx = −e^{−x} x + ∫ e^{−x} dx = −(x + 1) e^{−x}

Sometimes it is helpful to create a form of two factors where initially there was just one,
by inserting 1 = (d/dx) x, e.g.

• ∫ ln(x) dx :

    ∫ ln(x) dx = ∫ ln(x) [ (d/dx) x ] dx = x ln(x) − ∫ x [ (d/dx) ln(x) ] dx

        = x ln(x) − ∫ x · x^{−1} dx = x ln(x) − ∫ dx = x ln(x) − x

• ∫ arctan(x) dx :
first insert 1 = (d/dx) x, then integrate by parts, then substitute x² = y, so dy/dx = 2x

    ∫ arctan(x) dx = ∫ arctan(x) [ (d/dx) x ] dx = x arctan(x) − ∫ x [ (d/dx) arctan(x) ] dx

        = x arctan(x) − ∫ x dx/(1 + x²) = x arctan(x) − (1/2) ∫ dy/(1 + y)

        = x arctan(x) − (1/2) ln|1 + y| = x arctan(x) − (1/2) ln(1 + x²)
6.2.4. Further tricks: recursion formulae

Certain families of integrals can be calculated by a series of integrations by parts, leading in a
natural way to so-called recursion formulae. This is best explained directly via examples, one
with a family of definite integrals and one with a family of indefinite ones:

Example 1:

    I_n = ∫_0^∞ x^n e^{−x} dx,   n ∈ ℤ⁺

Integration by parts (using lim_{x→∞} x^n e^{−x} = 0):

    I_n = [ −x^n e^{−x} ]_0^∞ + ∫_0^∞ e^{−x} [ (d/dx) x^n ] dx = n ∫_0^∞ e^{−x} x^{n−1} dx = n I_{n−1}

Recursion formula: the expression for I_n in terms of I_{n−1}.
Further iteration of the recursion formula: I_n = n(n−1) I_{n−2} = . . . = n! I_0
Thus we need only calculate I_0 to know all I_n:

    I_0 = ∫_0^∞ e^{−x} dx = [ −e^{−x} ]_0^∞ = 0 − (−1) = 1   hence :   I_n = n!
Example 2:

    I_{n,m}(x) = ∫ sin^n(x) cos^m(x) dx,   n, m ∈ ℤ

Just using cos²(x) + sin²(x) = 1 already gives

    I_{n+2,m}(x) + I_{n,m+2}(x) = ∫ sin^n(x) [ sin²(x) + cos²(x) ] cos^m(x) dx = I_{n,m}(x)

Integration by parts:

    I_{n,m}(x) = ∫ [ −sin^{n−1}(x)/(m+1) ] [ (d/dx) cos^{m+1}(x) ] dx

        = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + 1/(m+1) ∫ [ (d/dx) sin^{n−1}(x) ] cos^{m+1}(x) dx

        = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + (n−1)/(m+1) ∫ sin^{n−2}(x) cos^{m+2}(x) dx

        = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + (n−1)/(m+1) · I_{n−2,m+2}(x)

        = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + (n−1)/(m+1) · { I_{n−2,m}(x) − I_{n,m}(x) }

Solving for I_{n,m}(x) then gives

    I_{n,m}(x) [ 1 + (n−1)/(m+1) ] = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + (n−1)/(m+1) · I_{n−2,m}(x)

    I_{n,m}(x) (m+n)/(m+1) = −sin^{n−1}(x) cos^{m+1}(x)/(m+1) + (n−1)/(m+1) · I_{n−2,m}(x)

Final result:

    I_{n,m}(x) = −sin^{n−1}(x) cos^{m+1}(x)/(m+n) + (n−1)/(m+n) · I_{n−2,m}(x)

We need only calculate I_{1,m}(x) and I_{0,m}(x) to know all I_{n,m}(x).
The integrals I_{0,m}(x) will be done as a tutorial exercise; the integrals I_{1,m}(x) are:

    I_{1,m}(x) = ∫ sin(x) cos^m(x) dx    put cos(x) = y, so dy/dx = −sin(x)

        = −∫ y^m dy = −y^{m+1}/(m+1) = −cos^{m+1}(x)/(m+1)
Special case of the previous calculation:
(choose m = 0 in the above result)

    I_n(x) = ∫ sin^n(x) dx,   n ∈ ℤ

Recursion formula:

    I_n(x) = −sin^{n−1}(x) cos(x)/n + (n−1)/n · I_{n−2}(x),   with I_0(x) = x
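The recursion lends itself directly to code. A minimal sketch (not from the notes) evaluates the primitive recursively and uses it for the definite integral ∫_0^π sin⁴(x) dx, whose exact value 3π/8 follows from the recursion applied twice.

```python
import math

def I_sin(n, x):
    """Primitive of sin^n(x) via I_n(x) = -sin^{n-1}(x)cos(x)/n + (n-1)/n * I_{n-2}(x)."""
    if n == 0:
        return x
    if n == 1:
        return -math.cos(x)  # base case: primitive of sin(x)
    return -math.sin(x) ** (n - 1) * math.cos(x) / n + (n - 1) / n * I_sin(n - 2, x)

val = I_sin(4, math.pi) - I_sin(4, 0.0)  # ∫_0^π sin^4(x) dx = 3π/8
```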
Note 1:
The result of example 1 is used to generalize the concept of factorials n! to real-valued
(and later even complex) numbers z, by using the integral as a definition:

    for all z ∈ ℝ (with z > −1, so that the integral converges) :   z! = ∫_0^∞ x^z e^{−x} dx

This prompted the introduction of the so-called gamma function Γ(z) = ∫_0^∞ x^{z−1} e^{−x} dx.
Thus Γ(n) = (n−1)! for integer n.
Note 2:
As an alternative in the case of integrals with trigonometric functions
one could write the latter in complex form, e.g.

    ∫ sin^n(x) dx = (2i)^{−n} ∫ [ e^{ix} − e^{−ix} ]^n dx    (use Newton's binomial formula)

        = (2i)^{−n} Σ_{m=0}^{n} (n choose m) (−1)^{n−m} ∫ e^{ix(2m−n)} dx

        = if n even :  (2i)^{−n} { Σ_{m=0, m≠n/2}^{n} (n choose m) (−1)^m e^{ix(2m−n)}/[i(2m−n)] + (−1)^{n/2} (n choose n/2) x }

          if n odd :   (2i)^{−n} Σ_{m=0}^{n} (n choose m) (−1)^{m+1} e^{ix(2m−n)}/[i(2m−n)]

For instance:

    ∫ sin²(x) dx = (2i)^{−2} { Σ_{m=0, m≠1}^{2} (2 choose m) (−1)^m e^{ix(2m−2)}/[i(2m−2)] − (2 choose 1) x }

        = −(1/4) { −(2 choose 0) e^{−2ix}/(2i) + (2 choose 2) e^{2ix}/(2i) − (2 choose 1) x }

        = −(1/4) { e^{2ix}/(2i) − e^{−2ix}/(2i) − 2x }

        = −(1/4) sin(2x) + (1/2) x

Check against the recursion method of example 2:

    ∫ sin²(x) dx = I_2(x) = −(1/2) sin(x) cos(x) + (1/2) I_0(x) = −(1/2) sin(x) cos(x) + (1/2) x

        = −(1/4) sin(2x) + (1/2) x
6.2.5. Further tricks: differentiation with respect to a parameter

Remember: any dirty trick that leads to a proposal for a primitive is allowed,
provided one verifies correctness of the proposed answer a posteriori via differentiation!

Now consider integrals that involve a further parameter a ∈ ℝ:

    I(x, a) = ∫ f(x, a) dx

For sufficiently well-behaved functions it is true that

    (d/da) ∫ f(x, a) dx = ∫ (d/da) f(x, a) dx

But not always ...
(the integral is a limit, the derivative is a limit, and we know that the order of limits matters!)

Our strategy:
(i) assume that moving the derivative d/da inside or outside the integral over x is allowed
(ii) use that as a tool to calculate the integral
(iii) check whether the assumption was correct by explicit differentiation of the result

We illustrate the procedure via examples:
(notation: (d/da)^n means differentiate n times with respect to a)
Example 1:

    I(x, a) = ∫ x^n e^{ax} dx = ∫ (d/da)^n e^{ax} dx = (d/da)^n ∫ e^{ax} dx = (d/da)^n [ a^{−1} e^{ax} ]

We get all the integrals we need by differentiating a simple expression.
For instance:

    ∫ x e^{ax} dx = (d/da) [ a^{−1} e^{ax} ] = ( x/a − 1/a² ) e^{ax}

    ∫ x² e^{ax} dx = (d/da)² [ a^{−1} e^{ax} ] = (d/da) [ ( x/a − 1/a² ) e^{ax} ] = ( x²/a − 2x/a² + 2/a³ ) e^{ax}

Differentiation confirms that these primitives are correct.
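That final verification step can itself be done numerically. The sketch below (my own, with an arbitrary parameter value) differentiates the proposed primitive of x e^{ax} by central differences and compares with the integrand:

```python
import math

a = 0.8  # arbitrary nonzero parameter value for this check

def F(x):
    """Proposed primitive of x e^{ax}: (x/a - 1/a²) e^{ax}."""
    return (x / a - 1 / a**2) * math.exp(a * x)

def f(x):
    return x * math.exp(a * x)

eps, x0 = 1e-6, 1.3
err = abs((F(x0 + eps) - F(x0 - eps)) / (2 * eps) - f(x0))  # should be tiny if F' = f
```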
Example 2:
Here we exploit the relations

    (d/da) 1/(x² − a) = 1/(x² − a)²,   (d/da)² 1/(x² − a) = 1·2/(x² − a)³,   (d/da)³ 1/(x² − a) = 1·2·3/(x² − a)⁴,   etc

More generally:

    (d/da)^{n−1} 1/(x² − a) = (n−1)!/(x² − a)^n

Hence, for a ≠ 0:

    I(x, a) = ∫ dx/(x² − a)^n = ∫ { 1/(n−1)! (d/da)^{n−1} 1/(x² − a) } dx = 1/(n−1)! (d/da)^{n−1} ∫ dx/(x² − a)

Again we obtain all the (complicated) integrals we need
by differentiating some simple basic ones:

    a > 0 :  ∫ dx/(x² − a) = (1/a) ∫ dx/[ (x/√a)² − 1 ]    put x = y√a

        = −(1/√a) ∫ dy/(1 − y²) = −(1/√a) arctanh(y) = −arctanh(x/√a)/√a

        = −(1/(2√a)) ln[ (1 + x/√a)/(1 − x/√a) ] = (1/(2√a)) { ln(√a − x) − ln(√a + x) }

    a < 0 :  ∫ dx/(x² − a) = (1/|a|) ∫ dx/[ (x/√|a|)² + 1 ]    put x = y√|a|

        = (1/√|a|) ∫ dy/(1 + y²) = (1/√|a|) arctan(y) = arctan(x/√|a|)/√|a|

For instance:

    ∫ dx/(x² + 1)² = [ (d/da) ∫ dx/(x² − a) ]_{a=−1} = [ (d/da) arctan(x/√(−a))/√(−a) ]_{a=−1}

put z = (−a)^{−1/2} and use the chain rule:

    ∫ dx/(x² + 1)² = [ (dz/da) · (d/dz) { z arctan(zx) } ]_{a=−1}

        = [ { (1/2)(−a)^{−3/2} } { arctan(zx) + zx/(1 + z²x²) } ]_{a=−1}

        = (1/2) arctan(x) + x/(2(1 + x²))

Correctness is confirmed by explicit differentiation.
6.2.6. Further tricks: partial fractions

These come into play when integrating

    ∫ [ p(x)/q(x) ] dx,   p(x), q(x) : polynomials

The method is based on the following two facts:

• p(x) can always be written as p(x) = s(x) q(x) + r(x)   (r(x): the remainder)
  with s(x), r(x) other polynomials, and r(x) of lower order than q(x).
  This gives

    ∫ [ p(x)/q(x) ] dx = ∫ s(x) dx + ∫ [ r(x)/q(x) ] dx    (first part : easy!)

• If q(x) is of the following form, with α_i, β_j, γ_j ∈ ℝ (all different),

    q(x) = Π_{i=1}^{n} (x + α_i)^{a_i} Π_{j=1}^{m} (x² + β_j x + γ_j)^{b_j}    with a_i, b_j ∈ ℤ⁺

  (remember: Π_{i=1}^{n} k_i = k_1 k_2 . . . k_{n−1} k_n) and the order of r(x) is less than that of q(x), then
  there always exist constants A_{ik}, B_{jℓ}, C_{jℓ} ∈ ℝ such that

    r(x)/q(x) = Σ_{i=1}^{n} Σ_{k=1}^{a_i} A_{ik}/(x + α_i)^k + Σ_{j=1}^{m} Σ_{ℓ=1}^{b_j} (B_{jℓ} x + C_{jℓ})/(x² + β_j x + γ_j)^ℓ

  (the latter are called partial fractions)

In combination:
our initial integral can always be written as

    ∫ [ p(x)/q(x) ] dx = ∫ s(x) dx + Σ_{i=1}^{n} Σ_{k=1}^{a_i} A_{ik} ∫ dx/(x + α_i)^k
                         + Σ_{j=1}^{m} Σ_{ℓ=1}^{b_j} ∫ (B_{jℓ} x + C_{jℓ}) dx/(x² + β_j x + γ_j)^ℓ

• ∫ s(x) dx with s(x) a polynomial: easy
• ∫ (x + α)^{−k} dx: easy
• ∫ (Bx + C)/(x² + βx + γ)^ℓ dx: do-able ...

Our general strategy for integrating a ratio of polynomials:
convert the ratio into the above form.
Note 1:
The simplest case of the above is the following.
If q(x) is of the following form, with α_i ∈ ℝ (all different),

    q(x) = Π_{i=1}^{n} (x + α_i)    with α_i ∈ ℝ

and the order of r(x) is less than that of q(x), then there are constants A_i ∈ ℝ such that

    r(x)/q(x) = Σ_{i=1}^{n} A_i/(x + α_i)

and our initial integral can always be written as

    ∫ [ p(x)/q(x) ] dx = ∫ s(x) dx + Σ_{i=1}^{n} A_i ln|x + α_i|

The constants A_j can be found upon multiplying our initial equation by x + α_j,
followed by setting x = −α_j:

    r(x)/Π_{i=1}^{n} (x + α_i) = Σ_{i=1}^{n} A_i/(x + α_i)

    ⇒   r(x)/Π_{i=1, i≠j}^{n} (x + α_i) = A_j + Σ_{i=1, i≠j}^{n} A_i (x + α_j)/(x + α_i)

    now put x = −α_j :    A_j = r(−α_j)/Π_{i=1, i≠j}^{n} (α_i − α_j)
Note 2:
Let us inspect the not easy but do-able integrals above in more detail.
First: write the numerator as the derivative of the quadratic form in the denominator

    ∫ (Bx + C)/(x² + βx + γ)^ℓ dx = (B/2) ∫ 2x dx/(x² + βx + γ)^ℓ + C ∫ dx/(x² + βx + γ)^ℓ

        = (B/2) ∫ (2x + β) dx/(x² + βx + γ)^ℓ + ( C − (1/2)βB ) ∫ dx/(x² + βx + γ)^ℓ

        = (B/2) ∫ (d/dx) { (x² + βx + γ)^{1−ℓ}/(1 − ℓ) } dx + ( C − (1/2)βB ) ∫ dx/(x² + βx + γ)^ℓ

        = B (x² + βx + γ)^{1−ℓ}/[ 2(1 − ℓ) ] + ( C − (1/2)βB ) ∫ dx/(x² + βx + γ)^ℓ    (for ℓ ≠ 1)

Second: complete squares in the remaining integral and make an appropriate substitution

    x² + βx + γ = (x + (1/2)β)² + (γ − (1/4)β²)    put y = x + (1/2)β,  σ² = |γ − (1/4)β²|

    ∫ dx/(x² + βx + γ)^ℓ = ∫ dy/(y² ± σ²)^ℓ

The latter integrals are of the form done earlier
(when explaining differentiation with respect to a parameter).

Let us illustrate how this works via examples:
Example 1:

    I = ∫ dx/[ (x² − 1)(x + 3) ]

We recognize that
(i) we integrate a ratio of polynomials ⇒ method of partial fractions
(ii) the order of the numerator is less than that of the denominator,
     i.e. we have just the remainder r(x)/q(x) (no s(x) needed)
(iii) in fact the simplest case: r(x) = 1 and q(x) = (x + 1)(x − 1)(x + 3)

We know that we can always find constants A_1, A_2, A_3 such that

    1/[ (x² − 1)(x + 3) ] = A_1/(x − 1) + A_2/(x + 1) + A_3/(x + 3)    for all x ∈ ℝ

Find these constants:

    A_1 = (x − 1)/[ (x² − 1)(x + 3) ] − A_2 (x − 1)/(x + 1) − A_3 (x − 1)/(x + 3)

        = 1/[ (x + 1)(x + 3) ] − A_2 (x − 1)/(x + 1) − A_3 (x − 1)/(x + 3)    put x = 1

        = 1/[ (1 + 1)(1 + 3) ] = 1/8

    A_2 = (x + 1)/[ (x² − 1)(x + 3) ] − A_1 (x + 1)/(x − 1) − A_3 (x + 1)/(x + 3)

        = 1/[ (x − 1)(x + 3) ] − A_1 (x + 1)/(x − 1) − A_3 (x + 1)/(x + 3)    put x = −1

        = 1/[ (−1 − 1)(−1 + 3) ] = −1/4

    A_3 = (x + 3)/[ (x² − 1)(x + 3) ] − A_1 (x + 3)/(x − 1) − A_2 (x + 3)/(x + 1)

        = 1/(x² − 1) − A_1 (x + 3)/(x − 1) − A_2 (x + 3)/(x + 1)    put x = −3

        = 1/( (−3)² − 1 ) = 1/8

Hence

    I = ∫ { 1/[8(x − 1)] − 1/[4(x + 1)] + 1/[8(x + 3)] } dx

      = (1/8) ln|x − 1| − (1/4) ln|x + 1| + (1/8) ln|x + 3| = (1/8) ln| (x² + 2x − 3)/(x² + 2x + 1) |
Example 2:

    I = ∫ x dx/[ (x² + 1)(x + 2) ]

We recognize that
(i) we integrate a ratio of polynomials ⇒ method of partial fractions
(ii) the order of the numerator is less than that of the denominator (no s(x) needed)
(iii) this one is not of the simplest form q(x) = (x + α_1)(x + α_2)(x + α_3)

We know that we can always find constants A, B, C such that

    x/[ (x² + 1)(x + 2) ] = A/(x + 2) + (Bx + C)/(x² + 1)    for all x ∈ ℝ

Find the first constant:

    A = x(x + 2)/[ (x² + 1)(x + 2) ] − (Bx + C)(x + 2)/(x² + 1)

      = x/(x² + 1) − (Bx + C)(x + 2)/(x² + 1)    put x = −2 :   A = −2/5

so we must find constants B, C such that

    x/[ (x² + 1)(x + 2) ] = (Bx + C)/(x² + 1) − 2/[ 5(x + 2) ]    for all x ∈ ℝ

Insert two convenient values for x and solve the two resulting equations for B and C:

    x = −1 :   B − C = 1/5
    x = 1 :    B + C = 3/5

The solution is: B = 2/5 and C = 1/5. Hence

    I = (1/5) ∫ { (2x + 1)/(x² + 1) − 2/(x + 2) } dx

      = (1/5) ∫ 2x dx/(x² + 1) + (1/5) ∫ dx/(x² + 1) − (2/5) ∫ dx/(x + 2)

      = (1/5) ln|x² + 1| + (1/5) arctan(x) − (2/5) ln|x + 2|
6.3. Some simple applications

6.3.1. Calculation of surface areas

The area A of the surface between the x-axis and a curve f(x),
taken from x = a to x = b (with the accepted sign conventions), is:  A = ∫_a^b f(x) dx

Area inside a circle:

[figure: circle of radius R centred at the origin of the (x, y) plane]

Equation for a circle with radius R,
centred at the origin:  x² + y² = R²
Upper half of the circle:  y = √(R² − x²),  with x ∈ [−R, R]
Area A between the upper half of the circle and the x-axis:

    A = ∫_{−R}^{R} √(R² − x²) dx

    put x = R cos(θ), so dx = −R sin(θ) dθ

      = −R ∫_{π}^{0} √(R² − R² cos²(θ)) sin(θ) dθ

      = R ∫_{0}^{π} √(R² − R² cos²(θ)) sin(θ) dθ

      = R² ∫_{0}^{π} sin²(θ) dθ = R² ∫_{0}^{π} [ (1/2) − (1/2) cos(2θ) ] dθ

      = R² [ (1/2)θ − (1/4) sin(2θ) ]_{0}^{π} = (1/2) π R²

Since this is exactly half of the surface area inside the circle:

    A_circle = π R²
Area inside an ellipse:

[figure: ellipse centred at the origin of the (x, y) plane, with foci at x = ±a]

Equation for an ellipse centred at the origin,
with foci on the x-axis at x = ±a
and with the sum of the distances to the foci given by 2R:

    √((x − a)² + y²) + √((x + a)² + y²) = 2R

Not yet of the required form y = f(x) ...
Rewrite the ellipse equation:
move one term to the right, then square both sides

    √((x − a)² + y²) = 2R − √((x + a)² + y²)

    (x − a)² + y² = 4R² + (x + a)² + y² − 4R√((x + a)² + y²)

    −2xa = 4R² + 2xa − 4R√((x + a)² + y²)

    √((x + a)² + y²) = R + xa/R

    (x + a)² + y² = (R + xa/R)²

Giving:  y² = R² − a² − x²(1 − a²/R²)

Upper half of the ellipse:  y = √(R² − a² − x²(1 − a²/R²)),  with x ∈ [−R, R]

Hence, the area A between the upper half of the ellipse and the x-axis:

    A = ∫_{−R}^{R} √( R² − a² − x²(1 − a²/R²) ) dx

      = √(R² − a²) ∫_{−R}^{R} √( 1 − x²/R² ) dx    put x = R cos(θ), so dx = −R sin(θ) dθ

      = −R √(R² − a²) ∫_{π}^{0} √(1 − cos²(θ)) sin(θ) dθ

      = R √(R² − a²) ∫_{0}^{π} sin²(θ) dθ = (1/2) π R √(R² − a²)

    (the θ integral was already calculated for the circle)

Since this is exactly half of the surface area inside the ellipse:

    A_ellipse = π R √(R² − a²)
6.3.2. Calculation of volumes of revolution

[figure: graph of f(x) revolved around the x-axis]

Take the graph of a function f(x)
with f(x) ≥ 0 for all x ∈ [a, b].
Revolve this around the x-axis (see figure).
Result: a solid in three dimensions.
What is its volume V ?

Split the x-axis into small steps of size h
(i.e. split the solid into small slices of width h),
e.g. x_i = a + (i − 1)h,
with i = 1, . . . , L and h = (b − a)/L.

Let the cross-section of the solid at point x_i
have an area equal to A(x_i);
then a slice at position x_i contributes an amount to the volume
that is approximately hA(x_i) (if h is sufficiently small).

[figure: slice of width h with cross-section area A(x_i)]

The cross-section of the solid at point x_i
is a circle with radius f(x_i),
hence its area is A(x_i) = π f²(x_i).

Total volume:
sum all contributions from the L slices:

    V ≈ Σ_{i=1}^{L} h π f²(x_i)

The expression becomes exact for h → 0:

    V = lim_{h→0} Σ_{i=1}^{L} π h f²(x_i) = π ∫_a^b f²(x) dx
Example:
Our solid becomes a sphere
if what we revolve around the x-axis is a circle:

    x² + y² = R²  ⇒  y = √(R² − x²)    ⇒    f(x) = √(R² − x²),  x ∈ [−R, R]

Hence

    V_sphere = π ∫_{−R}^{R} f²(x) dx = π ∫_{−R}^{R} (R² − x²) dx

             = π [ R²x − (1/3)x³ ]_{−R}^{R} = 2π ( R³ − (1/3)R³ ) = (4/3) π R³
Example:
Our solid becomes a cigar
if what we revolve around the x-axis is an ellipse:

    y² = R² − a² − x²(1 − a²/R²)    ⇒    f(x) = √( R² − a² − x²(1 − a²/R²) ),  x ∈ [−R, R]

Hence

    V_cigar = π ∫_{−R}^{R} f²(x) dx = π ∫_{−R}^{R} [ R² − a² − x²(1 − a²/R²) ] dx

            = π [ (R² − a²)x − (1/3)(1 − a²/R²)x³ ]_{−R}^{R}

            = 2π [ (R² − a²)R − (1/3)(1 − a²/R²)R³ ] = (4/3) π R (R² − a²)
6.3.3. Calculation of the length of curves

[figure: curve y = f(x) with a segment between (x_i, y_i) and (x_{i+1}, y_{i+1})]

Consider an arbitrary curve in the plane,
described by a function y = f(x).
What is the length L of this curve
between, say, x = a and x = b ?

Split the x-axis into small steps of size h,
e.g. x_i = a + (i − 1)h,
with i = 1, . . . , K and h = (b − a)/K;
write f(x_i) = y_i.
Consider the curve segment between
the points (x_i, y_i) and (x_{i+1}, y_{i+1}).

Pythagoras' theorem in the triangle:
the length of this segment is approximately

    ℓ_i ≈ √( (x_{i+1} − x_i)² + (y_{i+1} − y_i)² )    (if x_{i+1} − x_i is sufficiently small)

Use x_{i+1} − x_i = h and y_i = f(x_i):

    ℓ_i ≈ h √( 1 + [ (y_{i+1} − y_i)/h ]² ) = h √( 1 + [ (f(x_i + h) − f(x_i))/h ]² )

Combine:

    L = lim_{h→0} Σ_{i=1}^{K} ℓ_i = lim_{h→0} h Σ_{i=1}^{K} √( 1 + [ (f(x_i + h) − f(x_i))/h ]² )

      = ∫_a^b √( 1 + (df/dx)² ) dx

Example:
Circumference of a circle with radius R?
Twice the length of the curve y = √(R² − x²),
with x ∈ [−R, R]:

    L_circle = 2 ∫_{−R}^{R} √( 1 + [ (d/dx)(R² − x²)^{1/2} ]² ) dx = 2 ∫_{−R}^{R} √( 1 + [ −x/√(R² − x²) ]² ) dx

             = 2 ∫_{−R}^{R} √( 1 + x²/(R² − x²) ) dx = 2 ∫_{−R}^{R} √( R²/(R² − x²) ) dx    put x = Ry

             = 2R ∫_{−1}^{1} dy/√(1 − y²) = 2R [ arcsin(y) ]_{−1}^{1} = 2R ( π/2 − (−π/2) ) = 2πR
[figure: parametric curve (x(t), y(t)) with a segment between (x_i, y_i) and (x_{i+1}, y_{i+1})]

Consider an arbitrary curve in the plane,
described parametrically by
two functions x(t) and y(t).
What is the length L of this curve
between, say, t = a and t = b ?

Split the t-axis into small steps of size h,
e.g. t_i = a + (i − 1)h,
with i = 1, . . . , K and h = (b − a)/K;
write x(t_i) = x_i and y(t_i) = y_i.
Consider the curve segment between
the points (x_i, y_i) and (x_{i+1}, y_{i+1}).

Pythagoras' theorem in the triangle:
the length of this segment is approximately

    ℓ_i ≈ √( (x_{i+1} − x_i)² + (y_{i+1} − y_i)² )    (if t_{i+1} − t_i is sufficiently small)

Use x_i = x(t_i) and y_i = y(t_i):

    ℓ_i ≈ h √( [ (x_{i+1} − x_i)/h ]² + [ (y_{i+1} − y_i)/h ]² )

        = h √( [ (x(t_i + h) − x(t_i))/h ]² + [ (y(t_i + h) − y(t_i))/h ]² )

Combine:

    L = lim_{h→0} Σ_{i=1}^{K} ℓ_i = lim_{h→0} h Σ_{i=1}^{K} √( [ (x(t_i + h) − x(t_i))/h ]² + [ (y(t_i + h) − y(t_i))/h ]² )

      = ∫_a^b √( (dx/dt)² + (dy/dt)² ) dt

Example:
Circumference of a circle with radius R?
Parametrize x(θ) = R cos(θ), y(θ) = R sin(θ),
with θ ∈ [0, 2π]:

    L_circle = ∫_0^{2π} √( (dx/dθ)² + (dy/dθ)² ) dθ = ∫_0^{2π} √( R² sin²(θ) + R² cos²(θ) ) dθ

             = 2πR
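The parametric arc-length formula is easy to evaluate numerically. The sketch below (my own, with derivatives taken by central differences rather than analytically) recovers the circle circumference:

```python
import math

def arc_length(x, y, a, b, N=20000, eps=1e-6):
    """L = ∫_a^b sqrt(x'(t)² + y'(t)²) dt, with derivatives by central differences."""
    h = (b - a) / N
    total = 0.0
    for i in range(N):
        t = a + (i + 0.5) * h
        dx = (x(t + eps) - x(t - eps)) / (2 * eps)
        dy = (y(t + eps) - y(t - eps)) / (2 * eps)
        total += math.hypot(dx, dy) * h
    return total

R = 2.0
L = arc_length(lambda t: R * math.cos(t), lambda t: R * math.sin(t), 0.0, 2 * math.pi)
```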
Example:

[figure: spiral (x(t), y(t)) in the (x, y) plane, from t = 0 to t = τ]

Length of a spiral with radial velocity v
and angular velocity ω?
Parametrize

    x(t) = vt cos(ωt),  y(t) = vt sin(ωt),    with t ∈ [0, τ]:

    dx/dt = v cos(ωt) − vωt sin(ωt)
    dy/dt = v sin(ωt) + vωt cos(ωt)

Hence:

    L_spiral = ∫_0^{τ} √( (dx/dt)² + (dy/dt)² ) dt

      = v ∫_0^{τ} √( [ cos(ωt) − ωt sin(ωt) ]² + [ sin(ωt) + ωt cos(ωt) ]² ) dt

      = v ∫_0^{τ} √( 1 + ω²t² ) dt    put ωt = u, so dt = du/ω

      = (v/ω) ∫_0^{ωτ} √( 1 + u² ) du    put u = sinh(z), so du = cosh(z) dz

      = (v/ω) ∫_0^{arcsinh(ωτ)} cosh²(z) dz = (v/4ω) ∫_0^{arcsinh(ωτ)} [ e^{2z} + 2 + e^{−2z} ] dz

      = (v/4ω) [ 2z + (1/2)e^{2z} − (1/2)e^{−2z} ]_0^{arcsinh(ωτ)}

      = (v/4ω) [ 2 arcsinh(ωτ) + sinh(2 arcsinh(ωτ)) ]

      = (v/2ω) [ arcsinh(ωτ) + ωτ cosh(arcsinh(ωτ)) ]

      = (v/2ω) ln( ωτ + √(ω²τ² + 1) ) + (vτ/2) √( 1 + sinh²(arcsinh(ωτ)) )

      = (v/2ω) ln( ωτ + √(ω²τ² + 1) ) + (vτ/2) √( 1 + ω²τ² )
7. Taylor's theorem and series

7.1. Introduction to series and questions of convergence

7.1.1. Series notation and elementary properties

definition:
A series is an expression of the form Σ_{n=n₀}^{∞} a_n   (usually n₀ = 0, 1)

definition:
A partial sum of the series is an expression of the form S_N = Σ_{n=n₀}^{N} a_n

definition:
A series Σ_{n=n₀}^{∞} a_n is called convergent if the limit S = lim_{N→∞} S_N exists. Only then will the
series yield a well-defined finite number. If the series does not converge it is called divergent.

Notes:
If a series Σ_{n=n₀}^{∞} a_n converges, then lim_{n→∞} a_n = 0.
Proof: lim_{n→∞} a_n = lim_{n→∞} (S_n − S_{n−1}) = S − S = 0.

We have already inspected the following series in exercises:

    Σ_{n=1}^{∞} 2^{−n} :   S_N = Σ_{n=1}^{N} 2^{−n} = 1 − 2^{−N},   lim_{N→∞} S_N = 1,   series convergent

    Σ_{n=1}^{∞} 1 :   S_N = Σ_{n=1}^{N} 1 = N,   lim_{N→∞} S_N does not exist,   series divergent

However, having lim_{n→∞} a_n = 0 is not enough to guarantee that a series converges: the a_n will
also have to go to zero sufficiently fast as n gets larger. See e.g. the two examples below:

Examples:

The series Σ_{n=1}^{∞} n^{−1} is divergent.
Proof:
Consider the partial sums S_N = Σ_{n=1}^{N} n^{−1}; they obey

    S_{2N} − S_N = 1/(N+1) + 1/(N+2) + . . . + 1/(2N−1) + 1/(2N) ≥ N · 1/(2N) = 1/2

If we assume that the series converges, then lim_{N→∞} S_N = S exists. However, taking
the limit N → ∞ in the previous inequality would then lead us to the contradiction
0 = S − S ≥ 1/2. We conclude that the series must be divergent.
• The series $\sum_{n=1}^{\infty}1/n(n+1)$ is convergent
Proof:
Since $\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$, the partial sum $S_N$ can be written as

$S_N=\sum_{n=1}^{N}\frac{1}{n(n+1)}=\sum_{n=1}^{N}\left(\frac{1}{n}-\frac{1}{n+1}\right)=\left(\frac11-\frac12\right)+\left(\frac12-\frac13\right)+\ldots+\left(\frac1N-\frac1{N+1}\right)$

$=1+\left(-\frac12+\frac12\right)+\left(-\frac13+\frac13\right)+\ldots+\left(-\frac1N+\frac1N\right)-\frac{1}{N+1}=1-\frac{1}{N+1}$

It follows that $\lim_{N\to\infty}S_N=1$, so the series converges: $\sum_{n=1}^{\infty}1/n(n+1)=1$.
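The three partial sums discussed above are easy to probe numerically. A minimal Python sketch (an illustration, not part of the original notes):

```python
# Partial sums S_N for three of the series discussed above (illustration only).

def partial_sum(term, N, n0=1):
    """S_N = sum of term(n) for n = n0 .. N."""
    return sum(term(n) for n in range(n0, N + 1))

N = 1000
s_geom = partial_sum(lambda n: 2.0 ** -n, N)            # = 1 - 2^(-N), converges to 1
s_harm = partial_sum(lambda n: 1.0 / n, N)              # grows like ln(N): divergent
s_tele = partial_sum(lambda n: 1.0 / (n * (n + 1)), N)  # = 1 - 1/(N+1), converges to 1
print(s_geom, s_harm, s_tele)
```

Note that the harmonic partial sum also tends to zero termwise, yet keeps growing without bound, exactly as the divergence proof above predicts.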
7.1.2. Series convergence criteria

If we are only interested in whether a series converges/diverges, the start index $n_0$ is irrelevant; hence one often speaks simply about the convergence or divergence of a series $\sum_n a_n$.

We now mention a number of useful results on the convergence of series, without proof. The first applies if our $a_n$ are similar for large $n$ to those of another series that we know. The others apply if we can calculate either $\lim_{n\to\infty}|a_n|^{1/n}$ or $\lim_{n\to\infty}|a_{n+1}/a_n|$:

(C1) $\sum_n b_n$ is convergent and $\lim_{n\to\infty}|a_n|/b_n$ exists $\ \Rightarrow\ \sum_n a_n$ is convergent
(C2) $\lim_{n\to\infty}|a_n|^{1/n}<1\ \Rightarrow\ \sum_n a_n$ is convergent
(C3) $\lim_{n\to\infty}|a_n|^{1/n}>1\ \Rightarrow\ \sum_n a_n$ is divergent
(C4) $\lim_{n\to\infty}|a_{n+1}/a_n|<1\ \Rightarrow\ \sum_n a_n$ is convergent
(C5) $\lim_{n\to\infty}|a_{n+1}/a_n|>1\ \Rightarrow\ \sum_n a_n$ is divergent

The tricky series are those with

$\lim_{n\to\infty}|a_n|^{1/n}=\lim_{n\to\infty}|a_{n+1}/a_n|=1$
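As a numerical illustration of the ratio test (C4) (a sketch, not from the original notes): for $a_n=n^2 2^{-n}$ the ratio $|a_{n+1}/a_n|=\frac12(1+1/n)^2\to\frac12<1$, so the series converges.

```python
# Ratio test (C4): estimate lim |a_{n+1}/a_n| for a_n = n^2 / 2^n.

def a(n):
    return n ** 2 / 2.0 ** n

ratios = [a(n + 1) / a(n) for n in (10, 100, 1000)]
print(ratios)  # approaches 1/2 < 1, so sum_n a_n converges by (C4)
```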
Examples:
• The series $\sum_n 1/n^2$ is convergent
Proof:
Let $a_n=1/n^2$, and compare to the convergent series $\sum_n b_n$ with $b_n=1/n(n+1)$:

$\lim_{n\to\infty}|a_n|/b_n=\lim_{n\to\infty}\frac{1/n^2}{1/n(n+1)}=\lim_{n\to\infty}\frac{n^2+n}{n^2}=\lim_{n\to\infty}\left(1+\frac1n\right)=1$

It now follows from (C1) above that $\sum_n 1/n^2$ is also convergent.
• The series $\sum_n(-1)^{n+1}/n$ is convergent
Proof:
Separate in the partial sum $S_N$ the terms with even $n$ from those with odd $n$. Write $n=2m$ for even $n$, and $n=2m-1$ for odd $n$.

$N$ even: $\quad S_N=\sum_{n=1,\ \mathrm{odd}}^{N}\frac1n-\sum_{n=1,\ \mathrm{even}}^{N}\frac1n=\sum_{m=1}^{N/2}\frac{1}{2m-1}-\sum_{m=1}^{N/2}\frac{1}{2m}=\sum_{m=1}^{N/2}\frac{1}{2m(2m-1)}$

$N$ odd: $\quad S_N=\sum_{n=1,\ \mathrm{odd}}^{N}\frac1n-\sum_{n=1,\ \mathrm{even}}^{N}\frac1n=\sum_{m=1}^{(N+1)/2}\frac{1}{2m-1}-\sum_{m=1}^{(N-1)/2}\frac{1}{2m}=\sum_{m=1}^{(N-1)/2}\frac{1}{2m(2m-1)}+\frac1N$

We see that our present series $\sum_n(-1)^{n+1}/n$ is convergent if and only if the series $\sum_n a_n$ with $a_n=\frac{1}{2n(2n-1)}$ is convergent. We may now use the convergence of $\sum_n b_n$ with $b_n=1/n^2$ together with statement (C1) to finish the proof:

$\lim_{n\to\infty}|a_n|/b_n=\lim_{n\to\infty}\frac{1/2n(2n-1)}{1/n^2}=\lim_{n\to\infty}\left(\frac14\cdot\frac{1}{1-1/2n}\right)=\frac14$

We conclude: $\sum_n a_n$ converges, and therefore also $\sum_n(-1)^{n+1}/n$ converges.
7.1.3. Power series: notation and elementary properties

definition:
A power series is an expression of the form $S(x)=\sum_{n=n_0}^{\infty}b_nx^n$, i.e. a series where the terms are $a_n=b_nx^n$

Why our interest?
(i) we saw that some functions can be written as power series $f(x)=\sum_{n=0}^{\infty}b_nx^n$
(ii) each of these functions had its own unique coefficients $\{b_n\}$
(iii) power series representations of functions are extremely useful
(iv) which other functions can perhaps be written as power series $f(x)=\sum_{n=0}^{\infty}b_nx^n$?
(v) how would we find the right coefficients $\{b_n\}$ for any given function?
(vi) how can we know whether the power series would converge?
(vii) how fast does a power series converge to the original function it represents?

Let us apply the convergence criteria (C2)–(C5) to power series, by substitution of $a_n=b_nx^n$ (statements below make sense only if the various limits exist)
• Let $R_1=\lim_{n\to\infty}|b_n|^{-1/n}$: $\quad|x|<R_1$: $\sum_n b_nx^n$ converges, $\quad|x|>R_1$: $\sum_n b_nx^n$ diverges

• Let $R_2=\lim_{n\to\infty}|b_n/b_{n+1}|$: $\quad|x|<R_2$: $\sum_n b_nx^n$ converges, $\quad|x|>R_2$: $\sum_n b_nx^n$ diverges

Clearly, the two limits above must be the same. This can be shown properly: if the limits exist, then $R_1=R_2=R$
We are now led in a natural way to the concept of radius of convergence:

definition:
The radius of convergence $R$ of a power series $\sum_{n=n_0}^{\infty}b_nx^n$ is a nonnegative number such that the series converges for all $x$ with $|x|<R$ and diverges for all $x$ with $|x|>R$. It can be calculated either from $R=\lim_{n\to\infty}|b_n|^{-1/n}$ or from $R=\lim_{n\to\infty}|b_n/b_{n+1}|$, if these latter limits exist

Notes:
(i) Exactly at the radius of convergence, i.e. for $|x|=R$, there is no general rule (just inspect the series at hand in detail)
(ii) If $R=0$ there is apparently no $x\neq0$ for which the series converges
(iii) If $R=\infty$ the series converges for all $x\in\mathbb{R}$
Examples:
• $e^x=\sum_{n=0}^{\infty}x^n/n!$
Here $b_n=1/n!$, so

$R=\lim_{n\to\infty}\frac{|b_n|}{|b_{n+1}|}=\lim_{n\to\infty}\frac{(n+1)!}{n!}=\lim_{n\to\infty}(n+1)=\infty$

Thus the series for $e^x$ converges for all $x\in\mathbb{R}$ (as claimed earlier).

• $f(x)=\sum_{n=0}^{\infty}x^n$
Here $b_n=1$, so $R=\lim_{n\to\infty}|b_n|/|b_{n+1}|=1$. Thus the series $f(x)=\sum_{n=0}^{\infty}x^n$ converges for $|x|<1$. Compare this to our earlier result $\sum_{n=0}^{N}x^n=(1-x^{N+1})/(1-x)$ for $x\neq1$. We now see that this gives the series expansion for the function $f(x)=1/(1-x)$:

$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n \qquad |x|<1$
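Numerically one can watch the partial sums $S_N(x)=\sum_{n=0}^N x^n$ approach $1/(1-x)$ inside the radius of convergence and fail outside it (an illustrative Python sketch, not part of the notes):

```python
# Partial sums of the geometric series vs the closed form 1/(1-x).

def S(x, N):
    return sum(x ** n for n in range(N + 1))

for x in (0.5, -0.9, 1.5):
    exact = 1.0 / (1.0 - x)
    print(x, S(x, 50), exact)  # agrees for |x| < 1, blows up for x = 1.5
```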
[Figure: building $(1-x)^{-1}=\sum_{n=0}^{\infty}x^n$ as a power series, by taking more and more terms in the summation. Dashed: $\frac{1}{1-x}$; solid: $S_N(x)=\sum_{n=0}^{N}x^n$ for different choices of $N$ (values of $N$ indicated in italics). As $N$ increases, $S_N(x)$ starts to resemble $(1-x)^{-1}$ more and more, but only on the interval $(-1,1)$; the series is only fully identical to $(1-x)^{-1}$ for $N\to\infty$, within the radius of convergence, i.e. for $|x|<1$.]
• $\sin(x)=\sum_{m=0}^{\infty}(-1)^m x^{2m+1}/(2m+1)!$
(be careful! not yet written in standard form with $x^n$)
For $n$ odd: $b_n=(-1)^m/(2m+1)!$ (where $n=2m+1$), i.e. $b_n=(-1)^{(n-1)/2}/n!$
For $n$ even: $b_n=0$

$|b_n|^{1/n}=\begin{cases}0 & \text{if } n \text{ even}\\ (n!)^{-1/n} & \text{if } n \text{ odd}\end{cases}$

It can be shown that $(n!)^{1/n}\to\infty$ as $n\to\infty$, so $\lim_{n\to\infty}|b_n|^{1/n}=0$; hence $R=\infty$ and our series converges for all $x\in\mathbb{R}$.

A quicker way to prove that $R=\infty$ is based on the inequality $|\sum_{n=0}^{\infty}a_n|\leq\sum_{n=0}^{\infty}|a_n|$:

$\left|\sum_{m=0}^{\infty}(-1)^m x^{2m+1}/(2m+1)!\right|\ \leq\ \sum_{m=0}^{\infty}|x|^{2m+1}/(2m+1)!\ \leq\ \sum_{n=0}^{\infty}|x|^n/n!$

The last series (the exponential function of $|x|$) converges for all $x\in\mathbb{R}$, hence also $\sum_{m=0}^{\infty}(-1)^m x^{2m+1}/(2m+1)!$ converges for all $x\in\mathbb{R}$ (as claimed earlier).
7.2. Taylor's theorem

We have achieved some understanding of when power series converge, and next turn to the construction of such series for arbitrary functions. First some further notation:

$f^{(n)}(x)=\left(\frac{d}{dx}\right)^n f(x)$ (the $n$-th derivative of $f$ at the point $x$)
7.2.1. Expression for the coefficients of power series

We will inspect different routes towards the formula for the coefficients of a power series. Let us first consider non-pathological functions, where the derivative of the sum equals the sum of the derivatives. Here we can use simple ideas:

Claim:
If a given function $f(x)$ can be written as a power series, with some convergence radius $R>0$, i.e.

$\forall x\in\mathbb{R},\ |x|<R:\quad f(x)=\sum_{n=0}^{\infty}b_nx^n$

and if differentiation and summation in the power series for $f$ can be interchanged (as for finite sums), then $b_n=f^{(n)}(0)/n!$ (if this $n$-th derivative exists).

Proof:
If the function $f(x)$ is indeed identical to the series $S(x)=\sum_{n=0}^{\infty}b_nx^n$ for $|x|<R$, then also their derivatives must be identical, i.e. $f^{(m)}(x)=S^{(m)}(x)$ for $|x|<R$. Thus

$f^{(m)}(x)=\left(\frac{d}{dx}\right)^m\sum_{n=0}^{\infty}b_nx^n=\sum_{n=0}^{\infty}b_n\left(\frac{d}{dx}\right)^m x^n=\sum_{n=0}^{\infty}b_n\,n(n-1)(n-2)\ldots(n-m+1)\,x^{n-m}=\sum_{n=m}^{\infty}b_n\frac{n!}{(n-m)!}x^{n-m}$

Now choose $x=0$: all powers of $x$ in the right-hand side with $n>m$ will vanish, giving

$f^{(m)}(0)=b_m\,m!$

Hence $b_m=f^{(m)}(0)/m!$ as claimed. This also shows that under the assumed conditions there can only be one power series representation of the function $f$.
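As a quick numerical sanity check of this formula (a sketch, not from the notes): for $f(x)=e^x$ every derivative at $0$ equals $1$, so $b_n=1/n!$, and the resulting partial sums approach $e^x$:

```python
import math

# b_n = f^(n)(0)/n! for f(x) = e^x, where every derivative at 0 equals 1.
def taylor_exp(x, N):
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

print(taylor_exp(1.0, 20), math.exp(1.0))  # the two values agree closely
```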
Let us now derive this result in another way, where we don't need to interchange summation and differentiation. It is based on repeated application of the fundamental theorem of calculus:

$f^{(n)}(x)=f^{(n)}(0)+\int_0^x f^{(n+1)}(y)\,dy$
Let $f$ be differentiable as many times as we need:

$f(x)=f(0)+\int_0^x f^{(1)}(y)\,dy \qquad\text{put } y=xz_1$

$=f(0)+x\int_0^1 f^{(1)}(xz_1)\,dz_1$

$=f(0)+x\int_0^1\left[f^{(1)}(0)+\int_0^{xz_1}f^{(2)}(y)\,dy\right]dz_1$

$=f(0)+f^{(1)}(0)\,x+x\int_0^1\left[\int_0^{xz_1}f^{(2)}(y)\,dy\right]dz_1 \qquad\text{put } y=xz_1z_2$

$=f(0)+f^{(1)}(0)\,x+x^2\int_0^1 z_1\left[\int_0^1 f^{(2)}(xz_1z_2)\,dz_2\right]dz_1$

$=f(0)+f^{(1)}(0)\,x+x^2\int_0^1 z_1\left[\int_0^1\left(f^{(2)}(0)+\int_0^{xz_1z_2}f^{(3)}(y)\,dy\right)dz_2\right]dz_1$

$=f(0)+f^{(1)}(0)\,x+x^2 f^{(2)}(0)\int_0^1 z_1\left[\int_0^1 dz_2\right]dz_1+x^2\int_0^1 z_1\left[\int_0^1\left(\int_0^{xz_1z_2}f^{(3)}(y)\,dy\right)dz_2\right]dz_1 \qquad\text{put } y=xz_1z_2z_3$

$=f(0)+f^{(1)}(0)\,x+x^2 f^{(2)}(0)\int_0^1 z_1\,dz_1+x^3\int_0^1 z_1^2\left[\int_0^1 z_2\left(\int_0^1 f^{(3)}(xz_1z_2z_3)\,dz_3\right)dz_2\right]dz_1$

$=f(0)+f^{(1)}(0)\,x+\tfrac12 f^{(2)}(0)\,x^2+x^3\int_0^1 z_1^2\left[\int_0^1 z_2\left(\int_0^1 f^{(3)}(xz_1z_2z_3)\,dz_3\right)dz_2\right]dz_1$
We observe:
(i) we once more generate the coefficients $b_n=f^{(n)}(0)/n!$
(ii) but now we also find explicit formulas for the difference between the true function $f$ and the partial sums $S_N(x)=\sum_{n=0}^{N}f^{(n)}(0)x^n/n!$ of the power series:

$f(x)=\sum_{n=0}^{N}\frac{f^{(n)}(0)}{n!}x^n+R_{N+1}(x)$

with e.g.

$R_1(x)=x\int_0^1 f^{(1)}(xz_1)\,dz_1$

$R_2(x)=x^2\int_0^1 z_1\left[\int_0^1 f^{(2)}(xz_1z_2)\,dz_2\right]dz_1$

$R_3(x)=x^3\int_0^1 z_1^2\left[\int_0^1 z_2\left(\int_0^1 f^{(3)}(xz_1z_2z_3)\,dz_3\right)dz_2\right]dz_1$
7.2.2. Taylor series around x = 0

The last result gives us essentially the so-called Taylor series expansion in powers of $x$ of a differentiable function $f$; the only difference with the above form is that the remainder term is written in a nicer way:

definition:
The Taylor expansion to order $N$ around $x=0$ of a function $f$ is the following (exact) expression, involving an $N$-th order polynomial and a remainder term $R_{N+1}(x)$:

$f(x)=\sum_{n=0}^{N}\frac{f^{(n)}(0)}{n!}x^n+R_{N+1}(x) \qquad R_{N+1}(x)=\frac{1}{N!}\int_0^x f^{(N+1)}(y)(x-y)^N\,dy$
Proof (of the form claimed to be exact for the remainder term):
This is done by induction. We define as always the difference between left- and right-hand side of the claimed identity,

$A_N(x)=f(x)-\sum_{n=0}^{N}\frac{f^{(n)}(0)}{n!}x^n-\frac{1}{N!}\int_0^x f^{(N+1)}(y)(x-y)^N\,dy$

We aim to prove that $A_N(x)=0$ for all integer $N\geq0$. First we check the basis, i.e. $N=0$:

$A_0(x)=f(x)-\frac{f^{(0)}(0)}{0!}x^0-\frac{1}{0!}\int_0^x f^{(1)}(y)(x-y)^0\,dy=f(x)-f(0)-\int_0^x f'(y)\,dy=f(x)-f(0)-\Big[f(y)\Big]_0^x=0$

So the claim is true for $N=0$. Next we prove the induction step. We assume that $A_N(x)=0$ (i.e. the Taylor expansion is exact for the value $N$) and prove from this that also $A_{N+1}(x)=0$:

$A_{N+1}(x)=f(x)-\sum_{n=0}^{N+1}\frac{f^{(n)}(0)}{n!}x^n-\frac{1}{(N+1)!}\int_0^x f^{(N+2)}(y)(x-y)^{N+1}\,dy$

$=f(x)-\sum_{n=0}^{N}\frac{f^{(n)}(0)}{n!}x^n-\frac{f^{(N+1)}(0)}{(N+1)!}x^{N+1}-\frac{1}{(N+1)!}\int_0^x f^{(N+2)}(y)(x-y)^{N+1}\,dy$

use $A_N(x)=0$:

$=\frac{1}{N!}\int_0^x f^{(N+1)}(y)(x-y)^N\,dy-\frac{f^{(N+1)}(0)}{(N+1)!}x^{N+1}-\frac{1}{(N+1)!}\int_0^x f^{(N+2)}(y)(x-y)^{N+1}\,dy$

integrate by parts:

$=\frac{1}{N!}\int_0^x f^{(N+1)}(y)(x-y)^N\,dy-\frac{f^{(N+1)}(0)}{(N+1)!}x^{N+1}-\frac{1}{N!}\left\{\left[\frac{f^{(N+1)}(y)(x-y)^{N+1}}{N+1}\right]_0^x+\int_0^x f^{(N+1)}(y)(x-y)^N\,dy\right\}$

$=-\frac{f^{(N+1)}(0)}{(N+1)!}x^{N+1}-0+\frac{f^{(N+1)}(0)}{(N+1)!}x^{N+1}=0$

This completes the proof.
Notes:
• One can regard the Taylor series as an approximation of $f$ for values of $x$ close to zero, with the remainder term $R_{N+1}(x)$ indicating the precise difference between the polynomial approximation up to order $N$ and the true function $f$
• If the $(N+1)$-th derivative of $f$ is bounded on the interval $[-R,R]$, i.e. $|f^{(N+1)}(x)|\leq C_{N+1}$ for all $x\in[-R,R]$, then $|R_{N+1}(x)|\leq C_{N+1}\,x^{N+1}/(N+1)!$
Proof:

$|R_{N+1}(x)|\leq\frac{1}{N!}\int_0^x|f^{(N+1)}(y)|(x-y)^N\,dy\leq C_{N+1}\,\frac{1}{N!}\int_0^x(x-y)^N\,dy=C_{N+1}\,\frac{1}{(N+1)!}\Big[-(x-y)^{N+1}\Big]_0^x=\frac{C_{N+1}}{(N+1)!}\,x^{N+1}$

• For small $|x|$, the correction is therefore generally much smaller than any of the previous terms in the approximating polynomial.
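For $f(x)=\sin(x)$ all derivatives are bounded by $1$, so the bound reads $|R_{N+1}(x)|\leq|x|^{N+1}/(N+1)!$. A numerical sketch of this bound (illustration only, not part of the notes):

```python
import math

# Taylor polynomial of sin around 0, up to order N (only odd terms contribute).
def sin_taylor(x, N):
    return sum((-1) ** m * x ** (2 * m + 1) / math.factorial(2 * m + 1)
               for m in range((N - 1) // 2 + 1))

x, N = 1.3, 7
remainder = abs(math.sin(x) - sin_taylor(x, N))
bound = abs(x) ** (N + 1) / math.factorial(N + 1)  # C_{N+1} = 1 here
print(remainder, bound)  # the remainder stays below the bound
```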
7.2.3. Taylor series around x = a

One can generalize the above Taylor series in order to have an expansion of functions around some other value $x=a$, in terms of powers of the deviation $x-a$.

definition:
The Taylor expansion to order $N$ around $x=a$ of a function $f$ is the following (exact) expression, involving an $N$-th order polynomial and a remainder term $R_{N+1}(x)$:

$f(x)=\sum_{n=0}^{N}\frac{f^{(n)}(a)}{n!}(x-a)^n+R_{N+1}(x) \qquad R_{N+1}(x)=\frac{1}{N!}\int_a^x f^{(N+1)}(y)(x-y)^N\,dy$

Proof:
This can be done by a simple change of variables. Write $x=a+u$, define $g(u)=f(a+u)$, and apply the previous Taylor expansion to $g(u)$:

$g(u)=\sum_{n=0}^{N}\frac{g^{(n)}(0)}{n!}u^n+R_{N+1}(u) \qquad R_{N+1}(u)=\frac{1}{N!}\int_0^u g^{(N+1)}(z)(u-z)^N\,dz$

Use the fact that $g^{(n)}(u)=(\frac{d}{du})^n g(u)=(\frac{d}{du})^n f(a+u)=f^{(n)}(a+u)$. Thus $g^{(n)}(0)=f^{(n)}(a)$. Insertion into the above expansion, together with $g(u)=f(a+u)$ and $u=x-a$, then gives:

$f(a+u)=\sum_{n=0}^{N}\frac{f^{(n)}(a)}{n!}(x-a)^n+R_{N+1}$

$R_{N+1}=\frac{1}{N!}\int_0^{x-a}f^{(N+1)}(a+z)(x-a-z)^N\,dz \qquad\text{put } z=y-a$

$=\frac{1}{N!}\int_a^x f^{(N+1)}(y)(x-y)^N\,dy$

Notes:
• One can regard the generalized Taylor series as an approximation of $f$ for values of $x$ close to $a$, with the remainder term $R_{N+1}(x)$ indicating the precise difference between the polynomial approximation up to order $N$ and the true function $f$
• If the $(N+1)$-th derivative of $f$ is bounded on the interval $[a-R,a+R]$, i.e. $|f^{(N+1)}(x)|\leq C_{N+1}$ for all $x\in[a-R,a+R]$, then $|R_{N+1}(x)|\leq C_{N+1}(x-a)^{N+1}/(N+1)!$
Proof:

$|R_{N+1}(x)|\leq\frac{1}{N!}\int_a^x|f^{(N+1)}(y)|(x-y)^N\,dy\leq C_{N+1}\,\frac{1}{N!}\int_a^x(x-y)^N\,dy=C_{N+1}\,\frac{1}{(N+1)!}\Big[-(x-y)^{N+1}\Big]_a^x=\frac{C_{N+1}}{(N+1)!}(x-a)^{N+1}$

• For small values of $|x-a|$, the correction is therefore generally much smaller than any of the previous terms in the approximating polynomial.
7.3. Examples

7.3.1. Series expansions for standard functions

We can now derive the power series for arbitrary functions (including the ones we simply stated in the past):

• $f(x)=e^x$:
Derivatives: $f^{(n)}(x)=(\frac{d}{dx})^n e^x=e^x$, so $f^{(n)}(0)=e^0=1$. Hence

$e^x=\sum_{n=0}^{N}\frac{x^n}{n!}+R_{N+1}(x) \qquad R_{N+1}(x)=\frac{1}{N!}\int_0^x e^y(x-y)^N\,dy$

So if $\lim_{N\to\infty}R_{N+1}(x)=0$, which is here true for all $x\in\mathbb{R}$:

$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}=1+x+\frac12x^2+\frac16x^3+\ldots$

• $f(x)=\sin(x)$:
Derivatives: $f^{(2m)}(x)=(-1)^m\sin(x)$ and $f^{(2m+1)}(x)=(-1)^m\cos(x)$, so $f^{(2m)}(0)=0$ and $f^{(2m+1)}(0)=(-1)^m$. Hence

$\sin(x)=\sum_{m=0}^{(N-1)/2}\frac{(-1)^m x^{2m+1}}{(2m+1)!}+R_{N+1}(x)$

$R_{N+1}(x)=\begin{cases}\dfrac{1}{N!}(-1)^{\ell}\displaystyle\int_0^x\cos(y)(x-y)^N\,dy & \text{if } N=2\ell\\[4pt]\dfrac{1}{N!}(-1)^{\ell+1}\displaystyle\int_0^x\sin(y)(x-y)^N\,dy & \text{if } N=2\ell+1\end{cases}$

So if $\lim_{N\to\infty}R_{N+1}(x)=0$, which is here easy to prove for all $x\in\mathbb{R}$, since $\sin(y)\in[-1,1]$ and $\cos(y)\in[-1,1]$:

$\sin(x)=\sum_{m=0}^{\infty}\frac{(-1)^m x^{2m+1}}{(2m+1)!}=x-\frac16x^3+\frac{1}{120}x^5+\ldots$

• $f(x)=\ln(1-x)$:
Derivatives: $f^{(1)}(x)=-(1-x)^{-1}$, $f^{(2)}(x)=-(1-x)^{-2}$, $f^{(3)}(x)=-2(1-x)^{-3}$, $f^{(4)}(x)=-2\cdot3\,(1-x)^{-4}$. Further iteration: $f^{(n)}(x)=-(n-1)!\,(1-x)^{-n}$ for $n\geq1$ (this one can also prove by induction). So $f^{(n)}(0)=-(n-1)!$ for $n\geq1$, and $f(0)=0$. Hence

$\ln(1-x)=-\sum_{n=1}^{N}\frac{x^n}{n}+R_{N+1}(x) \qquad R_{N+1}(x)=-\int_0^x\frac{(x-y)^N}{(1-y)^{N+1}}\,dy$

So if $\lim_{N\to\infty}R_{N+1}(x)=0$, which is here true for all $-1\leq x<1$:

$\ln(1-x)=-\sum_{n=1}^{\infty}\frac{x^n}{n}=-x-\frac12x^2-\frac13x^3-\frac14x^4-\ldots$
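A numerical sketch (illustration only, not part of the notes) comparing the truncated series for $\ln(1-x)$ with the library logarithm:

```python
import math

# ln(1 - x) ~ -sum_{n=1}^{N} x^n / n, valid for -1 <= x < 1.
def log1m_taylor(x, N):
    return -sum(x ** n / n for n in range(1, N + 1))

x = 0.5
print(log1m_taylor(x, 60), math.log(1 - x))  # the two values agree closely
```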
7.3.2. Indirect methods for finding Taylor series

If you only need the first few terms of a Taylor series, combine & manipulate series you know! (add, multiply, change signs, divide, differentiate, integrate, etc) — but keep track of which powers you keep on board and which you don't, for consistency.

Examples:

• $e^{-x}=1-x+\frac12x^2-\frac16x^3+\ldots$

• $\ln(1+x)=x-\frac12x^2+\frac13x^3-\frac14x^4+\ldots$

• $\frac{1}{1+x}=\frac{d}{dx}\ln(1+x)=\frac{d}{dx}\left(x-\frac12x^2+\frac13x^3-\frac14x^4+\ldots\right)=1-x+x^2-x^3+\ldots$

• $\frac{1}{\cos(x)}=\frac{1}{1-\frac12x^2+\frac{1}{24}x^4+\ldots}=\frac{1}{1+\left(-\frac12x^2+\frac{1}{24}x^4+\ldots\right)}$
$=1-\left(-\frac12x^2+\frac{1}{24}x^4+\ldots\right)+\left(-\frac12x^2+\frac{1}{24}x^4+\ldots\right)^2+\ldots$
$=1+\frac12x^2-\frac{1}{24}x^4+\ldots+\frac14x^4+\ldots=1+\frac12x^2+\frac{5}{24}x^4+\ldots$

• $(1+x)^{\alpha}=e^{\alpha\ln(1+x)}=e^{\alpha(x-\frac12x^2+\frac13x^3-\frac14x^4+\ldots)}$
$=1+\alpha\left(x-\frac12x^2+\frac13x^3-\frac14x^4+\ldots\right)+\frac12\alpha^2\left(x-\frac12x^2+\frac13x^3-\frac14x^4+\ldots\right)^2+\ldots$
$=1+\alpha x-\frac12\alpha x^2+\ldots+\frac12\alpha^2x^2+\ldots=1+\alpha x+\frac12\alpha(\alpha-1)x^2+\ldots$

• $\frac{1}{\sqrt{1+x}}=(1+x)^{-\frac12}=1-\frac12x+\frac38x^2+\ldots$

• $\tan(x)=\frac{\sin(x)}{\cos(x)}=\frac{x-\frac16x^3+\frac{1}{120}x^5+\ldots}{1-\frac12x^2+\frac{1}{24}x^4+\ldots}=\left(x-\frac16x^3+\frac{1}{120}x^5+\ldots\right)\left(1+\frac12x^2+\frac{5}{24}x^4+\ldots\right)$
$=x+\frac12x^3+\frac{5}{24}x^5-\frac16x^3-\frac{1}{12}x^5+\frac{1}{120}x^5+\ldots=x+\frac13x^3+\frac{2}{15}x^5+\ldots$
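The bookkeeping in the last example (multiplying truncated series and discarding powers above $x^5$) is easy to mechanize. A sketch with exact rational coefficients (illustration only, not part of the notes):

```python
from fractions import Fraction as F

# Multiply two truncated power series (coefficient lists, index = power of x),
# keeping terms up to x^5: tan(x) = sin(x) * (1/cos(x)).
def mul(a, b, order=5):
    c = [F(0)] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                c[i + j] += ai * bj
    return c

sin_c = [F(0), F(1), F(0), F(-1, 6), F(0), F(1, 120)]  # x - x^3/6 + x^5/120
sec_c = [F(1), F(0), F(1, 2), F(0), F(5, 24), F(0)]    # 1 + x^2/2 + 5x^4/24
print(mul(sin_c, sec_c))  # coefficients 0, 1, 0, 1/3, 0, 2/15 as above
```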
7.4. L'Hopital's rule

We saw earlier how power series can be used to calculate nontrivial limits, for example:

$\lim_{x\to0}\frac{\tan(x)}{\sqrt{1+x}-1}=\lim_{x\to0}\frac{x+\frac13x^3+\frac{2}{15}x^5+\ldots}{1+\frac12x-\frac18x^2+\ldots-1}=\lim_{x\to0}\frac{1+\frac13x^2+\frac{2}{15}x^4+\ldots}{\frac12-\frac18x+\ldots}=2$

$\lim_{x\to0}\frac{\frac13\tan(3x)-\sin(x)}{2x^3}=\lim_{x\to0}\frac{\frac13\left(3x+\frac13(3x)^3+\ldots\right)-\left(x-\frac16x^3+\ldots\right)}{2x^3}=\lim_{x\to0}\frac{x+3x^3-x+\frac16x^3+\ldots}{2x^3}=\lim_{x\to0}\frac{\frac{19}{6}x^3+\ldots}{2x^3}=\frac{19}{12}$

Let us finally use power series to inspect more generally what can be said about limits of fractions:

$\lim_{x\to0}\frac{f(x)}{g(x)}=\lim_{x\to0}\frac{f(0)+xf'(0)+\frac12x^2f''(0)+\ldots}{g(0)+xg'(0)+\frac12x^2g''(0)+\ldots}$

Thus, if $f(0)=g(0)=0$ (the nontrivial limits, where Fermat's simple rule fails):

$\lim_{x\to0}\frac{f(x)}{g(x)}=\lim_{x\to0}\frac{xf'(0)+\frac12x^2f''(0)+\ldots}{xg'(0)+\frac12x^2g''(0)+\ldots}=\lim_{x\to0}\frac{f'(0)+\frac12xf''(0)+\ldots}{g'(0)+\frac12xg''(0)+\ldots}=\frac{f'(0)}{g'(0)}$

This result is called L'Hopital's rule.

Notes:
(i) Never forget that L'Hopital's rule applies only when $f(0)=g(0)=0$
(ii) In contrast to using power series directly, the rule works only if $g'(0)\neq0$

Examples of the use of L'Hopital's rule:

$\lim_{x\to0}\frac{a^x-1}{x}=\frac{\left[\ln(a)\,e^{x\ln(a)}\right]_{x=0}}{1}=\ln(a)$

$\lim_{x\to0}\frac{\sin(x)}{x}=\frac{\sin'(0)}{1}=\cos(0)=1$

$\lim_{x\to0}\frac{e^{7x}-1-\sin(\pi x/3)}{\ln(1+x)}=\frac{\left[7e^{7x}-\frac{\pi}{3}\cos(\pi x/3)\right]_{x=0}}{\left[(1+x)^{-1}\right]_{x=0}}=7-\frac{\pi}{3}$
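Such limits are easy to probe numerically by evaluating the fraction at shrinking $x$ (a sketch, illustration only):

```python
import math

# Evaluate (a^x - 1)/x at shrinking x; by L'Hopital's rule the limit is ln(a).
def f(a, x):
    return (a ** x - 1.0) / x

a = 5.0
for x in (1e-2, 1e-4, 1e-6):
    print(x, f(a, x))  # approaches ln(5)
print(math.log(a))
```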
8. Exercises
CALCULUS 1 TUTORIAL EXERCISES I
1. Use the induction method to prove the following statements:
$\forall n\in\mathbb{N}:\ \sum_{k=1}^{n}k^4=\frac{1}{30}n(n+1)(2n+1)(3n^2+3n-1)$
$\forall n\in\mathbb{N}:\ n+1\leq2^n\leq(n+1)!$
$\forall n\in\mathbb{N},\ n\geq4:\ n^2\leq2^n\leq n!$

2. Express the following three complex numbers in the form $a+ib$, with $a,b\in\mathbb{R}$:
$(3+i)-(2+6i) \qquad (1+i)(1+2i) \qquad (2-i)^2$

3. Let $z$ and $\bar z$ be a conjugate pair of complex numbers. Prove the following three identities:
$\mathrm{Re}(z)=\tfrac12(z+\bar z) \qquad \mathrm{Im}(z)=\tfrac{1}{2i}(z-\bar z) \qquad \overline{(\bar z)}=z$

4. Prove the following two statements:
$\forall z,w\in\mathbb{C}:\ \overline{z+w}=\bar z+\bar w$
$\forall z,w\in\mathbb{C}:\ \overline{z\cdot w}=\bar z\cdot\bar w$

5. Let $z$ and $\bar z$ be a conjugate pair of complex numbers. Prove the following claims:
$z\bar z\in\mathbb{R} \qquad z\bar z\geq0$
Prove that $z\cdot\bar z=0$ if and only if $z=0$.

6. Let $z,w\in\mathbb{C}$. Prove the following four statements:
$|\mathrm{Re}(z)|\leq|z| \qquad |\mathrm{Im}(z)|\leq|z| \qquad |z\cdot w|=|z|\cdot|w| \qquad |\bar z|=|z|$

7. Let $z,w\in\mathbb{C}$, with $w\neq0$. Prove the following statements:
$\overline{z/w}=\bar z/\bar w \qquad |z/w|=|z|/|w|$

8. Express the following five complex numbers in the form $a+ib$, with $a,b\in\mathbb{R}$:
$i^3 \qquad i^4 \qquad i^5 \qquad i^{63} \qquad (-i)^{10}$

9. Express the following five complex numbers in the form $a+ib$, with $a,b\in\mathbb{R}$:
$\frac{1}{2-3i} \qquad \frac{2+i}{1+3i} \qquad \frac{1-i}{1+i} \qquad (1+i\sqrt3)^3 \qquad (1-i\sqrt3)^3$

10. Find all solutions of the equation $z^3-1=0$. Hint: write $z^3-1$ as the product of $(z-1)$ and another factor.
CALCULUS 1 TUTORIAL EXERCISES II
11. Simplify the four complex numbers $(1+i)^4$, $(1-i)^4$, $(-1+i)^4$, and $(-1-i)^4$. Use your results to find all solutions of the equation $z^4+1=0$.

12. Let $z=1+2i$. Calculate the following seven complex numbers, and plot them in the complex plane (i.e. on an Argand diagram): $\bar z$, $1/z$, $1/\bar z$, $z+\bar z$, $z-\bar z$, $z\bar z$, and $z/\bar z$.

13. What are the moduli and the principal values of the arguments of the following complex numbers?
(a) $1+i$ (b) $1-i$ (c) $2-2i$ (d) $1+\sqrt3\,i$
(e) $1-\sqrt3\,i$ (f) $-1-i$ (g) $e^{2\pi i/3}$ (h) $2e^{i\pi/4}$ (i) $e^{3\pi i/4}$ (j) $e^{2+i\pi/2}$

14. Let $z=re^{i\theta}$, with $r,\theta\in\mathbb{R}$ and $r\geq0$. Find all values of $r$ and $\theta$ such that $z^5=1$. Plot the corresponding numbers in the complex plane.

15. Let $z=re^{i\theta}$, with $r,\theta\in\mathbb{R}$ and $r\geq0$. Find all values of $r$ and $\theta$ such that $z^6=64$. Plot the corresponding numbers in the complex plane.

16. Find all solutions $z\in\mathbb{C}$ of the equation $(z-1)^5=1$. Plot the corresponding numbers in the complex plane.

17. Find all solutions $z\in\mathbb{C}$ of the equation $(z-i)^6=64$. Plot the corresponding numbers in the complex plane.

18. Let $a$, $b$ and $u$ denote real numbers. Find the real and the imaginary parts of the following complex numbers:
(i) $e^{a+ib}$ (ii) $\frac{e^{ua}e^{ibu}}{a+ib}$ (iii) $e^{e^{iu}}$

19. Sketch the following sets in the complex plane:
$A=\{z\in\mathbb{C}\ |\ |z-i|^2=4\}$
$B=\{z\in\mathbb{C}\ |\ (z-i)^2=4\}$
$C=\{z\in\mathbb{C}\ |\ 1<|z+1|\leq3\}$
CALCULUS 1 TUTORIAL EXERCISES III
20. Prove by induction that $\sum_{k=1}^{n}2^{-k}=1-2^{-n}$ for all $n\in\mathbb{N}$. Use this result to determine the value of the series $\sum_{k=1}^{\infty}2^{-k}$. Give an example of a series of the form $\sum_{k=1}^{n}a_k$, with finite values of the sum for any finite $n$, but such that the summation $\sum_{k=1}^{\infty}a_k$ does not exist.

21. Show that for $x\in\mathbb{R}$ the series representation $e^z=\sum_{n=0}^{\infty}z^n/n!$ of the exponential function has the following properties (here and in subsequent exercises you will be allowed to interchange differentiation and summation):
$\frac{d}{dx}e^{ax}=ae^{ax} \qquad e^0=1$

22. Use the series representations of the trigonometric functions, i.e. $\sin(z)=\sum_{k=0}^{\infty}(-1)^k\frac{z^{2k+1}}{(2k+1)!}$ and $\cos(z)=\sum_{k=0}^{\infty}(-1)^k\frac{z^{2k}}{(2k)!}$, to derive the following properties for $\theta\in\mathbb{R}$:
$\frac{d}{d\theta}\sin(\theta)=\cos(\theta),\ \sin(0)=0 \qquad \frac{d}{d\theta}\cos(\theta)=-\sin(\theta),\ \cos(0)=1$

23. Give the values of $\cos(\theta)$ and $\sin(\theta)$ for the following choices of the angle:
$\theta=0,\ \frac{\pi}{6},\ \frac{\pi}{4},\ \frac{\pi}{3},\ \frac{\pi}{2},\ \frac{2\pi}{3},\ \frac{3\pi}{4},\ \frac{5\pi}{6},\ \pi,\ \frac{5\pi}{4},\ \frac{3\pi}{2},\ \frac{5\pi}{3}$
Give your values as fractions in terms of $\sqrt2$, $\sqrt3$ etc. (i.e. not as decimals as given by a calculator)

24. Show that $\cos(\pi/12)=\frac12\sqrt{2+\sqrt3}$. (Hint: use the formula for $\cos(2\theta)$ in terms of $\cos(\theta)$)

25. Find all solutions $\theta\in\mathbb{R}$ (if any) of the equation $\cot(\theta)+\tan(\theta)=\lambda$, for the following three cases:
(i) $\lambda=1$ (ii) $\lambda=2$ (iii) $\lambda=4$

26. Let $\alpha,\beta\in\mathbb{R}$. Rewrite each of the following expressions as a constant times a product of trigonometric functions:
(i) $\sin(\alpha-\beta)+\sin(\alpha+\beta)$ (ii) $\cos(\alpha-\beta)-\cos(\alpha+\beta)$
CALCULUS 1 TUTORIAL EXERCISES IV
27. Express the following combinations of functions in the form $c\,\sin(\theta+\phi)$, where $c$ and $\phi$ are real constants which you have to find (assume $|\phi|<\pi/2$):
(a) $\sin(\theta)+\cos(\theta)$ (b) $\sqrt3\sin(\theta)-\cos(\theta)$ (c) $\sin(\theta)-\tan(\psi)\cos(\theta)$

28. Find the values of
(a) $\arcsin(\sin(5\pi/6))$ (b) $\arcsin(\sin(7\pi/6))$ (c) $\arccos(\cos(7\pi/6))$ (d) $\arccos(\cos(11\pi/6))$

29. Show that the series representations of sinh and cosh are:
$\sinh(z)=\sum_{k=0}^{\infty}\frac{z^{2k+1}}{(2k+1)!} \qquad \cosh(z)=\sum_{k=0}^{\infty}\frac{z^{2k}}{(2k)!}$

30. Use the above series representations of the hyperbolic functions (rather than the definition in terms of exponentials) to re-derive the following for $x\in\mathbb{R}$:
$\frac{d}{dx}\sinh(x)=\cosh(x),\ \sinh(0)=0 \qquad \frac{d}{dx}\cosh(x)=\sinh(x),\ \cosh(0)=1$

31. Show that $\frac{d}{dx}\tanh(x)=1-\tanh^2(x)$.

32. Simplify the following complex numbers, in which $\theta,\phi\in\mathbb{R}$, to the standard form $a+ib$, with $a,b\in\mathbb{R}$:
(a) $\cosh(i\theta)$ (b) $\sinh(i\pi/3)$ (c) $\tanh(i\pi/6)$
(d) $\sin(\theta+i\phi)$ (e) $\cos(\theta-i\phi)$ (f) $\cosh(i\phi)$

33. Find formulae that express the hyperbolic functions $\sinh(x)$, $\cosh(x)$ and $\tanh(x)$ as ratios of polynomials of $t=\tanh(x/2)$.

34. Calculate the following three derivatives:
(a) $\frac{d}{dx}\operatorname{arccosh}(x)$ (b) $\frac{d}{dx}\operatorname{arcsinh}(x)$ (c) $\frac{d}{dx}\operatorname{arctanh}(x)$
CALCULUS 1 TUTORIAL EXERCISES V
35. Let $z\in\mathbb{C}$, $z\neq1$. Prove the following statement by induction:
$\sum_{k=0}^{n}z^k=\frac{1-z^{n+1}}{1-z}$
Now choose $z=e^{i\theta}$ with $\theta\in\mathbb{R}$, and use the identity that you have just proven to find formulas for the two sums $\sum_{k=0}^{n}\sin(k\theta)$ and $\sum_{k=0}^{n}\cos(k\theta)$. What if $\theta$ is a multiple of $2\pi$? Check the correctness of your formulas for the simplest cases $n=0$ and $n=1$.

36. Use the explicit formula for the function $\operatorname{arcsinh}:\mathbb{R}\to\mathbb{R}$, namely $\operatorname{arcsinh}(y)=\ln(y+\sqrt{y^2+1})$, to show that
$\operatorname{arcsinh}(\sinh(x))=x$ for all $x\in\mathbb{R}$
$\sinh(\operatorname{arcsinh}(y))=y$ for all $y\in\mathbb{R}$

37. Use the explicit formula for the function $\operatorname{arccosh}:[1,\infty)\to[0,\infty)$, namely $\operatorname{arccosh}(y)=\ln(y+\sqrt{y^2-1})$, to show that
$\operatorname{arccosh}(\cosh(x))=x$ for all $x\in[0,\infty)$
$\cosh(\operatorname{arccosh}(y))=y$ for all $y\in[1,\infty)$

38. Use the explicit formula for the function $\operatorname{arctanh}:(-1,1)\to\mathbb{R}$, namely $\operatorname{arctanh}(y)=\frac12\ln[(1+y)/(1-y)]$, to show that
$\operatorname{arctanh}(\tanh(x))=x$ for all $x\in\mathbb{R}$
$\tanh(\operatorname{arctanh}(y))=y$ for all $y\in(-1,1)$
CALCULUS 1 TUTORIAL EXERCISES VI
39. Let $n\in\mathbb{N}$, and calculate the limit $\lim_{n\to\infty}\left(1+\frac1n\right)^n$. Hint: substitute $n^{-1}=x$ and try to use limits calculated in the lectures.

40. Calculate the following limits, if they exist, without using power series:
(a) $\lim_{x\to0}\frac{1-\cos(x)}{x^2}$ (b) $\lim_{x\to0}\frac{\sin(7x)}{\tan(2x)}$ (c) $\lim_{x\to0}\frac{\sin(x^2)}{x}$ (d) $\lim_{x\to0}\cosh(x)$

41. Calculate the following limits, if they exist, using the power series representations of the hyperbolic functions:
(a) $\lim_{x\to0}\frac{\sinh(x)}{x}$ (b) $\lim_{x\to0}\frac{1-\cosh(x)}{x}$ (c) $\lim_{x\to0}\frac{1-\cosh(x)}{x^2}$ (d) $\lim_{x\to0}\frac{\tanh(x)}{x}$

42. Calculate the following limits by making clever substitutions:
(a) $\lim_{x\to\pi}\frac{\sin(x)}{x-\pi}$ (b) $\lim_{x\to0}\frac{\arcsin(x)}{x}$ (c) $\lim_{x\to0}\frac{\arcsin(3x)}{\tan(5x)}$ (d) $\lim_{x\to0}\frac{\operatorname{arctanh}(x)}{x}$

43. Calculate the following limits, if they exist, using any suitable method:
(a) $\lim_{x\to\infty}\frac{\sin(x)}{x}$ (b) $\lim_{x\to\infty}\frac{\cos(x)}{x^2}$ (c) $\lim_{x\to\infty}x(e^{1/x}-1)$ (d) $\lim_{x\downarrow0}x^x$
(e) $\lim_{x\to\infty}\frac{\ln(x)}{x^2+1}$ (f) $\lim_{x\downarrow0}\frac{x^x-1}{x\ln(x)}$ (g) $\lim_{x\downarrow0}e^{1/x}$ (h) $\lim_{x\downarrow0}x^{\sin(x)}$
(i) $\lim_{x\to0}\frac{a^x-b^x}{x}$ $(a>b>0)$ (j) $\lim_{x\downarrow0}\frac{(x+1)\ln(x)}{\sin(x)}$ (k) $\lim_{x\uparrow0}e^{1/x}$ (l) $\lim_{x\downarrow0}xe^{1/x}$
CALCULUS 1 TUTORIAL EXERCISES VII
44. Let $f_k(x):\mathbb{R}\to\mathbb{R}$ denote $n$ arbitrary functions, with $k=1,2,\ldots,n$, and let $\prod_{k=1}^{n}f_k(x)=f_1(x)\cdot f_2(x)\cdot\ldots\cdot f_{n-1}(x)\cdot f_n(x)$. Prove by induction the following generalized version of the product rule:
for all $n\in\mathbb{Z}^+$: $\quad\frac{d}{dx}\left(\prod_{k=1}^{n}f_k(x)\right)=\left(\sum_{\ell=1}^{n}\frac{f_\ell'(x)}{f_\ell(x)}\right)\left(\prod_{k=1}^{n}f_k(x)\right)$

45. Now prove the previous generalized product rule directly (i.e. without induction), for the special case where $f_k(x)>0$ for all $x\in\mathbb{R}$ and all $k$. Hint: write $f(x)=e^{g(x)}$ with $g(x)=\ln(f(x))$, and use the chain rule.

46. Calculate the following derivatives:
(a) $\frac{d}{dx}e^{x^2}$ (b) $\frac{d}{dx}\ln(\tan(x))$
(c) $\frac{d}{dx}(x\ln(x)-x)$ (d) $\frac{d}{dx}\arcsin(x^2)$
(e) $\frac{d}{dx}\arctan(e^x)$ (f) $\frac{d}{dx}e^{\sin(x)}$
(g) $\frac{d}{dx}xe^{\arctan(x)}$ (h) $\frac{d}{dx}\ln|\arcsin(x)|$

47. Calculate the following derivatives:
(a) $\frac{d}{dx}\arcsin\left(\frac{1-x}{1+x}\right)$ (b) $\frac{d}{dx}3^{\sin(x)}$
(c) $\frac{d}{dx}\ln|x+\sqrt{x^2-a^2}|$ (d) $\frac{d}{dx}\frac{\sqrt{1+x}}{x}$
(e) $\frac{d}{dx}\frac1x\arccos\left(\sqrt{1-x^2}\right)$ (f) $\frac{d}{dx}x^{x\sin(x)}$
(g) $\frac{d}{dx}\ln(\cosh(x)+\sinh(x))$ (h) $\frac{d}{dx}\ln\left(\sqrt{(1+x^2)/(1-x^2)}\right)$
CALCULUS 1 TUTORIAL EXERCISES VIII
48. Each of the following is an equation which determines $y$ as an implicit function of $x$. Find in all cases an expression for $dy/dx$.
(a) $x^2+y^2=1$ (b) $y^3+x^3=1$
(c) $\sinh(x)+\cosh(y)=1$ (d) $xy-e^{x+y}=2$
(e) $\sin(y)+y=x^3$ (f) $y^2+x(x-1)(x+1)=0$

49. Each of the following is a pair of equations which determine $x$ and $y$ in terms of a parameter $t\in\mathbb{R}$, and thereby define a function $y(x)$ implicitly. Find in all cases an expression for $dy/dx$.
(a) $x=\cos(t),\ y=\sin(t)$ (b) $x=\cosh(t),\ y=\sinh(t)$
(c) $x=t+\sin(t),\ y=\cos(t)$ (d) $x=t^3,\ y=t^2$
(e) $x=e^{2t},\ y=\tanh(t)$ (f) $x=e^t\cos(t),\ y=e^t\sin(t)$

50. Calculate the integral $A=\int_0^b e^{ax}dx$ (with $a>0$ and $b>0$) using the sandwich method, with suitably constructed bounding staircase functions, similar to how this was done in the lectures for integrals such as $\int_0^b\cos(x)dx$ and others. Hint: use staircases with steps of equal size $h$, and evaluate the summations that show up using the formula of exercise 35, i.e.
$z\neq1:\quad\sum_{k=0}^{n}z^k=\frac{1-z^{n+1}}{1-z}$

51. Find a primitive for each of the following functions, i.e. a function $F(x)$ such that $F'(x)=f(x)$. Prove your claims.
(a) $f(x)=1/(1+x)$ (b) $f(x)=1/(1-x)^2$
(c) $f(x)=1/(2x+1)^3$ (d) $f(x)=1/(1+x^2)$
(e) $f(x)=1/(1-x^2)$ (f) $f(x)=1/\sqrt{1-x^2}$
(g) $f(x)=1/\sqrt{1+x^2}$ (h) $f(x)=xe^{-\frac12x^2}$
CALCULUS 1 TUTORIAL EXERCISES IX
52. Calculate the following indefinite integrals using the method of substitution:
(a) $\int\frac{e^x\,dx}{1+e^x}$ put $x=\ln(y)$ (b) $\int\sqrt{1-x^2}\,dx$ put $x=\sin(y)$
(c) $\int\frac{x\,dx}{\sqrt{1-x^2}}$ put $x^2=1-y$ (d) $\int\frac{dx}{\sqrt{a^2+x^2}}$ put $x=at$
(e) $\int\frac{2x^3\,dx}{1+x}$ put $x=t-1$ (f) $\int\frac{x\,dx}{1+\sqrt{x}}$ put $x=t^2$

53. Calculate the following indefinite integrals using the method of substitution:
(a) $\int\cos(\sin(x))\cos(x)\,dx$ (b) $\int\frac{2x\,dx}{1+x^4}$
(c) $\int\frac{e^x\,dx}{e^{2x}+1}$ (d) $\int\frac{dx}{2\sqrt{1-x}}$
(e) $\int\frac{x\,dx}{\sqrt{1-x^2}}$ (f) $\int\frac{x^2\,dx}{\sqrt{1-x^2}}$
(g) $\int\frac{dx}{(1-x^2)^{3/2}}$ (h) $\int\frac{x\,dx}{(1-x^2)^{3/2}}$
(i) $\int\frac{x^2\,dx}{(1-x^2)^{3/2}}$ (j) $\int\frac{dx}{\cos(x)}$

54. Calculate the following indefinite integrals using integration by parts:
(a) $\int x\sinh(x)\,dx$ (b) $\int(1+x^2)e^x\,dx$
(c) $\int\arcsin(x)\,dx$ (d) $\int x^3\sin(x)\,dx$
(e) $\int x^2\left(\ln(x)\right)^2dx$ (f) $\int x\arctan(x)\,dx$
CALCULUS 1 TUTORIAL EXERCISES X
55. Define the indefinite integrals $I_n(x)$ for $n\in\mathbb{Z}^+$ as follows:
$I_n(x)=\int\frac{dx}{(1+x^2)^n}$
Use integration by parts to derive the following recursion formula:
$I_n(x)=\frac{x}{2(n-1)(1+x^2)^{n-1}}+\frac{2n-3}{2n-2}\,I_{n-1}(x)$
Use the recursion formula to find the indefinite integral $\int(1+x^2)^{-2}dx$.

56. Define the indefinite integrals $I_n(x)$ for $n\in\mathbb{Z}$ as $I_n(x)=\int\cos^n(x)\,dx$. Use integration by parts to derive the following recursion formula:
$I_n(x)=\frac1n\sin(x)\cos^{n-1}(x)+\frac{n-1}{n}\,I_{n-2}(x)$
Use the recursion formula to find the indefinite integral $\int\cos^4(x)\,dx$.

57. (i) Define the definite integrals $I_n(a)$ for $n\in\mathbb{Z}^+$ and $a\in\mathbb{R}$ as $I_n(a)=\int_{-\infty}^{\infty}x^{2n}e^{-ax^2}dx$. Assume that differentiation with respect to $a$ can be moved from outside to inside the integral. Use repeated differentiation with respect to $a$ to find a formula that expresses $I_n(a)$ in terms of $I_0(a)$.
(ii) Show that $I_0(a)=C/\sqrt{a}$, where $C=\int_{-\infty}^{\infty}e^{-y^2}dy$. From now on you may use (without proof) that $C=\sqrt{\pi}$.
(iii) Use the previous results to show that $\int_{-\infty}^{\infty}x^2e^{-x^2/2}dx=\sqrt{2\pi}$ and that $\int_{-\infty}^{\infty}x^4e^{-x^2/2}dx=3\sqrt{2\pi}$.

58. Use the method of partial fractions to calculate the following integrals:
(a) $\int\frac{dx}{x^2+x-6}$ (b) $\int\frac{x\,dx}{(1-x)^2(1+x)}$
(c) $\int\frac{dx}{2x^2+x-1}$ (d) $\int\frac{x\,dx}{12x^2-7x-12}$

59. Calculate the length $L$ of the curve in the plane described by the function $y=\ln(\cos(x))$, between the points $x=-\pi/4$ and $x=\pi/4$. Hint: use the result of an earlier exercise to deal with the integral.
CALCULUS 1 TUTORIAL EXERCISES XI
60. Find out for the following series whether they are convergent or divergent, using the various results stated and/or derived in the lectures or otherwise:
(a) $\sum_n n^{-3}$ (b) $\sum_n(-1)^n n^{-2}$ (c) $\sum_n 1/\sqrt{n}$ (d) $\sum_n n^{\alpha}e^{-n}$ $(\alpha>0)$

61. Find the radii of convergence for the following power series:
(a) $\sum_n x^n/n^3$ (b) $\sum_n(-1)^nx^n/n^2$ (c) $\sum_n x^n/\sqrt{n}$ (d) $\sum_n x^n n^{\alpha}e^{-n}$ $(\alpha>0)$

62. Derive the Taylor expansions for the following functions, up to order $N=3$ for the first three functions and up to order $N$ for the last one, and give in each case an exact expression for the remainder term:
(a) $f(x)=\tan(x)$ (b) $f(x)=\tanh(x)$
(c) $f(x)=x^3$ (d) $f(x)=\sqrt{1+x}\,e^{\sin(x)}$

63. Write the following two functions as power series of the form $\sum_{n=0}^{\infty}b_nx^n$, and determine for each the associated radius of convergence:
(a) $f(x)=1/(1-x)^2$ (b) $f(x)=\arctan(x)$
Substitute $x=1$ into your result under (b) (why is it not obvious that this is allowed?) and derive an expression for the number $\pi$ as a series (the so-called Leibniz series).

64. Find the first three terms (i.e. up to order $x^3$) of the Taylor expansions for the following two functions, by combining and/or manipulating the Taylor expansions of other functions that you know:
(a) $f(x)=\tanh(x)$ (b) $f(x)=\ln\left(\frac{1+2x}{1-2x}\right)$