Algorithmics
Andreas de Vries
Version: February 3, 2014
Contents

Part I  Foundations of algorithmics

1  Elements and control structures of algorithms

2  Algorithmic analysis
   2.1  Correctness (effectiveness)
        2.1.1  Correctness of Euclid's algorithm
   2.2  Complexity to measure efficiency
        2.2.1  Asymptotic notation and complexity classes
        2.2.2  Time complexity
        2.2.3  Algorithmic analysis of some control structures
   2.3  Summary

3  Recursions
   3.1  Introduction
   3.2  Recursive algorithms
   3.3  Searching the maximum in an array
   3.4  Recursion versus iteration
        3.4.1  Recursive extended Euclidean algorithm
   3.5  Complexity of recursive algorithms
   3.6  The towers of Hanoi

4  Sorting
   4.1  Simple sorting algorithms
   4.2  Theoretical minimum complexity of a sorting algorithm
   4.3  A recursive construction strategy: Divide and conquer
   4.4  Fast sorting algorithms
        4.4.1  MergeSort
        4.4.2  QuickSort
        4.4.3  HeapSort
   4.5  Comparison of sort algorithms

Part II  Optimization

6  Optimization problems
   6.1  Examples
   6.2  The general structure of optimization problems
        6.2.1  The search space
        6.2.2  The objective function
   6.3  Approaches to solve optimization problems
        6.3.1  Analytical solution methods for S ⊆ Rⁿ
        6.3.2  Combinatorial solution methods for S ⊆ Zⁿ
        6.3.3  Biologically inspired solution methods

7  Graphs and shortest paths
   7.1  Basic definitions
   7.2  Representation of graphs
        7.2.1  Adjacency matrices contra adjacency lists
   7.3  Traversing graphs
        7.3.1  Breadth-first search
        7.3.2  Depth-first search
   7.4  Cycles
        7.4.1  Hamiltonian cycle problem HC
        7.4.2  Euler cycle problem EC
   7.5  Shortest paths
        7.5.1  Shortest paths
        7.5.2  The principle of relaxation
        7.5.3  Floyd-Warshall algorithm
        7.5.4  Dijkstra algorithm

8  Dynamic Programming
   8.1  An optimum-path problem
        8.1.1  General observations
        8.1.2  Solving the path problem by dynamic programming
   8.2  The Bellman functional equation
   8.3  Production smoothing
        8.3.1  The problem
        8.3.2  Reformulation as a sequential decision problem
        8.3.3  The graphical solution
        8.3.4  The solution: Wagner-Whitin algorithm
   8.4  The travelling salesman problem

9  Simplex algorithm
   9.1  Mathematical formulation
   9.2  The simplex algorithm in detail
   9.3  What did we do? or: Why simplex?
        9.3.1  What actually is a simplex?
   9.4  Duality
        9.4.1  Economical interpretation of duality in linear optimization problems

10  Genetic algorithms
   10.1  Evolutionary algorithms
   10.2  Basic notions
   10.3  The canonical genetic algorithm
   10.4  The 0-1 knapsack problem
   10.5  Difficulties of genetic algorithms
        10.5.1  Premature convergence
        10.5.2  Coding
   10.6  The traveling salesman problem
   10.7  Axelrod's genetic algorithm for the prisoner's dilemma
   10.8  Conclusions

Appendix

A  Mathematics
   A.1  Exponential and logarithm functions
   A.2  Number theory
   A.3  Searching in unsorted data structures

B  Dictionary for mathematics and computer science

C  Arithmetical operations

Bibliography
Dear reader,
perhaps you are surprised to find these lecture notes written in English (or better: in a scientific idiom which is very similar to English). We have decided to write them in English for the following reasons. Firstly, any computer scientist or information system technologist has to read a lot of English documents, web sites, and text books, particularly if he or she wants to get to know innovative issues. So this is the main reason: training for non-natively English speaking students. Secondly, foreign students and international institutions shall benefit from these notes. Thirdly, the notes offer a convenient way to learn the English terminology corresponding to the German one given in the lecture.
To help you learn the English terminology, you will find a small dictionary at the end of these notes, presenting some of the expressions most widely used in computer science and mathematics, along with their German translations. In addition, some arithmetical terms are listed in English and in German.
You might also be surprised to find another human language: mathematics. Why mathematics in a book about algorithmics? Algorithms are, in essence, applied mathematics. Even if they deal with apparently unmathematical subjects such as manipulating strings or searching objects, mathematics is the basis. To mention just a few examples: the classical algorithmic concept of recursion is very closely related to the principle of mathematical induction; rigorous proofs are needed for establishing the correctness of given algorithms; and running times have to be computed.
The contents of these lecture notes cover a wide range. On the one hand they try to give the basic knowledge about algorithmics, such that you will learn the answers to the following questions: What is an algorithm and what are its building blocks? How can an algorithm be analyzed? How do standard well-known algorithms work? On the other hand, these lecture notes introduce the wide and important field of optimization. Optimization is a basic principle of human activity and thinking; it is involved in the sciences and in practice. It mainly deals with the question: How can a solution to a given problem under certain constraints be achieved with a minimum cost, be it time, money, or machine capacity? Optimization is a highly economical principle. However, any solution of an optimization problem is a list of instructions, such as "do this, then do that, but only under the condition that ...", i.e., an algorithm: the circle is closed.
So we think optimization to be one of the basic subjects for you as a student of business information systems, for it will be one of your main business activities in the future. Surely, no lecture can give an answer to all problems with which you will be challenged, but we think that it is important to understand that any optimization problem has a basic structure; it is the structure of a given optimization problem that you should understand, because then you may solve it in a more efficient way (you see, another optimization problem).
Of course, a single semester is much too short to mention all relevant aspects. But our hope is that you gain an intuitive feeling for the actual problems and obstacles. For this is what you really must have to face future challenges: understanding.
Hagen, February 3, 2014
Andreas de Vries
Introduction
The central notion of computer science is algorithm, not information. An algorithm is a detailed and finite recipe to solve a problem. Algorithms always act on data. So algorithms and data belong together. This simple fact is most consistently realized in the object-oriented approach: here algorithms are realized in so-called "methods" and data are named "attributes". Both together form a unity called "object".
The right choice of algorithms and data structures therefore is the most important step to solve a problem with the help of a computer. The subject of this script is the systematic study of algorithms and data in different kinds of application.
Operation            Operand structure
function             algebra
algorithm            data structure
procedure, function  data type
method               attributes

(method and attributes together form an "object")
An algebra determines a set of objects and its arithmetic, i.e., the way in which the objects can be calculated. It defines an operation called "addition", one called "scalar multiplication", and one called "multiplication". An example of an algebra is a vector space, where the objects are vectors, where multiplication is the vector product, and addition and scalar multiplication are as usual.
Algebra          Objects          Arithmetic operations
number algebra   numbers x, y     x + y, x − y, x · y, x/y
vector algebra   vectors v, w     v + w, v · w, x · v (for x ∈ R)
matrix algebra   matrices A, B    A + B, A · B, x · A, A · v (for x ∈ R)
more. They fly aeroplanes and starships, control power stations and cars, find and store information, or serve as worldwide communication devices. Over the last three decades, computers have caused a technological, economic, and social revolution which could hardly have been foreseen.
Parallel to the technology changes, and in part having enabled them, there is a development of various programming languages. From the first higher programming languages of the 1950s for scientific and business-oriented computations, like Fortran and COBOL, to internet-based languages like Java or PHP, every new field of activity has made available new programming languages specialized in it.
In view of these impressive and enormous developments, the question may be raised: Is there anything that remains constant during all these changes? Of course, there are such constants, and they were to a great part stated already before the invention of the first computers in the 1930s, mainly achieved by the mathematicians Gödel, Turing, Church, Post and Kleene: these are the fundamental laws underlying any computation and hence any programming language. These fundamental laws of algorithms are the subject of this book, not a particular programming language.
However, in this book the study of algorithms is done against the background of, and influenced by, the structure of Java, one of the most elaborate and widely used programming languages. In particular, the pseudocode used to represent algorithms is strongly influenced by the syntax of Java, although it should be understandable without knowing Java.
References
The literature on algorithmics and optimization is immense. The following list is only a tiny and incomplete selection.
T. H. Cormen et al.: Introduction to Algorithms [5]. A classical standard reference, with considerable breadth and depth of subjects. A must for a computer scientist.
R. L. Graham, D. E. Knuth & O. Patashnik: Concrete Mathematics [16]. A very good reference for the mathematical foundations of computer programming and algorithmics ("concrete" is a blending of "continuous" and "discrete"). One of the authors, Knuth, is the inventor of the fabulous typesetting system TeX, the essential basis of the LaTeX system these lecture notes are set with.
H. P. Gumm & M. Sommer: Einführung in die Informatik [18]. A broad introduction to computer science, with emphasis on programming.
D. Harel & Y. Feldman: Algorithmik. Die Kunst des Rechnens [20]. Gives a good overview of the wide range of algorithms and the underlying paradigms, even mentioning quantum computation.
D. W. Hoffmann: Theoretische Informatik [23]. A broad introduction to theoretical computer science.
F. Kaderali & W. Poguntke: Graphen, Algorithmen, Netze [25]. A basic introduction to the theory of graphs and graph algorithms.
T. Ottmann & P. Widmayer: Algorithmen und Datenstrukturen [32]. A classical standard reference.
A. Barth: Algorithmik für Einsteiger [1]. A nice book explaining the principles of algorithmics.
W. Press et al.: Numerical Recipes in C++ [35]. For specialists, or special problems. For lots of standard, but also rather difficult, problems there is a short theoretical introduction and a description of efficient solutions. Requires some background in mathematics.
For further reading in German I recommend [19, 22, 37, 43].
Part I
Foundations of algorithmics
Chapter 1
Elements and control structures of
algorithms
1.1
Mathematical notation
Definition 1.1. For any real number x we denote by ⌊x⌋ the greatest integer which is less than or equal to x, or more formally:

⌊x⌋ = max { n ∈ Z | n ≤ x }.   (1.1)

The ⌊...⌋ signs are called floor brackets or lower Gauß brackets.
For example we have ⌊5.43⌋ = 5, ⌊π⌋ = 3, ⌊√2⌋ = 1, ⌊−5.43⌋ = −6. Note that for two positive integers m, n ∈ N we have

⌊m/n⌋ = m div n,

where div denotes integer division. In Java, integer division of two variables int m, n truncates toward zero, so

⌊m/n⌋ = m/n  if m · n ≥ 0,
⌊m/n⌋ = m/n − 1  if m · n < 0 and n does not divide m

(if n divides m exactly, no correction is needed).
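This case distinction can be sketched in Java directly; the helper name floor and the test values below are illustrative additions, not from the text:

```java
public class FloorDivDemo {
    // Floor of m/n: Java's integer division m/n truncates toward zero,
    // so a negative, non-exact quotient needs a correction of -1.
    static int floor(int m, int n) {
        int q = m / n;
        // (m ^ n) < 0 tests whether m and n have opposite signs
        return (m % n != 0 && (m ^ n) < 0) ? q - 1 : q;
    }

    public static void main(String[] args) {
        System.out.println(floor(7, 2));   // 3
        System.out.println(floor(-7, 2));  // -4
        System.out.println(floor(-6, 2));  // -3 (exact division: no correction)
    }
}
```

Since Java 8 the same behaviour is available directly as Math.floorDiv(m, n).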
1.1.1 The modulo operation

The modulo operation assigns to two integers n and m the remainder k of the division of n by m; we write k = n mod m, or k ≡ n (mod m). For positive numbers m and n, this means the same as %. However, for n < 0 and m > 0, there is a difference:

n mod m = (m + n % m) % m   if n < 0 and m > 0.   (1.2)

This difference stems from the fact that "modulo m" mathematically means a consistent arithmetic only with numbers k satisfying 0 ≤ k < m, whereas % denotes the remainder of an integer division.
n        −5 −4 −3 −2 −1  0  1  2  3  4  5  6  7  8  9 10
n mod 3   1  2  0  1  2  0  1  2  0  1  2  0  1  2  0  1
n % 3    −2 −1  0 −2 −1  0  1  2  0  1  2  0  1  2  0  1

For instance, −5 % 3 = −(5 % 3) = −2, but −5 mod 3 = 1. Thus the result of the modulo operation always gives a nonnegative integer k < 3, cf. [16, §3.4].
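Equation (1.2) translates into a one-line Java helper; the method name mod below is an illustrative addition:

```java
public class ModDemo {
    // Mathematical modulo, always in 0..m-1 for m > 0, built from Java's
    // remainder operator % as in equation (1.2); this form also works for n >= 0
    static int mod(int n, int m) {
        return ((n % m) + m) % m;
    }

    public static void main(String[] args) {
        System.out.println(-5 % 3);      // -2: remainder of integer division
        System.out.println(mod(-5, 3));  // 1:  mathematical modulo
        System.out.println(mod(7, 3));   // 1:  agrees with % for positive n
    }
}
```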
1.2
The notion of algorithm is basic to all of computer programming. The word "algorithm" itself is quite interesting. It comes from the name of the great Persian mathematician, Abu Abdullah abu Jafar Muhammad ibn Musa al-Khwarizmi (about 780 to about 850), literally "Father of Abdullah, Jafar Mohammed, son of Moses, native of Khwarizm". The Aral Sea in Central Asia was once known as Lake Khwarizm, and the Khwarizm region is located south of that sea. Al-Khwarizmi wrote the celebrated book Kitab al-jabr wal-muqabala ("Rules of restoring and equating"), which was a systematic study of the solutions of linear and quadratic equations. From the title of this book stems the word "algebra" (al-jabr). For further details see [26, 27].
Very famous, and older than 2300 years, is Euclid's algorithm, named after the Greek mathematician Euclid (350-300 B.C.). Perhaps he did not invent it, but he is the first one known to have written it down. The algorithm is a process for finding the greatest common divisor of two numbers.
Algorithm 1.2. (Euclid's algorithm) Given two positive integers m and n, find their greatest common divisor gcd, that is, the largest positive integer that evenly divides both m and n.

E1. [Exchange m and n] Exchange m ↔ n.

E2. [Reduce n modulo m] Assign to n its value modulo m: n ← n % m. (Remember: modulo means remainder of division; after the assignment we have 0 ≤ n < m.)

E3. [Is n greater than zero?] If n > 0, loop back to step E1; if n ≤ 0, the algorithm terminates, and m is the answer.
Let us illustrate by an example how Euclid's algorithm works. Consider m = 6, n = 4. Step E1 exchanges m and n such that m = 4 and n = 6; step E2 yields the values m = 4, n = 2; because in E3 still n > 0, step E1 is done again.
Again arriving at E1, m and n are exchanged, yielding the new values m = 2, n = 4; E2 yields the new value n = 0, and still m = 2; E3 tells us that m = 2 is the answer. Thus the greatest common divisor of 6 and 4 is 2:

gcd(6, 4) = 2.

A first observation is that a verbal description is not a very convenient technique for describing the effect of an algorithm. Instead, we will create a value table denoting the values depending on time, so to speak the evolution of values during the flow of the algorithm; see Table 1.1.
1.2.1
Pseudocode
A convenient way to express an algorithm is pseudocode. This is an artificial and informal language which is similar to everyday English, but also resembles higher-level programming languages such as Java, C, or Pascal. (In fact, one purpose of pseudocode is to enable the direct transformation into a programming language; pseudocode is "the mother of all programming languages"). Euclid's algorithm in pseudocode reads as follows:
euclid (m, n) {
   while (n > 0) {
      m ↔ n;       // exchange m and n
      n ← n % m;
   }
   return m;
}
By convention, any assignment is terminated by a semicolon (;). This is in accordance with most of the common programming languages (especially Java, C, C++, Pascal, PHP). Remarkably, Euclid's algorithm is rather short in pseudocode. Obviously, pseudocode is a very effective way to represent algorithms, and we will use it throughout this script.
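Since the pseudocode is modeled on Java syntax, it transfers almost verbatim; the following class is an illustrative transcription, not part of the original text:

```java
public class Euclid {
    // Direct transcription of the pseudocode above:
    // exchange m and n, then reduce n modulo m, until n = 0
    static int euclid(int m, int n) {
        while (n > 0) {
            int t = m; m = n; n = t;  // m <-> n
            n = n % m;                // n <- n % m
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(euclid(6, 4));       // 2
        System.out.println(euclid(1071, 462));  // 21
    }
}
```

Note that the exchange of the pseudocode needs a temporary variable t in Java, exactly as defined for instruction (1.4) below.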
We use the following conventions in our pseudocode.
1. In the first line the name of the algorithm appears, followed by the required parameters in
parentheses: euclid (m, n)
2. Indentation indicates block structure. For example, the body of the while-loop consists of only one instruction. Often we will indicate block structure in addition by {...} (as in Java or C), but it could easily be read as begin ... end (as in Pascal).
3. As control structure keywords we use only while, for, and if ... else, as in the common programming languages (see below for details on control structures).
4. Comments are indicated by the double slash //. It means that the rest of the line is a comment.
5. We use the semicolon (;) to indicate the end of an instruction.
With the Euclidean algorithm we have explored the basic elements out of which a general algorithm can be built: the possible operations, the assignment, and three control structures. With these elements we are able to define what an algorithm actually is.
1.3

1.3.1

An operation can be regarded as a function

f : D → R   (1.3)

with the domain of definition D ⊆ R^d and the range R ⊆ R^r, where d, r ∈ N ∪ {∞}. For instance, the modulo operation is given by the function

f : N² → N,   f(m, n) = m % n.

Here d = 2 and r = 1.
Even non-numerical operations such as assignment of a memory address or string addition (concatenation) are possible operations; the sets D and R only have to be defined appropriately. ("In the end: All strings are natural numbers", [3] p. 213.) Also Boolean functions evaluating logical expressions (such as x < y) are possible.
1.3.2 Instructions

The exchange of two variables, m ↔ n, is defined as the sequence of assignments

t ← m;  m ← n;  n ← t;   (1.4)

Sometimes we will use multiple assignments m ← n ← t; this means that both variables m and n are assigned the value of t.
1.4
Control structures
Only one instruction can be executed at a time. It is the order of execution that can vary, determined
by the so-called control structure. There are five types of flow control.
1.4.1
Sequence
It is the simplest of the control structures. Here each instruction is simply executed one after the other. In pseudocode, the sequence structure simply reads
instruction 1;
...;
instruction n;
1.4.2
Selection, choice
The selection is used to choose among alternative courses of instructions. In pseudocode it reads
if (condition) {
instruction 1;
...;
instruction m
} else {
instruction 1;
...;
instruction n;
}
Here a condition is a logical proposition being either true or false. It is also called a Boolean expression. If it is true, the instructions i1, ..., im are executed; if it is false, the instructions e1, ..., en of the else-branch are executed. If n = 0, the else-branch can be omitted completely. An example is given in Euclid's algorithm:

if (m < n) {
   m ↔ n
}
1.4.3
Loop, repetition
In a loop a given sequence of instructions is repeated as long as a specific condition is true. In pseudocode it is expressed by
while (condition) {
instruction 1;
...;
instruction n;
}
If the loop is performed a definite number of times, we also use the for-statement:
for (i = 1 to m) {
instruction 1;
...;
instruction n;
}
It means that the instruction block is executed m times.
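The equivalence of the two loop forms can be seen in a small Java fragment; the summation task chosen here is an illustrative addition, not from the text:

```java
public class Loops {
    // Sum 1 + 2 + ... + m with a while-loop
    static int sumWhile(int m) {
        int sum = 0, i = 1;
        while (i <= m) {   // repeat as long as the condition holds
            sum += i;
            i++;
        }
        return sum;
    }

    // The same sum with the definite for-loop
    static int sumFor(int m) {
        int sum = 0;
        for (int i = 1; i <= m; i++)  // executed exactly m times
            sum += i;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumWhile(10)); // 55
        System.out.println(sumFor(10));   // 55
    }
}
```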
1.4.4
Subroutine calls
Subroutines are algorithms which can be invoked from another algorithm simply by calling their name with the appropriate parameters, and whose results can be used. The pseudocode of a subroutine call may look like:

myAlgorithm (m, n) {
   k ← m · subroutine(n);
   return k;
}

A special subroutine call is the recursion, which we will consider in more detail below. The terminology varies; subroutines are also known as routines, procedures, functions (especially if they return results) or methods.
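A concrete instance of a subroutine call, chosen here as an illustration and not taken from the text: the least common multiple can invoke the Euclidean algorithm as a subroutine and use its return value.

```java
public class Subroutine {
    // The Euclidean algorithm, used as a subroutine below
    static int gcd(int m, int n) {
        while (n > 0) { int t = m % n; m = n; n = t; }
        return m;
    }

    // lcm calls gcd and uses its result
    static int lcm(int m, int n) {
        return m / gcd(m, n) * n;  // divide first to avoid overflow
    }

    public static void main(String[] args) {
        System.out.println(lcm(6, 4)); // 12
    }
}
```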
1.4.5 Exception handling

Exception handling is a construct to handle the occurrence of some exception which prevents the algorithm from proceeding in a well-defined way. Such an exception may be a division by zero which may occur during its flow of execution. The pseudocode of an exception handling is given as follows:
try {
instruction 1;
...;
instruction n;
} catch ( exception1 A ) {
instruction A;
} catch ( exception2 B ) {
instruction B;
}
Here the try-block contains the instructions of the algorithm. These instructions are monitored to perform correctly. If an exception occurs during their execution, it is said to be "thrown", and according to its nature it is "caught" by one of the following catch-blocks, i.e., the execution flow is terminated and jumps to the appropriate catch-block. The sequence of catch-blocks has to be arranged from the special cases to the more general cases. For instance, the first caught exception may be an arithmetic exception such as division by zero, the next one a more general runtime exception such as number parsing of a non-numeric input, or an IO exception such as trying to read a file which is not present, and so on to the catch-block for the most general exception. In Java, the most general exception is an object of the class Exception.
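In Java this ordering from special to general catch-blocks looks as follows; the divide helper and its result strings are an illustrative sketch, not from the text:

```java
public class Exceptions {
    // Parse two numbers and divide them; catch-blocks are ordered from
    // the most special (ArithmeticException) to the more general (RuntimeException)
    static String divide(String a, String b) {
        try {
            int x = Integer.parseInt(a);
            int y = Integer.parseInt(b);
            return "result: " + (x / y);
        } catch (ArithmeticException e) {   // e.g. division by zero
            return "division by zero";
        } catch (RuntimeException e) {      // e.g. NumberFormatException
            return "not a number";
        }
    }

    public static void main(String[] args) {
        System.out.println(divide("42", "2")); // result: 21
        System.out.println(divide("42", "0")); // division by zero
        System.out.println(divide("x", "2"));  // not a number
    }
}
```

Reversing the two catch-blocks would not even compile in Java, since ArithmeticException is a subclass of RuntimeException and would become unreachable.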
14
1.5
Definition of an algorithm
Now we are ready to define a general algorithm. In essence, it computes for every input a deterministic output by executing finitely many instructions. More formally:

Definition 1.3. An algorithm is a finite sequence of instructions, constructed by one of the control structures, which takes a (possibly empty) set of values as input and produces a unique set of values as output in a finite time.

(Schematically: input (x1, x2, ...) → algorithm → output.)
The output usually should not be an empty set, for then the algorithm has no output and is needless. It is interesting to note that any algorithm can be expressed by the first three control structures sequence, selection, and loop alone. This is a theoretical result from the 1960s. This definition is equivalent to another theoretical concept, the Turing machine. In principle, this is a computing device executing a program. It is the theoretical model of a general computer program and was originally studied by Turing in the 1930s.
So this is an algorithm. Further synonymous notions are routine, process, or method. By our definition, an algorithm thus has the following important properties:
1. (finite) An algorithm always terminates after a finite number of steps. Euclid's algorithm for instance is finite, because in step E1 m is always assigned the maximum value max(m, n) of m and n, whereas in step E2 m properly decreases. So for each initial pair (m, n) the algorithm will definitely terminate. (Note, however, that the number of steps can become arbitrarily large; certain huge choices for m and n could cause step E1 to be executed more than a million times.) It can be proved that Euclid's algorithm for two natural numbers m and n takes at most N loops, where N is the greatest natural number with

N ≤ 2.078 ln[max(m, n)] + 0.6723.
2. (definite) Each step of an algorithm is defined precisely. The actions to be carried out must be rigorously and unambiguously specified for each case.
3. (elementary) All operations must be sufficiently basic that they can in principle be done exactly and in a finite length of time by someone using pencil and paper. Operations may be clustered into more complex operations, but in the end they must be definitely reducible to elementary mathematical operations.
4. (input) An algorithm has zero or more inputs, i.e., data which are manipulated.
5. (output or return) An algorithm has one or more returns, i.e., information gained by the data and the algorithm.
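The loop bound stated in property 1 can be checked experimentally; the counting helper below and the chosen test pair (two consecutive Fibonacci numbers, the classical worst case for Euclid's algorithm) are illustrative additions:

```java
public class EuclidSteps {
    // Count the while-loop iterations of the Euclidean algorithm
    static int steps(int m, int n) {
        int count = 0;
        while (n > 0) {
            int t = m % n; m = n; n = t;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int m = 6765, n = 4181; // consecutive Fibonacci numbers F(20), F(19)
        double bound = 2.078 * Math.log(Math.max(m, n)) + 0.6723;
        System.out.println(steps(m, n));          // 18
        System.out.println(steps(m, n) <= bound); // true
    }
}
```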
Chapter 2
Algorithmic analysis
There are two properties which have to be analyzed when designing and checking an algorithm. On the one hand it has to be correct, i.e., it must effectively answer the posed problem. Usually, demonstrating the correctness of an algorithm is a difficult task; it requires a mathematical proof. On the other hand, an algorithm should find a correct answer efficiently, i.e., as fast as possible and with minimum memory space.
2.1
Correctness (effectiveness)
A major task in the analysis of algorithms is to show that they are correct. It is not sufficient to test an algorithm with selected examples: if a test fails, the algorithm is indeed shown to be incorrect, but even if some tests are o.k., the algorithm may be wrong nonetheless. A famous example is the function (due to Euler)

f(n) = n² + n + 41.   (2.1)

If one asserts that f(n) yields a prime number, one can test it for n = 0, 1, 2, 3, yes even for n = 10, where f(10) = 151 (a prime number). This seems to verify the assertion. But for n = 40 suddenly we have f(40) = 41²: that is not a prime number!
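This failure is easy to reproduce by machine; the trial-division primality test below is an illustrative addition, not part of the text:

```java
public class EulerPolynomial {
    // Naive primality test by trial division
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    static int f(int n) { return n * n + n + 41; }

    public static void main(String[] args) {
        for (int n = 0; n < 40; n++)
            if (!isPrime(f(n))) System.out.println("composite at n = " + n);
        // Nothing is printed so far: f(0), ..., f(39) are all prime. But:
        System.out.println(f(40) + " prime: " + isPrime(f(40))); // 1681 prime: false
    }
}
```

So 40 successful tests "verify" the claim, yet f(40) = 1681 = 41² refutes it.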
What we need is a mathematical proof, verifying the correctness rigorously.
It is historically interesting to note that Euclid did not prove the correctness of his algorithm! He in fact verified the result of the algorithm only for one or three loops. Not having the notion of a proof by mathematical induction, he could only give a proof for a finite number of cases. (In fact, he often proved only the case n = 3 of a theorem he wanted to establish for general n.) "Although Euclid is justly famous for the great advances he made in the art of logical deduction, techniques for giving valid proofs were not discovered until many centuries later. The crucial ideas for proving the validity of algorithms are only nowadays becoming really clear" [28, p. 336].
2.1.1 Correctness of Euclid's algorithm
Definition 2.1. A common divisor of two integers m and n is an integer that divides both m and n.

We have so far tacitly supposed that there always exists a greatest common divisor. To be rigorous we have to show two things: there exists at least one divisor, and there are finitely many divisors. But we already know that 1 is always a divisor; on the other hand, the set of all divisors is finite, because by Theorem A.3 (iv) all divisors have an absolute value bounded by |n|, as long as n ≠ 0. Thus there are at most 2n − 1 divisors of a non-vanishing n. A finite non-empty set has a greatest element, which yields the unique greatest common divisor of m, n, denoted gcd(m, n). By our short discussion we can conclude

1 ≤ gcd(m, n) ≤ max(|m|, |n|)   if m ≠ 0 or n ≠ 0.   (2.2)
for all m, n ∈ Z.   (2.3)

Consider the sequence of remainders r_k defined by

r_0 = m,  r_1 = n,  r_{k+1} = r_{k−1} % r_k  for k ≥ 1,  so that 0 ≤ r_{k+1} < r_k.   (2.5)

Then r_2, r_3, ... is the sequence of remainders that are computed in the while-loop. Also, after the k-th iteration of the while-loop we have m = r_k, n = r_{k+1}. It follows from Theorem 2.2 (ii) that gcd(r_{k+1}, r_k) = gcd(m, n) is not changed during the algorithm, as long as r_{k+1} > 0. Thus we only need to prove that there is a k such that r_{k+1} = 0. But this follows from the fact that by (2.5) the sequence (r_k)_{k≥1} is strictly decreasing, so the algorithm terminates surely. But if r_{k+1} = 0, we have simply that gcd(r_{k−1}, r_k) = r_k, and thus m = r_k is the correct result. This concludes the proof of the correctness of the Euclidean algorithm, since after a finite time it yields gcd(m, n).
Q.E.D.
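The strictly decreasing remainder sequence of the proof can be made visible by tracing it; the helper below and the sample pair (100, 64) are illustrative additions:

```java
public class Remainders {
    // Remainder sequence r1 = n, r2, r3, ... from the correctness proof;
    // it decreases strictly, and the last nonzero value is gcd(m, n)
    static String remainders(int m, int n) {
        StringBuilder s = new StringBuilder();
        while (n > 0) {
            s.append(n).append(" ");
            int t = m % n; m = n; n = t;
        }
        return s.append("gcd=").append(m).toString();
    }

    public static void main(String[] args) {
        System.out.println(remainders(100, 64)); // 64 36 28 8 4 gcd=4
    }
}
```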
2.2 Complexity to measure efficiency

Another important aspect of algorithmic analysis is the complexity of an algorithm. There are two kinds of complexity which are relevant for an algorithm: time complexity T(n) and space complexity S(n). The time complexity is measured by the running time the algorithm requires from its start until its termination, and the space complexity measures its required memory space when implemented on a computer.
To analyze the running time and the space requirement of an algorithm exactly, we must know the details of the implementation technology, such as hardware and software. For instance, the running time of a given algorithm depends on the frequency of the CPU, and also on the underlying computer architecture; the required memory space, on the other hand, depends on the programming language and its representation of data structures. To determine the complexities of an algorithm thus appears to be an impracticable task. Moreover, the running time and required space calculated in this way are not only properties of the considered algorithm, but also of the implementation technology. However, we would appreciate measures which are independent of the implementation technology. To obtain such asymptotic and robust measures, the O-notation has been introduced.
2.2.1 Asymptotic notation and complexity classes
The notations we use to describe the complexities of an algorithm are defined in terms of functions

   T : ℕ → ℝ⁺,   n ↦ T(n).   (2.6)

That means, the domain of T consists of the natural numbers, which are mapped to nonnegative real
numbers. For example,

   T(n) = 2n² + n + 1,   or   T(n) = n ln n.

The complexity class O(g(n))

For two functions f, g : ℕ → ℝ⁺ we write f(n) ∈ O(g(n))

   if there exist c ∈ ℝ⁺, n0 ∈ ℕ such that f(n) ≤ c g(n) for all n ≥ n0.   (2.7)
O is also referred to as a Landau symbol or the big O. Figure 2.1 (a) illustrates the O-symbol.
Although O(g(n)) denotes a set of functions f(n) having the property (2.7), it is common to write

Figure 2.1: Graphic examples of the O, Ω, and Θ notations. In each part, the value of n0 shown is the minimum
possible value; of course, any greater value would also work. (a) O-notation gives an upper bound for a function up to a
constant factor. (b) Ω-notation gives a lower bound for a function up to a constant factor. (c) Θ-notation bounds a function
up to constant factors.

f(n) = O(g(n)) instead. We use the big-O notation to give an asymptotic upper bound on a function f(n), up to a constant factor.
Example 2.4. (i) We have 2n² + n + 1 = O(n²), because 2n² + n + 1 ≤ 4n² for all n ≥ 1. (That is,
c = 4, n0 = 1 in (2.7); note that we could also have chosen c = 3 and n0 = 2.)
(ii) More generally, any quadratic polynomial satisfies a2·n² + a1·n + a0 = O(n²). To show this we choose
c = |a2| + |a1| + |a0|; then

   a2·n² + a1·n + a0 ≤ c n²   for all n ≥ n0 (with n0 = 1).
Every positive integer n has a unique b-adic expansion

   n = Σ_{i=0}^{m} a_i bⁱ   (2.8)

with digits a_i ∈ {0, 1, ..., b − 1} and a_m ≠ 0. The maximum index m depends on n.¹ We write the expansion as digits (a_m a_{m−1} ... a_1 a_0)_b. Some
examples:
examples:
   b = 2 :   25 = 1·2⁴ + 1·2³ + 0·2² + 0·2¹ + 1·2⁰ = (11001)_2
   b = 3 :   25 = 2·3² + 2·3¹ + 1·3⁰ = (221)_3
   b = 4 :   25 = 1·4² + 2·4¹ + 1·4⁰ = (121)_4
   b = 5 :   25 = 1·5² + 0·5¹ + 0·5⁰ = (100)_5
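The expansion can be computed by repeated division with remainder; a minimal Python sketch (the function name digits is an illustrative choice):

```python
def digits(n, b):
    """Return the b-adic digits (a_m, ..., a_1, a_0) of a positive integer n."""
    ds = []
    while n > 0:
        ds.append(n % b)   # the digit a_0 of the current remainder
        n //= b
    return ds[::-1]        # most significant digit first
```

For instance, digits(25, 2) reproduces the expansion (11001)_2 from the table above.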
Let now l_b(n) denote the length of the b-adic expansion of a positive integer n. Then

   l_b(n) = ⌊log_b n⌋ + 1 ≤ log_b n + 1 = ln n / ln b + 1.

If n ≥ 3 (i.e., n0 = 3), we have ln n > 1, and therefore ln n / ln b + 1 < (1/ln b + 1) ln n, i.e.,

   l_b(n) < c ln n   with c = 1/ln b + 1.

Therefore we have

   l_b(n) = O(ln n),   (2.9)
no matter what the value of b is. Therefore the number of digits of n in any number system belongs
to the same complexity class O(ln n).
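This can be checked numerically; the following sketch counts the digits by repeated integer division (the name l_b is taken from the text, the loop is an implementation choice):

```python
def l_b(n, b):
    """Length of the b-adic expansion of n >= 1, i.e. floor(log_b n) + 1."""
    length = 0
    while n > 0:
        length += 1
        n //= b   # strip one digit per step
    return length
```

For example, l_b(25, 2) = 5 and l_b(25, 5) = 3, matching the expansions (11001)_2 and (100)_5.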
The complexity class Ω(g(n))

The Ω-notation provides an asymptotic lower bound. For two functions f, g : ℕ → ℝ⁺ we write

   f(n) = Ω(g(n))  ⟺  f(n) ∈ Ω(g(n))

   if there exist c ∈ ℝ⁺, n0 ∈ ℕ such that c g(n) ≤ f(n) for all n ≥ n0.   (2.10)

We then say that f(n) dominates g(n). The intuition behind this is shown in Figure 2.1 (b).
Example 2.5. We have n³/2 − n + 1 = Ω(n²), because n³/2 − n + 1 > n²/3 for all n ≥ 1. (That is, c = 1/3,
n0 = 1 in (2.10).)
The complexity class Θ(g(n))

If a function f(n) satisfies both conditions f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)), we call it asymptotically tightly bounded by g(n), and we write

   f(n) = Θ(g(n)),  or (correctly):  f(n) ∈ Θ(g(n)).   (2.11)
A function f(n) thus belongs to the set Θ(g(n)) if there are two positive constants c1 and c2 such that
it can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large n. Figure 2.1 (c) gives an
intuitive picture of the functions f(n) and g(n). For all values of n to the right of n0, f(n) lies at or above
c1·g(n) and at or below c2·g(n). In other words, for all n ≥ n0 the function f(n) is equal to the function
g(n) up to a constant factor.
The definition of Θ(g(n)) requires that every member f(n) of Θ(g(n)) is asymptotically nonnegative, i.e. f(n) ≥ 0 whenever n is sufficiently large. Consequently, the function g(n) itself must be
asymptotically nonnegative (or else Θ(g(n)) is empty).
¹ This is an important result from elementary number theory. It is proved in any basic mathematical textbook, e.g.
[13, 33].
Example 2.6. (i) Since we have 2n² + n + 1 = O(n²) and 2n² + n + 1 = Ω(n²), we also have 2n² +
n + 1 = Θ(n²).
(ii) Let b be an integer with b > 1 and l_b(n) = ⌊log_b n⌋ + 1 the length of the b-adic expansion of
a positive integer n. Then (c − 1) ln n ≤ l_b(n) < c ln n for n ≥ 3 and with c = 1/ln b + 1. Therefore
we have

   l_b(n) = Θ(ln n).   (2.12)
The complexity classes of polynomials are rather easy to determine. A polynomial f_k(n) of
degree k for some k ∈ ℕ0 is the sum

   f_k(n) = Σ_{i=0}^{k} a_i nⁱ = a0 + a1·n + a2·n² + a3·n³ + ... + a_k·n^k,

with constant coefficients a_i ∈ ℝ. We can then state the following theorem.
Theorem 2.7. A polynomial of degree k (with a_k > 0) is contained in the complexity class Θ(n^k), i.e., f_k(n) = Θ(n^k).

Example 2.8. We saw above that the polynomial 2n² + n + 1 is in the complexity class Θ(n²), in accordance with the theorem. The polynomial, however, is not contained in the following complexity classes:

   2n² + n + 1 ≠ O(n),   2n² + n + 1 ≠ Ω(n³),   2n² + n + 1 ≠ Θ(n³).
2.2.2 Time complexity
The running time T(n) of an algorithm on a particular input of size n is the number of instructions
(steps) executed. We roughly assume a constant amount t0 of time for each instruction. We can
therefore restrict ourselves to counting only the steps executed during the algorithm, because the
real physical time is then obtained by multiplying by t0. For example, if for an input
of size n the number of instructions executed is 2n² + 3, then we will write for short

   T(n) = 2n² + 3,

although we should write T(n) = (2n² + 3) t0. This is common in computer science, because t0 is a
machine-dependent quantity and does not depend on the algorithm.
Analysis of the running time T is done in two ways:
1. Worst-case analysis determines the upper bound of running time for any input. Knowing it will
give us the guarantee that the algorithm will never take any longer.
2. Average-case analysis determines the running time of a typical input, i.e. the expected running
time. It sometimes may come out that the average time is as bad as the worst-case running time.
The complexity of an algorithm is measured by the number T(n) of instructions to be executed, where T is
a function of the size n of the input data. If, e.g., T(n) = 3n + 4, we say that the algorithm
is of linear time complexity, because T(n) = 3n + 4 is a linear function. Time complexity functions
that occur frequently are given in the following table, cf. Figure 2.2.

   Complexity T(n)               Notation
   ln n, log_2 n, log_10 n, ...  logarithmic time complexity O(log n)
   n, n², n³, ...                polynomial time complexity O(n^k)
   2ⁿ, eⁿ, 3ⁿ, 10ⁿ, ...          exponential time complexity O(kⁿ)
Figure 2.2: Qualitative behavior of typical functions of the three complexity classes O(ln n), O(n^k), O(kⁿ), k ∈ ℝ⁺.
Definition 2.9. An algorithm is called efficient if T(n) = O(n^k) for a constant k, i.e., if it has polynomial or even logarithmic time complexity.
Analyzing even a simple algorithm can be a serious challenge. The mathematical tools required
include discrete combinatorics, probability theory, algebraic skill, and the ability to identify the most
significant terms in a formula.
Example 2.10. It can be proved² that the Euclidean algorithm has a running time

   T_Euclid(m, n) = O(log²(mn)),   (2.13)

if all divisions and iterative steps are considered. (However, it may terminate even for large numbers
m and n after a single iteration step, namely if m | n or n | m.) Therefore, the Euclidean algorithm
is efficient, since it has logarithmic running time in the worst case, depending on the sizes of its
input numbers.
2.2.3 Algorithmic analysis of some control structures
In the sequel let S1 and S2 be instructions (or instruction blocks) with running times T1(n) = O(f(n))
and T2(n) = O(g(n)). We assume that both f(n) and g(n) differ from zero, i.e. O(f(n)) and O(g(n)) are at least
O(1):

   O(1) ⊆ O(f(n)),   O(1) ⊆ O(g(n)).

The running time of a single operation is O(1). A sequence of c operations takes time c · O(1) = O(1).
A sequence S1; S2 has running time

   T(n) = T1(n) + T2(n) = O(f(n)) + O(g(n)) = O(f(n) + g(n)).

Usually, one of the functions f(n) or g(n) is dominant, that is f(n) = O(g(n)) or g(n) =
O(f(n)). Then we have

   T(n) = { O(f(n))  if g(n) = O(f(n)),
            O(g(n))  if f(n) = O(g(n)).   (2.14)

The running time of a sequence of instructions can thus be estimated by the running time of the
worst instruction.
² Cf. [5, p. 902]; for the number of iterations we have T_Euclid(m, n) = Θ(log max(m, n)), see footnote 4 on p. 15.
A selection

   if (C)
      S1
   else
      S2

consists of the condition C (an operation) and the instructions S1 and S2. It thus has running
time T(n) = O(1) + O(f(n)) + O(g(n)), i.e.

   T(n) = { O(f(n))  if g(n) = O(f(n)),
            O(g(n))  if f(n) = O(g(n)).   (2.15)
In a repetition each pass of the loop can have a different running time. All these running times have to
be summed up. Let f(n) be the number of loop passes, and g(n) be the running time of
one pass. (Note that f(n) = O(1) if the number of passes does not depend on n.) Then the total
running time T(n) of the repetition is given by T(n) = O(f(n)) · O(g(n)), or

   T(n) = O(f(n) g(n)).   (2.16)
The same properties hold true for Ω and Θ, respectively.
Example 2.11. Let us examine the time complexities of the operations search, insert, and delete
in some data structures of n nodes. To find a particular node in a linked list, for instance, we
have to start at the head of the list and, in the worst case that the last one is the searched node,
run through the whole list. That is, the worst case implies n comparisons. Let a comparison on a
computer take time c; this is a constant, independent of the size n of the list, but depending
on the machine (e.g., on the speed of the processor, on the quality of the compiler). So the worst-case total
running time is T(n) = c · n. In O-notation this means

   T(n) = O(n).

After a node is found, deleting it requires a constant amount of running time, namely setting two
pointers (for a doubly linked list: four pointers). Therefore, we have for the deletion method

   T(n) = O(1).

Similarly, T(n) = O(1) for the insertion of a node. To sum up, the running times for linked lists and
other data structures are given in Table 2.1.
   Data structure   search    insert   delete
   linked list      O(n)      O(1)     O(1)
   array            O(n)      O(n)     O(n)
   sorted array     O(ln n)   O(n)     O(n)
Table 2.1: Running times T (n) for operations on data structures of n nodes in the worst cases.
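The linked-list entries of Table 2.1 can be made concrete with a minimal sketch; the Node class and the step counter are illustrative additions, not part of the book's listings:

```python
class Node:
    """A node of a singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None

def search(head, key):
    """Run through the list from the head; worst case: n comparisons, T(n) = O(n).

    Returns the found node (or None) together with the number of comparisons.
    """
    steps = 0
    node = head
    while node is not None:
        steps += 1
        if node.value == key:
            return node, steps
        node = node.next
    return None, steps
```

Searching the last of n nodes indeed costs n comparisons, while deleting a node already found only redirects pointers, independent of n.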
2.3 Summary
Algorithmic analysis proves the correctness and studies the complexity of an algorithm by
mathematical means. The complexity is measured by counting the number of instructions that
have to be done during the algorithm on a RAM, an idealized mathematical model of a computer.
Asymptotic notation erases the fine structure of a function and retains only its asymptotic
behavior for large numbers. The O-, Ω-, and Θ-notations provide asymptotic bounds on a
function. We use them to simplify complexity analysis. If the running time of an algorithm
with input size n is T(n) = 5n + 2, we may simply say that it is O(n). The following essential
aspects have to be kept in mind:
The O-notation eliminates constants: O(n) = O(n/2) = O(17n) = O(6n + 5). For all these
expressions we write O(n). The same holds true for the Ω-notation and the Θ-notation.

The O-notation yields upper bounds: O(1) ⊂ O(n) ⊂ O(n²) ⊂ O(2ⁿ). (Note that you cannot change the sequence of relations!) So it is not wrong to say 3n² = O(n⁵).

The Ω-notation yields lower bounds: Ω(2ⁿ) ⊂ Ω(n²) ⊂ Ω(n) ⊂ Ω(1). So, 3n⁵ = Ω(n³).

The Θ-notation yields tight bounds: Θ(1) ⊄ Θ(n) ⊄ Θ(n²) ⊄ Θ(2ⁿ). So 3n² = Θ(n²), but
3n² ≠ Θ(n⁵).

Suggestively, the notations correspond to the signs ≤, ≥, and = as follows:

   T(n) = O(g(n))  ↔  T(n) ≤ g(n)
   T(n) = Ω(g(n))  ↔  T(n) ≥ g(n)
   T(n) = Θ(g(n))  ↔  T(n) = g(n)
The O-notation simplifies the worst-case analysis of algorithms; the Θ-notation is used if exact
complexity classes can be determined. For many algorithms, a tight complexity bound is not
possible! For instance, the termination of the Euclidean algorithm does not only depend on the
size of m and n: even for giant numbers such as m = 10^(10^99) and n = 10^100 it may terminate
after a single step: gcd(m, n) = n.
There are three essential classes of complexity, the class of logarithmic functions O(log n), of
polynomial functions O(nk ), and of exponential functions O(kn ), for any k R+ .
An algorithm with polynomial time complexity is called efficient.
Chapter 3
Recursions
3.1
Introduction
Building stacks is closely related to the phenomenon of recursion. Building stacks, in turn, is related
to the construction of relative clauses in human languages. In its most extreme form, recursion in
human language probably occurs in German: the notorious property of the German language of putting
the verb at the end of a relative clause has a classical persiflage due to Christian Morgenstern at the
beginning of his Galgenlieder:
Es darf daher getrost,
was auch von allen,
deren Sinne,
weil sie unter Sternen,
die,
wie der Dichter sagt,
zu dorren, statt zu leuchten, geschaffen sind
geboren sind,
vertrocknet sind,
behauptet wird,
enthauptet werden . . .
A case of recursion is shown in figure 3.1. Such a phenomenon is referred to as feedback in
engineering. Everyone knows the effect of a microphone held near a loudspeaker amplifying the
input of this microphone ... the high whistling noise is unforgettable.
3.2 Recursive algorithms
For any n ∈ ℕ0 we obtain n! = fac(n). Why? Well, let us prove it by induction:

Induction start. For n = 0 we have fac(0) = 1. Hence fac(0) = 0!.

Induction step n → n + 1. We assume that

   fac(n) = n!   (3.1)

holds. Then, by definition, fac(n + 1) = (n + 1) · fac(n) = (n + 1) · n! = (n + 1)!.
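The algorithm fac itself is defined before this excerpt; a Python sketch consistent with the induction proof reads:

```python
def fac(n):
    """Recursive factorial: fac(0) = 1, fac(n) = n * fac(n - 1)."""
    if n == 0:
        return 1              # base case
    return n * fac(n - 1)     # recursion step
```

Calling fac(3) triggers the call sequence traced below and returns 6.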
O.k., you perhaps believe this proof, but maybe you do not see why this recursion works? Consider
for example the case n = 3:

   call fac(3), which yields fac(3) = 3 · fac(2)
      call fac(2), which yields fac(2) = 2 · fac(1)
         call fac(1), which yields fac(1) = 1 · fac(0)
            call fac(0), which returns 1
         this yields fac(1) = 1 · 1 = 1, hence returns 1
      this yields fac(2) = 2 · 1 = 2, hence returns 2
   this yields fac(3) = 3 · 2 = 6, hence returns 6
Voilà, fac(3) = 6 is the result! The call and return sequence is shown in figure 3.2. We can make the
following general observations. A recursive algorithm (here fac(n)) is called to solve a problem. The
algorithm actually knows how to solve only the simplest case, the so-called base case (or base cases,
here n = 0). If the algorithm is called with this base case, it returns a result. If the algorithm is called
with a more complex problem, it divides the problem into two pieces: one piece that the algorithm
knows how to do (the base case), and one piece that it does not know. The latter piece must resemble
the original problem, but be a slightly simpler or smaller version of it. Because
this new problem looks like the original one, the algorithm calls itself to go to work on the smaller
problem; this is referred to as a recursive call or the recursion step.
The recursion step executes while the original call of the algorithm is still open (i.e., it has not
finished executing). The recursion step can result in many more recursive calls, as the algorithm
divides each new subproblem into two conceptual pieces. For the recursion to eventually terminate,
each time the algorithm calls itself with a smaller version of the problem, the sequence of smaller and
smaller problems must converge to the base case in finite time. At that point the algorithm recognizes
the base case, returns a result to the previous algorithm and a sequence of returns ensues up the line
until the first algorithm returns the final result.
Recursion closely resembles the concept of mathematical induction, which we encountered above. In
fact, the problem P(n) is proven by reducing it to be true if the smaller problem P(n − 1) is true.
This in turn is true if the smaller problem P(n − 2) is true, and so on. Finally, the base case, called
the induction start, is reached, which is proven to be true.
3.3 Searching the maximum in an array
Let us now look at another example, finding the maximum element in an array a[] of integers. The
strategy is to split the array into two halves and take the half whose maximum is greater than the
maximum of the other one, until we reach the base case where only one node remains.
Let l, r be two integers. Then the algorithm searchmax(a[], l, r) is defined by:
algorithm searchmax (a[], l, r)   // find the maximum of a[l], a[l + 1], ..., a[r]
   if (l = r)                     // the base case
      return a[l];
   else
      m ← searchmax(a[], l + 1, r);   // remains open until base case is reached!
      if (a[l] > m)
         return a[l];
      else
         return m;
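The pseudocode above translates directly into Python; this is a straightforward sketch, not the book's own listing:

```python
def searchmax(a, l, r):
    """Find the maximum of a[l], a[l+1], ..., a[r] recursively."""
    if l == r:                      # the base case
        return a[l]
    m = searchmax(a, l + 1, r)      # remains open until the base case is reached
    return a[l] if a[l] > m else m
```

For the array of Exercise 3.1 below, searchmax([3, 9, 2, 8, 6], 0, 4) returns 9.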
The solution can be described by the illustration in figure 3.3. Another way to visualize the working
of searchmax (and a general recursive algorithm as well) is figure 3.4. It shows the sequence of
successive calls and respective returns.
The searchmax algorithm for an array of length n takes exactly 2n operations, namely n calls and
n returns. Thus the running time of this algorithm is

   T_searchmax(n) = O(n).   (3.2)
Exercise 3.1. Try this algorithm out with the input array a[] = [3, 9, 2, 8, 6], l = 0, r = 4.

3.4 Recursion versus iteration
In this section we compare the two approaches of recursion and iteration and discuss why one might
choose one approach over the other in a particular situation.
Both iteration and recursion are based on a control structure: iteration uses a repetition structure (while); recursion uses a selection structure (if). Both iteration and recursion involve repetition:
iteration explicitly uses the repetition structure; recursion achieves repetition through repeated subroutine (method or function) calls. Iteration and recursion each involve a termination test: iteration
terminates when the loop-continuation condition fails; recursion terminates when a base case is recognized. Both iteration and recursion can occur infinitely: an infinite loop occurs with iteration if
the loop-continuation test never becomes false; infinite recursion occurs if the recursion step does not
reduce the problem each time in a manner that converges to the base case.
Practically, recursion has many negatives. It repeatedly invokes the mechanism, and consequently
the overhead, of method calls. This can be expensive in both processor time and memory space.
Each recursive call causes another copy of the method (actually, only the method's variables!) to be
created. Iteration normally occurs within a method, so the overhead of repeated method calls and
extra memory assignment is avoided. So why recursion?
Rule 1. Any recursion consisting of a single recursion call in each step (a primitive recursion) can
be implemented as an iteration, and vice versa.
As an example for the fact that such a recursive algorithm can be replaced by an iterative one, let
us look at the following iterative definition of fac(n) determining the value of n!:
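The iterative listing itself precedes the next subsection; a Python sketch of such an iterative definition (the loop bounds are an implementation choice) is:

```python
def fac_iter(n):
    """Iterative factorial: a loop replaces the primitive recursion."""
    result = 1
    for i in range(2, n + 1):  # multiply 2 * 3 * ... * n
        result *= i
    return result
```

It computes the same values as the recursive fac, but without the overhead of repeated method calls.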
3.4.1 Recursive extended Euclidean algorithm
There is a short recursive version of the extended Euclidean algorithm which computes additional
useful information. Specifically, the algorithm invoked with the integers m and n computes the integer
coefficients x0 , x1 , x2 such that
x0 = gcd(m, n) = x1 m + x2 n.
(3.3)
Note that x1 and x2 may be zero or negative. These coefficients are very useful for the solution
of linear Diophantine equations, particularly for the computation of modular multiplicative inverses
in cryptology. The following algorithm extendedEuclid takes as input an arbitrary pair (m, n) of
positive integers and returns a triple of the form (x0 , x1 , x2 ) that satisfies Equation (3.3).
algorithm extendedEuclid( long m, long n ) {
   long[] x = {m, 1, 0};
   long tmp;
   if ( n == 0 ) {
      return x;
   } else {
      x = extendedEuclid( n, m % n );
      tmp = x[1];
      x[1] = x[2];
      x[2] = tmp - (m/n) * x[2];
      return x;
   }
}
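The same algorithm can be sketched in Python; the tuple return replaces the Java array, otherwise the coefficient update is identical:

```python
def extended_euclid(m, n):
    """Return (x0, x1, x2) with x0 = gcd(m, n) = x1*m + x2*n."""
    if n == 0:
        return (m, 1, 0)                    # gcd(m, 0) = m = 1*m + 0*n
    x0, x1, x2 = extended_euclid(n, m % n)  # x0 = x1*n + x2*(m % n)
    # substitute m % n = m - (m // n) * n into the equation above:
    return (x0, x2, x1 - (m // n) * x2)
```

For example, extended_euclid(99, 78) yields (3, -11, 14), and indeed -11·99 + 14·78 = 3 = gcd(99, 78).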
3.5 Complexity of recursive algorithms
In general, analyzing the complexity of a recursive algorithm is not a trivial problem. We will first
compute the running time of the recursive factorial algorithm to outline the principles.
Look at the left part of figure 3.5. Here we see the sequence of recursive calls of fac (n), starting
at the top with the call of fac (n), followed by the call of fac (n 1) and so on, until we reach the base
case n = 0. This yields a so-called recursion tree. (It is a rather simple tree, with only one branch at
each generation.)
At the base case, we obtain the solution fac(0) = 1, which is returned to the previous generation,
etc. We count that the recursion tree has n + 1 levels (or generations).
If we want to analyze the complexity, we have to compute the running time on each level. Let
n = 0. Then the running time T(0) is a constant c0,

   T(0) = c0.

c0 is given by the operations
Figure 3.5: Recursion tree of calls of the factorial algorithm and the respective running times T(n).
determining whether n = 0 (comparison),
deciding to execute the base case (if-statement),
assigning (or returning) the value 1.

So we could for instance estimate c0 = 3. (Remember that this is a rough estimate! It depends on the
concrete computer machine how many elementary operations are indeed executed for an arithmetic
operation or an assignment.)
What then about T(1)? We first see that there are the following operations to be done:

determining whether n = 0 (comparison),
deciding to execute the else case (if-statement),
calling fac(0).
This results in a running time

   T(1) = T(0) + c,

where c is a constant (which is approximately 3: c ≈ 3). But we already know T(0), and so we have
T(1) = c + c0. Now, analogously to the induction step, we can conclude: the running time T(n) for
any n ≥ 1 is given by

   T(n) = T(n − 1) + c.
To summarize, we therefore obtain the following equation for the running time T(n) for an arbitrary
n ≥ 0:

   T(n) = { c0            if n = 0,
            T(n − 1) + c  if n ≥ 1.   (3.4)

This is a recurrence equation, or for short: a recurrence. It is an equation which does not directly yield
a closed formula for T(n) but which yields a construction plan to calculate T(n). In fact, for this
special recurrence equation we obtain the simple solution T(n) = n · c + c0, i.e.,

   T(n) = Θ(n).   (3.5)
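The closed form can be checked by evaluating the recurrence directly; the constants c0 = c = 3 below are the rough estimates from the discussion above:

```python
def T(n, c0=3, c=3):
    """Evaluate the recurrence T(0) = c0, T(n) = T(n-1) + c literally."""
    return c0 if n == 0 else T(n - 1, c0, c) + c

# Closed form of the recurrence: T(n) = n*c + c0, hence T(n) = Theta(n).
```

For all small n, T(n) agrees with the closed formula n·c + c0.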
For a wide class of recursive algorithms, the time complexity can be estimated by the so-called master
theorem [5, §4.3]. A simple version of it is the following theorem.
Theorem 3.2 (Master theorem, special case). Let a ≥ 1, b > 1 be constants, and let T : ℕ → ℝ⁺
be a function defined by the recurrence

   T(n) = { T0                            if n = n0,
            a T(⌊n/b⌋) + Θ(n^(log_b a))   otherwise,   (3.6)

with some initial value T0. Then T(n) can be estimated asymptotically as

   T(n) = Θ(n^(log_b a) log n).   (3.7)
A related bound holds for recurrences whose recursion step halves the problem size and adds a cost f(n) per call. Let a ≥ 1 be a constant and f : ℕ → ℝ⁺ a function satisfying a f(⌈n/2⌉) ≤ f(n), and let

   T(n) = { f(1)                if n = 1,
            a T(⌈n/2⌉) + f(n)   if n > 1.   (3.9)

Unfolding the recursion yields a recursion tree with at most 1 + ⌈log_2 n⌉ levels, and by the assumption on f the k-th level contributes at most a^k f(⌈n/2^k⌉) ≤ f(n) to the total cost:

   T(n) ≤ f(n) + a f(⌈n/2⌉) + a² f(⌈n/4⌉) + ... ≤ f(n) (2 + log_2 n).   (3.10)

Therefore, T(n) ≤ f(n) (2 + log_2 n) = O(f(n) log n). If especially a f(⌈n/2⌉) = f(n), we even have
T(n) = f(n) (1 + ⌈log_2 n⌉) = Θ(f(n) log n).
Finally we state a result which demonstrates the power as well as the danger of recursion. It is
quite easy to generate exponential growth.

Theorem 3.4. Let a ≥ 2 be an integer and f : ℝ⁺ → ℝ⁺ a positive function of at most polynomial
growth, i.e., there exists a constant k ∈ ℕ0 such that f(n) = O(n^k). Then a function T : ℕ → ℝ⁺
obeying the recursion equation

   T(n) = { f(0)                if n = 0,
            a T(n − 1) + f(n)   if n > 0,   (3.12)

can be asymptotically estimated as

   T(n) = Θ(aⁿ).   (3.13)

Proof. Analogously to Fig. 3.6, we see that according to Eq. (3.12) there are n generation levels in the
call tree of T(n), and therefore aⁿ base cases. Since f grows at most polynomially, this means
that T(n) = Θ(aⁿ).
Examples 3.5. (i) The recursion equation T(n) = 2 T(n/2) + n is in the class of Eq. (3.6) with a = b = 2,
hence T(n) = Θ(n log n).
(ii) A function T(n) satisfying T(n) = T(⌈n/2⌉) + 1 is of the class (3.9) with f(n) = 1, i.e., T(n) = O(log n).
(iii) The algorithm drawing the Koch snowflake curve, a special recursive curve, to the level n
has a time complexity T(n) given by T(n) = 4 T(n − 1) + c1 with a constant c1. Since it therefore
obeys (3.12) with a = 4 and f(n) = c1, we have T(n) = Θ(4ⁿ).
3.6 The towers of Hanoi
Legend has it that in a temple in the Far East, priests are attempting to move a stack of (big gold
or stone) disks from one peg to another. The initial stack had 64 disks threaded onto one peg and
arranged from bottom to top by decreasing size. The priests are attempting to move the stack from
this peg to a second peg under the constraints that

exactly one disk is moved at a time, and
at no time may a larger disk be placed above a smaller disk.

A third peg is available for temporarily holding disks. So schematically the situation looks as in
figure 3.7. According to the legend, the world will end when the priests complete their task. So we
will attack the problem, but had better not tell them the solution ...
Let us assume that the priests are attempting to move the disks from peg 1 to peg 2. We wish to
develop an algorithm that will output the precise sequence of peg-to-peg disk transfers. For instance,
the output

   1 → 3
Figure 3.7: The towers of Hanoi for the case of nine disks.
means: move the topmost disk of peg 1 to the top of peg 3. For the case of only two disks, e.g.,
the output sequence reads

   1 → 3,   1 → 2,   3 → 2.   (3.14)
Try to solve the problem for n = 4 disks. There should be 15 moves.
If we were to approach the general problem with conventional methods, we would rapidly find
ourselves hopelessly knotted up in managing disks. Instead, if we attack the problem with recursion
in mind, it immediately becomes tractable: moving n disks can be viewed in terms of moving only
n − 1 disks. First move the topmost n − 1 disks from peg 1 to the temporary peg 3, then move the
largest disk from peg 1 to peg 2 (moving a single disk is the base case), and finally move the n − 1
disks from peg 3 onto it on peg 2.
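The recursive scheme just described can be sketched in Python; returning the move list instead of printing it is a choice of this sketch:

```python
def hanoi(n, frm=1, to=2, via=3):
    """Return the list of moves (i, j) solving the n-disk problem."""
    if n == 1:
        return [(frm, to)]              # base case: move a single disk
    return (hanoi(n - 1, frm, via, to)  # clear the n-1 smaller disks
            + [(frm, to)]               # move the largest disk
            + hanoi(n - 1, via, to, frm))
```

For n = 2 this reproduces the output sequence (3.14), and for n = 4 it produces the 15 moves mentioned above.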
What about the running time of this algorithm? In fact, from the recursive algorithm we can
directly derive the recursion equation

   T(n) = { c0               if n = 1,
            2 T(n − 1) + c1  otherwise,   (3.15)

with c0 being a constant representing the output effort in the base case, and c1 the constant effort in
the recursion step. Since this equation is of the class of Theorem 3.4 with a = 2 and f(n) = c1 for
n > 1, f(1) = c0, we have

   T(n) = Θ(2ⁿ).   (3.16)
If we try to count exactly the moves to be carried out, we obtain the number f(n) of moves for the
problem with n disks as follows. Regarding the algorithm, we see that f(n) = 2 f(n − 1) + 1 for n ≥ 1,
with f(0) = 0. (Why?) It can then be proved easily by induction that

   f(n) = 2ⁿ − 1   for n ≥ 0.   (3.17)
Summary

A recursion is a subroutine calling itself during its execution. It consists of a base case (or
base cases) which do not contain a recursive call but return certain values, and of one or several
recursion steps which invoke the subroutine with slightly changed parameters. A recursion
terminates if for any allowed arguments the base case is reached after finitely many steps.

A wide and important class of recursions, the primitive recursions consisting of a single
recursive call in the recursion step, can be equivalently implemented iteratively, i.e., with loops.

The time complexity of a recursive algorithm is determined by a recursion equation which can be
directly derived from the algorithm. Usual classes of recursion equations are

   T(n) = T(n − 1) + c,   T(n) = b T(n − 1) + c,

with constants b > 1, c > 0 and some appropriate base cases. These have solutions with
the respective asymptotic behaviors

   T(n) = Θ(n),   T(n) = Θ(bⁿ).
Chapter 4
Sorting
So far we have come to know the data structures of array, stack, queue, and heap. They all allow an
organization of data such that elements can be added to, or deleted from, them. In the next few chapters we
survey the computer scientist's toolbox of frequently used algorithms and discuss their efficiency.
A big part of overall CPU time is used for sorting. The purpose of sorting is not only to get
the items into the right order but also to bring together what belongs together. To see, for instance, all
transactions belonging to a specific credit card account, it is convenient to sort the data records by
credit card number and then look only at the interval containing the respective transactions.
4.1
SelectionSort. The first and easiest sorting algorithm is selectionSort. We assume that the data
are stored in an array a[] of length n.
selectionSort (a[], n)   // sorts array a[0], a[1], ..., a[n − 1] ascendingly
   for (i = 0; i ≤ n − 2; i++) {
      // find minimum of a[i], ..., a[n − 1]
      min ← i;
      for (j = i + 1; j ≤ n − 1; j++) {
         if (a[j] < a[min])
            min ← j;
      }
      a[i] ↔ a[min];
   }
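A direct Python translation of the pseudocode (in-place swaps replace the exchange a[i] ↔ a[min]):

```python
def selection_sort(a):
    """Repeatedly select the minimum of the unsorted tail and swap it to the front."""
    n = len(a)
    for i in range(n - 1):
        mn = i
        for j in range(i + 1, n):    # find minimum of a[i], ..., a[n-1]
            if a[j] < a[mn]:
                mn = j
        a[i], a[mn] = a[mn], a[i]    # swap it into position i
    return a
```

Sorting [3, 9, 2, 8, 6] yields [2, 3, 6, 8, 9].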
Essentially, the first pass of the outer loop performs n − 1 operations (inner j-loop), the second one n − 2,
and so on. Hence its running time is given by

   T_sel(n) = (n − 1) + (n − 2) + ... + 1 = Σ_{k=1}^{n−1} k = n(n − 1)/2 = O(n²).
Insertion sort. Another simple method to sort an array a[] is to insert each element at its right position.
This is done by insertion sort. To simplify the method we define the element a[0] by initializing
a[0] = −∞. (In practice, −∞ means an appropriate constant.)
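A Python sketch of insertion sort; instead of the −∞ sentinel described above, this version uses an explicit index check, which is an implementation choice:

```python
def insertion_sort(a):
    """Insert each element into the already sorted prefix a[0..i-1]."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift greater elements one position up
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # insert at the right position
    return a
```

The inner while-loop plays the role of the search through the lower array positions analyzed below.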
In the worst case the inner loop runs through all lower array positions, so that

   T_ins(n) = 1 + 2 + ... + n = Σ_{k=1}^{n} k = (n + 1)n/2 = O(n²).
On average we can expect the inner loop to run through half of the lower array positions. The effort
then is

   T_s(n) = (1/2)(1 + 2 + ... + (n − 1)) = (1/2) Σ_{i=1}^{n−1} i = n(n − 1)/4 = O(n²).
BubbleSort. Bubble sort, also known as exchange sort, is a simple sorting algorithm which works
by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them
if they are in the wrong order. The algorithm gets its name from the way smaller elements "bubble"
to the top (i.e., the beginning) of the list by the swaps.

bubbleSort (a[])   // sorts array a[0], a[1], ..., a[n − 1] ascendingly
   for (i = 1; i < n; i++)
      for (j = 0; j < n − i; j++)
         if (a[j] > a[j + 1])  a[j] ↔ a[j + 1];
Its running time again is T_bub(n) = O(n²). Following is a slightly improved version which does not
waste running time if the array is already sorted. Here the pass through the array is repeated until no
swaps are needed:

bubbleSort (a[])   // sorts array a[0], a[1], ..., a[n − 1] ascendingly
   do {
      swapped ← false;
      for (j = 0; j < n − 1; j++)
         if (a[j] > a[j + 1]) {
            a[j] ↔ a[j + 1];  swapped ← true;
         }
   } while (swapped);
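The improved variant with the swapped flag translates to Python as follows (a while-loop stands in for the do-while):

```python
def bubble_sort(a):
    """Repeat passes through the array until no swap is needed."""
    swapped = True
    while swapped:
        swapped = False
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap neighbors
                swapped = True
    return a
```

On an already sorted array a single pass suffices, since no swap occurs.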
4.2
Are there better sorting algorithms? The sorting algorithms considered so far are based on comparisons of the keys of elements. They are therefore called (key) comparison sort algorithms. It is clear that
they cannot be faster than Ω(n), because each element key has to be considered; Ω(n) is an absolute
lower bound for key comparison algorithms. It can be proved that any key comparison sort algorithm
needs at least

   T_sort(n) = Ω(n log n)   (4.1)

comparisons in the worst case for a data structure of n elements [19, §6.4]. However, in special
situations there exist sorting algorithms which have a better running time, notably pigeonhole
sort, which sorts an array of positive integers. It is a special version of bucket sort [22, §2.7].
pigeonholeSort (int[] a)
   // determine maximum entry of array a:
   max ← 0;
   for (i ← 0; i < a.length; i++)  if (max < a[i])  max ← a[i];
   b = new int[max + 1];   // b has max + 1 pigeonholes
   for (i ← 0; i < a.length; i++)  b[a[i]]++;   // counts the entries with value a[i]
   // copy the pigeonhole entries back to a:
   j ← 0;
   for (i = 0; i < b.length; i++)
      for (k = 0; k < b[i]; k++) {
         a[j] = i;  j++;
      }
It has time complexity O(n + max a) and space complexity O(max a), where max a denotes the maximum entry of the array. Hence, if 0 ≤ a[i] ≤ O(n), then both time and space complexity are O(n).
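In Python the pigeonhole counting can be sketched as follows (returning a new list instead of copying back in place is a choice of this sketch):

```python
def pigeonhole_sort(a):
    """Counting-based sort for nonnegative integers: O(n + max(a)) time."""
    if not a:
        return a
    holes = [0] * (max(a) + 1)          # one pigeonhole per possible value
    for x in a:
        holes[x] += 1                   # count the entries with value x
    out = []
    for value, count in enumerate(holes):
        out.extend([value] * count)     # emit each value as often as counted
    return out
```

Note that no key comparisons between elements occur, which is why the Ω(n log n) bound (4.1) does not apply.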
4.3 Divide and conquer
The strategy of splitting a problem into several parts and solving each part recursively is called divide
and conquer. We can formulate it as follows:

   if the object set is small enough
      solve the problem directly
   else
      divide: split the set into several subsets (if possible, of equal size)
      conquer: solve the problem for each subset recursively
      merge: combine the solutions of the subsets to a solution of the total problem

If the several parts have approximately equal size, the algorithm is called a balanced divide and
conquer algorithm.
Theorem 4.1. A balanced divide and conquer algorithm has running time
(i) O(n) if the divide and merge steps each only need O(1) running time;
(ii) O(n log n) if the divide and merge steps each have linear running time O(n).
This theorem is a powerful result: it solves the complexity analysis of a wide class of algorithms with a single hit. Its proof is based on Theorem 3.2, with a = b = 2.
Remark 4.2. For a balanced divide-and-conquer algorithm, the running time T(n) is given by a recursion equation:

  T(n) = O(1)                              if n = 1,
  T(n) = [O(n)] + [2 T(n/2)] + [O(1)]      if n > 1,    (4.2)
          divide   conquer      merge

Hence f(n) = O(1) + O(n) = O(n), i.e., f has linear growth. Therefore, T(n) = O(n log n).
Examples of divide and conquer algorithms are mergeSort and quickSort.
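As an illustration of a balanced divide and conquer algorithm with a linear-time merge step, here is a sketch of mergeSort (Python for illustration); by Theorem 4.1 (ii) its running time is O(n log n):

```python
def merge_sort(a):
    """Sort a list by balanced divide and conquer; the merge step does the linear work."""
    if len(a) <= 1:                  # base case: small object set, solve directly
        return a
    mid = len(a) // 2                # divide: split into two halves of equal size
    left = merge_sort(a[:mid])       # conquer left half
    right = merge_sort(a[mid:])      # conquer right half
    merged, i, j = [], 0, 0          # merge the two sorted halves in linear time
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```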
4.4
4.4.1 MergeSort

4.4.2 QuickSort
This algorithm has much in common with mergeSort. It is a recursive divide and conquer algorithm
as well. But whereas mergeSort uses a trivial divide step leaving the greatest part of the work to
the merge step, quickSort works in the divide step and has a trivial merge step instead. Although it
has a bad worst case behavior, it is probably the most used sorting algorithm. It is comparably old,
developed by C.A.R. Hoare in 1962.
Let a = a_0 ... a_{n-1} be the sequence to be operated upon. The algorithm quickSort(l, r) works as follows:

divide the sequence a_l, ..., a_r into two sequences a_l, ..., a_{p-1} and a_{p+1}, ..., a_r such that each element of the first sequence is smaller than any element of the second sequence: a_i ≤ a_p for l ≤ i < p and a_j ≥ a_p for p < j ≤ r. This step we call partition, and the element a_p is called the pivot element. Usually, the element a_r is chosen to be the pivot element, but in principle the pivot element can be chosen arbitrarily.
Figure 4.1: Left figure: mergeSort for an array of 10 elements. Right figure: quickSort for an array of 10 elements
conquer by sorting the two sequences recursively by calling quickSort for both sequences;
there is nothing to merge because both sequences are sorted separately.
algorithm quickSort (l, r)
  // for the case l ≥ r there remains nothing to do (base case)!
  if (l < r) {
    p ← partition (l, r);
    quickSort (l, p - 1);
    quickSort (p + 1, r);
  }
Here the subalgorithm partition is given by
algorithm partition (l, r)
  // finds the right position for the pivot element a_r
  i ← l - 1;  j ← r;
  while (i < j) {
    i++;
    while (i < j && a_i < a_r)   // i-loop
      i++;
    j--;
    while (i < j && a_j > a_r)   // j-loop
      j--;
    if (i ≥ j)
      a_i ↔ a_r;
    else
      a_i ↔ a_j;
  }
  return i;
We see that after the inner i-loop the index i points to the first element a_i from the left which is greater than or equal to a_r, i.e., a_i ≥ a_r (if i < j). After the j-loop, j points to the first element a_j from the right which is smaller than a_r (if j > i). Therefore, after the subalgorithm partition the pivot element a_p is placed at its correct position (which will not be changed in the sequel). See Fig. 4.1.
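The scheme can be sketched as follows (Python for illustration; for brevity this sketch uses the simpler Lomuto-style partition with pivot a_r rather than the exact index-crossing partition above, but it fulfils the same contract: after partition the pivot sits at its final position p):

```python
def partition(a, l, r):
    """Place the pivot a[r] at its final position p and return p."""
    pivot = a[r]
    i = l - 1                         # right boundary of the "smaller than pivot" area
    for j in range(l, r):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]   # move the pivot between the two areas
    return i + 1

def quick_sort(a, l=0, r=None):
    """Recursive quickSort; for l >= r there remains nothing to do."""
    if r is None:
        r = len(a) - 1
    if l < r:
        p = partition(a, l, r)
        quick_sort(a, l, p - 1)
        quick_sort(a, p + 1, r)
    return a
```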
Complexity analysis of quickSort
The complexity analysis of quickSort is not trivial. The difficulty lies in the fact that finding the pivot
element a p depends on the array. In general, this element is not in the middle of the array, and thus
we do not necessarily have a balanced divide-and-conquer algorithm. :-(
The relevant step is the divide step consisting of the partition algorithm. The outer loop is executed exactly once, whereas the two inner loops add up to n - 1 steps. The running time for an array of length n = 1 is a constant c_0, and for each subsequent step we need time c in addition to the recursive calls. Hence we obtain the recurrence equation

  T(n) = c_0                                              if n = 1,
  T(n) = [(n - 1) + c] + [T(p - 1) + T(n - p)] + [0]      if n > 1 (1 ≤ p ≤ n),    (4.3)
          divide           conquer                merge
Worst case
In the worst case, p = 1 or p = n. This results in the worst case recurrence equation

  Tworst(n) = c_0                              if n = 1,
  Tworst(n) = (n - 1) + Tworst(n - 1) + c      if n > 1.    (4.4)

Building up,

  Tworst(1) = c_0
  Tworst(2) = 1 + Tworst(1) + c = 1 + c + c_0
  Tworst(3) = 2 + Tworst(2) + c = 2 + 1 + 2c + c_0
  ...
  Tworst(n) = Σ_{k=1}^{n-1} k + (n - 1)c + c_0 = n(n - 1)/2 + (n - 1)c + c_0 = O(n²).
Therefore, quickSort is no better than insertionSort in the worst case. (Unfortunately, the worst case occurs exactly when the array is already sorted...)
Best and average case
The best case is analyzed easily. It means that in each recursion step the pivot index p is chosen in the middle of the array area. This means that quickSort then is a balanced divide-and-conquer algorithm with a linear running time divide step:

  Tbest(n) = O(n log n).    (4.5)

It can be shown that the average case is only slightly slower [22, §2.4.3]:

  Taverage(n) = O(n log n).    (4.6)
4.4.3 HeapSort

Figure 4.2: (a) The subroutine insert. (b) The subroutine deleteMax.
algorithm deleteMax ()
  h_0 ← h_n;
  n ← n - 1;
  i ← 0;
  while (2i + 1 ≤ n) {
    l ← 2i + 1;  r ← 2(i + 1);
    if (r ≤ n)
      if (h_l > h_r)
        max ← l;
      else
        max ← r;
    else
      max ← l;
    if (h_i < h_max) {
      h_i ↔ h_max;  i ← max;
    } else
      i ← n + 1;    // exit loop
  }
reheap
Algorithm reheap lets the element a_l sink down into the heap such that the subheap a_{l+1}, a_{l+2}, ..., a_r is made into a subheap a_l, a_{l+1}, ..., a_r.
algorithm reheap (l, r)
  i ← l;
  while (2i + 1 ≤ r) {
    if (2i + 1 < r)
      if (a_{2i+1} > a_{2(i+1)})   // choose index c of greatest child - 1st comparison
        c ← 2i + 1;
      else
        c ← 2(i + 1);
    else                           // 2i + 1 = r, only one child!
      c ← 2i + 1;
    if (a_i < a_c) {               // necessary to exchange with child? - 2nd comparison
      a_i ↔ a_c;  i ← c;
    } else
      i ← r;                       // exit loop!
  }
Note by Equation (4.7) in [8] that the left child of node a_i in a heap, if it exists, is a_{2i+1}, and the right child is a_{2(i+1)}. Figure 4.3 shows how reheap works. Algorithm reheap needs two key comparisons on each tree level, so at most 2 log n comparisons for the whole tree. Therefore, the complexity Treheap of reheap is

  Treheap(n) = O(log n).    (4.8)
Now we are ready to define algorithm heapSort for an array a with n elements, a = a_0, a_1, ..., a_{n-1}. Initially, a need not be a heap.
algorithm heapSort ()
  // phase 1: build the heap
  for (i = ⌊(n - 1)/2⌋; i ≥ 0; i--)
    reheap (i, n - 1);
  // phase 2: pick the maxima
  for (i = n - 1; i ≥ 1; i--) {
    a_0 ↔ a_i;  reheap (0, i - 1);
  }
How does it work? In phase 1 (building the heap) the subheap a_{⌊(n-1)/2⌋+1}, ..., a_{n-1} is extended to the subheap a_{⌊(n-1)/2⌋}, ..., a_{n-1}. The loop is run through about n/2 times, each time with effort O(log n). In phase 2 the sorted sequence is built from the tail part of the array. For this purpose the maximum a_0 is exchanged with a_i, and thus the heap area is reduced by one node to a_0, ..., a_{i-1}. Because a_1, ..., a_{i-1} still is a subheap, reheaping of a_0 makes a_0, ..., a_{i-1} a heap again:
  a_0, ..., a_i                      a_{i+1}, ..., a_{n-1}
  [ heap area ]                      [ 9 14 23 31 54 64 72 ]
                                     increasingly ordered sequence of
                                     the n - i - 1 greatest elements
In phase 2 the loop is run through (n - 1) times. Therefore, in total heapSort has complexity O(n log n) in the worst case.
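The two phases of heapSort, together with reheap, can be sketched as follows (Python for illustration):

```python
def reheap(a, l, r):
    """Let a[l] sink into the subheap a[l..r] (children of i are 2i+1 and 2i+2)."""
    i = l
    while 2 * i + 1 <= r:
        c = 2 * i + 1                   # left child
        if c < r and a[c] < a[c + 1]:   # choose the greater child
            c += 1
        if a[i] < a[c]:
            a[i], a[c] = a[c], a[i]
            i = c
        else:
            break                       # heap condition restored: exit loop

def heap_sort(a):
    """Sort a in place: phase 1 builds the heap, phase 2 picks the maxima."""
    n = len(a)
    for i in range((n - 1) // 2, -1, -1):   # phase 1: build the heap
        reheap(a, i, n - 1)
    for i in range(n - 1, 0, -1):           # phase 2: move each maximum to the tail
        a[0], a[i] = a[i], a[0]
        reheap(a, 0, i - 1)
    return a
```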
4.5 Summary
To summarize, we have the complexities of the various sorting algorithms listed in Table 4.1.
  Complexity     selection/insertion/bubble   quick sort   merge sort   heap sort   pigeonhole sort
  worst case     O(n²)                        O(n²)        O(n ln n)    O(n ln n)   O(n)
  average case   O(n²)                        O(n ln n)    O(n ln n)    O(n ln n)   O(n)
  space          O(1)                         O(ln n)      O(n)         O(1)        O(n)

Table 4.1: Complexity and required additional memory space of several sorting algorithms on data structures with n entries; pigeonhole sort is assumed to be applied to integer arrays with positive entries ≤ O(n).
Chapter 5
Searching with a hash table
Is it possible to optimize searching in unsorted data structures? In [8, Theorem 4.3] we learned the theoretical result that searching a key in an unsorted data structure is linear in the worst case, i.e., Θ(n). In Theorem A.5 (p. 107) it is shown that a naive brute force search, or exhaustion, costs running time of order Θ(n) also on average. So these are the mathematical lower bounds which restrict a search and cannot be decreased.
However, there is a subtle backdoor through which at least the average bound can be lowered considerably, namely to a constant O(1), albeit at the price of additional calculations. This backdoor is called hashing.
The basic idea of hashing is to calculate the key from the object to store and to minimize the possible range these keys can attain. The calculation is performed by a hash function. Sloppily said, the hashing principle consists in storing the object chaotically somewhere, but remembering the position by storing a reference in the hash table under the calculated key. To search for the original object, one then calculates the key value, looks it up in the hash table, and gets the reference to the object.
The hashing principle is used, for instance, by the Java Collection classes HashSet and HashMap. The underlying concept of the hash function, however, is used also in totally different areas of
computer science such as cryptology. Two examples of hash functions used in cryptology are MD5
and SHA-1.
5.1 Hash values

5.1.1 Words and alphabets
To write texts we need symbols from an alphabet. These symbols are letters, and they form words.
We are going to formally define these notions now.
Definition 5.1. An alphabet is a finite nonempty set Σ = {a_1, ..., a_s} with a linear ordering

  a_1 < a_2 < ... < a_s.

Its elements a_i are called letters (also symbols, or signs).
Because alphabets are finite sets, their letters can be identified with natural numbers. If an alphabet
has m letters, its letters can be identified (coded) with the numbers
  Z_m = {0, 1, ..., m - 1}.    (5.1)
For instance, for the 26-letter alphabet of Example 5.2 (i) we can choose the code ⟨·⟩ : Σ → Z_26 given by

  a_i    A B C D E F G H I J K  L  M  N  O  P  Q  R  S  T  U  V  W  X  Y  Z
  ⟨a_i⟩  0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25    (5.2)

That means ⟨N⟩ = 13. Another example is the 127-letter alphabet of the ASCII code, where e.g. ⟨A⟩ = 65, ⟨N⟩ = 78, or ⟨a⟩ = 97. A generalization is Unicode, which codifies 2^16 = 65 536 letters. For notational convenience, the 2^16 numbers are usually written in their hexadecimal representation with four digits (note: 2^16 = 16^4), i.e.,

  Z_(2^16) = {0000_16, 0001_16, 0002_16, ..., FFFF_16}.    (5.3)

The first 256 letters and their hexadecimal codes are given in Figure 5.1.
[Figure 5.1: The first 256 Unicode letters with their hexadecimal codes. The Unicode Standard 4.0, Copyright 1991-2003, Unicode, Inc. All rights reserved.]
5.1.2 Hash functions
where g_i = 2 + (-1)^i, i.e.,

  g_i = 1 if i is odd,   g_i = 3 if i is even.

For example, h(978389821656) = -138 mod 10 = 2, since

  digit:      9  7  8  3  8  9  8  2  1  6  5  6
  g_i:        1  3  1  3  1  3  1  3  1  3  1  3
  product:    9 21  8  9  8 27  8  6  1 18  5 18    (sum: 138)

Therefore 978-3-89821-656-2 is a valid ISBN number.
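The check computation can be reproduced directly; in this sketch (Python for illustration) the check digit is derived as (-sum) mod 10, consistent with the example above:

```python
def isbn13_weighted_sum(digits):
    """Weighted sum of the first 12 ISBN-13 digits with weights 1, 3, 1, 3, ..."""
    return sum(d * (1 if i % 2 == 1 else 3) for i, d in enumerate(digits, start=1))

def isbn13_check_digit(digits):
    """Check digit = (-sum) mod 10, so the full 13-digit weighted sum is 0 mod 10."""
    return (-isbn13_weighted_sum(digits)) % 10

digits = [9, 7, 8, 3, 8, 9, 8, 2, 1, 6, 5, 6]
s = isbn13_weighted_sum(digits)    # 138, as in the text
c = isbn13_check_digit(digits)     # 2, matching ISBN 978-3-89821-656-2
```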
A hash function cannot be invertible, since it maps a huge set of words onto a relatively small set
of hash values. Thus there must be several words with the same hash value, forming a collision.
Hash functions are used in quite different areas. They do not only play an important role in the theory of databases, but are also essential for digital signatures in cryptology. For instance, they provide an invaluable technique to support reliable communications. Consider a given message w which shall be transmitted over a channel; think, for instance, of IP data packets sent through the internet or a bit string transmitted over a data bus in your computer. Most channels are noisy and may modify or damage the original message. In the worst case the receiver does not notice that the data are corrupted and relies on wrong information.
A quick way to enable the receiver to check the incoming data is to send along with the message w its hash value h(w), i.e., to send

  (w, h(w)).

If sender and receiver agree upon the hash function, then the receiver can check the data consistency by simply taking the received message w′, computing its hash value h(w′), and comparing it to the received hash value h(w). If the message has been modified during the transmission, and the hash function is good enough, then the receiver notices a difference and may contact the sender to resend the message.
This is realized very often in communication channels. In the case of IP packets or the data bus of your computer, the hash function is a simple bitwise parity check; in cryptographic communications it is a much more complex function such as SHA-1. A short survey of important hash functions used in cryptology is given in Table 5.1. Notably, each of them is based on MD4, which was developed by Ronald Rivest in 1990.
  Hash Function   Block Length   Relative Running Time
  MD4             128 bit        1.00
  MD5             128 bit        0.68
  RIPEMD-128      128 bit        0.39
  SHA-1           160 bit        0.28
  RIPEMD-160      160 bit        0.24

Table 5.1: Important cryptographic hash functions, with block length and running time relative to MD4.
The original specification of SHA as published by the NSA did not contain the bit rotation. Its addition corrected a technical problem by which the standard was less secure than originally intended [36, p. 506]. To my knowledge, the NSA has never explained the nature of the problem in any detail.
3. Initialize the variables and constants: In SHA there are used 80 constants K_0, ..., K_79 (with only four different values), given by

  K_t = 0x5A827999 = ⌊√2 · 2^30⌋     if 0 ≤ t ≤ 19,
  K_t = 0x6ED9EBA1 = ⌊√3 · 2^30⌋     if 20 ≤ t ≤ 39,
  K_t = 0x8F1BBCDC = ⌊√5 · 2^30⌋     if 40 ≤ t ≤ 59,
  K_t = 0xCA62C1D6 = ⌊√10 · 2^30⌋    if 60 ≤ t ≤ 79,

five constants A, ..., E given by

  A = 0x67452301,  B = 0xEFCDAB89,  C = 0x98BADCFE,  D = 0x10325476,  E = 0xC3D2E1F0,

and five working variables initialized as

  a = A,  b = B,  c = C,  d = D,  e = E.

The round function f_t is given by

  f_t(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)            if 0 ≤ t ≤ 19,
  f_t(x, y, z) = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z)   if 40 ≤ t ≤ 59,
  f_t(x, y, z) = x ⊕ y ⊕ z                     otherwise.
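The four round constants can be reproduced from their defining square roots; since ⌊√k · 2^30⌋ = ⌊√(k · 2^60)⌋, the integer square root avoids any floating-point rounding (Python for illustration):

```python
import math

def sha1_round_constant(k):
    """Compute floor(sqrt(k) * 2**30) exactly via an integer square root."""
    return math.isqrt(k * 2**60)

# K[2] = 0x5A827999, K[3] = 0x6ED9EBA1, K[5] = 0x8F1BBCDC, K[10] = 0xCA62C1D6
K = {k: sha1_round_constant(k) for k in (2, 3, 5, 10)}
```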
5.2
The basic idea of hashing is quite different. First, the dictionary is implemented as an unsorted array of size n. The address of each word in the dictionary is stored in a smaller array t of m pointers with m ≤ n, the so-called hash table. Second, the address of each word w is calculated by a function

  h : U → {0, 1, ..., m - 1},   w ↦ h(w),

which assigns to each word w a certain index h(w) in the hash table. h is called the hash function, and h(w) is called the hash value. The principle is illustrated in Figure 5.2. To use a picture, the hash function distributes the N words into m containers. Each container is an entry of the hash table.
³ MD5 consists of nearly the same instructions as SHA, but has only 64 constants K_i = ⌊2^32 · |sin i|⌋, only four constants A, ..., D, and four working variables (actually, it computes a hash value of only 128 bits!).
Figure 5.2: Hashing principle. Here the universe U = Z16 = {0, 1, . . . , 15}, the hash table t with m = 10 entries, and the
hash function h(w) = w mod 10.
5.3 Collisions
We have a principal problem with hash functions. The domain of definition is a huge set of words of size N, whereas the number of address items m usually is much smaller, m ≪ N. That means it may happen that various different words obtain the same hash value. As defined above, such an event is called a collision. Let us examine the following example.
Example 5.9. We now construct a simple hash table. Let the universe be

  U = {22, 29, 33, 47, 53, 59, 67, 72, 84, 91}.

Moreover let h : U → Z_11 be the hash function h(w) = w mod 11. Then we calculate the hash table

  h(w)   0       1    2   3       4    5   6    7       8   9    10
  w      33, 22  67       91, 47  59       72   29, 84      53
The example demonstrates that relatively many collisions can occur even though m ≈ N, i.e., even though there are about as many addresses as words! This at first glance surprising fact is closely related to the famous birthday paradox, which we explain later on.
How probable are collisions? We assume an ideal hash function distributing the n words with equal probability over the m hash values. Let n ≤ m (because for n > m a collision must occur!). Denote

  p(m, n) = probability for at least one collision in n words and m hash values.

(In the sequel we will often write shortly p instead of p(m, n).) Then the probability q that no collision occurs is

  q = 1 - p.    (5.5)
We first calculate q, and then deduce p from q. So, what is q? If we denote by q_i the probability that the i-th word is mapped to a hash value without a collision, under the condition that all the former words were placed collision-free, then

  q = q_1 · q_2 · ... · q_n.

First we see that q_1 = 1, because initially all hash values are vacant and the first word can be mapped to any value without collision. However, the second word finds one hash value occupied and m - 1 vacant values; therefore q_2 = (m - 1)/m. In general,

  q_i = (m - i + 1)/m,   1 ≤ i ≤ n,

because the i-th word finds (i - 1) values occupied. Thus we have for p

  p = 1 - m(m - 1)(m - 2) ··· (m - n + 1) / m^n.    (5.6)
In Table 5.2 there are numerical examples for m = 365. It shows that only 23 words have to be present such that a collision occurs with a probability p > 0.5! For 50 words, the probability is 97%, i.e., a collision occurs almost unavoidably. The Hungarian-American mathematician Paul Halmos (1916-2006, "Computers are important, but not for mathematics" [21, p. 31]) computed the estimate n ≈ 1.18 √m for the number n of words such that p(m, n) > 1/2 [21, pp. 31].

  n    p(365, n)        m                    1.18 √m
  22   0.476            365                  22.49
  23   0.507            1 000 000            1177.41
  50   0.970            2^128 ≈ 3 · 10^38    2.2 · 10^19

Table 5.2: The probability p of a collision for m = 365 (left), and Halmos' estimates for some hash capacities m (right).
Example 5.10. Birthday paradox. Consider a group of n people in a room. How probable is it that at least two people have the same birthday? In fact, this question is equivalent to the collision problem above. Here the number n of words corresponds to the number of persons, and the possible birthdays correspond to m = 365 hash values. Thus Table 5.2 also gives the answer to the birthday paradox: for 23 persons in a room the probability that two have the same birthday is greater than 50%!
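The probability p(m, n) from (5.6) is easy to evaluate numerically; this sketch (Python for illustration) reproduces the values of Table 5.2:

```python
def collision_probability(m, n):
    """p(m, n): probability of at least one collision for n words and m hash values."""
    q = 1.0                        # probability that no collision occurs
    for i in range(n):
        q *= (m - i) / m           # the (i+1)-th word finds i values occupied
    return 1.0 - q

p22 = collision_probability(365, 22)   # about 0.476
p23 = collision_probability(365, 23)   # about 0.507
```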
5.3.1
Once we have put up with the fact that hashing collisions occur with high probability, even for comparably small numbers of words to insert into a dictionary, we have to think about strategies to handle collisions.
Hashing with chaining
A solid method to resolve collisions is to create a linked list at each hash table entry. Then any new
word w will be appended to the list of its hash value h(w). This is the collision resolution by chaining.
To continue with example 5.9 above, we obtain the hash table in figure 5.3.
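To illustrate chaining, here is a minimal dictionary sketch (Python for illustration; the class name and method set are illustrative, modeled on the operations insert, delete, and member with the hash function of Example 5.9):

```python
class ChainedHashTable:
    """Dictionary with collision resolution by chaining: one linked list per entry."""

    def __init__(self, m):
        self.m = m
        self.table = [[] for _ in range(m)]   # m initially empty chains

    def _hash(self, w):
        return w % self.m                     # the hash function of Example 5.9

    def insert(self, w):
        chain = self.table[self._hash(w)]
        if w not in chain:
            chain.append(w)                   # append the new word to its chain

    def member(self, w):
        return w in self.table[self._hash(w)]

    def delete(self, w):
        chain = self.table[self._hash(w)]
        if w in chain:
            chain.remove(w)

t = ChainedHashTable(11)
for w in (22, 29, 33, 47, 53, 59, 67, 72, 84, 91):
    t.insert(w)
```

Entry 0 now chains the colliding words 22 and 33, exactly as in the table of Example 5.9.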
Complexity analysis. Assume we want to insert n words into a hash table of size m. For each of the three operations insert, delete, and member of a word w, the linked list at the hash entry t[h(w)] must be run through. In the worst case all words obtain the same hash value. Searching words in the hash table then has the same time complexity as running through a linked list with n objects, i.e.,

  Tworst(n) = O(n).

However, we will see that hash tables have a much better average case complexity. To start the analysis, we first consider the question: how much time does a search take on average? The average length of a linked list is n/m. The running time of computing the hash function is a constant, i.e., O(1). Adding both running times for an average case search yields the complexity Tmean(n) = O(1 + n/m).
Theorem 5.11. The average complexity of the three dictionary operations insert, delete, and member for hashing with linked lists (separate chaining) is

  Tmean(n) = O(1 + α)    (5.7)

with the load factor

  α = n/m,    (5.8)

where m denotes the size of the hash table and n the number of inserted words. The worst case complexity is

  Tworst(n) = O(n).    (5.9)
1. (Double hashing) Use two hash functions h(w), h′(w) mod m and try the hash values

  h_i(w) = h(w) + i · h′(w)  mod m

for i = 0, 1, 2, ..., m - 1, one after the other, until a free value is found.

2. Use m hash functions h_i(w), i = 0, 1, ..., m - 1, and try the hash values

  h_0(w), h_1(w), h_2(w), ..., h_{m-1}(w)

until a free one is found.
There is one great problem for hashing with open addressing, concerning the deletion of words. If a word w is simply deleted, a word v that had been moved past w because of a collision could no longer be found! Instead, the cell where w is located has to be marked as deleted, but cannot simply be released for a new word. Therefore, hashing with open addressing is not appropriate for

  - very dynamical applications where there are lots of inserts and deletes;
  - cases in which the number n of words to be inserted is greater than the size of the hash table.
Complexity analysis. We assume that the hash function sequence hi is ideal in the sense that the
sequence h0 (w), h1 (w), . . . , hm1 (w) is uniformly distributed over the possible hash values. In this
case we speak of uniform hashing. Then we have the following theoretical result.
Theorem 5.12. Let h_i be an ideal hash function sequence for m hash values, of which n values are already occupied, and let α = n/m denote the load factor. Then the expected cost (number of probes) of an insertion or of an unsuccessful search is approximately

  C′_n ≈ 1 / (1 - α),    (5.10)

and the expected cost of a successful search is approximately

  C_n ≈ (1/α) ln (1 / (1 - α)).    (5.11)

Hence separate chaining has average search complexity O(1 + α), whereas open addressing with uniform hashing has average search complexity O((1/α) ln (1/(1 - α))).
Hash functions for collision resolution. Let the number m of possible hash values be given.

Linear probing. One common method is linear probing. Suppose a hash function h(w). Then define

  h_i(w) = h(w) + i  mod m,   i = 0, 1, ..., m - 1.    (5.12)

Quadratic probing. Suppose a hash function h(w). Then define

  h_i(w) = h(w) + i²  mod m.    (5.13)

Double hashing. Suppose two hash functions h(w), h′(w). Then we define a sequence of hash functions

  h_i(w) = h(w) + h′(w) · i  mod m,   i = 0, 1, ..., m - 1.    (5.14)

We require that the two hash functions are (stochastically) independent and uniform. This means that for two different words v ≠ w the events

  X = "h(v) = h(w)"   and   X′ = "h′(v) = h′(w)"

each occur with probability 1/m, and both events together occur with probability 1/m²; expressed in formulae:

  P(X) = 1/m,   P(X′) = 1/m,   P(X ∩ X′) = 1/m².

This yields a really excellent hash function! Experiments show that this function has running times that are practically indistinguishable from ideal hashing. However, it is not easy to find appropriate pairs of hash functions which can be proved to be independent. Some are given in [28, pp. 528].
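A sketch of open addressing with double hashing, including the deleted-marker subtlety described above (Python for illustration; the second hash function h2 below is an assumption, and m should be prime so that every probe sequence covers the whole table):

```python
class OpenAddressingTable:
    """Hash table with double hashing: h_i(w) = (h(w) + i * h2(w)) mod m."""

    DELETED = object()   # marker: cell freed, but search must probe past it

    def __init__(self, m):
        self.m = m                          # m should be prime
        self.cells = [None] * m

    def _probe(self, w, i):
        h1 = w % self.m
        h2 = 1 + (w % (self.m - 1))         # illustrative second hash, never 0
        return (h1 + i * h2) % self.m

    def insert(self, w):
        for i in range(self.m):
            j = self._probe(w, i)
            if self.cells[j] is None or self.cells[j] is self.DELETED:
                self.cells[j] = w           # a deleted cell may be reused
                return True
        return False                        # table full

    def member(self, w):
        for i in range(self.m):
            j = self._probe(w, i)
            if self.cells[j] is None:       # truly empty: w cannot be further on
                return False
            if self.cells[j] == w:
                return True
        return False

    def delete(self, w):
        for i in range(self.m):
            j = self._probe(w, i)
            if self.cells[j] is None:
                return
            if self.cells[j] == w:
                self.cells[j] = self.DELETED   # mark as deleted, do not release
                return
```

If delete simply wrote None, a word placed past w by a collision would become unreachable, which is exactly the problem described in the text.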
Part II
Optimization
Chapter 6
Optimization problems
In this chapter we present the definition and formal structure of optimization problems and give some
typical examples.
6.1 Examples
Example 6.1. (Regression polynomial) In statistics, one is often interested in finding a regression polynomial, or regression curve, for a given series of data pairs (t_1, y_1), ..., (t_N, y_N) where t_i, y_i ∈ R for i = 1, ..., N. Such data may represent measurement samples with uncertainties due to the measurement apparatus or signal perturbations by noise. To find a regression polynomial of degree n - 1 for these data means that we want to specify the n real coefficients x_0, x_1, ..., x_{n-1} of a polynomial p : R → R,

  p(t) = x_0 + x_1 t + x_2 t² + ... + x_{n-1} t^{n-1},    (6.1)

such that y_i ≈ p(t_i) for all i = 1, ..., N. For instance, if we want to find a linear regression polynomial, we have n = 2 and look for two parameters x_0, x_1 such that y_i ≈ x_0 + x_1 t_i (Figure 6.1).
[Figure 6.1: (a) Scatterplot of the data sample (1, .07), (2, .15), (3, .28), (4, .42), (5, .57). (b) Linear regression of the sample.]
A data pair series with ti = i especially represents a time series, i.e., a series of data values (y1 , y2 ,
. . . ,yN ), where yt denotes the data value measured at time or period t. For instance, these data may
represent sales figures of a certain article at different periods. Specifying a regression polynomial
offers the possibility to gain from past sales figures a forecast for the next few periods. The linear
regression line, e.g., represents the bias, or trend line [39].
Example 6.2. (Traveling salesman problem (TSP)) A traveling salesman must visit n cities such
that the round trip is as short as possible. Here short may be meant with respect to time or with
respect to distance, depending on the instance of the problem. The TSP is one of the most important
and by the way one of the hardest optimization problems. It has many applications. For instance,
a transport service which has to deliver goods at different places may be considered as a TSP; another
[Figure 6.2: A TSP for n = 8 cities (Bochum, Dortmund, Düsseldorf, Hagen, Iserlohn, Köln, Meschede, Soest) with the road distances between them. What is the shortest round trip for the traveling salesman visiting each city exactly once, starting and terminating in Hagen?]
example of a TSP is the problem to program a robot to drill thousands of holes into a circuit board as quickly as possible. There are many generalizations of the TSP; for example, the travel time between two cities may depend on the time of day, being longer during the rush hours than at night.
Example 6.3. (Production planning) A company produces n products P_1, ..., P_n, gaining a specific profit for each product. To manufacture these products, the company has available m_1 machines, each of which has a limited total production time and specific manufacturing times for each product, as well as m_2 resource materials which are required in specific portions for each product but which are available only in certain limited quantities. How much of each product, measured in quantity units per period (e.g., kg/day), should then be manufactured to maximize the profit?
6.2
Before we consider strategies to solve optimization problems, we first study the general structure of
such problems. What do all optimization problems have in common? To answer this question, we
have to formalize a general optimization problem and point out its essential elements. In mathematics,
the term optimization refers to the finding of an optimum for a real-valued function f : S R on a
given domain S of possible solutions. Usually, such a searched optimum is a global maximum or
minimum, and the domain underlies some given constraints. In symbols, an optimization problem is
given by
f : S R,
f (x) max for x S
(6.2)
for a maximum problem, for example optimizing gain, and
f : S R,
f (x) min
for x S
(6.3)
for a minimum problem, for example optimizing costs. The domain S is called the search space of
the optimization problem, and the function f is the objective function, or cost function. In the next
paragraphs, we will consider these notions in more detail.
6.2.1 The search space
The first thing to formulate an optimization problem is to specify the search space S. It is the set of
all feasible solutions, or candidate solutions. Typically, S is some subset of Rn (the n-dimensional
Euclidean space or hyperspace), or of Zn , where n denotes the number of parameters which have to
be adjusted to yield the optimum. That is, a feasible solution x ∈ S is then given as

  x = (x_1, x_2, ..., x_n).    (6.4)
Example 6.2 (TSP, continued). If we number the n cities which the traveling salesman has to visit by 1, 2, ..., n, then the search space of the TSP is given by the set

  S = {(x_1, ..., x_n) : x_i ∈ {1, 2, ..., n}, x_i ≠ x_j for i ≠ j}.    (6.5)

Here the vector x = (x_1, x_2, ..., x_n) of integers represents the round trip starting in city x_1, then proceeding to city x_2, and so on, until reaching city x_n as the last city before returning to the start x_1. The condition x_i ≠ x_j for i ≠ j guarantees that no city is visited twice during the trip. In fact, the search space of the TSP is the set of all permutations of {1, ..., n}. Since furthermore S ⊂ Z^n, the TSP is a discrete optimization problem.
Example 6.3 (Production planning, continued). What is searched for are the n individual quantities of the products. Let x_i denote the quantity per period which is produced of product P_i, and let x = (x_1, ..., x_n). Then the capacities of the m_1 machines and the restrictions of the m_2 resources impose m = m_1 + m_2 constraints f_j(x) ≤ f_j^max, j = 1, ..., m, as limits on x. Therefore, the search space of this problem is

  S = {x ∈ R^n : f_1(x) ≤ f_1^max, ..., f_m(x) ≤ f_m^max}.    (6.6)

Note that we assumed that the quantity values are real values. In principle, we could restrict the optimization problem to integer values, i.e., if we are interested only in the integer numbers of pieces. This then is a combinatorial problem, and we will see below that it is much harder than the continuous problem.
6.2.2 The objective function
The search space specifies the structure of possible solutions of the optimization problem. However,
it does not distinguish an optimum solution. What we need is a function to evaluate each solution
with respect to the problem, i.e., a characteristic number by which different solutions are comparable.
This evaluation is done by the objective function f : S R which associates to each feasible solution
a real number. Depending on whether f (x) evaluates the quality of the feasible solution x S or its
defect, the optimum is a maximum or a minimum.
The choice of an objective function for a given optimum problem often is not obvious or unique.
Different functions may be equally plausible, but yield completely different optima.
Example 6.1 (Regression, continued). A reasonable objective function for the regression problem is the error function. It sums the distances between p(t_k) and y_k over the sample points (t_k, y_k), where p is the regression polynomial (6.1) and k = 1, ..., N. Because a solution is better if and only if the error is smaller, the regression problem is a minimum problem. But what does distance exactly mean? A widely used distance measure is the mean squared error of the regression polynomial (6.1) and the sample data,

  f(x_0, ..., x_{n-1}) = (1/N) Σ_{k=1}^{N} [ x_0 + x_1 t_k + ... + x_{n-1} t_k^{n-1} - y_k ]²,    (6.7)

where the term in brackets is p(t_k) - y_k. For instance, the error function of the linear regression is given by f(x_0, x_1) = (1/N) Σ_{k=1}^{N} [x_0 + x_1 t_k - y_k]², cf. Figure 6.3. It can be solved by calculating the gradient and setting it to zero. We will not go into
[Figure 6.3: (a) The squared error objective function of linear regression (6.7) for the sample (1, .07), (2, .15), (3, .28), (4, .42), (5, .57); the search space is S = R², the minimum is reached at x = (x_0, x_1) = (.00068, .1016). (b) The absolute-distance error objective function (6.8) for the same sample.]
more detail of the solution of this problem here; this is done in statistics.^2 Another important error function is the mean absolute distance

f(x) = (1/N) ∑_{k=1}^{N} |p(t_k) − y_k|,   (6.8)

where x = (x_0, . . . , x_{n−1}). An objective function based on the mean absolute distance is less influenced by extreme outliers than an objective function using the mean squared error. This is the reason why the mean absolute distance is usually preferred in economic applications, where there often are extreme outliers, given as peaks in the sales figures (e.g., because of Christmas trade) or caused by production downtimes.
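As a small sketch (not part of the original text; the polynomial coefficients are chosen for illustration), both error functions (6.7) and (6.8) can be evaluated directly, and an added outlier shows how much more strongly it influences the mean squared error:

```python
def mse(x, samples):
    """Mean squared error (6.7) of the polynomial with coefficient vector x."""
    N = len(samples)
    return sum((sum(xj * t**j for j, xj in enumerate(x)) - y) ** 2
               for t, y in samples) / N

def mad(x, samples):
    """Mean absolute distance (6.8)."""
    N = len(samples)
    return sum(abs(sum(xj * t**j for j, xj in enumerate(x)) - y)
               for t, y in samples) / N

samples = [(1, .07), (2, .15), (3, .28), (4, .42), (5, .57)]
outlier = samples + [(6, 5.0)]       # one extreme outlier appended
x = (0.0, 0.1)                       # linear polynomial p(t) = 0.1 t
print(mse(x, samples), mad(x, samples))   # ~0.00182, ~0.038
print(mse(x, outlier), mad(x, outlier))   # the outlier inflates the MSE far more
```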
Example 6.2 (TSP, continued). For the TSP, the objective function is quite obvious: it is the total length of a round trip. Usually, the distances between two directly connected cities are given by a matrix G = (g_ij) where g_ij denotes the distance from city i to city j; we have g_ii = 0, and g_ij = ∞
^2 To shortly mention at least the simplest case: the linear regression parameters are given by

x_0 = ȳ − x_1 t̄,   x_1 = cov(T, Y) / σ^2(T),

where t̄ = (1/N) ∑_{k=1}^{N} t_k and ȳ = (1/N) ∑_{k=1}^{N} y_k are the mean values, cov(T, Y) = (1/N) ∑_{k=1}^{N} t_k y_k − t̄ ȳ is the covariance, and σ^2(T) = (1/N) ∑_{k=1}^{N} (t_k − t̄)^2 is the variance [12, §3.1], [41, §2.4].
if there is no edge between city i and j. The matrix G is often called the weight matrix. Then the objective function of the TSP is f : S → R_+,

f(x) = ∑_{k=1}^{N} g_{x_{k−1} x_k},   (6.9)

where x = (x_1, . . . , x_N), and x_1 is the index of the home town of the salesman.
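A short sketch of the objective function (6.9) as code (the weight matrix is invented for illustration; the convention that the trip closes back at the home town x_1 is made explicit here):

```python
INF = float("inf")

def tour_length(x, g):
    """Total length of the round trip x, returning to the home town x[0]."""
    n = len(x)
    return sum(g[x[k - 1]][x[k]] for k in range(1, n)) + g[x[-1]][x[0]]

# a symmetric example weight matrix for four cities 0..3:
g = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 1],
     [4, 3, 1, 0]]
print(tour_length((0, 1, 2, 3), g))   # 2 + 6 + 1 + 4 = 13
```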
Example 6.3 (Production planning, continued). For the production planning problem, the objective function arises naturally, since it is given by the profit. If product P_k is produced with quantity x_k and yields a specific profit of c_k currency units per quantity unit, then the total profit is given by f : S → R,

f(x) = ∑_{k=1}^{n} c_k x_k.   (6.10)
Example 6.4 (Rastrigin function). The Rastrigin function is an example of a non-linear function with several local minima and maxima. It was first proposed by Rastrigin as a 2-dimensional function [40]. It reads f : R^n → R_+,

f(x_1, . . . , x_n) = a n + ∑_{i=1}^{n} [x_i^2 − a cos(ω x_i)],   (6.11)
Figure 6.4: The Rastrigin function (6.11) with the search space S = [0, 2]^2 ⊂ R^2 and the external parameter values a = 2, ω = 2π. The global minimum is at x = (x_1, x_2) = (0, 0), its global maximum at x = (3/2, 3/2).
determined by the external parameters a and ω, which control the amplitude and the frequency modulation, respectively.
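The following sketch evaluates the Rastrigin function in its standard form (assumed here to coincide with (6.11)) with the parameters of Figure 6.4; a simple grid search over [0, 2]^2 recovers the global minimum at the origin:

```python
from math import cos, pi

def rastrigin(x, a=2.0, omega=2 * pi):
    """Rastrigin function (6.11) with amplitude a and frequency omega."""
    return a * len(x) + sum(xi * xi - a * cos(omega * xi) for xi in x)

print(rastrigin((0.0, 0.0)))          # 0.0, the global minimum value

# a grid search over S = [0, 2]^2 illustrates the many local optima:
grid = [i / 100 for i in range(201)]
values = {(x1, x2): rastrigin((x1, x2)) for x1 in grid for x2 in grid}
print(min(values, key=values.get))    # (0.0, 0.0)
```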
Multi-criterion optimization

Many everyday optimization problems are to be solved not with respect to a single criterion but with respect to several criteria. If you want to buy a car, say, you try to optimize several criteria simultaneously, such as a low price, a low mileage, and a high speed.

In general, a multi-criterion optimization consists of m objective functions f_k : S → R, where k = 1, . . . , m. A usual way to combine these single criteria into a total objective function f : S → R is by forming a weighted sum,

f(x) = ∑_{k=1}^{m} w_k f_k(x)   (6.12)

with the weights w_k ∈ R. If the total objective function f is to be maximized, then those single objective functions to be maximized also have a positive weight w_k > 0, whereas those criteria to be minimized get a negative weight w_k < 0. In the case of a minimizing total objective function, the signs are vice versa.
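A sketch of the weighted sum (6.12) for the car example (the cars and the weights are invented for illustration): since the total objective is maximized, price and mileage get negative weights and speed a positive one.

```python
def total_objective(criteria, weights):
    """Weighted sum (6.12) of the single criteria f_k(x)."""
    return sum(w * f for w, f in zip(weights, criteria))

# hypothetical cars: (price in kEUR, mileage in 1000 km, top speed in km/h)
cars = {"A": (25, 60, 190), "B": (18, 95, 170), "C": (30, 10, 220)}
weights = (-1.0, -0.1, 0.2)   # minimize price and mileage, maximize speed
best = max(cars, key=lambda c: total_objective(cars[c], weights))
print(best)   # C
```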
We only mention that another approach is often used to solve multi-criterion problems, namely Pareto optimality [15, §6.3.5]. However, we will not consider this subject in the sequel.
6.3
6.3.1
6.3.2
Figure 6.5: A greedy solution of a TSP, starting at A. Obviously, the path ACBDA is shorter.
city to be visited as the one nearest to the currently visited city. A solution then could look like the one in Figure 6.5, i.e., a greedy algorithm does not guarantee success. Examples of greedy algorithms which are guaranteed to work correctly are Dijkstra's algorithm to find a certain class of shortest paths in a network, or the Huffman coding algorithm.
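The nearest-neighbor strategy described above can be sketched as follows (the distance matrix is invented for illustration; as Figure 6.5 shows, the resulting tour need not be optimal):

```python
def greedy_tour(start, g):
    """Greedy TSP heuristic: always visit the nearest unvisited city next."""
    n = len(g)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda j: g[tour[-1]][j])  # nearest city
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

g = [[0, 1, 2, 3],
     [1, 0, 4, 2],
     [2, 4, 0, 5],
     [3, 2, 5, 0]]
print(greedy_tour(0, g))   # [0, 1, 3, 2]
```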
Dynamic programming. Dynamic programming is a technique where in each iteration step a bigger subproblem is solved by using the optimal solutions of the subproblems of the preceding step. It is a cleverly arranged exhaustion without redundant computations. Examples of dynamic programming algorithms are the Floyd-Warshall algorithm for shortest paths in a directed graph, the Wagner-Whitin method for optimizing the economic order quantity [39, §D.3], or the standard solution of the knapsack problem. There also exists a dynamic programming solution for the TSP.
6.3.3
Various optimization techniques are influenced by principles of biological systems, based on the observation that Nature always finds optimum solutions (although, strictly speaking, it does not seem to seek one). These techniques all have in common that they use an ensemble of independent individuals which communicate with each other.
Artificial neural networks. Such systems represent a network of simple processing elements called neurons which can exhibit complex global behavior, determined by the connections between the elements and by element parameters. They are inspired by the information processing of the brain.
Evolutionary algorithms. Evolutionary algorithms are methods oriented at biological evolution, utilizing reproduction, mutation, recombination (crossover), and natural selection (survival of the fittest) of individuals in a population. Special classes are genetic algorithms, where individuals are elements of the (usually discrete) search space; evolution strategies, where individuals similarly are elements of a real search space and the mutation adapts to certain criteria of the current population; and evolutionary programming, where individuals are computer programs with varying parameters to optimize the problem.
Computational swarm intelligence. Swarm intelligence methods are designed to find an optimum by the collective behavior of decentralized individual agents communicating with each other. Examples are ant colony optimization for discrete problems, where each ant walks randomly and leaves slowly evaporating pheromones on its way influencing other ants, and particle swarm optimization, where particles fly through hyperspace having a memory both of their own best position and of the entire swarm's best position, communicating either with neighbor particles or with all particles of the swarm.^3
3 http://jswarm-pso.sourceforge.net
Chapter 7
Graphs and shortest paths
7.1
Basic definitions
[Figure 7.1: example graphs on the vertices 1, . . . , 5; the left graph is directed with edge set

E = {(1, 2), (2, 3), (2, 4), (3, 4), (3, 5), (4, 1), (4, 4), (5, 3)}.]
|E| ≤ (n choose 2) = n(n − 1)/2.   (7.2)

(c) A directed graph containing no self-loops: Since there are (n choose 2) pairs and each pair can have two directions, a directed graph can have at most 2 (n choose 2) edges, or

|E| ≤ 2 (n choose 2) = n(n − 1).   (7.3)

(d) A directed graph containing self-loops can have at most n(n − 1) + n = n^2 edges, i.e.,

|E| ≤ n^2.   (7.4)

7.2 Representation of graphs
How can a graph be represented in a computer? There are mainly three methods commonly used [25, §4.1.8], two of which we will focus on here. Let n = |V| be the number of vertices, and let V = {v_1, . . . , v_n}.

1. (Adjacency matrix) The adjacency matrix A = (a_ij) of the graph G = (V, E) is an (n × n)-matrix defined by

a_ij = 1 if (v_i, v_j) ∈ E, and a_ij = 0 otherwise.   (7.5)
For the left graph in figure 7.1 we have v_i = i (i = 1, . . . , 5). The adjacency matrix therefore is the (5 × 5)-matrix

      ( 0 1 0 0 0 )
      ( 0 0 1 1 0 )
A  =  ( 0 0 0 1 1 )   (7.6)
      ( 1 0 0 1 0 )
      ( 0 0 1 1 0 )

^2 adjacent: angrenzend (German)
2. (Adjacency list) In an adjacency list each vertex has a list of all its neighbors. Via an array v[] of length n = |V| each list is randomly accessible. For the left graph in figure 7.1 we therefore have the lists v[1] → (2), v[2] → (3, 4), v[3] → (4, 5), v[4] → (1, 4), v[5] → (3), each terminated by a null reference.
There are more possibilities to represent graphs, but adjacency matrices and adjacency lists are the most important. Both are normally used as static data structures, i.e., they are constructed at the start and won't be changed in the sequel. Updates (insert and delete) play a minor part as compared to the dynamic data structures we have studied so far.
7.2.1
An advantage of representing a graph by an adjacency matrix is the possibility to check in running time O(1) whether there exists an edge from v_i to v_j, i.e., whether (v_i, v_j) ∈ E. (The reason is simply that one only has to look at the entry (i, j) of the matrix.) A disadvantage is the great memory requirement of size O(n^2). In particular, if the number of edges is small compared to the number of vertices, memory is wasted. Moreover, the running time for the initialization of the matrix is Θ(n^2).
The advantage of the adjacency list representation is the small memory requirement of O(n + e) for n = |V| and e = |E|. All neighbors of a vertex are reached in linear time O(n). However, a test whether two vertices v_i and v_j are adjacent cannot be done in constant running time, because in the worst case the whole adjacency list of vertex v_i has to be run through to check the existence of v_j.

Thus an adjacency matrix should only be chosen if the intended algorithms include many tests on the existence of edges or if the number of edges is much larger than the number of vertices, e ≫ n.
                                   adjacency matrix     adjacency list
 memory                            O(n^2)               O(n + e)
 running time for (v_i, v_j) ∈ E?  O(1)                 O(n)
 appropriate if                    e ≫ n,               e ≈ n,
                                   many edge searches   few edge searches

7.3 Traversing graphs

7.3.1 Breadth-first search
A first problem we will tackle, and one on whose solution other graph algorithms are based, is to look systematically for all vertices of a graph. For instance, consider the graph in figure 7.2. The breadth-first search (BFS) algorithm proceeds as follows. Find all neighbors of a vertex s; for each neighbor find all neighbors, and so forth. Thus the search discovers the neighborhood of s, yielding a so-called connected component. To have a criterion whether all vertices in this neighborhood are already discovered, we mark the vertices which are already visited. We assume that we start with white vertices which are colored black when visited during the execution of BFS. Since we successively examine neighbors of visited vertices, a queue is the appropriate choice as the data structure for subsequently storing the neighbors to be visited next.
The graph contains the set V of vertices and the adjacency list adj, where adj[i][j] means that vertex V[j] is in the neighborhood of V[i]. Each vertex has a color. The method BFS(s) is the implementation of the following algorithm: for the vertex s it blacks each neighbor of vertex V[s] in graph G. (It is called by G.BFS(s).) BFS has as a local variable a queue q[] of integers containing the indices i of V[i].
algorithm BFS (int s)
// visits systematically each neighbor of s in graph G = (V, E) and
// colors it black. The m vertices are V[i], i = 0, 1, . . . , m − 1
   q[].empty(); q[].enqueue(s);          // initialize queue and append s
   while (not q[].isEmpty())
      k ← q[].dequeue();
      if (V[k].color == white)
         V[k].color ← black;
         for (i ← 0; i < adj[k].length; i++)
            q[].enqueue(adj[k][i]);      // append all neighbors of V[k]
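A runnable counterpart of the pseudocode (a sketch in Python: the queue is a deque, and the set black plays the role of the vertex colors):

```python
from collections import deque

def bfs(adj, s):
    """Returns the vertices reachable from s in the order they are blackened."""
    black, q, order = set(), deque([s]), []
    while q:
        k = q.popleft()
        if k not in black:       # V[k].color == white
            black.add(k)         # V[k].color <- black
            order.append(k)
            q.extend(adj[k])     # enqueue all neighbors of V[k]
    return order

# adjacency lists of the left graph in figure 7.1 (vertices 1..5):
adj = {1: [2], 2: [3, 4], 3: [4, 5], 4: [1, 4], 5: [3]}
print(bfs(adj, 1))   # [1, 2, 3, 4, 5]
```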
Complexity analysis. Initializing colors, distances and predecessors costs running time O(|V|). Each vertex is enqueued at most once per incoming edge, so the while-loop is carried out at most O(|E|) times. This results in a running time

T_BFS(|V|, |E|) = O(|V| + |E|).   (7.7)

Since all of the nodes of a level must be saved until their child nodes in the next level have been generated, the space complexity is proportional to the number of nodes at the deepest level, i.e.,

S_BFS(|V|, |E|) = O(|V| + |E|).   (7.8)

In fact, in the worst case the graph has a depth of 1 and all vertices must be stored.
7.3.2 Depth-first search

Analogously to BFS, depth-first search (DFS) visits all vertices reachable from a given vertex V[s]. But in contrast to BFS, DFS proceeds by going deeper into the graph with each step, see Figure 7.3. It can be implemented like BFS, with the queue replaced by a stack.
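Since DFS differs from BFS essentially only in its data structure, a sketch (assuming the stack-based formulation; the text's own code is not reproduced here) is:

```python
def dfs(adj, s):
    """Visits the vertices reachable from s, going deep first."""
    black, stack, order = set(), [s], []
    while stack:
        k = stack.pop()                      # LIFO instead of FIFO
        if k not in black:
            black.add(k)
            order.append(k)
            stack.extend(reversed(adj[k]))   # push neighbors; first one is visited next
    return order

adj = {1: [2], 2: [3, 4], 3: [4, 5], 4: [1, 4], 5: [3]}
print(dfs(adj, 1))   # [1, 2, 3, 4, 5]
```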
Complexity analysis. We observe that DFS is called from DFS at most |E| times, and from the main algorithm at most |V| times. Moreover, it uses space only to store the stack of vertices, i.e.,

T_DFS(|V|, |E|) = O(|V| + |E|).   (7.9)

7.4 Cycles
Definition 7.2. A closed path (v_0, . . . , v_k, v_0), i.e., a path where the final vertex coincides with the start vertex, is called a cycle. A cycle which visits each vertex exactly once is called a Hamiltonian cycle. A graph without cycles is called acyclic.
Example 7.3. In the group stage of the final tournament of the Football World Cup, soccer teams compete within eight groups of four teams each. Each group plays a round-robin tournament, resulting in (4 choose 2) = 6 matches in total. A match can be represented by two vertices standing for the two teams and a directed edge between them pointing to the loser of the match or, in case of a draw, an undirected edge connecting them. For instance, for group E during the World Cup 1994 in the USA,
Group E
 Ireland – Italy     1:0
 Norway – Mexico     1:0
 Italy – Norway      1:0
 Mexico – Ireland    2:1
 Italy – Mexico      1:1
 Ireland – Norway    0:0

 Team          Goals   Pts
 Mexico (M)    3:3     4
 Ireland (E)   2:2     4
 Italy (I)     2:2     4
 Norway (N)    1:1     4
Figure 7.4: A cycle in the graph representing the match results of group E during the World Cup 1994
consisting of the teams of Ireland (E), Italy (I), Mexico (M), and Norway (N), we have the graph given in Figure 7.4. This group is the only group in World Cup history so far in which all four teams finished on the same points.
7.4.1 Hamiltonian cycles

A Hamiltonian cycle is a cycle in which each vertex of the graph is visited exactly once. The Hamiltonian cycle problem (HC) is to determine whether a given graph contains a Hamiltonian cycle or not. It is a decision problem, not an optimization problem, since it expects the answer "yes" or "no" but not a quantity.
Let X be the set of all possible cycles beginning in vertex 1, i.e., of all x = (x_0, x_1, . . . , x_{n−1}, x_n) where x_0 = x_n = 1 and where (x_1, . . . , x_{n−1}) is a permutation of the (n − 1) vertices x_j ≠ 1. In other words, X contains all possible Hamiltonian cycles which could be formed with the n vertices of the graph.
Then a simple algorithm to solve the problem is to perform a brute force search through the space X and to test each possible solution x by querying the oracle function

f(x) = 1 if x is a closed path in the graph, and f(x) = 0 otherwise.   (7.10)

If the graph does not contain a Hamiltonian cycle, then f(x) = 0 for all x ∈ X. The oracle function only has to check whether each pair (x_{j−1}, x_j) of a specific possible Hamiltonian cycle is an edge of the graph, which requires time complexity O(n^2) since |E| ≤ n^2; because there are n pairs to be checked in this way, the oracle works with total time complexity T(n) = O(n^3) per query. Its space complexity is S(n) = O(log_2 n) bits, because it uses E and x as input and thus needs to store temporarily only the two vertices of the considered edge, requiring O(log_2 n). In total this gives a time complexity T_HC-bf of the brute force algorithm of

T_HC-bf(n) = O(n^{n+3}) = O(2^{(n+3) log_2 n}),   (7.11)

since there are (n − 1)! = O(n^n) = O(2^{n log_2 n}) possible permutations (orderings) of the n vertices.
Remarkably, there is no algorithm known to date which is essentially faster. If you find one, publish it and get US$ 1 million.^3 However, for some special classes of graphs the situation is different. For instance, according to a mathematical theorem of Dirac, any graph in which each vertex has at least n/2 incident edges has a Hamiltonian cycle. This and some more such sufficient criteria are listed in [9, §8.1].
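The brute force search with the oracle (7.10) can be sketched as follows (an undirected interpretation of the edge set is assumed for illustration):

```python
from itertools import permutations

def oracle(x, E):
    """(7.10): 1 if the candidate cycle x is a closed path in the graph, else 0."""
    return int(all((x[j - 1], x[j]) in E for j in range(1, len(x))))

def has_hamiltonian_cycle(n, edges):
    E = set(edges) | {(w, v) for v, w in edges}   # treat the graph as undirected
    # enumerate all (n-1)! permutations of the vertices != 1:
    for perm in permutations(range(2, n + 1)):
        if oracle((1,) + perm + (1,), E):
            return True
    return False

print(has_hamiltonian_cycle(4, [(1, 2), (2, 3), (3, 4), (4, 1)]))  # True: 1-2-3-4-1
print(has_hamiltonian_cycle(4, [(1, 2), (2, 3), (3, 4)]))          # False
```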
7.4.2 Euler cycles

A problem apparently similar to the Hamiltonian cycle problem is the Euler cycle problem. Its historical origin is the problem of the Seven Bridges of Königsberg, solved by Leonhard Euler in 1736.

An Euler cycle is a closed-up sequence of edges in which each edge of the graph is visited exactly once. If we shortly write (x_0, x_1, . . . , x_m) with x_0 = x_m = 1 for a cycle, then a necessary condition to be Eulerian is that m = |E|.

The Euler cycle problem (EC) then is to determine whether a given graph contains an Euler cycle or not. By Euler's theorem [9, §0.8], [25, §1.3.23], a connected graph contains an Euler cycle if and only if every vertex has an even number of edges incident upon it. Thus EC is decidable in O(n^3) computational steps, counting for each of the n vertices x_j in how many of the at most n^2 edges (x_j, y) or (y, x_j) ∈ E it is contained:

T_Euler(n) = O(n^3).   (7.12)
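Euler's criterion can be checked by simply counting degrees; the Königsberg bridge multigraph serves as example (the numbering of the four land masses is chosen for illustration):

```python
def has_euler_cycle(vertices, edges):
    """Euler's criterion for a connected undirected (multi)graph:
    an Euler cycle exists iff every vertex has even degree."""
    degree = {v: 0 for v in vertices}
    for v, w in edges:
        degree[v] += 1
        degree[w] += 1
    return all(d % 2 == 0 for d in degree.values())

# Königsberg: 1 = island, 2 = north bank, 3 = south bank, 4 = east; 7 bridges
bridges = [(1, 2), (1, 2), (1, 3), (1, 3), (1, 4), (2, 4), (3, 4)]
print(has_euler_cycle([1, 2, 3, 4], bridges))   # False: all degrees are odd
```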
7.5 Shortest paths

We now consider a so-called weighted graph which assigns to each edge a certain length or cost. Consider for instance Figure 7.5. Here the weights may have quite different meanings: they can express . . .
For an edge (v, w) ∈ E the weight is thus given by ω(v, w).^4 The unweighted graphs which we have seen so far can be considered as special weighted graphs where all weights are constantly 1: ω(v, w) = 1 for all (v, w) ∈ E.

It is often convenient to write ω as a matrix, where ω_{vw} = ω(v, w) denotes the weight of the edge (v, w), and where the weight is ∞ if the edge (v, w) does not exist; for convenience, such entries are often left blank or are marked with a bar. For the weighted graph in Fig. 7.5 we thus obtain its weight matrix

ω = (ω_{vw}),   (7.13)

with the entries of the nonexistent edges written either as blanks or as ∞.
In this way, the weight matrix is a generalization of the adjacency matrix. With the weight ω we can define the length of a path in a graph G.

Definition 7.5. Let p = (v_0, v_1, . . . , v_n) be a path in a weighted graph G. Then the weighted length of p is defined as the sum of the weights of its edges:

ω(p) = ∑_{i=1}^{n} ω(v_{i−1}, v_i).   (7.14)

A shortest path from v to w is a path p of minimum weighted length starting at vertex v and ending at w. This minimum length is called the distance

δ(v, w).   (7.15)

If there exists no path between two vertices v, w, we define δ(v, w) = ∞.
7.5.1 Shortest paths
We will consider the so-called single-source shortest path problem: Given the source vertex s V ,
what is a shortest path from s to all other vertices v V ? In principle, this also gives a solution of the
following shortest-path problems.
(single-destination) What is a shortest path from an arbitrary vertex to a fixed destination vertex? This problem is a kind of reflection of the single-source shortest path problem (exchange s and v).
(single-pair) Given a pair v, w V of vertices, what is a shortest path from v to w? This problem
is solved by running a single-source algorithm for v and selecting a solution containing w.
(all-pairs) What is a shortest path between two arbitrary vertices?
Some algorithms can deal with negative weights. This case poses a special problem. Look at the weighted graph in Fig. 7.6. On the way from v to w we can walk through the cycle to decrease the length of the path with each round; hence in the presence of a cycle of negative total weight no shortest path exists, and we set

δ(v, w) = −∞.   (7.16)

7.5.2
Theorem 7.6 (Triangle inequality). Let G = (V, E, ω) be a weighted graph, and denote by δ(v, w) the minimum length between v and w for v, w ∈ V. Then for any three vertices u, v, w ∈ V we have

δ(v, w) ≤ δ(v, u) + δ(u, w).   (7.17)

Proof. The shortest path from v to w cannot be longer than going via u.
Here the matrix entry dist[v][w] stores the information of the minimum distance between v and w, and the matrix entry next[v][w] represents the vertex one must travel through if one intends to take the shortest path from v to w. From the point of view of data structures, they are attributes of an object vertex. This will be implemented accordingly in the Dijkstra algorithm below. The Floyd-Warshall algorithm implements them as attributes of the graph; therefore they are given as two-dimensional arrays (matrices!).
7.5.3 Floyd-Warshall algorithm

We now consider the Floyd-Warshall algorithm, which is fascinating in its simplicity. It solves the all-pairs shortest paths problem. It was developed independently by R. W. Floyd and S. Warshall in 1962.
Let the vertex be an object as an element of the graph G as given by the following diagram: the class Graph has the attributes Vertex[] V, double[][] ω, double[][] dist, and int[][] next, and the method floydWarshall(); each of its Vertex objects has an attribute int index. (Note that the weight ω and the distance dist are given as two-dimensional arrays.) Then the Floyd-Warshall algorithm is called without a parameter.
algorithm FloydWarshall ()
// Determines all-pairs shortest paths. The n vertices are V[i], i = 0, 1, . . . , n − 1
   for (v ← 0; v < n; v++)                  // initialize
      for (w ← 0; w < n; w++)
         dist[v][w] ← ω[v][w]; next[v][w] ← −1;
   for (u ← 0; u < n; u++)
      for (v ← 0; v < n; v++)
         for (w ← 0; w < n; w++)
            // relax:
            if (dist[v][w] > dist[v][u] + dist[u][w])
               dist[v][w] ← dist[v][u] + dist[u][w];
               next[v][w] ← u;
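A runnable counterpart of the pseudocode above (a sketch; ∞ is represented by float("inf"), and the example weight matrix is invented for illustration):

```python
INF = float("inf")

def floyd_warshall(weight):
    """All-pairs shortest paths; next_[v][w] is the relaxing intermediate
    vertex u, or -1 if the direct edge is already shortest."""
    n = len(weight)
    dist = [row[:] for row in weight]
    next_ = [[-1] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            for w in range(n):
                if dist[v][w] > dist[v][u] + dist[u][w]:   # relax
                    dist[v][w] = dist[v][u] + dist[u][w]
                    next_[v][w] = u
    return dist, next_

weight = [[0,   1,   INF, 4  ],
          [INF, 0,   2,   INF],
          [INF, INF, 0,   1  ],
          [INF, INF, INF, 0  ]]
dist, next_ = floyd_warshall(weight)
print(dist[0])   # [0, 1, 3, 4]
```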
Unfortunately, the simplicity of an algorithm does not guarantee its correctness. For instance, it can be immediately checked that it does not work for a graph containing a negative cycle. However, we can prove the correctness by the following theorem.

Theorem 7.7 (Correctness of the Floyd-Warshall algorithm). If the weighted graph G = (V, E, ω) with V = {V[0], V[1], . . . , V[n − 1]} does not contain negative cycles, the Floyd-Warshall algorithm computes all-pairs shortest paths in G. For each pair of indices v, w ∈ {0, 1, . . . , n − 1} it yields

dist[v][w] = δ(V[v], V[w]).
Proof. Because the relaxation goes through all possible edges starting at V[v], after the first two loops we simply have dist[v][w] = ω[v][w]. It represents the weights of all paths containing only two vertices. In each iteration of the outer loop the value of u controls the number of vertices in the paths to be considered: for fixed u all possible paths p = (e_0, e_1, . . . , e_u) connecting each pair e_0 = V[v], e_u = V[w] are checked. Since there are no negative cycles, we have

u ≤ n − 1,

because for a shortest path in a graph without negative cycles no vertex will be visited twice. Therefore, eventually we have dist[v][w] = δ(V[v], V[w]).
This elegant algorithm is derived from a common principle used in the area of dynamic programming^5, a subbranch of operations research [11]. It is formulated as follows.

Bellman's Optimality Principle. An optimum decision sequence has the property that, independently of the initial state and the first decisions already made, the remaining decisions starting from the achieved (and possibly non-optimum) state yield an optimum subsequence of decisions to the final state.

An equivalent formulation goes as follows: an optimum policy has the property that, independently of the initial state and the first decisions already made, the remaining decisions yield an optimum policy with respect to the achieved (and possibly non-optimum) state.

Thus if one starts correctly, Bellman's principle leads to the optimum path.
Complexity analysis. The Floyd-Warshall algorithm consists of two cascading loop nests, the first one running n^2 = |V|^2 times, the second one at most n^3 times [25, §6.1.23]:

T_FW(|V|) = O(|V|^3).   (7.18)

In a dense graph, where almost all vertices are connected directly to each other, we reach approximately the maximum possible number of edges e = O(n^2), cf. (7.3). Here the Floyd-Warshall algorithm is comparably efficient. However, if the number of edges is considerably smaller, the three loops waste running time.
7.5.4 Dijkstra algorithm

We now will study an efficient algorithm that solves the single-source shortest path problem. It thus answers the question: what are the shortest paths from a fixed vertex to all other vertices? This algorithm was developed by E. W. Dijkstra in 1959. It works, however, only if there are no negative weights.
Similarly to the Floyd-Warshall algorithm, the Dijkstra algorithm successively decreases by relaxation a distance array dist[] denoting the distances from the start vertex. But now it is not the distance of a special pair of vertices which is relaxed, but the currently smallest distance value. To get this value, a heap with the distance as key is used.

Since a distance may be changed several times while the algorithm is performed, the minimum heap used as temporary memory has to be updated frequently. Including this function we speak of a priority queue. It possesses the following methods:

insert(int vertex, int distance): Inserts vertex along with its distance from the source into the priority queue and reheaps the queue.
int extractMin(): Returns the index of the vertex with the currently minimum distance from the source and deletes it from the priority queue.

int size(): Returns the number of elements in the priority queue.

decreaseKey(vertex, newDistance): If newDistance is smaller than the current distance of the vertex in the queue, the distance is decreased to the new value and the priority queue is reheaped from that position on.
The data structures to implement the Dijkstra algorithm thus are as in the following diagram: the class PriorityQueue has the attributes Vertex[] vertex and int size, and the methods insert(index, dist), extractMin(), size(), and decreaseKey(vertex, newDist); the class Vertex has the attributes int index, double distance, Vertex predecessor, int queueIndex, and Vertex[] adjacency; the class Graph has the weight matrix double[][] ω and the method dijkstra(s).
Here adjacency denotes the adjacency list of the vertex. As usual, a vertex V[i] ∈ V is determined uniquely by its index i. The algorithm dijkstra is called with the index s of the source vertex as parameter. It knows the priority queue h (i.e., h is already created as an object). The vertex attribute queueIndex will be used by the Dijkstra algorithm to store the current index position of the vertex in the priority queue. The algorithm is shown in Figure 7.8. There are some Java applet animations of it on the web.
algorithm dijkstra (s)
/** finds shortest paths from source s in the graph with vertices V [i], i = 0, 1, . . . , n 1.*/
// initialize single source:
for (int i = 0; i < n; i++) {
V[i].setPredecessor(null);
V[i].setDistance(INFINITY);
}
Vertex source = V[s];
source.setDistance(0);
Vertex[] adj = source.getAdjacency();
for (int i = 0; i < adj.length; i++) {
adj[i].setPredecessor(source);
adj[i].setDistance(weight[source.getIndex()][adj[i].getIndex()]);
}
PriorityQueue q = new PriorityQueue(vertices);
while ( q.size() > 0 ) {
   Vertex u = V[q.extractMin()];
   for ( Vertex v : u.getAdjacency() ) {
      // relax:
      double d = u.getDistance() + weight[u.getIndex()][v.getIndex()];
      if ( v.getDistance() > d ) {
         v.setPredecessor(u);
         q.decreaseKey(v, d); // decrease distance and reorder priority queue
      }
   }
}
71
Algorithmic analysis
For the correctness of the Dijkstra algorithm see e.g. [22, Lemma 5.12].
Theorem 7.8. The Dijkstra algorithm based on a priority queue realized by a heap computes the single-source shortest paths in a weighted directed graph G = (V, E, ω) with non-negative weights in maximum running time T_Dijkstra(|V|, |E|) and with space complexity S_Dijkstra(|V|) given by

T_Dijkstra(|V|, |E|) = O(|E| log |V|),   S_Dijkstra(|V|) = Θ(|V|).   (7.19)
Proof. First we analyze the time complexity of the heap operations. We have n = |V| insert operations, at most n extractMin operations, and at most e = |E| decreaseKey operations.

Initializing the priority queue costs at most O(n log n) running time; initializing the vertices with their distance and predecessor attributes requires only O(n). Determining the minima with extractMin costs at most O(n log n) in total, because it is performed at most n times and each reheap costs O(log n). Each call of decreaseKey needs O(log n), since there are at most O(n) elements in the heap, which gives O(e log n) in total.
To calculate the space requirement S(n) we only have to notice that the algorithm itself needs the attributes dist and pred, each of length O(n), as well as the priority queue requiring two arrays of length n to store the vertices and their intermediate minimum distances from the source, plus a single integer to store its current length. In total this gives S(n) = O(n), and even S(n) = Θ(n), since this is the minimum space requirement. Q.E.D.
We remark that the Dijkstra algorithm can be improved, if we use a so-called Fibonacci heap.
Then we have complexity O(|E| + |V | log |V |) [22, 5.4 & 5.5].
Chapter 8
Dynamic Programming
Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems.1 Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original
problem. In contrast, dynamic programming is applicable when the subproblems are not independent,
that is, when subproblems share subsubproblems. In such a context, a divide-and-conquer algorithm
would do more work than necessary, repeatedly solving the common subsubproblems. A dynamic
programming algorithm solves every subsubproblem just once and then saves its answer in a table,
thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.
Dynamic programming is typically applied to optimization problems. In such problems there can
be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal
(minimum or maximum) value.
The development of a dynamic-programming algorithm can be broken into a sequence of four
steps.
1. Characterize the structure of the optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Steps 13 form the basis of a dynamic-programming solution to a problem. Step 4 can be omitted if
only the value of an optimal solution is required.
8.1
An optimum-path problem
To clarify the basic notions of dynamic programming, we consider a simple example. Consider the path network graph in figure 8.1, with the costs of each edge (subpath) being indicated. We are searching for a path from A to O minimizing the costs. Such a path is called an optimum path. The problem can be solved by exhaustion, i.e., by computation of all possible paths from A to O. However, the method of dynamic programming provides essential simplifications.
In figure 8.1, the path network is divided into several stages. For instance, the points D, E, and
F belong to stage 2. The points are also called states. Hence at stage 2, the system is either in state
D, E, or F. From a state in a given stage the system can change to a state in the next stage, due to
a decision. Being in state E at stage 2, the possible decisions are to take either state G or state H at
stage 3.
 state   decision   cost
 A       go to C    1
 C       go to D    3
 D       go to G    4
 G       go to L    4
 L       go to N    3
 N       go to O    2
         total costs: 17
8.1.1 General observations

From the above example we can derive general features of dynamic programming models.

A dynamic programming problem is divided into n + 1 stages (or subproblems) if there are n decisions to be made.

At each stage there are several states x_t, exactly one of which at each stage has to be run through by a solution of the problem.
Being in state x_t at stage t (t = 0, . . . , n), a decision (or action) a_t has to be made to achieve a state x_{t+1} at stage t + 1.

The total decision consists of a decision sequence a = (a_0, a_1, . . . , a_{n−1}). Here a_t is the decision at stage t, or in other words, the solution of subproblem t; a is also called the decision vector.

These connections can be summarized by the formula

x_{t+1} = f(x_t, a_t),   t = 0, . . . , n − 1.   (8.1)
Here f is the transition function, which changes state xt at stage t into state xt+1 depending on the
decision at . We call equation (8.1) the transition law, or law of motion, cf. [2]. It is important to
notice that state xt+1 at stage t + 1 solely depends on stage t, state xt , and decision at . Other states or
actions at stage t have no influence on xt+1 .
But what has to be optimized at all? We have to minimize the sum of the costs c_t that are caused by each decision a_t changing from state x_t to x_{t+1}. Formally we write

c(x_0, a) = ∑_{t=0}^{n−1} c_t(x_t, a_t) → min over a,   (8.2)

where a = (a_0, a_1, . . . , a_{n−1}) is the decision vector, and x_t with t > 0 results from the decision a_{t−1} and the state x_{t−1}. The function c is called the (total) cost function. It is separable, since it can be separated into the sum of the costs of the single stages, c(x_0, a) = ∑_t c_t(x_t, a_t). For a state sequence x_s x_{s+1} . . . x_t with 0 ≤ s < t < n, where each state x_k results from a decision according to the recursive transition law (8.1), we will also denote the cost function by c(x_s, x_{s+1}, . . . , x_t), and the minimum cost value from state x_s to x_t simply by c(x_s, x_t).
The dynamic programming method now consists of stepwise recursive determinations of optimal
subpaths. All optimal subpaths are computed recursively, i.e. by using previously computed optimal
subpaths. The recursion relies on the following fundamental principle.
Bellman's Optimality Principle. An optimum decision sequence has the property that, independently of the initial state and the first decisions already made, the remaining decisions starting from the achieved (and possibly non-optimum) state yield an optimum subsequence of decisions to the final state.
An equivalent formulation goes as follows. An optimum policy has the property that, independently of the initial state and the first decisions already made, the remaining decisions yield an optimum policy with respect to the achieved (and possibly non-optimum) state.
For our optimum-path problem the Bellman principle thus means: Each subpath to O, e.g., from D
to O, must be an optimum connection from D to O, no matter whether the actual total optimum path
from A to O runs through D or not.
8.1.2
We start at the end of the path at stage 6 and follow the path from the 5th to the 6th stage. At stage 5 the optimum path can only take the states M and N, i.e. x5 ∈ {M, N}, and the only possible decision leads to x6 = O. Thus we have

    c5 (M, O) = 1,    c5 (N, O) = 2.

These are the optimum costs from M and N, respectively. We write these values directly above them, as in fig. 8.3 (left).
Figure 8.3: The optimum subpaths from stage 5 to stage 6 (left), and from stage 4 to stage 6 (right).
Back at stage 4, there are the possible states I, K, and L which can be achieved by the optimum
path. From I there is only one state achievable to reach O, namely M. This yields the optimum costs
c(I, O) = c4 (I, M) + c5 (M, O) = 3 + 1 = 4, written above the letter I.
From K there are two possible decisions, K→M or K→N. Hence the total costs from K to O are either c(K, M, O) = 2 or c(K, N, O) = 4, i.e. the minimal costs from K to O are
c(K, O) = min[c(K, M, O), c(K, N, O)] = min[2, 4] = 2.
Analogously,
c(L, O) = min[c(L, M, O), c(L, N, O)] = min[7, 5] = 5.
This concludes the computation of the optimum subpaths from stage 4 to stage 6, cf. fig. 8.3 (right).
In a similar manner we achieve recursively the optimum subpaths from stage 3, 2, 1, and 0,
resulting in the optimum path A→B→E→H→K→M→O, with total costs c(A, O) = 8.
Figure 8.4: The optimum subpaths from stage 3 to stage 6 (left), and from stage 2 to stage 6 (right). Note that by stage 3, L→N→O cannot belong to the total optimum path!
Remark. In the consideration of the transition from stage 3 to stage 4 it is obvious that the cheapest subpath between two stages (here F→G with cost 1) need not necessarily lie on an optimum subpath.
8.2
To sum up, dynamic programming can be applied to a system which moves through a sequence of states x0 , x1 , . . . , xn at the stages (or times) t = 0, . . . , n, where the transition from a state xt at stage t depends on the decision (or action) at and the time t according to the transition law

    xt+1 = f (xt , at , t),    t = 0, . . . , n − 1.    (8.3)
Figure 8.5: The optimum subpaths from stage 1 to stage 6 (left), and the optimum path from A to O (right).
The problem is to minimize the total costs c(x0 , a) caused by the decision sequence a = (a0 , . . . , an−1 ) on the initial state x0 ,

    c(x0 , a) = ∑_{t=0}^{n−1} ct (xt , at ) → min_a .    (8.4)
Such a problem is called a sequential or multistage decision problem, where the time variable t (or an
arbitrary index i) is used to order the sequence. The dynamic programming approach then consists of
solving the recursive equation
    gt (xt ) = min_{at ∈ A(xt )} [ ct (xt , at ) + gt+1 ( f (xt , at , t)) ]    (8.5)

for each state xt at stage t. Here A(xt ) is the decision space, i.e. the set of all possible transitions (paths) from xt to a state xt+1 . Equation (8.5) is called the Bellman functional equation. The minimum value gt (xt ) thus yields the optimum path from xt to the end xn .
This value has to be computed for all possible states xt . Starting at the end, i.e. with t = n − 1, one
computes successively the optima for all n stages. The optimum path from stage 0 with initial state
x0 to the final state xn at stage n then is the solution of the problem.
Hence it is natural to treat a sequence of decisions by reversing the order. It is for this reason that
the method is also called backwards induction.
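The backward induction scheme (8.5) can be put into a few lines of code. The following Python sketch is our own illustration, not part of the text; the input format (a list of state sets per stage and a dictionary of admissible transition costs) and the tiny example graph at the end are hypothetical.

```python
# Backward induction for a multistage decision problem: a minimal sketch.
# Assumed (hypothetical) input format: states[t] lists the admissible states
# at stage t, and cost[(u, v)] is the cost of the decision leading from state
# u to state v; pairs missing from the dictionary are inadmissible.

def backward_induction(states, cost):
    n = len(states) - 1                  # stages 0, ..., n
    g = {s: 0 for s in states[n]}        # g_n = 0 at the final stage
    decision = {}
    for t in range(n - 1, -1, -1):       # backwards: t = n-1, ..., 0
        for u in states[t]:
            # Bellman functional equation (8.5):
            # g_t(u) = min over admissible v of [ c_t(u, v) + g_{t+1}(v) ]
            candidates = [(cost[(u, v)] + g[v], v)
                          for v in states[t + 1] if (u, v) in cost]
            g[u], decision[u] = min(candidates)
    return g, decision

# A hypothetical three-stage example: A -> {B, C} -> D.
states = [["A"], ["B", "C"], ["D"]]
cost = {("A", "B"): 2, ("A", "C"): 4, ("B", "D"): 3, ("C", "D"): 2}
g, decision = backward_induction(states, cost)

# Follow the stored decisions forward to obtain the optimum path.
path = ["A"]
while path[-1] in decision:
    path.append(decision[path[-1]])
```

Following the stored decisions forward reproduces the optimum decision sequence, exactly as in the stepwise construction of the optimum path in section 8.1.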
8.3
Production smoothing
Production smoothing problems occur frequently, if the demand varies over several periods and the
production has to be adjusted because of limited storage capacity. In the sequel we will consider a
concrete problem, formulate the corresponding dynamic programming problem and solve it according
to section 8.1.
8.3.1
The problem
A firm plans the production for four successive periods. The total demand for the produced commodities, btotal = 90 qu (quantity units), is distributed over the four periods t = 0, 1, 2, 3 as follows.
b0 = 10 qu,
b1 = 20 qu,
b2 = 20 qu,
b3 = 40 qu.
(8.6)
The production quantity x per period can be 0, 10, 20, or 30 qu; the variable production costs depend on x as follows.

    x                   0    10    20    30
    variable costs      0     5    11    26

This yields the total production costs cp (x) as the sum of fixed and variable costs as

    x          0    10    20    30
    cp (x)    11    16    22    37

The total costs, consisting of the production costs cp (xt ) and the storage costs cs (lt ) of the periods t = 0, . . . , 3, are to be minimized:

    c(l0 , x) = ∑_{t=0}^{3} (cp (xt ) + cs (lt )) → min_x .    (8.7)
8.3.2
Let lt denote the storage inventory at the beginning of period t. Then

    lt+1 = lt + xt − bt .    (8.8)
This is the classical storage balance equation of discrete-time production planning, saying that the
storage inventory at the beginning of the period t is given by the storage inventory at the beginning of
the preceding period, increased by the production quantity xt and decreased by the demand bt . In the
[Figure: the cumulative demand ∑i bi (in qu) over the periods, rising from 10 qu to the total of 90 qu.]
Figure 8.7: Course of production and storage, and the relationship of periods and stages.
context of the dynamic programming method this means that state lt+1 at stage t + 1 depends only on the state lt and the decision xt of the preceding stage. Figure 8.7 illustrates the problem.
The costs of a period t consist of the production costs c p (xt ) and the storage costs cs (lt ) of the
period,
ct (lt , xt ) = c p (xt ) + cs (lt ) = c p (xt ) + 0.2 lt .
According to equ. (8.7) the total cost minimization problem is stated by
    c(l0 , x) = ∑_{t=0}^{3} ct (lt , xt ) → min_x .    (8.9)
Let gt (lt ) be the minimum cost necessary to reach the sought-for final storage inventory state l4 , starting with the inventory lt , without violating one of the constraints 0 ≤ xt ≤ 30, 0 ≤ lt ≤ lmax . With (8.8) this yields the recursion formula

    gt (lt ) = min_{xt ∈ At (lt )} [ ct (lt , xt ) + gt+1 (lt + xt − bt ) ]    (8.10)
with the decision spaces At (lt ) ⊆ {0, 10, 20, 30}. The equation says that from the storage inventory lt at the beginning of period t, the target l4 is reached at minimum cost by minimizing the sum of the production and storage costs to reach lt+1 and the minimum costs to reach l4 from lt+1 .
Equation (8.8) is the transition law of our multistage decision problem, i.e. f (xt , lt ) = lt + xt − bt ,
and (8.10) is its Bellman functional equation.
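Equations (8.8) and (8.10) translate directly into code. The following Python sketch solves the concrete production-smoothing problem of this section by backward induction; the function and variable names are ours, the data are those of the text.

```python
# Production smoothing by backward induction, following (8.8) and (8.10).
cp = {0: 11, 10: 16, 20: 22, 30: 37}   # total production costs per period
b = [10, 20, 20, 40]                   # demands b_0, ..., b_3
lmax = 20                              # storage capacity in qu

def solve():
    g = {0: 0.0}                       # g_4(0) = 0: the storage ends empty
    best = [dict() for _ in range(4)]  # optimum decisions x_t*(l_t)
    for t in range(3, -1, -1):
        g_next, g = g, {}
        states = [0] if t == 0 else range(0, lmax + 10, 10)
        for l in states:
            cands = []
            for x in cp:                        # x_t in {0, 10, 20, 30}
                l1 = l + x - b[t]               # transition law (8.8)
                if l1 in g_next:                # admissible successor state
                    cands.append((cp[x] + 0.2 * l + g_next[l1], x))
            if cands:
                g[l], best[t][l] = min(cands)   # Bellman equation (8.10)
    return g[0], best

total, best = solve()

# Read off the optimum production plan by running forward through (8.8):
plan, l = [], 0
for t in range(4):
    x = best[t][l]
    plan.append(x)
    l = l + x - b[t]
```

Running this reproduces the backward tabulation of section 8.3.4: minimum total costs of 109 cu with the production plan (20, 20, 20, 30).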
8.3.3
To solve the production-smoothing problem, we sketch its decision graph. For this it is convenient first to list the admissible states lt at each stage t. Since at the beginning the storage has to be empty, the state at stage t = 0 must be zero, l0 = 0. At stage t = 1, the possible production quantities are 0, 10, 20, or 30, but the demand is b0 = 10; this yields the possible storage states l1 = 0, 10, or 20, having produced lots of 10, 20, or 30 qu, respectively. Analogously, we obtain the possible states at stages t = 2 and t = 3, restricted by the storage capacity of lmax = 20 qu; the state l4 = 0
Figure 8.8: (a) The admissible states lt at each stage, given by the decision spaces At (lt ) ⊆ {0, 10, 20, 30}. (b) The possible decisions.
is prescribed by the condition that the storage has to be empty at the end of the period. This yields
Fig. 8.8 (a). The possible decisions then are drawn, yielding Fig. 8.8 (b).
In a tedious way we then attach the total cost c p (xt ) + cs (lt ), with cs (lt ) = 0.2 lt , of each decision
xt . At stage t = 0, the storage costs cs (l0 ) are zero, since the storage is empty, cs (l0 ) = 0. There are
three possible decisions, x0 = 10, 20, or 30, each causing production costs of c p (x0 ) = 16, 22, or 37
cu, respectively. At the next stage t = 1, the possible decisions for state l1 = 0 are x1 = 20 or 30,
yielding total costs of c p (x1 ) + cs (0) = 22 or 37 cu. For state l1 = 10 we have storage costs cs (10) =
2, and thus the total costs of the three decisions are 16 + 2, 22 + 2, or 37 + 2, respectively. For l1
= 20, the storage costs are cs = 4, i.e. the total costs are 15, 20, or 26. Analogous calculations yield
Fig. 8.9 (a). The optimal subpaths are achieved by computing backwards and attaching the minimal costs to each state, cf. Fig. 8.9 (b).
Figure 8.9: (a) The decision graph with the respective costs. (b) The optimal paths from each state to the final state are
marked.
8.3.4
We begin at stage 4 (the end of period 4). Since the state l4 of stage 4 is the final state l4 = 0 and thus
already reached, we have trivially g4 (0) = 0.
At stage 3, i.e. the end of period 3, the demand b3 = 40 has to be covered by the decision x3 .
By the storage balance equation (8.8) we have l4 = l3 + x3 − b3 , or x3 = 40 − l3 . Together with the Bellman equation this yields

    g3 (l3 ) = min_{x3 ∈ {0,10,20,30}} [ cp (x3 ) + 0.2 l3 + g4 (l3 + x3 − 40) ] .
In the following table the production costs cp (x3 ) for each decision x3 are listed in the right column, and the storage costs cs (l3 ) = 0.2 l3 for each storage quantity l3 in the lowest row. The aim of the first iteration is to list the values of g3 (l2 + x2 − 20) for the admissible combinations of x3 and l3 (i.e., x3 ∈ {0, 10, 20, 30} and 0 ≤ l3 ≤ lmax = 20). First the values of the square brackets are computed (note that g4 (l4 ) = 0). For this purpose we determine the term 0.2 l3 by the following table.
    x3 \ l3       0     10     20   |  cp (x3 )
      0           –      –      –   |   11
     10           –      –      –   |   16
     20           –      –      0   |   22
     30           –      0      –   |   37
    0.2 l3        0      2      4   |

The entries are the values g4 (l3 + x3 − 40) = 0 for the admissible combinations with l3 + x3 = 40.
g3 (0) is not defined, because x3 = 40 is not in the decision space: the production capacity is at most x3 = 30. To obtain the optimum decision x3 depending on the storage quantity l3 , i.e. x3 (l3 ), as well as the values g3 (l3 ), we have to determine the minimum of each l3 -column (framed in the following table).
    x3 \ l3       0     10     20
      0           –      –      –
     10           –      –      –
     20           –      –     26
     30           –     39      –
    x3 (l3 )      –     30     20
    g3 (l3 )      –     39     26
Thus the values of g3 (l2 + x2 − 20) for the admissible combinations of x3 and l3 are 39 and 26.
Analogously, for stage 2 we have l3 = l2 + x2 − 20, and therefore

    g2 (l2 ) = min_{x2 ∈ {0,10,20,30}} [ cp (x2 ) + 0.2 l2 + g3 (l2 + x2 − 20) ] .
We obtain the following left table for the values of g3 (l2 + x2 − 20) depending on l2 and x2 , viz. 39 or 26. This yields the right table for the optimum decision x2 (l2 ) for each l2 and the corresponding values of g2 (l2 ).
Left table (values of g3 (l2 + x2 − 20)):

    x2 \ l2       0     10     20   |  cp (x2 )
      0           –      –      –   |   11
     10           –      –     39   |   16
     20           –     39     26   |   22
     30          39     26      –   |   37
    0.2 l2        0      2      4   |

Right table (total costs, optimum decision, and g2 ):

    x2 \ l2       0     10     20
      0           –      –      –
     10           –      –     59
     20           –     63     52
     30          76     65      –
    x2 (l2 )     30     20     20
    g2 (l2 )     76     63     52
We obtain the following left table for the values of g2 (l1 + x1 − 20) depending on l1 and x1 , and the right table for the optimum decision and the corresponding values of g1 (l1 ).
Left table (values of g2 (l1 + x1 − 20)):

    x1 \ l1       0     10     20   |  cp (x1 )
      0           –      –     76   |   11
     10           –     76     63   |   16
     20          76     63     52   |   22
     30          63     52      –   |   37
    0.2 l1        0      2      4   |

Right table (total costs, optimum decision, and g1 ):

    x1 \ l1       0     10     20
      0           –      –     91
     10           –     94     83
     20          98     87     78
     30         100     91      –
    x1 (l1 )     20     20     20
    g1 (l1 )     98     87     78
We obtain the following tables for the values of g1 (l0 + x0 − 10), as well as for the optimum decision and the corresponding values of g0 (l0 ).

Left table (values of g1 (l0 + x0 − 10)):

    x0 \ l0       0   |  cp (x0 )
      0           –   |   11
     10          98   |   16
     20          87   |   22
     30          78   |   37
    0.2 l0        0   |

Right table (total costs, optimum decision, and g0 ):

    x0 \ l0       0
      0           –
     10         114
     20         109
     30         115
    x0 (l0 )     20
    g0 (l0 )    109
The minimum total costs are thus g0 (0) = 109 cu, attained by the optimum production plan

    t          0     1     2     3
    lt         0    10    10    10      (l4 = 0)
    xt        20    20    20    30
    bt        10    20    20    40

[Figure: cumulative production x and cumulative demand ∑ bt (in qu) over time.]
8.4 The travelling salesman problem
In the travelling salesman problem a salesman must visit n cities. There is a cost ci j to travel from city i to city j, and the salesman wishes to make a tour whose total cost is minimum (where the total cost is the sum of the individual costs between the cities). (Besides: "cost" may be replaced by "distance" or by "time".)
In the general formulation of the problem we admit the given cost matrix (ci j ) to be non-symmetric (ci j = c ji need not hold, i.e. it may be cheaper to travel from city i to j than to come from j to i). Moreover, the triangle inequality (cik ≤ ci j + c jk ) need not hold as well. Also, the costs ci j may be ∞, which means that there is no direct way from city i to city j.
Figure 8.11: A travelling salesman problem and its solution, a minimum-cost tour with cost 7.
The travelling salesman problem then is to search for a permutation π : {1, . . . , n} → {1, . . . , n} that minimizes the cost function

    c(π) = ∑_{i=1}^{n−1} cπ(i),π(i+1) + cπ(n),π(1) → min .    (8.11)

Since there may be ∞-entries in the cost matrix (ci j ), it is possible that a solution does not exist at all, i.e. that each permutation leads to an infinite cost value c(π).
A naïve algorithm would go through all n! permutations and compute c(π) each time. (It suffices to consider only permutations with π(1) = 1; there are (n − 1)! of those.) Therefore, this algorithm has complexity O(n!), even worse than exponential complexity.
By dynamic programming this naïve ansatz can be improved. However, the resulting algorithm will still have an exponential complexity. It is widely believed that there does not exist an algorithm for the travelling salesman problem with better complexity at all! This problem is one of a class of problems which are called NP-complete.
8.4.1
If an optimum round tour starts at city 1 and then visits city k, then the subtour from city k through the cities {2, . . . , n} \ {k} back to city 1 must be optimal, too. Thus the optimality principle shines through, which we will apply as follows.
For i ∈ {1, . . . , n} and S ⊆ {1, . . . , n}, let g(i, S) be the length of the shortest path that starts at city i, then visits each city in S exactly once, and terminates at city 1. Note that a solution of the travelling salesman problem is given if the length g(1, {2, . . . , n}) is computed. The function g(i, S) can be described recursively:

    g(i, S) = ci1                                    if S = ∅,
    g(i, S) = min_{j ∈ S} [ ci j + g( j, S \ { j}) ]    if S ≠ ∅.    (8.12)
Our dynamic programming algorithm needs a table for the g(i, S)-values, where the combinations with 1 ∈ S or i ∈ S are not needed. The algorithm works as follows.

    for ( i = 2; i ≤ n; i++ )  g(i, ∅) = ci1 ;
    for ( k = 1; k ≤ n − 2; k++ ) {
        for ( S with |S| = k and 1 ∉ S ) {
            for ( i ∈ {2, . . . , n} \ S ) {
                compute g(i, S);
            }
        }
    }
    compute g(1, {2, 3, . . . , n});
The table magnitude is determined by (number of i's) · (number of S's) ≤ n · 2n . To compute a table entry, a loop has to be programmed which searches the minimum over all j ∈ S. This has complexity O(n). Hence the total complexity is given by

    TTSP (n) = O(n2 2n ) = O(2n+2 log2 n ).    (8.13)

This is still an exponential complexity, but it is faster than O(n!), cf. 7.4.1.
Example 8.1. Let be given the cost matrix

        ( 0  10  15  20 )
    C = ( 5   0   9  10 )      (8.14)
        ( 6  13   0  12 )
        ( 8   8   9   0 )
Figure 8.12: A travelling salesman problem and its solution, a minimum-cost tour 1→2→4→3→1 with cost 35.
g(2, ∅) = c21 = 5,    g(3, ∅) = c31 = 6,    g(4, ∅) = c41 = 8,
as well as
g(2, {3}) = c23 + g(3, ∅) = 15,    g(2, {4}) = c24 + g(4, ∅) = 18,
g(3, {2}) = c32 + g(2, ∅) = 18,    g(3, {4}) = c34 + g(4, ∅) = 20,
g(4, {2}) = c42 + g(2, ∅) = 13,    g(4, {3}) = c43 + g(3, ∅) = 15,
and hence
g(2, {3, 4}) = min[c23 + g(3, {4}), c24 + g(4, {3})] = 25,
g(3, {2, 4}) = min[c32 + g(2, {4}), c34 + g(4, {2})] = 25,
g(4, {2, 3}) = min[c42 + g(2, {3}), c43 + g(3, {2})] = 23,
g(1, {2, 3, 4}) = min[c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3})] = min[35, 40, 43] = 35.
Storing the subpaths that yield the respective solutions, we achieve the optimum tour: it is 1→2→4→3→1.
Chapter 9
Simplex algorithm
Let us introduce the simplex algorithm by applying it to a concrete optimization problem. Afterwards
we will take a look at some of its general properties.
Example 9.1. (Production scheduling) A firm gains a profit of 2 ke1 with product 1, and a profit of 2.2 ke with product 2. To produce these products there are two machines A and B available. Machine A can only be used up to 100 hours, and machine B up to 80 hours. (The remaining hours are needed for maintenance.) To produce product 1, machine A is needed 1 hour a week and machine B 2 hours; the respective numbers for product 2 are 2 hours on A and 1 hour a week on B. There are two resource materials R and S needed. R is available only up to 960 kg a week, and material S only up to 1200 kg a week. Producing product 1 requires 16 kg of R and 20 kg of S, whereas product 2 requires 15 kg of R and 16 kg of S. The production schedule maximizing the profit is sought.
9.1
Mathematical formulation
1. (Determination of the objective function) What is the function that has to be optimized? In the
example it is the profit function
z = 2x1 + 2.2x2 ,
(9.1)
where x1 is the number of instances of product 1 and x2 the respective number of product 2.
It is called the objective function2 of the problem. Observe that quite naturally x1 and x2 are
nonnegative,
    x1 , x2 ≥ 0.    (9.2)
These inequalities are called primary constraints.
2. (Determination of the constraints). Once the quantities x1 and x2 are defined, the constraints
are directly derived:
     x1  +  2x2  ≤  100
    2x1  +   x2  ≤   80
   16x1  + 15x2  ≤  960      (9.3)
   20x1  + 16x2  ≤ 1200
We can rewrite equations (9.1), (9.2) and (9.3) in matrix notation to reformulate the optimization problem as

    z = c⊤ x → max,    under the constraints Ax ≤ b, x ≥ 0,    (9.4)
1 1 ke = 1000 e
2 objective function in German: Zielfunktion
where

    c⊤ = (2, 2.2),    x = ( x1 ) ,
                          ( x2 )

        (  1    2 )          (  100 )
    A = (  2    1 ) ,    b = (   80 ) .    (9.5)
        ( 16   15 )          (  960 )
        ( 20   16 )          ( 1200 )
(Note that c⊤ is a row vector, ⊤ denoting the transpose of the column vector c.) The crucial trick of the simplex algorithm now is to introduce some extra slack variables yi to transform the inequalities (9.3) into equalities:

     x1  +  2x2  + y1                  = 100
    2x1  +   x2       + y2             =  80
   16x1  + 15x2            + y3        = 960      (9.6)
   20x1  + 16x2                 + y4   = 1200
It is notationally convenient to record the information content of (9.4) in a so-called simplex tableau, as follows.

          x1    x2
    z      2   2.2 |    0
    y1     1     2 |  100
    y2     2     1 |   80     (9.7)
    y3    16    15 |  960
    y4    20    16 | 1200
9.2
We now write the formulation in a more general manner. Let be given the optimization problem

    z = c⊤ x → max,    Ax ≤ b,    x ≥ 0,    (9.8)

where c⊤ = (c1 , c2 , . . . , cn ), x = (x1 , . . . , xn )⊤ , b = (b1 , . . . , bm )⊤ , and A = (ai j ) is the (m × n)-matrix of the constraint coefficients.    (9.9)
An optimization problem in the form of equation (9.8) is called a linear optimization problem. It is
linear, because both the objective function z and the constraints depend linearly on x. Methods to
solve this problem are put together under the notion of linear programming or linear optimization.
The simplex tableau of (9.8) is given by

          x1  · · ·  xn                         x1    · · ·   xn
    z     c1  · · ·  cn  | b0             z     c1    · · ·   cn  | b0
    y1                   | b1             y1    a11   · · ·   a1n | b1     (9.10)
    ...        A         | ...            ...                     | ...
    ym                   | bm             ym    am1   · · ·   amn | bm
At the start we have b0 = 0. We will call the top row of this tableau the z-row. The simplex algorithm
now consists of the following steps which are repeated until all the entries in the z-row of the tableau
are negative.
1. Determine the pivot column. The pivot column is the column j p of a maximum positive value in the z-row,

    j p ∈ { j : c j > 0 ∧ c j = max_k [ck ] }.    (9.11)
2. Determine the pivot row and the pivot element. The pivot row is the row i p for which the quotient bk /ak, j p with ak, j p > 0 is minimal,

    i p ∈ { i : ai, j p > 0 ∧ bi /ai, j p = min_k [ bk /ak, j p ] }.    (9.12)

If there is no positive matrix entry ai, j p > 0, the algorithm stops: there does not exist a solution. The matrix entry ai p , j p is the pivot element.3 Save the value of the pivot element, d ← ai p , j p .
3. Exchange the pivot row and column variables. Exchange yi p ↔ x j p .
4. Change the z-row values. The z-row values c j are changed according to the following cases:

    c j ← −c j /d                   if j = j p (c j is in the pivot column),
    c j ← c j − c j p ai p j /d     (rectangle rule) otherwise.    (9.13)
5. Change the matrix entries. The matrix entries ai j are changed according to the following cases:

    ai j ←  1/d         if i = i p and j = j p (ai j is the pivot element),
    ai j ←  ai j /d     if i = i p and j ≠ j p (ai j is in the pivot row),    (9.14)
    ai j ← −ai j /d     if i ≠ i p and j = j p (ai j is in the pivot column),

and otherwise by the rectangle rule

    w ← w − u v / d .    (9.16)

Here d = ai p j p is the (old) value of the pivot element, i ≠ i p , j ≠ j p , and alternatively one of each of the following rows holds:

    u = ai p j ,    v = ai j p ,    w = ai j ;
    u = bi p ,      v = ai j p ,    w = bi ;      (9.17)
    u = ai p j ,    v = c j p ,     w = c j .
To summarize, the simplex algorithm consists of the following major parts:
1. find the pivot element;
2. exchange the pivot variables yi p ↔ x j p ;
3. replace the pivot element by its reciprocal value 1/d;
4. multiply the rest of the pivot row by the reciprocal pivot value 1/d;
5. multiply the rest of the pivot column by the negative reciprocal pivot value −1/d;
6. transform all remaining entries by the rectangle rule (9.16).
3 pivot, literally in German: Angel, Dreh- und Angelpunkt; in the context of the simplex method, however: Pivot
Applied to example 9.1 this works as follows.

          x1    x2                   bi /ai, j p
    z      2   2.2∗ |    0
    y1     1     2  |  100           50 ←
    y2     2     1  |   80           80
    y3    16    15  |  960           64
    y4    20    16  | 1200           75

We look for the maximal positive value in the z-row, marked by ∗. This determines the pivot column. Then the right column (the b-values) is divided by the pivot column; the results are listed right of the tableau. The smallest of them, marked by ←, determines the pivot row. The pivot element is ai p j p = 2 with i p = 1 and j p = 2.
Now the entry transformations are executed. The saved value is d = 2. Then the first z-row value, c1 = 2, is computed by the rectangle rule (9.16) as c1 ← 2 − 2.2 · (1/2) = 0.9; c2 is in the pivot column, hence it gets the value c2 ← −2.2/2 = −1.1. Similarly, the pivot row is divided by 2. To the rest we apply the rectangle rule. Exchanging the variables yi p = y1 and x j p = x2 yields the tableau below.
          x1     y1                   bi /ai, j p
    z    0.9∗  −1.1 | 110
    x2   0.5    0.5 |  50             100
    y2   1.5   −0.5 |  30              20 ←
    y3   8.5   −7.5 | 210              24.7
    y4    12     −8 | 400              33.3

Now the procedure repeats. The pivot column is determined by the only (hence maximal) positive value in the z-row, c1 = 0.9; dividing the right b-values by the pivot column yields the values right of the tableau. By the algorithm we find the following tableau.
           y2      y1
    z    −0.6    −0.8 | 128
    x2  −0.33    0.67 |  40
    x1   0.67   −0.33 |  20      (9.18)
    y3  −5.67   −4.67 |  40
    y4     −8      −4 | 160
All elements c j of the z-row are now negative. Hence the algorithm stops, the optimum is achieved.
What does this tableau tell us? Its interpretation gives the following results:
x2 = 40: produce 40 units of product 2;
x1 = 20: produce 20 units of product 1;
y3 = 40: 40 kg of raw material R are left over;
y4 = 160: 160 kg of raw material S are left over;
y1 = 0: machine A works to capacity;
y2 = 0: machine B works to capacity;
b0 = 128: the profit is 128 ke.
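The pivoting steps can also be put into code. The following Python sketch is our own minimal implementation of the tableau transformations (9.11) to (9.17); the function name and the data layout are assumptions, and no degeneracy handling is included.

```python
# The simplex algorithm with variable exchange, following steps 1-5 above.
# Maximize z = c.x subject to A x <= b, x >= 0 (all b_i >= 0 assumed).

def simplex(c, A, b):
    m, n = len(A), len(c)
    rows = [f"y{i+1}" for i in range(m)]      # basic (slack) variables
    cols = [f"x{j+1}" for j in range(n)]      # nonbasic variables
    c, A, b, z = c[:], [r[:] for r in A], b[:], 0.0
    while max(c) > 1e-9:
        jp = max(range(n), key=lambda j: c[j])           # pivot column (9.11)
        ratios = [(b[i] / A[i][jp], i) for i in range(m) if A[i][jp] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, ip = min(ratios)                              # pivot row (9.12)
        d = A[ip][jp]
        rows[ip], cols[jp] = cols[jp], rows[ip]          # exchange variables
        Ao, bo, co = [r[:] for r in A], b[:], c[:]       # old values
        A[ip][jp] = 1 / d                                # pivot element
        for j in range(n):
            if j != jp:
                A[ip][j] = Ao[ip][j] / d                 # pivot row
                c[j] = co[j] - co[jp] * Ao[ip][j] / d    # rectangle rule (9.13)
        b[ip] = bo[ip] / d
        c[jp] = -co[jp] / d
        for i in range(m):
            if i != ip:
                A[i][jp] = -Ao[i][jp] / d                # pivot column
                for j in range(n):
                    if j != jp:
                        A[i][j] = Ao[i][j] - Ao[ip][j] * Ao[i][jp] / d
                b[i] = bo[i] - bo[ip] * Ao[i][jp] / d    # rectangle rule (9.16)
        z += co[jp] * bo[ip] / d                         # profit gained
    solution = {v: 0.0 for v in cols}                    # nonbasic vars are 0
    solution.update({rows[i]: b[i] for i in range(m)})
    return z, solution

z, sol = simplex([2, 2.2], [[1, 2], [2, 1], [16, 15], [20, 16]],
                 [100, 80, 960, 1200])
```

Run on the data of example 9.1, this reproduces the final tableau's interpretation: x1 = 20, x2 = 40, y3 = 40, y4 = 160, and a profit of 128 ke.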
9.3
To analyze the simplex algorithm, we consider the constraints of example 9.1 in more detail. First,
we rewrite the inequalities (9.3) as inequalities with respect to straight lines:
    x2 ≤ 50 − (1/2) x1 ,      esp.: x1 = 0 ⇒ x2 ≤ 50,    x1 = 100 ⇒ x2 ≤ 0,
    x2 ≤ 80 − 2 x1 ,          esp.: x1 = 0 ⇒ x2 ≤ 80,    x1 = 40 ⇒ x2 ≤ 0,
    x2 ≤ 64 − (16/15) x1 ,    esp.: x1 = 0 ⇒ x2 ≤ 64,    x1 = 60 ⇒ x2 ≤ 0,
    x2 ≤ 75 − (5/4) x1 ,      esp.: x1 = 0 ⇒ x2 ≤ 75,    x1 = 60 ⇒ x2 ≤ 0.
On the right hand side the intersections of the straight lines with the x1 - and x2 -axes are given.
Graphically, the situation is given in figure 9.1. Each constraint is represented by its straight line. All together, the constraints form the shaded region of possible solutions (x1 , x2 ). (Note that a solution is a point in the diagram!) Also you find various parallel lines for the profit value z (contour lines). On each line the profit is equal. The highest line meeting the shaded region yields the maximum profit.

Figure 9.1: Left figure: the constraint lines of example 9.1; the respective inequality ≤ is geometrically represented by the region below the line, the inequalities x1 , x2 ≥ 0 by the positive quadrant above the x1 -axis and right of the x2 -axis. The shaded region therefore denotes all possible solutions (x1 , x2 ). Right figure: the same sketch with some possible z-lines added (dashed); the maximum meeting the shaded region (dashed-dotted line) is the solution.
9.3.1
Figure 9.2: A simplex in R, in R2 , and in R3 . A simplex in Rn has at most n + 1 vertices.
9.4
Duality
What do we have to do if we want to solve a linear minimum problem? The simplex algorithm
can only be applied to linear maximum problems, z → max. A first idea is to consider the modified objective function z′ = −z, but this leads into a dead end: usually the relevant coefficients then are negative and we have no chance to choose a pivot column.
The solution is duality. In general, duality is a fascinating and powerful relation between two
objects being in different classes (or contexts) but having the same or equivalent properties. For
example, in every-day life the mirror reflection is a duality, since the mirror image of a geometrical
object can be uniquely mapped to its original . . . and vice versa, by the same operation. Another example is the correspondence between a linear maximum problem and its dual minimum problem:

                     maximum problem        dual problem
    constraints:     A x ≤ b                A⊤ y ≥ c
    variables:       x ≥ 0                  y ≥ 0          (9.19)
Here A⊤ denotes the conjugate (that is, since we only deal with real matrices, simply the transpose) of A, i.e. the matrix resulting from interchanging its rows and columns. In particular, the following
correspondences between the variables and the dual slack variables can be shown [10]:

    minimum problem                            dual problem
    variable yi                                slack variable of the i-th constraint
    slack variable of the j-th constraint      variable x j          (9.20)
Example 9.2. We first consider a trivial minimum problem to clarify the principles. Assume a company has to buy 5 qu of a product whose production costs are limited by the condition that three qu of it cannot be got for less than 12 e. What is the price y per qu of the product to minimize the total costs? (It is immediately clear that the solution is y = 4, but let us see how the solution strategy works!)
Mathematical formulation: The objective function expressing the total costs is z∗ (y) = 5y, and the constraint is given by 3y ≥ 12. In matrix notation we thus achieve

    z∗ (y) = b⊤ y → min,    A⊤ y ≥ c,    (9.21)

with the one-dimensional vectors y (= (y)), b = 5, c = 12, and the (1 × 1)-matrix A = 3. Since A⊤ = A = 3, we achieve the dual problem

    z(x) = 12x → max,    3x ≤ 5.    (9.22)
         x                     y
    z   12 | 0          z    −4 |  20
    y    3 | 5          x   1/3 | 5/3

The maximum problem thus is solved for x = 5/3, and the original minimum problem for y = 4, yielding total costs of z∗ (4) = 5 · 4 = 20.
9.4.1
(9.23)
is optimized with respect to two different points of view: the minimum problem (9.21) seeks to minimize the production costs for a given quantity (x = 5 qu) and a lower price limit (3y ≥ 12 e); the dual maximum problem (9.22) aims to maximize the quantity x for a given price (y = 12 e/qu) and an upper quantity limit (3x ≤ 5 qu).
Therefore, the minimum problem (9.21) is related to the demand-sided viewpoint (the producer pays for resources, i.e. has costs), whereas the dual problem (9.22) is viewed by the supplier (who gains the production prices). Duality in economical optimization problems often reflects the different points of view of a demander and a supplier.
Example 9.3. Let be given the minimum problem

    z∗ (y) = 100y1 + 80y2 + 960y3 + 1200y4 → min    (9.24)

under the constraints

     y1 + 2y2 + 16y3 + 20y4 ≥ 2,
    2y1 +  y2 + 15y3 + 16y4 ≥ 2.2.    (9.25)

In other words, we have z∗ (y) = b⊤ y under the constraints A⊤ y ≥ c, where

    b⊤ = (100, 80, 960, 1200),    A⊤ = ( 1  2  16  20 )      c = (  2  )
                                       ( 2  1  15  16 ) ,        ( 2.2 ) .

Thus we see immediately that the dual problem is exactly example 9.1 (p. 85). Solving it by the simplex algorithm yields the final tableau (9.18), from which we can read the solution for y in the z-row:

    y1 = 0.8,    y2 = 0.6,    y3 = y4 = 0,    or y⊤ = (0.8, 0.6, 0, 0).
Example 9.4. A firm produces a product from two raw materials, where for each 2 kg of the first material there is always at least 1 kg of the second material. At least 50 kg of the first material should be bought, and at least 100 kg of both materials in total. The price of the first material is 6 e/kg, whereas the price of the second material is 9 e/kg. What quantities of both materials must be bought such that the costs are minimal?
Solution. First we formulate the problem mathematically. Denote the quantity in kg of the first material by y1 , and the quantity of material 2 by y2 . The first condition mentioned means that we always have at most twice as much of material 1 as of material 2, i.e., y1 ≤ 2y2 . This can be rewritten as a restriction in the form 2y2 − y1 ≥ 0. Together with the two other mentioned restrictions we thus have the system of inequalities

    −y1 + 2y2 ≥   0,
     y1       ≥  50,
     y1 +  y2 ≥ 100.

The objective function reads

    z∗ (y1 , y2 ) = 6y1 + 9y2 → min .
In matrix notation, this is z∗ (y) = b⊤ y → min, under the constraints A y ≥ c, where

        ( −1  2 )          (   0 )
    A = (  1  0 ) ,    c = (  50 ) ,    b⊤ = (6, 9).
        (  1  1 )          ( 100 )

This is a minimum problem. Its dual is z(x) = c⊤ x → max, under the constraints A⊤ x ≤ b. This gives the following simplex tableau.

          x1    x2    x3
    z      0    50   100 |  0
    y1    −1     1     1 |  6
    y2     2     0     1 |  9
Pivoting first on column x3 and row y1 (d = 1), then on column x1 and row y2 (d = 3), yields:

          x1    x2    y1                       x2      y1      y2
    z    100   −50  −100 | 600           z   −50/3  −200/3  −100/3 | 700
    x3    −1     1     1 |   6           x3    2/3     2/3     1/3 |   7
    y2     3    −1    −1 |   3           x1   −1/3    −1/3     1/3 |   1
Therefore, 200/3 ≈ 66.7 kg of the first raw material and 100/3 ≈ 33.3 kg of the second raw material should be bought. This yields total buying costs of 700 e.
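Since the feasible region of example 9.4 is an intersection of half-planes and the costs grow in both variables, the minimum is attained at a vertex of the region. The following Python sketch is our own verification of the result: it enumerates the vertices with exact rational arithmetic. It is a check of the example, not part of the simplex method.

```python
# Brute-force check of example 9.4: intersect the constraint boundary lines
# pairwise and keep the feasible point of minimum cost.
from fractions import Fraction as F
from itertools import combinations

# Constraints a1*y1 + a2*y2 >= r, written as (a1, a2, r). The sign constraints
# y1, y2 >= 0 are inactive here, since feasibility forces y2 >= 25 and y1 >= 50.
cons = [(-1, 2,   0),   # -y1 + 2 y2 >= 0
        ( 1, 0,  50),   #  y1        >= 50
        ( 1, 1, 100)]   #  y1 +  y2  >= 100

def intersect(c1, c2):
    (a, b, r), (p, q, s) = c1, c2
    det = a * q - b * p
    if det == 0:
        return None                  # parallel boundary lines
    # Cramer's rule for the 2x2 system a y1 + b y2 = r, p y1 + q y2 = s:
    return (F(r * q - b * s, det), F(a * s - r * p, det))

best = None
for c1, c2 in combinations(cons, 2):
    y = intersect(c1, c2)
    if y and all(a * y[0] + b * y[1] >= r for a, b, r in cons):
        value = 6 * y[0] + 9 * y[1]              # objective 6 y1 + 9 y2
        if best is None or value < best[0]:
            best = (value, y)

cost, (y1, y2) = best
```

The check confirms the simplex result: cost 700 e at (y1, y2) = (200/3, 100/3).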
Chapter 10
Genetic algorithms
10.1
Evolutionary algorithms
There is some confusion with respect to the terminology of genetic algorithms and related areas. This
is mainly due to the fact that different ideas in the field developed independently from each other at
different times. The idea of applying evolutionary principles to computer sciences first emerged in the
1960s in the context of finite automata. For this approach the term evolutionary programming was
established. Also in the 1960s and in a different context, experimental optimization was summarized
under the notion of evolution strategies, done by Schwefel and Rechenberg in Berlin.1 In the 1970s, the term genetic algorithms was introduced by Holland and Fogel in the USA. Nowadays, all these terms are comprised
to the branch of evolutionary algorithms:

    evolutionary computation
        evolutionary algorithms
            evolutionary programming
            evolution strategies
            genetic algorithms
        swarm intelligence

Whereas in evolutionary programming the parameters of the program executing the optimization are varied, evolution strategies and genetic algorithms differ only in that the search space of evolution strategies is real hyperspace and in that the mutation is adapted during the evolution process. For details and further historical remarks see [15, 42].
10.2
Basic notions
There is a large class of interesting problems for which no reasonably fast algorithm could be developed so far. Many of these problems are optimization problems. Given such a hard optimization
problem it is sometimes possible to find an efficient algorithm whose solution is approximately optimal. For some hard optimization problems we can use probabilistic algorithms as well; these algorithms do not guarantee the optimum value, but by randomly choosing sufficiently many witnesses the probability of error may be made as small as desired. Examples of such algorithms are
Monte Carlo techniques and especially simulated annealing. A good introduction to these topics
and to the subject of genetic algorithms in general is given in [15].
In general, any abstract task to be accomplished can be thought of as solving a problem which,
in turn, can be perceived as a search through a space of potential solutions. Since we are searching
for a best solution, we can view this task as an optimization process. For small spaces, classical
1 http://geneticargonaut.blogspot.com/2006/03/evolutionary-computation-classics-vol.html
exhaustive, or brute-force, methods usually suffice; for larger spaces, special techniques from the area of artificial intelligence must be employed.
Genetic algorithms are among such techniques. They are stochastic algorithms whose search methods model the natural phenomenon of evolution: genetic inheritance, mutation, and selection. In evolution, the problem each species faces is one of searching for beneficial adaptations to a complicated and changing environment. The knowledge that each species has gained is embodied in the makeup of the chromosomes of its members.
Example 10.1. (The rabbit example) [30]. At any given time there is a population of rabbits, some
of which are smarter and faster than the others. The faster and smarter rabbits are less likely to be
eaten by foxes, and therefore more of them survive to do what rabbits do best: make more rabbits. Of
course, some of the slower and dumber rabbits will survive just because they are lucky.
This surviving population of rabbits starts breeding. The breeding results in a good mixture of
rabbit genetic material: some slow rabbits breed with fast rabbits, some fast with fast, some smart
rabbits with dumb rabbits, and so on. And on the top of that, Nature throws in a wild hare every
now and then by mutating some of the rabbit genetic material.
What is the ultimate effect? The resulting baby rabbits will, on average, be faster and smarter than
those in the original population because more faster and smarter parents survived the foxes.2
A genetic algorithm follows a step-by-step procedure that closely matches the story of the rabbits.
Genetic algorithms use a vocabulary borrowed from natural genetics:
• We talk about individuals, or genotypes, in a population.
• An individual is determined by its chromosomes. Each cell of an organism of a given species carries a certain number of chromosomes (a human being, e.g., has 46 of them); however, we talk about one-chromosome individuals only (haploid chromosomes). Often you will thus find the notions chromosome, individual, and genotype used as synonyms.3
• Chromosomes are made of units, the genes, which are arranged in a linear succession. Genes are located at certain places of the chromosome, called loci.
• Any feature of an individual (such as hair color) can manifest itself differently; the gene is said to be in several states, called alleles, i.e., values.
Table 10.1 shortly compares the original biological meaning of different notions and their meaning
with respect to genetic algorithms.
An evolution process running on a population of chromosomes corresponds to a search through a search space S of feasible solutions. Such a search requires balancing two apparently conflicting objectives: exploiting the best solutions and exploring the solution space sufficiently. Random search is a typical example of a strategy which explores the solution space while ignoring the exploitation of promising regions. Genetic algorithms are a class of general-purpose (domain-independent) search methods which strike a remarkable balance between the conflicting strategies of exploring and exploiting.
10.3
A genetic algorithm for a particular problem must specify the following components.
2 Of course the foxes undergo a similar process, otherwise the rabbits might become too fast and smart for them.
3 Strictly speaking, in biology there is the hierarchy chromosome, genotype, phenotype = individual. Especially, there may be several genotypes resulting in the same phenotype, mainly because a phenotype is influenced by the environment. Usually, these distinctions are not adhered to in evolutionary algorithms.
Table 10.1: Notions of genetics in biology and their meaning for genetic algorithms.

chromosome. Biology: blueprint of the individual organism, package for carrying DNA. Genetic algorithms: number array or string, specifying the data structure of solutions in the search space S.
gene. Biology: inheritance unit consisting of DNA, occupying a segment in a chromosome and determining a characteristic of the organism (a gene for the eye color). Genetic algorithms: region of the chromosome.
allele. Biology: form of the gene, specifying a characteristic of the organism (eye color green).
locus. Biology: place of a gene in the chromosome.
phenotype. Biology: outward appearance of the organism.
genotype. Biology: structure of the chromosome.
generation. Biology: cohort, set of organisms born at the same time.
fitness. Biology: capability of an individual of a certain genotype to reproduce.
• A chromosome representation of the feasible solutions. Thus usually S ⊆ {0, 1}^n ⊂ Z^n, i.e., genetic algorithms apply to combinatorial optimization problems. Usually, each single bit is a gene.
• An objective function f : S → R, called fitness function in the case of evolutionary algorithms, which plays the role of the environment in the selection. Therefore, f(x) is the fitness of the feasible solution x ∈ S.
• Genetic operators which generate and alter the chromosomes of the children (alteration). There are two types of genetic operators:
  – A crossover operator or recombination operator CX : S^k → S which combines the genes of two (S^2 = S × S) or more (S^k = S × · · · × S, k times) individuals, called the parents. Often, a crossover operator simultaneously creates two children from two parents, with the roles of the two parents exchanged. There may be several crossover operators acting in a genetic algorithm.
  – A mutation operator M : S → S which changes one or several genes randomly.
• Various parameter values used by the genetic algorithm, such as the population size, the probabilities of applying the genetic operators, etc.
Moreover, a design of a genetic algorithm has to specify the following subroutines.
• Initialization. This subroutine creates an initial population P(0) of feasible solutions, consisting of p individuals. Usually, the population size p remains constant over the execution of the algorithm. (Note that an individual here is identified with its chromosome.) Usually, the initial population is created by choosing p solutions at random.
• Selection. ("Survival of the fittest.") This subroutine selects the parents from the current population to reproduce the next generation. There are three popular selection principles: truncation selection, where a fixed percentage of the best individuals are chosen; roulette-wheel selection, where individuals are selected with a probability proportional to their fitness; and tournament selection, where a pool of individuals is chosen at random and then the better individuals are selected with predefined probabilities. Most selection schemes are stochastic, such as the latter two, so that a small proportion of less fit solutions is also selected. This helps keep the diversity of the population large, preventing premature convergence on local but non-global optima.
• Reproduction. This subroutine generates the next generation from the selected parents of the current population. Besides the application of the genetic operators to the parents, the routine must define to what extent the parent generation and the children survive. A genetic algorithm with a reproduction scheme in which the best individual of a generation is guaranteed to survive is said to obey the elite principle [15, §5.2].
• Termination. A terminating condition has to be specified. In contrast to a deterministic optimization algorithm, a genetic algorithm running a finite time can never yield a global optimum with certainty. So a somewhat arbitrary terminating condition has to be implemented, e.g., that the number of generations has reached a fixed limit, or that the highest ranking solution's fitness has reached a plateau such that successive iterations no longer produce better results.
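The roulette-wheel principle described above can be sketched in a few lines of Java; the class and method names below are illustrative, not taken from the text.

```java
import java.util.Random;

/** Sketch of roulette-wheel selection: individual i is selected with
 *  probability fitness[i] / (sum of all fitness values). Assumes all
 *  fitness values are nonnegative and at least one is positive. */
public class RouletteWheel {
    public static int select(double[] fitness, Random rnd) {
        double total = 0;
        for (double f : fitness) total += f;
        double r = rnd.nextDouble() * total;   // spin the wheel
        double cumulative = 0;
        for (int i = 0; i < fitness.length; i++) {
            cumulative += fitness[i];
            if (r < cumulative) return i;
        }
        return fitness.length - 1;             // guard against rounding errors
    }
}
```

Truncation and tournament selection can be sketched analogously; only the rule for choosing the indices differs.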
A genetic algorithm therefore is a probabilistic algorithm which maintains a population of p individuals,

P(t) = (x_1(t), . . . , x_p(t)) ∈ S^p,

in each iteration step t, called the t-th generation. For each individual x_j(t) of this generation, its fitness f(x_j(t)) is evaluated. Then a new population, the next generation t + 1, is formed by selecting the fitter individuals (selection step). Some members of the new population undergo transformations by means of the genetic operators to form new solutions (reproduction step). After some number of generations the program generates better and better individuals; hopefully, the best individual represents a near-optimum solution. To summarize, the canonical genetic algorithm reads in pseudocode:
algorithm genetic {
   t ← 0;
   initialize P(t);
   evaluate P(t);
   while ( !terminating condition ) {
      t++;
      select P(t) from P(t−1);
      reproduce P(t);
      evaluate P(t);
   }
}                                                    (10.1)
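As a hedged illustration of the canonical loop (10.1) in Java (a toy instance invented here, not taken from the text): the sketch below maximizes the number of 1-bits in a chromosome ("OneMax"), using tournament selection, one-point crossover, and bitwise mutation.

```java
import java.util.Random;

/** Minimal genetic algorithm on bit strings maximizing the number of
 *  1-bits (the "OneMax" toy problem). Illustrative sketch only. */
public class OneMaxGA {
    static final Random RND = new Random(1);

    static int fitness(boolean[] x) {
        int f = 0;
        for (boolean bit : x) if (bit) f++;
        return f;
    }

    /** Tournament selection: draw two individuals, keep the fitter one. */
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[RND.nextInt(pop.length)];
        boolean[] b = pop[RND.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    /** One-point crossover followed by bitwise mutation. */
    static boolean[] reproduce(boolean[] p1, boolean[] p2, double mutationRate) {
        int n = p1.length;
        boolean[] child = new boolean[n];
        int cut = RND.nextInt(n);
        for (int i = 0; i < n; i++) child[i] = (i < cut) ? p1[i] : p2[i];
        for (int i = 0; i < n; i++)
            if (RND.nextDouble() < mutationRate) child[i] = !child[i];
        return child;
    }

    /** The canonical loop: select, reproduce, evaluate; returns the best
     *  individual of the final generation. */
    public static boolean[] run(int n, int popSize, int generations) {
        boolean[][] pop = new boolean[popSize][n];
        for (boolean[] x : pop)                       // random initial population
            for (int i = 0; i < n; i++) x[i] = RND.nextBoolean();
        for (int t = 0; t < generations; t++) {
            boolean[][] next = new boolean[popSize][];
            for (int j = 0; j < popSize; j++)
                next[j] = reproduce(select(pop), select(pop), 1.0 / n);
            pop = next;
        }
        boolean[] best = pop[0];
        for (boolean[] x : pop) if (fitness(x) > fitness(best)) best = x;
        return best;
    }
}
```

Note that this sketch does not obey the elite principle; guaranteeing the survival of the best individual would require copying it unchanged into the next generation.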
10.4
A wanderer has a knapsack with a limited weight capacity. He has to select among a finite set
of objects each of which has a certain value and a certain weight. Which of the items should he pick
in his knapsack to maximize the total value, without violating the maximum weight?
Example 10.2. Assume that the wanderer has a knapsack with a maximum load of 5 kg and wants to
select the objects given by the following table.
Object      A       B       C
Weight      2 kg    3 kg    1 kg
Value       8 €     10 €    3 €

Maximum load: 5 kg
To construct a genetic algorithm for this problem, we first recognize that it is an optimization problem, namely a maximization problem. What is its search space, what is its objective function?
• Given n objects, the search space S is most naturally given by binary strings x ∈ {0, 1}^n of length n, where the k-th bit indicates whether the k-th object is picked into the knapsack: x_k = 0 means that it is not, x_k = 1 means that it is. For instance, Example 10.2 implies a search space which consists of vectors (x_1, x_2, x_3) with x_1, x_2, x_3 ∈ {0, 1}, and the candidate solution x = (0, 1, 0) means that only object B is put into the knapsack. However, we have the constraint that the maximum load of the knapsack must not be exceeded; this is most easily expressed by a weight vector w = (w_1, . . . , w_n), where w_k denotes the weight of object k, such that the weight constraint reads ∑_{k=1}^n w_k x_k ≤ w_max. Therefore, the search space is determined by

S = { x ∈ {0, 1}^n : ∑_{k=1}^n w_k x_k ≤ w_max }     (10.2)

where x = (x_1, . . . , x_n). Formally, we can take the n components x_k and w_k as (column) vectors x and w, and the constraint may be written as w · x ≤ w_max.
• The objective function for the knapsack problem is obvious: it is the total value of the load. Defined on the search space, we therefore have f : S → R_+,

f(x) = ∑_{k=1}^n v_k x_k     (10.3)

where v = (v_1, . . . , v_n) is the value vector of the n objects, v_k denoting the value of the k-th object. In Example 10.2 we have v = (8, 10, 3), and the candidate solution x = (0, 1, 0) has a fitness of f(x) = 10. With it, each generation can be evaluated.
• Creating the initial population at random, we have to tackle the problem that a random binary string x ∈ {0, 1}^n is not necessarily a feasible solution, i.e., it may be that x ∉ S because it exceeds the maximum load. There are two strategies for this case: first, to repair the chromosome such that the corresponding solution obeys the constraint; or second, to give it a penalty fitness, say a vanishing value.
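For Example 10.2, the fitness function (10.3) combined with the second strategy (a vanishing penalty fitness for infeasible chromosomes) might be sketched in Java as follows; the class name is illustrative.

```java
/** Fitness for the knapsack instance of Example 10.2: objects A, B, C with
 *  weights (2, 3, 1) kg, values (8, 10, 3) EUR, and maximum load 5 kg.
 *  Infeasible chromosomes receive the penalty fitness 0. */
public class KnapsackFitness {
    static final int[] WEIGHT = {2, 3, 1};
    static final int[] VALUE  = {8, 10, 3};
    static final int   W_MAX  = 5;

    /** x[k] is 0 or 1, as in the search space (10.2). */
    public static int fitness(int[] x) {
        int weight = 0, value = 0;
        for (int k = 0; k < x.length; k++) {
            weight += WEIGHT[k] * x[k];
            value  += VALUE[k]  * x[k];
        }
        return (weight <= W_MAX) ? value : 0;   // penalty: vanishing value
    }
}
```

For instance, the candidate solution (0, 1, 0) from above evaluates to 10, whereas (1, 1, 1) weighs 6 kg and is penalized to 0.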
10.5
10.5.1
Premature convergence
For practical applications of genetic algorithms, one question is essential: how fast does the algorithm converge to a global optimum? So far, little is theoretically known about the convergence of a general genetic algorithm. There are too many degrees of freedom in the effects of the various genetic and selection operators. What is known is that the canonical genetic algorithm above with the elite principle (the fittest individuals of each generation certainly survive) does converge, but the user may have to wait some time, because the optimum is only guaranteed as t → ∞.
Thus, as for any probabilistic algorithm, when a genetic algorithm is stopped after a finite time, one can never be sure of having reached a global optimum. Even if no improvement has been made for some number of generations, it is only a good guess that the algorithm has found a local optimum, i.e., a suboptimum. This behavior is called premature convergence. It can happen in particular if the region of attraction of a global optimum is small compared to the regions of attraction of suboptima.
It is the selection operator which bears the main responsibility for premature convergence. The roulette-wheel selection empirically leads to a smaller selection pressure [15, §5.3]. In this way, also chromosomes which initially are less fit get the chance to evolve further. So this is the general problem of any genetic algorithm: to adjust the parameters and the operators in such a way that an optimal balance between exploration and exploitation is found, that is, between a wide scanning of the search space and simultaneous support to further develop good individuals in the population.
10.5.2
Coding
Most genetic algorithms act on search spaces containing binary strings as chromosomes, i.e., S ⊆ {0, 1}^n. Usually, such a string is the binary representation of an integer which expresses parameter values of the optimization. However, the binary code has the property that successive numbers may have a large Hamming distance. The Hamming distance dH between two binary strings is defined as the number of positions in which they differ. For example,

dH(01111, 10000) = 5.     (10.4)
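For chromosomes stored as the bits of an integer, dH can be computed by an XOR followed by a population count; a small illustrative helper:

```java
/** Hamming distance between the bit patterns of two long values:
 *  XOR marks the differing positions, bitCount counts them. */
public class Hamming {
    public static int dH(long x, long y) {
        return Long.bitCount(x ^ y);
    }
}
```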
Since the fitness often depends on the numerical value of a chromosome, the mutation of single bits in a chromosome whose fitness is close to the optimum may lead to negative effects, cf. Table 10.2. For instance, consider a population of 3-bit strings x and a fitness function f(x) = 8x − x². Then the
Decimal number   Binary code   dH   Gray code   dH
0                000                000
1                001           1    001         1
2                010           2    011         1
3                011           1    010         1
4                100           3    110         1
5                101           1    111         1
6                110           2    101         1
7                111           1    100         1
Table 10.2: The numbers 0, . . . , 7 represented in standard (natural) binary code and in Gray code. The Hamming
distance dH between two successive numbers is always 1 for the Gray code, whereas it varies for the binary code.
maximum is achieved for x = 100₂. If we had a population P = {0, 3, 5} (in decimal notation), then by f(3) = f(5) = 15 and f(0) = 0 we would have the curious situation that although the chromosomes x = 011 and x = 101 have the same good fitness, a mutation of a single bit could possibly change 101 to the optimum 100, whereas 011 has to change all three bits. Even the much worse solution x = 000 only needs to change a single bit to become the optimum.
A commonplace solution to this problem is the use of a Gray code. It is a one-to-one mapping from the natural binary code, defined as follows: if g = g_{n−1} . . . g_1 g_0 is the Gray code string and b = b_{n−1} . . . b_1 b_0 is the binary code string, then

g_i = b_{i+1} ⊕ b_i     (i = 0, . . . , n − 1)     (10.5)

with b_n = 0. Here ⊕ denotes the bitwise XOR operation (in Java: ^). Thus the Gray code is calculated from the binary code by the following scheme:

     b_{n−1} b_{n−2} · · · b_1 b_0
  ⊕          b_{n−1} · · · b_2 b_1     (10.6)
  =  g_{n−1} g_{n−2} · · · g_1 g_0

With logical bit operators, this is simply expressed as

g = b ^ (b >> 1)
In Java, a method converting a long integer into a Gray code string may therefore be implemented as follows:

public static String grayCode(long b) {
   return Long.toBinaryString( b ^ (b >> 1) );
}
Conversely, the binary code is recovered from the Gray code by

b_i = b_{i+1} ⊕ g_i     (i = n − 2, . . . , 0)     (10.7)

with b_{n−1} = g_{n−1}. Accordingly, the inverse method converting a Gray code string into a long integer could read as follows.

public static long toLong(String grayCode) {
   long g = Long.parseLong(grayCode, 2), b = 0;
   for (int i = grayCode.length() - 1; i >= 0; i--) {
      b |= ((b & (1L << (i + 1))) >> 1) ^ (g & (1L << i));
   }
   return b;
}
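As a self-contained cross-check of (10.5) and (10.7), both directions can also be written directly on long values; in fromGray the cumulative XOR telescopes to b_i = g_i ⊕ g_{i+1} ⊕ · · ·, which is equivalent to the recursion (10.7).

```java
/** Round trip binary -> Gray -> binary on long values. */
public class GrayRoundTrip {
    public static long toGray(long b) {
        return b ^ (b >>> 1);              // equation (10.5)
    }
    public static long fromGray(long g) {
        long b = 0;
        for (; g != 0; g >>>= 1) b ^= g;   // b_i = g_i XOR g_{i+1} XOR ...
        return b;
    }
}
```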
10.6
In the 1990s there were several attempts to approximate the TSP (Example 6.2) by genetic algorithms; one of them is presented here.
First we note that a binary string is not an appropriate chromosome representation. In a binary representation of an n-city TSP, each city could be coded as a string of ⌊log₂ n⌋ + 1 bits; thus a chromosome, as a complete tour, is a string of n(⌊log₂ n⌋ + 1) bits. A mutation now can result in a sequence of cities which is not a tour: we can get the same city twice in the sequence. Moreover, for a TSP with 20 cities, where we need 5 bits to represent a city, some 5-bit sequences (10101, e.g.) do not correspond to any city. Similar considerations apply to crossover operators. Clearly, if we use mutation and crossover as random operators, we would need some sort of repair operator to move a chromosome back into the solution space S.
But there is a better representation, the integer vector representation. Instead of using repair operators, we can incorporate knowledge of the problem into the representation, which intelligently avoids building illegal individuals. Let the vector

v = (i_1, i_2, . . . , i_n)   with i_j ∈ {1, 2, . . . , n}   (j = 1, . . . , n)     (10.8)

represent the tour from i_1 to i_2, from i_2 to i_3, . . . , from i_{n−1} to i_n, and from i_n back to i_1,

i_1 → i_2 → · · · → i_{n−1} → i_n → i_1

(v is a so-called permutation). We can initialize the population by a random sample of such vectors v. (We can alternatively use a heuristic algorithm for the initialization process to get preprocessed outputs.)
The evaluation of a chromosome is straightforward. Given the cost of travel c_{ij} between cities i and j, we can easily calculate the total cost of the entire tour with the fitness function f : Z^n → R,

f(v) = ∑_{j=1}^{n−1} c_{i_j, i_{j+1}} + c_{i_n, i_1}     (10.9)

(cf. equation (8.11), p. 83). In the TSP we thus search for the best ordering of cities in a tour. It is
relatively easy to come up with (unary) mutation operators. However, there is little hope of finding good orderings this way (not to mention the best ones), because a good ordering need not be located near another good one. The strength of genetic algorithms especially arises from the structured information exchange of crossover combinations of highly fit individuals. So what we need is a crossover operator that exploits important similarities between chromosomes; for this purpose the OX (order crossover) operator is used.
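The fitness function (10.9) itself is easy to implement; the sketch below is illustrative, with city indices shifted to run from 0 to n − 1 and cost[i][j] playing the role of c_{ij}.

```java
/** Total cost of the tour v[0] -> v[1] -> ... -> v[n-1] -> v[0], as in
 *  equation (10.9); cost[i][j] is the travel cost from city i to city j.
 *  City indices run from 0 to n-1 here. */
public class TourCost {
    public static double fitness(int[] v, double[][] cost) {
        double total = 0;
        for (int j = 0; j + 1 < v.length; j++)
            total += cost[v[j]][v[j + 1]];
        return total + cost[v[v.length - 1]][v[0]];   // close the tour
    }
}
```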
Given two parents v and w, OX builds an offspring u by choosing a subsequence of the tour from one parent and preserving the relative order of the cities (without those of the subsequence) from the other parent. For example, consider the parents

v = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12),   w = (7, 3, 1, 11, 4, 12, 5, 2, 10, 9, 6, 8).     (10.10)

If the chosen part from parent v is (4, 5, 6, 7), we have to cross out these cities in w, i.e., w′ = (3, 1, 11, 12, 2, 10, 9, 8), insert the chosen subsequence at the same positions as in parent v, and fill the remaining positions with the cities of w′, starting after the subsequence and wrapping around. This gives the child

u = (1, 11, 12, 4, 5, 6, 7, 2, 10, 9, 8, 3).     (10.11)
As required from a genetic algorithm, the child bears a structural relationship to both parents. The
roles of the parents ~v and ~w can then be reversed to construct a second child.
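One possible reading of the OX construction in Java (an illustrative sketch: the segment [lo, hi] uses 0-based positions, and cities are numbered 1, . . . , n as in the text):

```java
import java.util.ArrayList;
import java.util.List;

/** Order crossover (OX): copy v[lo..hi] into the child at the same
 *  positions, then fill the remaining positions with the cities of w
 *  in their relative order, starting after position hi and wrapping. */
public class OrderCrossover {
    public static int[] ox(int[] v, int[] w, int lo, int hi) {
        int n = v.length;
        int[] child = new int[n];
        boolean[] used = new boolean[n + 1];          // cities are 1..n
        for (int i = lo; i <= hi; i++) {
            child[i] = v[i];
            used[v[i]] = true;
        }
        // cities of w in relative order, read starting after the cut point
        List<Integer> rest = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            int city = w[(hi + 1 + i) % n];
            if (!used[city]) rest.add(city);
        }
        int k = 0;
        for (int i = 1; i <= n; i++) {                // fill, wrapping around
            int pos = (hi + i) % n;
            if (pos >= lo && pos <= hi) continue;     // skip the copied segment
            child[pos] = rest.get(k++);
        }
        return child;
    }
}
```

Applied to the twelve-city example above with the segment (4, 5, 6, 7), this construction reproduces the child (1, 11, 12, 4, 5, 6, 7, 2, 10, 9, 8, 3).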
A genetic algorithm based on the above operator outperforms random search, but leaves much room for improvement. Typical results of the algorithm (averaged over 20 random runs) as applied to 100 randomly generated cities gave, after 20 000 generations, a total tour cost 9.4 % above the optimum.
10.7

         B: C          B: D
A: C     (1/2; 1/2)    (10; 0)
A: D     (0; 10)       (2; 2)

Table 10.4: Strategy table for the prisoner's dilemma game. The first entry a in each box (a; b) is player A's payoff for the corresponding strategy profile; the second one, b, is player B's payoff.
The CC strategy in a multi-move prisoner's dilemma game is a so-called Nash equilibrium. That means that each player's strategy is an optimal response to the other player's strategy.
Example 10.3. (Multi-move prisoner's dilemma in economics) Consider two oligopolists A and B competing on a single market. In each seasonal period, each of them has the choice to cooperate (C) and make a fair price for a given product, or to dump (D) the product and make a dirt-cheap price for it. The expected payoff (in Mio €) of one firm depends on the simultaneous decision of the competitor according to Table 10.5. How should each firm decide to make the price?

         B: C       B: D
A: C     (3; 3)     (2; 5)
A: D     (5; 2)     (1; 1)

Table 10.5: Strategy table for a single move in the economic prisoner's dilemma game, where C implies the decision to make a fair price for a product, and D refers to dumping the product.
A strategy in game theory is a plan of unique moves to be made after each possible past constellation of moves; such a constellation may depend only on the last move of the rival, or also on a series of past moves. In other words, a strategy is a collection of precise answers to all possible questions.
We will now consider how a genetic algorithm might be used to learn a strategy for the prisoner's dilemma. We have to maintain a population of players, each of whom has a particular strategy. Initially, each player's strategy is chosen at random. Thereafter, at each step, players play games and their scores are noted. Some of the players are then selected for the next generation, and some of those are chosen to mate. When two players mate, the new player created has a strategy constructed from the strategies of its parents (crossover). A mutation, as usual, introduces some variability into the players' strategies by random changes on the representations of these strategies.
A strategy can thus be encoded by entries of the form

(a_3, b_3) (a_2, b_2) (a_1, b_1)  |  a_0

where the three pairs are the 3 previous moves and a_0 is the next move, with a_j, b_j ∈ {C, D}; the a's are this player's moves (C for cooperate, D for defect) and the b's are the other player's moves.
Experimental results
Running this program, Axelrod obtained quite remarkable results. From a random start, the genetic
algorithm evolved populations whose median member was as successful as the best known heuristic
algorithm. Some behavioral patterns evolved in the vast majority of the individuals:
strategy              history         next move
Don't rock the boat   (CC)(CC)(CC)    C
Be provokable         (CC)(CC)(CD)    D
Accept an apology     (CD)(DC)(CC)    C
Forget                (DC)(CC)(CC)    C
Accept a rut          (DD)(DD)(DD)    D

10.8
Conclusions
The examples of genetic algorithms in this chapter show their wide applicability. At the same time we
observed first signs of difficulties. The representation issues of the traveling salesman problem were
not obvious, and the new operator (OX crossover) was far from trivial. What kind of representation
difficulties may exist for other problems? On the other hand, how should we proceed in a case where
the fitness function is not well defined?4
4 For example, the famous Boolean Satisfiability Problem (SAT) seems to have a natural string representation (the i-th bit represents the truth value of the i-th Boolean variable); however, the fitness function is far from obvious.
Appendix
Appendix A
Mathematics
A.1
(a > 0, x, y ∈ R)
(A.1)
Let ln denote the natural logarithm, i.e., ln x = log_e x. The logarithm is the inverse of the exponential function,

e^{ln x} = x,   ln e^y = y.   (x > 0, y ∈ R)     (A.2)

Moreover,

c log_a x = log_a x^c.   (x > 0, c ∈ R)     (A.3)
The base of a logarithm can be changed,

log_a x = log_b x / log_b a.   (a, b ∈ N, x > 0)     (A.4)

In particular, log_a x = (1 / ln a) · ln x.

A.2   Number theory
Integers play a fundamental role in mathematics as well as in algorithmics and computer science. So
we will start with the basic notation for them. As usual, N = {1, 2, 3, . . .} is the set of natural numbers
or positive integers, and
Z = {. . . , 3, 2, 1, 0, 1, 2, . . .}
is the set of integers. The rational numbers q = m/n for m, n ∈ Z, n ≠ 0, are denoted by Q. The holes that are still left in Q (note that prominent numbers like √2 or π are not in Q!) are only filled by the real numbers, denoted by R. Thus we have the proper inclusion chain N ⊂ Z ⊂ Q ⊂ R. (Proper inclusion means that there are always numbers that are in a set but not in its subset. Can you find an example for each subset-set pair?)
A very important topic in mathematics, especially for the growing area of cryptology, is number
theory. We will list here some fundamentals. In this chapter lower case italic letters (such as m, n, p,
q, ...) denote integers.
Definition A.1. We say that m divides n, in symbols m | n, if there is an integer k such that n = km.
We then call m a divisor of n, and n a multiple of m. We also say that n is divisible by m. If m does
not divide n, we write m ∤ n.
Example A.2. We have 3 | 6 because 6 = 2 · 3. Similarly, −5 | 30 because 30 = (−6) · (−5).
Any integer n divides 0, because 0 = n · 0. The only integer that is divisible by 0 is 0, because n = 0 · k implies n = 0. Furthermore, every integer n is divisible by 1, for n = n · 1.
Theorem A.3. For m, n, r, s, t ∈ Z we have the following:
(i) If m | n and n | r, then m | r.
(ii) If m | n, then mr | nr for all r.
(iii) If r | m and r | n, then r | (sm + tn) for all s, t.
(iv) If m | n and n ≠ 0, then |m| ≤ |n|.
(v) If m | n and n | m, then |m| = |n|.
Proof. [4, p. 3]
The following result is very important. It shows that division with remainder of integers is possible.
Theorem A.4. If m and n are integers, n > 0, then there are uniquely determined integers q and r such that

m = qn + r   and   0 ≤ r < n,     (A.5)

namely

q = ⌊m/n⌋   and   r = m − qn.     (A.6)
The theorem in fact consists of two assertions: (i) There exist integers q and r such that . . . , and
(ii) q and r are unique. So we will divide its proof into two parts, proof of existence and proof of
uniqueness.
Proof of Theorem A.4. (i) Existence. The numbers m and n are given. Hence we can construct q = ⌊m/n⌋, and thus also r = m − qn. These are the two equations of (A.6). The last equation is equivalent to m = qn + r, which is the first equation of (A.5). The property of the floor bracket implies

m/n − 1 < q ≤ m/n      | · (−n), since n > 0
⇒   n − m > −qn ≥ −m   | + m
⇒   n > m − qn ≥ 0.

This implies that r = m − qn satisfies the inequalities 0 ≤ r < n.
(ii) Uniqueness. We now show that if two integers q and r obey (A.5), they also obey (A.6). Let m = qn + r and 0 ≤ r < n. Then 0 ≤ r/n = m/n − q < 1. This implies

0 ≤ m/n − q < 1   | + q
⇒   q ≤ m/n < q + 1
⇒   q = ⌊m/n⌋.
To summarize, the proof is divided into two parts: the first one proves that there exist two numbers q and r satisfying (A.5); the second one shows that any two numbers q and r satisfying (A.5) also satisfy (A.6). Thus q and r as found in the first part of the proof are unique.
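Theorem A.4 is exactly what Java's Math.floorDiv and Math.floorMod compute, also for negative m (the operators / and % instead truncate toward zero); a short illustrative check:

```java
/** Division with remainder as in Theorem A.4: m = qn + r with 0 <= r < n,
 *  q = floor(m/n), r = m - qn. Assumes n > 0. */
public class DivisionWithRemainder {
    public static long[] qr(long m, long n) {
        long q = Math.floorDiv(m, n);
        long r = m - q * n;              // equals Math.floorMod(m, n)
        return new long[]{q, r};
    }
}
```

For example, m = −7, n = 3 yields q = −3 and r = 2, whereas the truncating expression −7 / 3 would give −2.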
A.3
In this section it is proved that searching one of m entries in an unsorted data structure requires Θ(N) queries on average, where N denotes the size of the data structure. Such an unsorted data structure may be represented by a database, or a data collection such as an array.
Let U be a given enumerable set, the universe. Usually U is a subset of Σ*, the set of all words, or strings, constructable from a given alphabet Σ. Suppose we wish to search through a finite set X ⊆ U of N elements, the search space or database. We call X unsorted or unstructured if there are no further conditions imposed on X and no order on its elements, i.e., they are considered to be distributed completely at random. Let moreover S ⊆ U be finite, the solution set. Then an oracle is defined as the characteristic function f : X → {0, 1} of the solution set S, i.e.,

f(x) = 1 if x ∈ S,   and   f(x) = 0 otherwise.     (A.7)

In turn, the solution set S ∩ X is uniquely determined by the oracle f, viz., S ∩ X = {x ∈ X : f(x) = 1}. Calling f an oracle we mean that we may have neither access to its internal working, nor immediate access to all argument-value pairs (x, f(x)). We can only query it as many times as we like, but each query comes with a computational cost. An oracle does not necessarily know the solutions, but it can recognize them.1 The SEARCH problem then is defined as the problem to find one of the m = |S| items in the database X, given an oracle f which is supposed to be computable with time complexity T_f(n) = O(n^k) for some k ∈ N with respect to the maximum length n of a string coding an element in X. Typically, n = ⌈log_c N⌉ with N = |X| and c = |Σ|, where Σ is the underlying alphabet.
Theorem A.5. Let Q_{N,m} denote the number of queries on an unsorted database X with N = |X| entries needed to find one of m searched elements, where m is not known to the searcher. Then the expected value E[Q_{N,m}] of the number of queries is given by

E[Q_{N,m}] = N                     if m = 0,
             (N + 1) / (m + 1)     if m > 0.     (A.8)
Proof. The case N > 0, m = 0 is clear; the case N > 0, 1 ≤ m ≤ N can be proved by induction over N. First, N = 1 and m = 1 is trivial. For N > 1, we notice with Figure A.1 that the first query yields a marked item with probability m/N, and otherwise the search continues on the remaining N − 1 entries, so that by the induction hypothesis

E[Q_{N,m}] = (m/N) · 1 + ((N − m)/N) · (1 + E[Q_{N−1,m}]) = (N + 1) / (m + 1)
[Figure A.1]
Figure A.1: Probability tree diagrams for each search strategy on an unsorted database with N entries, m ≥ 0 of which are marked. Each left branch represents the event of finding a marked item; the last right branch leads to the sure finding in the next step if m > 0.
1 In complexity theory, however, an oracle (or more precisely, an oracle Turing machine M^A for the oracle A) may be a much more general algorithm transcending worlds [34, §14.3]. In this sense, an oracle rather plays the role of a proof checker or a verification algorithm [5, §34.2] in the terminology of complexity theory.
for 0 < m ≤ N − 1. Thus the case m = N remains to be determined: but it is simply given by E[Q_{N,N}] = 1 = (N + 1)/(N + 1), i.e., (A.8) holds for all 0 ≤ m ≤ N. Q.E.D.
Remark A.6. Varying the above problem, we want to search for the position of one of m marked items in an unsorted database with N = |X| ≥ 2 entries, where we know the number m, satisfying 0 < m < N. In other words, we are guaranteed a previously known number of marked items. Then we find the position of one of the searched items in

E^pos[Q_{N,m}] = (N − m)(N − m)! / (N)_{N−m} + m ∑_{k=1}^{N−m} k (N − m)_{k−1} / (N)_k
              = (N − m) m! / (N)_m + (m / (N)_m) ∑_{k=1}^{N−m} k (N − k)_{m−1}     (A.9)

queries on average, where (n)_k := n! / (n − k)! for n, k ∈ N, especially (n)_0 = 1, (n)_n = (n)_{n−1} = n!. Eq. (A.9) follows directly from Figure A.1. Thus we have E^pos[Q_{N,m}] = O(N). Some special cases
are the following. For m = 1, we obtain

E^pos[Q_{N,1}] = (N − 1)/N + (1/N) ∑_{k=1}^{N−1} k = (N² + N − 2) / (2N).     (A.10)

For m = 2, by ∑_{k=1}^{N−2} k² = (2N − 3)(N − 2)(N − 1) / 6, i.e., (2/(N)_2) ∑_{k=1}^{N−2} k (N − k) = (N − 2)(N + 3) / (3N), we obtain

E^pos[Q_{N,2}] = 2(N − 2) / (N(N − 1)) + (2 / (N(N − 1))) ∑_{k=1}^{N−2} k (N − k) = (N − 2)(N² + 2N + 3) / (3N(N − 1)).     (A.11)

For m = 3 we have

E^pos[Q_{N,3}] = 6(N − 3) / (N)_3 + (3 / (N)_3) ∑_{k=1}^{N−3} k (N − k)_2.     (A.12)

The direct evaluation for higher m is not obvious. Especially, m = N − 1 yields E^pos[Q_{N,N−1}] = 1/N + (N − 1)/N = 1.
You may ask whether Theorem A.5 is important in computer science. After all, all important databases are sorted, so the result seems irrelevant for usual data applications. But far from it! In fact, any database containing datasets with more than one data field is unsorted with respect to at least one field. Take a phone book, containing mainly the name and the corresponding phone number as data fields: any phone book is unsorted with respect to the phone numbers.
Example A.7. (Searching a number in a phone book) Let U = {0, 1, . . . , 10^10 − 1} be the set of all 10-digit decimal numbers. Consider a phone book X ⊆ U containing N numbers, and let S = {1234567890}. Then SEARCH is the decision problem to determine whether the phone number x_0 = 1234567890 is contained in the phone book. The oracle then is given as

f(x) = 1 if x = x_0,   and   f(x) = 0 otherwise.

Since a 10-digit number needs n = ⌈log₂ 10^10⌉ = 34 bits, any number x in X or S, as subsets of Σ* with Σ = {0, 1}, has a length satisfying |x| ≤ n. Thus the oracle can work efficiently with at most n = 34 steps, comparing successively each possible binary digit. Classically, one needs (N + 1)/2 queries on average by Eq. (A.8), whereas Grover's quantum search algorithm requires only √N queries on average.
Example A.8. (Known-plaintext attack on a cryptosystem by brute force) Assume that you have received a plaintext/ciphertext pair of a given cryptosystem and you want to find the secret key. The cryptosystem might be a symmetric cipher, like AES, or a public-key cipher, like RSA [6]. They all have in common that their strength relies on the difficulty of finding the key. A brute-force attack tries to break the cryptosystem by searching for the secret key, querying the encryption function successively with all possible keys K until

   E_K(M) = C,

where M is the plaintext and C is the ciphertext. To formulate a brute-force attack as a search problem, let X = U = {0, 1}^n denote the set of all keys of length n bits, and S = {K ∈ U : E_K(M) = C}. Then for each K ∈ X, the oracle f : X → {0, 1} is given by

   f(K) = 1 if E_K(M) = C,  and  f(K) = 0 otherwise.

By construction of the encryption function, the oracle is polynomial-time with respect to n = max{|K| : K ∈ X}. For instance, AES uses keys of length up to n = 256 bits, i.e., the search space X contains N = 2^256 elements. A classical brute-force attack thus takes on average about N/2 = 2^255 steps, whereas Grover's quantum algorithm [17] requires only about √N = 2^128 steps [7, §6.2.1]. Moreover, as a decision problem the known-plaintext attack is always true in practice, since the ciphertext C has been computed from a given plaintext M; what is of interest, of course, is the position of the matching key in the search space X.
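As a toy illustration, the exhaustive key search might be sketched as follows. The cipher here is NOT AES but a trivial XOR cipher E_K(M) = M ⊕ K on a few bits, chosen only so that the sketch is self-contained; all names are invented.

```java
/** Brute-force known-plaintext attack on a toy XOR cipher with n-bit keys. */
public class BruteForceAttack {

    /** Toy cipher E_K(M) = M XOR K. A real cipher (e.g. AES) would go here. */
    static int encrypt(int key, int message) { return message ^ key; }

    /** Queries the oracle f(K) = [E_K(M) = C] for all N = 2^n keys; returns the key or -1. */
    public static int findKey(int n, int plaintext, int ciphertext) {
        for (int k = 0; k < (1 << n); k++) {
            if (encrypt(k, plaintext) == ciphertext) return k;  // oracle query hits
        }
        return -1;  // no key maps M to C
    }

    public static void main(String[] args) {
        int n = 8, secret = 0xA5, m = 0x3C;
        int c = encrypt(secret, m);                 // the attacker knows (m, c), not secret
        System.out.println(findKey(n, m, c));       // recovers the key 0xA5 = 165
    }
}
```

For this toy cipher the loop visits on average N/2 = 2^(n−1) keys before the hit, exactly the classical cost stated above.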
A very important example of a search problem is SAT.
Example A.9. (SAT) [34, §4.2] The satisfiability problem for propositional logic, denoted SAT, asks whether a given Boolean expression f : {0, 1}^n → {0, 1} in conjunctive normal form is satisfiable, i.e., whether there exists an assignment x = (x_1, . . . , x_n) such that f(x) = 1, where 0 denotes false and 1 denotes true. Here a Boolean expression is a combination of the literals x_j and the symbols ∧, ∨, ¬, (, and ). It is in conjunctive normal form (CNF) if f(x) = c_1 ∧ · · · ∧ c_m, where each clause c_i is a disjunction of one or more literals x_j or ¬x_j [34, §4.1]. For instance, f(x_1, x_2) = (¬x_1 ∨ x_2) ∧ ¬x_1 is satisfiable since f(0, 0) = 1, whereas

   f(x_1, x_2, x_3) = x_1 ∧ x_2 ∧ x_3 ∧ (¬x_1 ∨ ¬x_2 ∨ ¬x_3)

is not satisfiable because f(x) = 0 for all x ∈ {0, 1}³. Denote by X = U = {0, 1}^n the space of the 2^n possible assignments to the Boolean formula f, and let S ⊆ X be the set of all satisfying assignments of f. If f is not satisfiable, S is empty, i.e., m = |S| = 0. A simple algorithm to solve this problem is to perform a brute-force search through the space X, querying f as the oracle function. Since there are N = 2^n possible assignments, SAT may be considered as a special N-item search problem with the oracle f working efficiently with time complexity O(log^k N). Classically, it requires O(2^n) oracle queries on average, whereas Grover's quantum algorithm needs O(2^(n/2)) oracle queries on average.
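A brute-force SAT solver along these lines might be sketched as follows. The clause encoding and all names are invented for illustration: a literal +j stands for x_j, a literal −j for ¬x_j.

```java
/** Brute-force SAT: tries all 2^n assignments, querying f as the oracle. */
public class BruteForceSat {

    /** A CNF formula is an array of clauses; each clause is an int[] of literals. */
    public static boolean satisfiable(int[][] cnf, int n) {
        for (int a = 0; a < (1 << n); a++) {      // bit j-1 of a encodes x_j
            if (evaluate(cnf, a)) return true;    // oracle query: f(x) = 1
        }
        return false;                             // S is empty, m = |S| = 0
    }

    /** Evaluates the conjunction of clauses under the assignment encoded by a. */
    static boolean evaluate(int[][] cnf, int a) {
        for (int[] clause : cnf) {
            boolean clauseTrue = false;
            for (int lit : clause) {
                boolean x = ((a >> (Math.abs(lit) - 1)) & 1) == 1;
                if (lit > 0 ? x : !x) { clauseTrue = true; break; }
            }
            if (!clauseTrue) return false;        // one false clause kills the conjunction
        }
        return true;
    }

    public static void main(String[] args) {
        int[][] f1 = {{-1, 2}, {-1}};             // (NOT x1 OR x2) AND (NOT x1): satisfiable
        int[][] f2 = {{1}, {2}, {3}, {-1, -2, -3}};  // unsatisfiable example from the text
        System.out.println(satisfiable(f1, 2) + " " + satisfiable(f2, 3)); // true false
    }
}
```

Each oracle call costs time polynomial in the formula size, but the loop over all N = 2^n assignments gives the exponential classical query count stated above.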
Example A.10. (Hamilton cycle problem) [31, §6.4] A Hamilton cycle is a cycle in which each vertex of an undirected graph is visited exactly once. The Hamilton cycle problem (HC) is to determine whether a given graph contains a Hamilton cycle or not. Let X = U be the set of all possible cycles beginning in vertex 1, i.e., x = (x_0, x_1, . . . , x_{n−1}, x_n) where x_0 = x_n = 1 and where (x_1, . . . , x_{n−1}) is a permutation of the (n − 1) vertices x_j ≠ 1. In other words, X contains all possible Hamilton cycles which could be formed with the n vertices of the graph. Then a simple algorithm to solve the problem is to perform a brute-force search through the space X and to query the oracle function

   f(x) = 1 if x ∈ S,  and  f(x) = 0 otherwise,                    (A.13)

where S is the solution set of all cycles of the graph,

   S = {x ∈ U : (x_{j−1}, x_j) ∈ E for all j = 1, . . . , n}.      (A.14)

If the graph does not contain a Hamilton cycle, then S is empty and m = |S| = 0. The oracle only has to check whether each pair (x_{j−1}, x_j) of a specific possible Hamilton cycle is an edge of the graph, which requires time complexity O(n²) since |E| ≤ n²; because there are n pairs to be checked in this way, the oracle works with total time complexity O(n³) per query. (Its space complexity is O(log₂ n) bits, because it uses E and x as input and thus needs to store temporarily only the two vertices of the considered edge, requiring O(log₂ n).)

Since there are at most N = (n − 1)! = O(n^n) = O(2^(n log₂ n)) possible orderings, the Hamilton cycle problem is a special N-item search problem. Classically, it requires O(2^(n log₂ n)) oracle queries on average, whereas Grover's quantum algorithm needs O(2^((n/2) log₂ n)) oracle queries on average [7, §6.2.1].

According to Dirac's theorem, any graph in which each vertex has at least n/2 incident edges has a Hamilton cycle. This and some more such sufficient criteria are listed in [9, §8.1].
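The brute-force search over all (n − 1)! candidate cycles can be sketched as follows; the backtracking enumeration of permutations and all names are illustrative. The oracle checks each consecutive pair of a candidate cycle against the edge set, exactly as in Eq. (A.14); an adjacency matrix makes this membership test O(1) per pair instead of the O(n²) edge-list scan of the text.

```java
import java.util.ArrayList;
import java.util.List;

/** Brute-force Hamilton cycle search: queries the oracle for candidate cycles. */
public class HamiltonCycle {

    /** Oracle f(x): checks whether each pair (x_{j-1}, x_j) of the cycle is an edge. */
    static boolean oracle(boolean[][] adj, List<Integer> x) {
        for (int j = 1; j < x.size(); j++)
            if (!adj[x.get(j - 1)][x.get(j)]) return false;
        return adj[x.get(x.size() - 1)][x.get(0)];      // closing edge back to the start
    }

    public static boolean hasHamiltonCycle(boolean[][] adj) {
        List<Integer> rest = new ArrayList<>();
        for (int v = 1; v < adj.length; v++) rest.add(v);
        List<Integer> x = new ArrayList<>(List.of(0));  // every candidate starts at vertex 0
        return search(adj, x, rest);
    }

    /** Enumerates all permutations of the remaining vertices, querying the oracle. */
    static boolean search(boolean[][] adj, List<Integer> x, List<Integer> rest) {
        if (rest.isEmpty()) return oracle(adj, x);      // one oracle query per permutation
        for (int i = 0; i < rest.size(); i++) {
            int v = rest.remove(i);
            x.add(v);
            if (search(adj, x, rest)) return true;
            x.remove(x.size() - 1);
            rest.add(i, v);
        }
        return false;
    }

    public static void main(String[] args) {
        boolean[][] adj = new boolean[4][4];            // square graph 0-1-2-3-0
        for (int[] e : new int[][]{{0,1},{1,2},{2,3},{3,0}}) {
            adj[e[0]][e[1]] = true; adj[e[1]][e[0]] = true;
        }
        System.out.println(hasHamiltonCycle(adj));      // true
    }
}
```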
A problem apparently similar to the Hamilton cycle problem is the Euler cycle problem. Its historical origin is the problem of the Seven Bridges of Königsberg, solved by Leonhard Euler in 1736.
Example A.11. (Euler cycle problem) [31, §3.2.2] Let G = (V, E) be an undirected graph consisting of n numbered vertices V = {1, . . . , n} and edges E ⊆ V² such that (x, x) ∉ E and (x, y) ∈ E implies (y, x) ∈ E for all x, y ∈ V. An Euler cycle is a closed sequence of edges in which each edge of the graph is visited exactly once. If we briefly denote a cycle by (x_0, x_1, . . . , x_m) with x_0 = x_m = 1, then a necessary condition for it to be Eulerian is that m = |E|. The Euler cycle problem (EC) then is to determine whether a given graph contains an Euler cycle or not. By Euler's theorem [9, §0.8], a connected graph contains an Euler cycle if and only if every vertex has an even number of edges incident upon it. Thus EC is decidable in O(n³) computational steps, counting for each of the n vertices x_j in how many of the at most n² edges (x_j, y) or (y, x_j) ∈ E it is contained. As a search problem, the search space X consists only of the n vertices of the considered graph, and the answer is known after at most n countings of the edges incident on each vertex.
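Euler's criterion leads directly to a short decision procedure. This is a sketch with invented names; connectivity of the graph is assumed rather than tested.

```java
/** Euler's theorem: a connected graph has an Euler cycle iff every vertex has even degree. */
public class EulerCycle {

    /** Counts the degree of each vertex from the adjacency matrix; O(n^2) overall. */
    public static boolean hasEulerCycle(boolean[][] adj) {
        for (int v = 0; v < adj.length; v++) {
            int degree = 0;
            for (int w = 0; w < adj.length; w++)
                if (adj[v][w]) degree++;
            if (degree % 2 != 0) return false;   // an odd vertex excludes an Euler cycle
        }
        return true;                             // all degrees even: Euler cycle exists
    }

    public static void main(String[] args) {
        boolean[][] square = new boolean[4][4];  // cycle 0-1-2-3-0: every degree is 2
        for (int[] e : new int[][]{{0,1},{1,2},{2,3},{3,0}}) {
            square[e[0]][e[1]] = true; square[e[1]][e[0]] = true;
        }
        System.out.println(hasEulerCycle(square)); // true
    }
}
```

In contrast to the exponential search space of HC, this check visits each vertex once, which is why EC is decidable efficiently.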
For each of these search problems, an oracle function is known which is polynomially computable with respect to n, i.e., which has time complexity O(log^k N) and is thus efficient.
Appendix B
Dictionary for mathematics and computer
science
A

B
bound Schranke, Grenze
business information systems Wirtschaftsinformatik

C
calculus Differential- und Integralrechnung
circuit board Platine, Leiterplatte
column Spalte (auch einer Matrix)
complete vollständig
computer science Informatik

D
deduce herleiten
denominator Nenner
die, pl. dice Würfel
digest Auszug, Abriss
disjoint disjunkt
divisor Teiler, Divisor

E
economic order quantity Losgröße
edge Kante
equation Gleichung
equilateral triangle gleichseitiges Dreieck
even number gerade Zahl
evenly divisible ohne Rest teilbar

F
feedback Rückkopplung
finite endlich
fraction Bruch

G
gcd ggT (größter gemeinsamer Teiler)
greatest common divisor größter gemeinsamer Teiler

H
hash feinhacken; vermasseln, verhunzen
heap Heap (wörtl. Haufen), Halde
hence deshalb, also
[the equation] holds [die Gleichung] gilt

IJK
induction assumption Induktionsannahme
induction start Induktionsanfang
induction step Induktionsschritt
infinite unendlich
inflection point Wendepunkt (math. Kurvendiskussion)
initial value Anfangswert
insert einsetzen
integer ganze Zahl
intersection Schnittmenge
invertible umkehrbar
isosceles triangle gleichschenkliges Dreieck

M
mapping Abbildung
merge verschmelzen, zusammenführen
motherboard Hauptplatine
multicriterion optimization Mehrkriterienoptimierung

N
neural network neuronales Netz
node Knoten
numerator Zähler

O
objective function Zielfunktion
obtain erhalten
obvious offensichtlich, klar
odd number ungerade Zahl

P
perpendicular senkrecht
plane math Ebene
pointer Pointer, Zeiger
polygon Polygon, Vieleck
polyhedron Polyeder, Vielflächner
polynomial Polynom; polynomial
power set Potenzmenge
preimage Urbild (e-r Abbildung)
prime number Primzahl
proof Beweis
proposition log Aussage; math Satz, Lehrsatz
prove beweisen

QR
queue (Warte-)Schlange

S
sales figures Verkaufszahlen
satisfy the equation die Gleichung erfüllen, der Gleichung genügen
scalene triangle ungleichseitiges Dreieck
scatterplot Punktwolke, Streudiagramm
self-loop Schlinge (math)
in the sequel im folgenden
sequence Folge (math)
series Reihe (math)
set Menge
slack variable Schlupfvariable (beim Simplexalgorithmus)
spot Ort; Fleck; (Spiel-, Würfel-)Auge
stack Stack (wörtl. Stapel)
subscripted letters indizierte Buchstaben
subset Teilmenge
subtree Teilbaum
suffice genügen
sufficient condition hinreichende Bedingung
suppose annehmen

T
tetrahedron Tetraeder
therefore daher
thread einfädeln, aufreihen; Faden, comp Thread
thus so, also, deshalb
time series Zeitreihe
toss (hoch)werfen; Wurf
total of the digits of Quersumme von
triangle Dreieck

UVW
up to a constant bis auf eine Konstante
upper bound = upper limit obere Schranke, obere Grenze
uppercase letter Großbuchstabe
vertex Ecke, Eckpunkt; Knoten (eines Graphen)

XYZ
yield ergeben
Appendix C
Arithmetical operations
Fundamental operations of arithmetic (Grundrechenarten)

Operation    English                              German
x = y        x equals y                           x ist gleich y
x ≈ y        x is approximately equal to y        x ist ungefähr gleich y
x ≤ y        x is less/smaller than or equals y   x ist kleiner gleich y
x ≥ y        x is greater than or equals y        x ist größer gleich y
x + y        x plus y                             x plus y
x − y        x minus y                            x minus y
x · y        x times y                            x mal y
2 · 3 = 6    two threes are six                   2 mal 3 ist (gleich) 6
x/y          x divided by y, x over y             x (geteilt) durch y
x^y          x to the (power of) y                x hoch y
x^(−y)       x to the minus y                     x hoch minus y
x^2          x squared                            x zum Quadrat
x^3          x cubed                              x hoch 3
x^4          x to the 4th (power)                 x hoch 4
x^5          x to the 5th (power)                 x hoch 5
√x           square root of x                     Wurzel (aus) x
∛x           cube root of x                       dritte Wurzel aus x
ⁿ√x          nth root of x                        n-te Wurzel aus x
n!           n factorial                          n Fakultät
(n over m)   choose m out of n                    n über m, m aus n
⌊x⌋          floor of x                           untere Gaußsche Klammer von x
⌈x⌉          ceiling of x                         obere Gaußsche Klammer von x
∀            for all                              für alle
∃            there exists                         es existiert
Bibliography
[1] A. P. Barth. Algorithmik für Einsteiger. Vieweg, Braunschweig Wiesbaden, 2003.
[2] J. Bather. Decision Theory. An Introduction to Dynamic Programming and Sequential Decisions. John Wiley & Sons, Chichester, 2000.
[3] C. H. Bennett. Logical Depth and Physical Complexity. In R. Herken, editor, The Universal Turing Machine. A Half-Century Survey, pages 207–235. Springer-Verlag, Wien, 1994.
[4] J. A. Buchmann. Introduction to Cryptography. Springer-Verlag, New York, 2001.
[5] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. McGraw-Hill, New York, 2nd edition, 2001.
[6] A. de Vries. The ray attack on RSA cryptosystems. In R. W. Muno, editor, Jahresschrift der Bochumer Interdisziplinären Gesellschaft eV 2002, pages 11–38, Stuttgart, 2003. ibidem-Verlag. http://arxiv.org/abs/cs.CR/0307029.
[7] A. de Vries. Quantum Computation. An Introduction for Engineers and Computer Scientists. Books on Demand, Norderstedt, 2012.
[8] A. de Vries and V. Weiß. Grundlagen der Programmierung. Vorlesungsskript, Hagen, 2007. http://www3.fh-swf.de/fbtbw/devries/download/java.pdf.
[9] R. Diestel. Graphentheorie. Springer-Verlag, Berlin Heidelberg, 2nd edition, 2000.
[10] W. Domschke and A. Drexl. Einführung in Operations Research. Springer-Verlag, Berlin Heidelberg, 5th edition, 2002.
[11] T. Ellinger, G. Beuermann, and R. Leisten. Operations Research. Eine Einführung. Springer-Verlag, Berlin Heidelberg, 4th edition, 1998.
[12] M. Falk, R. Becker, and F. Marohn. Angewandte Statistik mit SAS. Eine Einführung. Springer-Verlag, Berlin Heidelberg, 2nd edition, 1995.
[13] O. Forster. Analysis 1. Vieweg, Wiesbaden, 9th edition, 2008.
[14] D. Fudenberg and J. Tirole. Game Theory. MIT Press, Cambridge, 1991.
[15] I. Gerdes, F. Klawonn, and R. Kruse. Evolutionäre Algorithmen. Genetische Algorithmen, Strategien und Optimierungsverfahren, Beispielanwendungen. Vieweg, Wiesbaden, 2004.
[16] R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete Mathematics. Addison-Wesley, Upper Saddle River, NJ, 2nd edition, 1994.
[17] L. K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett., 79(2):325, 1997.
[18] H. P. Gumm and M. Sommer. Einführung in die Informatik. Oldenbourg Verlag, München, 2008.
[19] R. H. Güting. Datenstrukturen und Algorithmen. B. G. Teubner, Stuttgart, 1997.
[20] D. Harel and Y. Feldman. Algorithmik. Die Kunst des Rechnens. Springer-Verlag, Berlin Heidelberg, 2006.
[21] J. Havil. Verblüfft?! Springer-Verlag, Berlin Heidelberg, 2009.
[22] V. Heun. Grundlegende Algorithmen. Vieweg, Braunschweig Wiesbaden, 2000.
[23] D. W. Hoffmann. Theoretische Informatik. Carl Hanser Verlag, München, 2009.
[24] M. J. Holler and G. Illing. Einführung in die Spieltheorie. Springer-Verlag, Berlin Heidelberg New York, 3rd edition, 1996.
[25] F. Kaderali and W. Poguntke. Graphen, Algorithmen, Netze. Grundlagen und Anwendungen in der Nachrichtentechnik. Vieweg, Braunschweig Wiesbaden, 1995.
[26] S. C. Kleene. Turing's Analysis of Computability, and Major Applications of It. In R. Herken, editor, The Universal Turing Machine. A Half-Century Survey, Wien, 1994. Springer-Verlag.
[27] D. E. Knuth. The Art of Computer Programming. Volume 1: Fundamental Algorithms. Addison-Wesley, Reading, 3rd edition, 1997.
[28] D. E. Knuth. The Art of Computer Programming. Volume 3: Sorting and Searching. Addison-Wesley, Reading, 3rd edition, 1998.
[29] S. O. Krumke and H. Noltemeier. Graphentheoretische Konzepte und Algorithmen. Teubner, Wiesbaden, 2005.
[30] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin Heidelberg, 3rd edition, 1996.
[31] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.
[32] T. Ottmann and P. Widmayer. Algorithmen und Datenstrukturen. Spektrum Akademischer Verlag, Heidelberg Berlin, 4th edition, 2002.
[33] F. Padberg. Elementare Zahlentheorie. Spektrum Akademischer Verlag, Heidelberg Berlin, 2nd edition, 1996.
[34] C. H. Papadimitriou. Computational Complexity. Addison-Wesley, Reading, Massachusetts, 1994.
[35] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C++. The Art of Scientific Computing. Cambridge University Press, Cambridge, 2nd edition, 2002.
[36] B. Schneier. Angewandte Kryptographie. Addison-Wesley, Bonn, 1996.
[37] R. Sedgewick. Algorithmen in C. Addison-Wesley, Bonn, 1992.
[38] K. Sydsæter and P. Hammond. Mathematik für Wirtschaftswissenschaftler. Pearson Studium, München, 2004.
[39] H. Tempelmeier. Material-Logistik. Modelle und Algorithmen für die Produktionsplanung und -steuerung in Advanced-Planning-Systemen. Springer-Verlag, Berlin Heidelberg New York, 2006.
[40] A. Törn and A. Žilinskas. Global Optimization. Lecture Notes in Computer Science, 350, 1989.

Links
1. http://www.nist.gov/dads/ NIST Dictionary of Algorithms and Data Structures
2. http://riot.ieor.berkeley.edu/ RIOT Optimization Testbed
Index
<<< , 46
R+ , 18
gcd, 16
action, 75
acyclic, 65
adjacency list, 63
adjacency matrix, 62
adjacent, 62
Akra-Bazzi theorem, 30
algebra, 6
algorithm, 15
Dijkstra's -, 60, 71
Euclid's -, 10, 28
evolutionary -, 93
Floyd-Warshall -, 60, 69
genetic -, 60, 94
greedy -, 59
recursive -, 25
simplex -, 59, 86
Wagner-Whitin -, 60, 80
allele, 94
alphabet, 43
ant colony optimization, 60
artificial intelligence, 94
assignment, 12
asymptotically tightly bounded, 19
average-case analysis, 20
Axelrod's genetic algorithm, 102
backwards induction, 77
balanced divide and conquer algorithm, 36
Bellman functional equation, 77
Bellman's Optimality Principle, 70, 75
BFS algorithm, 63
bit rotation, 46
Boolean expression, 13
brute force, 94
bucket, 45
bucket sort, 36
candidate solution, 55
capacity, 45
chromosome, 94
circular left-shifting, 46
class diagram, 64
collision, 45, 48
combinatorial optimization, 56
common divisor, 16
comparison sort algorithm, 35
complexity, 17
complexity class, 18
condition, 13
conjugate, 90
constraint, 56
constraints
primary, 85
control structure, 12
correctness, 16
cost function, 75
crossover, 95, 102
cryptography, 109
cu, 78
currency unit, 78
cycle, 65
decision, 70, 73, 75
decision problem, 66, 77
decision space, 77
decision vector, 75
depth-first search, 64
digit, 19
Dijkstra algorithm, 71
Dijkstra's algorithm, 60
discrete optimization, 56
distance, 67
divide and conquer, 36
divides, 105
divisor, 105
dominates, 18
duality, 90
dynamic programming, 70, 73
edge, 61
efficient, 21
elite principle, 96
empty word, 44
Euclidean algorithm, 10, 28
Euclidean space, 55
Euler cycle, 66, 110
evolution, 94
evolution strategy, 60, 93
evolutionary
algorithm, 60, 93
programming, 60, 93
exception, 14
exchange, 12
exhaustion, 43, 60, 73
exhaustive, 94
expansion
b-adic -, 18
exponential time complexity, 20
extended Euclidean algorithm, 28
factorial, 25
Fibonacci heap, 72
fitness function, 95
floor-brackets, 9
Floyd-Warshall algorithm, 60, 69
game theory, 102
Gauß-brackets, 9
gene, 94
generation, 96
genetic algorithm, 60, 94
genetic operator, 95
genotype, 94
golden ratio, 15
graph, 61
weighted -, 67
Gray code, 99
greatest common divisor, 10, 16
greedy algorithm, 59
Hamilton cycle, 110
Hamiltonian cycle, 65, 66
Hamiltonian cycle problem, 66
Hamming distance, 98
hash table, 45
hash value, 45
hash-function, 45
hashing, 47
HashMap, 43
HashSet, 43
HC, 66
heuristic, 103
Huffman code, 60
hyperspace, 55
independent, 52
individual, 94
input, 15
insertion sort, 34
instruction, 12
integers, 105
ISBN, 45
key comparison sort algorithm, 35
knowledge, 94, 100
Landau symbol, 18
lattice, 56
law of motion, 75
learn, 102
length, 62, 67
letter, 43
linear optimization problem, 86
linear probing, 52
linear programming, 86
load factor, 50
loci, 94
logarithmic time complexity, 20
loop, 13
Master theorem, 29
maximum problem, 55
MD5, 47