FUNDAMENTALS OF ALGORITHMICS
Gilles Brassard and Paul Bratley
PRENTICE HALL
Englewood Cliffs, New Jersey 07632
Library of Congress Cataloging-in-Publication Data
BRASSARD, GILLES
Fundamentals of Algorithmics / Gilles Brassard and Paul Bratley
p. cm.
Includes bibliographical references and index.
ISBN 0-13-335068-1
1. Algorithms. I. Bratley, Paul. II. Title
QA9.58.B73 1996   95-45581
511'.8-dc20
The authors and publisher of this book have used their best efforts in preparing this book. These
efforts include the development, research, and testing of the theories and formulas to determine
their effectiveness. The author and publisher shall not be liable in any event for incidental or
consequential damages in connection with, or arising out of, the furnishing, performance, or use
of these formulas.
All rights reserved. No part of this book may be reproduced, in any form or by any means, with-
out permission in writing from the publisher.
ISBN 0-13-335068-1
Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro
To our parents
Contents
PREFACE xv
* 1 PRELIMINARIES
1.1 Introduction 1
1.2 What is an algorithm? 1
1.3 Notation for programs 6
1.4 Mathematical notation 7
1.4.1 Propositional calculus 7
1.4.2 Set theory 8
1.4.3 Integers, reals and intervals 8
1.4.4 Functions and relations 9
1.4.5 Quantifiers 10
1.4.6 Sums and products 11
1.4.7 Miscellaneous 12
1.5 Proof technique 1 - Contradiction 13
1.6 Proof technique 2 - Mathematical induction 16
1.6.1 The principle of mathematical induction 18
1.6.2 A horse of a different colour 23
1.6.3 Generalized mathematical induction 24
1.6.4 Constructive induction 27
1.7 Some reminders 31
1.7.1 Limits 31
1.7.2 Simple series 34
1.7.3 Basic combinatorics 38
1.7.4 Elementary probability 41
1.8 Problems 48
1.9 References and further reading 55
* 2 ELEMENTARY ALGORITHMICS 57
2.1 Introduction 57
2.2 Problems and instances 58
2.3 The efficiency of algorithms 59
2.4 Average and worst-case analyses 61
2.5 What is an elementary operation? 64
2.6 Why look for efficiency? 66
2.7 Some examples 67
2.7.1 Calculating determinants 68
2.7.2 Sorting 68
2.7.3 Multiplication of large integers 70
2.7.4 Calculating the greatest common divisor 71
2.7.5 Calculating the Fibonacci sequence 72
2.7.6 Fourier transforms 73
2.8 When is an algorithm specified? 74
2.9 Problems 74
2.10 References and further reading 78
* 3 ASYMPTOTIC NOTATION 79
3.1 Introduction 79
3.2 A notation for "the order of" 79
3.3 Other asymptotic notation 85
3.4 Conditional asymptotic notation 88
3.5 Asymptotic notation with several parameters 91
3.6 Operations on asymptotic notation 91
3.7 Problems 92
3.8 References and further reading 97
* 4 ANALYSIS OF ALGORITHMS 98
4.1 Introduction 98
4.2 Analysing control structures 98
4.2.1 Sequencing 98
4.2.2 "For" loops 99
4.2.3 Recursive calls 101
4.2.4 "While" and "repeat" loops 102
* 6 GREEDY ALGORITHMS 187
* 7 DIVIDE-AND-CONQUER 219
7.1 Introduction: Multiplying large integers 219
7.2 The general template 223
7.3 Binary search 226
7.4 Sorting 228
7.4.1 Sorting by merging 228
7.4.2 Quicksort 231
7.5 Finding the median 237
7.6 Matrix multiplication 242
7.7 Exponentiation 243
7.8 Putting it all together: Introduction to cryptography 247
7.9 Problems 250
7.10 References and further reading 257
REFERENCES 501
INDEX 517
Preface
the beginning of this preface, is nowadays more valid than ever, because the faster
your computing equipment, the more you stand to gain from efficient algorithms.
Our book is not a programming manual. Still less is it a "cookbook" containing
a long catalogue of programs ready to be used directly on a machine to solve certain
specific problems, but giving at best a vague idea of the principles involved in their
design. On the contrary, it deals with algorithmics: the systematic study of the
design and analysis of algorithms. The aim of our book is to give readers some
basic tools needed to develop their own algorithms, in whatever field of application
they may be required.
We concentrate on the fundamental techniques used to design and analyse
efficient algorithms. These techniques include greedy algorithms, divide-and-
conquer, dynamic programming, graph techniques, probabilistic algorithms and
parallel algorithms. Each technique is first presented in full generality. Thereafter
it is illustrated by concrete examples of algorithms taken from such different ap-
plications as optimization, linear algebra, cryptography, computational number
theory, graph theory, operations research, artificial intelligence, and so on. We pay
special attention to integrating the design of algorithms with the analysis of their
efficiency. Although our approach is rigorous, we do not neglect the needs of
practitioners: besides illustrating the design techniques employed, most of the
algorithms presented also have real-life applications.
To profit fully from this book, you should have some previous programming
experience. However, we use no particular programming language, nor are the
examples for any particular machine. This and the general, fundamental treatment
of the material ensure that the ideas presented here will not lose their relevance.
On the other hand, you should not expect to be able to use directly the algorithms
we give: you will always be obliged to make the necessary effort to transcribe them
into some appropriate programming language. The use of Pascal or a similarly
structured language will help reduce this effort to the minimum necessary.
Our book is intended as a textbook for an undergraduate course in algorithmics.
Some 500 problems are provided to help the teacher find homework assignments.
The first chapter includes most of the required mathematical preliminaries. In par-
ticular, it features a detailed discussion of mathematical induction, a basic skill too
often neglected in undergraduate computer science education. From time to time
a passage requires more advanced mathematical knowledge, but such passages
can be skipped on the first reading with no loss of continuity. Our book can also
be used for independent study: anyone who needs to write better, more efficient
algorithms can benefit from it.
To capture the students' attention from the outset, it is particularly effective to
begin the first lecture with a discussion of several algorithms for a familiar task such
as integer multiplication. James A. Foster, who used preliminary versions of this
book at the University of Idaho, described his experience in the following terms:
"My first lecture began with a discussion of 'how do you multiply two numbers'.
This led to what constitutes the size of the input, and to an analysis of the classical
algorithm. I then showed multiplication a la russe, with which they were soon
taken. We then discussed the divide-and-conquer algorithm (Section 7.1). All of
this was done informally, but at the end of the class (a single lecture, mind you) they
Preliminaries
1.1 Introduction
In this book we shall be talking about algorithms and about algorithmics. This
introductory chapter begins by defining what we mean by these two words. We
illustrate this informal discussion by showing several ways to do a straightforward
multiplication. Even such an everyday task has hidden depths! We also take the
opportunity to explain why we think that the study of algorithms is both useful
and interesting.
Next we explain the notation we shall use throughout the book for describing
algorithms. The rest of the chapter consists essentially of reminders of things we
expect the reader to have seen already elsewhere. After a brief review of some
standard mathematical notation, we recall two useful proof techniques: proof by
contradiction and proof by mathematical induction. Next we list some results
concerning limits, the sums of series, elementary combinatorics and probability.
A reader familiar with these topics should read Sections 1.2 and 1.3, then sim-
ply glance through the rest of the chapter, skipping material that is already known.
Special attention should be paid to Section 1.6.4. Those whose basic maths and com-
puter science are rusty should at least read through the main results we present
to refresh their memories. Our presentation is succinct and informal, and is not
intended to take the place of courses on elementary analysis, calculus or program-
ming. Most of the results we give are needed later in the book; conversely, we try in
later chapters not to use results that go beyond the basics catalogued in this chapter.
1.2 What is an algorithm?
In the first twelve chapters of this book, unless the context clearly indicates
the contrary, we assume that an algorithm is a set of rules for calculating the cor-
rect answer to some problem. Chapter 13, on the other hand, deals entirely with
approximate algorithms and heuristics.
Algorithmics can now be defined simply as the study of algorithms. When we
set out to solve a problem, there may be a choice of algorithms available. In this
case it is important to decide which one to use. Depending on our priorities and on
the limits of the equipment available to us, we may want to choose the algorithm
that takes the least time, or that uses least storage, or that is easiest to program, and
so on. The answer can depend on many factors, such as the numbers involved, the
way the problem is presented, or the speed and storage capacity of the available
computing equipment. It may be that none of the available algorithms is entirely
suitable so that we have to design a new algorithm of our own. Algorithmics is
the science that lets us evaluate the effect of these various external factors on the
available algorithms so that we can choose the one that best suits our particular
circumstances; it is also the science that tells us how to design a new algorithm for
a particular task.
Take elementary arithmetic as an example. Suppose you have to multiply two
positive integers using only pencil and paper. If you were raised in North America,
the chances are that you will multiply the multiplicand successively by each figure
of the multiplier taken from right to left, that you will write these intermediate
results one beneath the other shifting each line one place left, and that finally you
will add all these rows to obtain your answer. Thus to multiply 981 by 1234 you
would produce an arrangement like that of Figure 1.1(a). If, on the other hand, you
went to school in England, you would be more likely to work from left to right,
producing the arrangement shown in Figure 1.1(b).
981 981
1234 1234
3924 981
2943 1962
1962 2943
981 3924
1210554 1210554
(a) (b)
Figure 1.1. Multiplication (a) American (b) English
These two algorithms for multiplication are very similar: so similar, in fact, that
we shall refer to them as the "classic" multiplication algorithm, without worrying
precisely which one we mean. A third, different algorithm for doing the same thing
is illustrated in Figure 1.2.
Write the multiplicand and the multiplier side by side. Make two columns,
one under each operand, by repeating the following rule until the number in the
left-hand column is 1: divide the number in the left-hand column by 2, ignoring
any fractions, and double the number in the right-hand column by adding it to
itself. Next cross out each row where the number in the left-hand column is even,
and finally add up the numbers that remain in the right-hand column. The figure
illustrates how to multiply 981 by 1234. The answer obtained is 1210554.
Now to multiply 0981 by 1234 we first multiply the left half of the multiplicand
(09) by the left half of the multiplier (12), and write the result (108) shifted left as
many places as there are figures in the multiplier: four, in our example. Next we
multiply the left half of the multiplicand (09) by the right half of the multiplier
(34), and write the result (306) shifted left by half as many places as there are
figures in the multiplier: two, in this case. Thirdly we multiply the right half of the
multiplicand (81) by the left half of the multiplier (12), and write the result (972)
also shifted left by half as many places as there are figures in the multiplier; and
fourthly we multiply the right half of the multiplicand (81) by the right half of the
multiplier (34) and write the result (2754), not shifted at all. Finally we add up the
four intermediate results as shown in Figure 1.3 to obtain the answer 1210554.
If you have followed the working of the algorithm so far, you will have seen that we
have reduced the multiplication of two four-figure numbers to four multiplications
of two-figure numbers (09 x 12, 09 x 34, 81 x 12 and 81 x 34) together with a
certain number of shifts and a final addition. The trick is to observe that each
of these multiplications of two-figure numbers can be carried out in exactly the
same way, except that each multiplication of two-figure numbers requires four
multiplications of one-figure numbers, some shifts, and an addition. For instance,
Figure 1.4 shows how to multiply 09 by 12. We calculate 0 x 1 = 0, shifted left
two places; 0 x 2 = 0, shifted left one place; 9 x 1 = 9, shifted left one place; and
9 x 2 = 18, not shifted. Finally we add these intermediate results to obtain the
answer 108. Using these ideas the whole of our calculation can be carried out in
such a way that the multiplications involve only one-figure operands. (Although
we described Figure 1.3 before Figure 1.4, this was only to simplify the presentation.
Of course we have to do the four multiplications of two-figure numbers first, since
we use the values thus calculated when we do the multiplication of the four-figure
numbers.)
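If you would like to experiment with this reduction, here is one possible Python sketch of it (the function name and the way the operands are split are our own choices, not the book's); it reduces the multiplication of two numbers of at most digits figures, where digits is a power of 2, to four half-size multiplications plus shifts and a final addition.

def multiply_dc(m, n, digits):
    # Multiply two nonnegative integers of at most `digits` figures each,
    # where `digits` is a power of 2, by the scheme described above.
    if digits == 1:
        return m * n                      # one-figure operands: multiply directly
    half = digits // 2
    shift = 10 ** half
    m_left, m_right = divmod(m, shift)    # split the multiplicand into two halves
    n_left, n_right = divmod(n, shift)    # split the multiplier into two halves
    return (multiply_dc(m_left, n_left, half) * 10 ** digits
            + multiply_dc(m_left, n_right, half) * shift
            + multiply_dc(m_right, n_left, half) * shift
            + multiply_dc(m_right, n_right, half))

print(multiply_dc(981, 1234, 4))          # 1210554, as in Figure 1.3

Replacing the four recursive calls by three is precisely the improvement discussed in the next paragraph and studied in Chapter 7.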
This unusual algorithm is an example of the technique called "divide-and-
conquer", which we shall study in Chapter 7. If you think it unlikely that it could
outperform the classic algorithm, you are perfectly right. However we shall see in
Chapter 7 that it is possible to reduce the multiplication of two large numbers to
three, and not four, multiplications of numbers roughly half the size, together with
a certain number of shifts and additions. (If you are stimulated by challenges, try
to figure out how to do this!) With this improvement, the divide-and-conquer mul-
tiplication algorithm runs faster on a computer than any of the preceding methods,
provided the numbers to be multiplied are sufficiently large. (Still faster methods
are known for very large operands.) It is not absolutely necessary for the length
of the operands to be a power of two, nor that they have the same length. Prob-
lem 1.6 shows one case where the algorithm can be useful in practice, even when
the operands are relatively small, and even when we use four submultiplications
instead of three.
The point of all these examples is that, even in an area as simple as elemen-
tary arithmetic, there may be several algorithms available to us for carrying out
the necessary operations. One may appeal by its familiarity, a second because of
the elementary nature of the intermediate calculations involved, or a third by its
speed on a machine. It is by making a more formal study of the properties of
the algorithms-by using algorithmics, in other words-that we can make a wise
choice of the technique to use in any given situation. As we shall see, a good choice
can save both money and time; in some cases, it can make all the difference between
success and failure when solving a large, hard problem. The aim of our book is to
teach you how to make such choices.
function russe(m, n)
    result ← 0
    repeat
        if m is odd then result ← result + n
        m ← m ÷ 2
        n ← n + n
    until m = 0
    return result
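Transcribed into Python, for instance, the algorithm might look as follows (our own sketch; the book deliberately avoids committing to any particular programming language).

def russe(m, n):
    # Multiplication a la russe: add up the values of n on the rows where m is odd.
    result = 0
    while m > 0:
        if m % 2 == 1:
            result = result + n
        m = m // 2          # halve m, ignoring fractions
        n = n + n           # double n
    return result

print(russe(981, 1234))     # 1210554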
1.4 Mathematical notation
This section reviews some mathematical notation that we shall use throughout the
book. Our review is succinct, as we expect the reader to be familiar with most
of it. Nevertheless, you are encouraged to read it at least summarily because we
introduce most of the symbols that will be used, and some of them (such as [i..j],
∀, ∃, lg, ⌊x⌋, ÷ and ℝ⁺) are not universally accepted.
An interval is a set of real numbers lying between two bounds. Let a and b be
two real numbers such that a ≤ b. The open interval (a, b) denotes
    {x ∈ ℝ | a < x < b},
whereas the closed interval [a, b] denotes
    {x ∈ ℝ | a ≤ x ≤ b}.
We shall also use the half-open intervals
    (a, b] = {x ∈ ℝ | a < x ≤ b}
and
    [a, b) = {x ∈ ℝ | a ≤ x < b}.
Moreover, a = -∞ and b = +∞ are allowed with their obvious meaning provided
they fall on the open side of an interval.
An integer interval is a set of integers lying between two bounds. Let i and j
be two integers such that i ≤ j + 1. The integer interval [i..j] denotes
    {n ∈ ℤ | i ≤ n ≤ j}.
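For programmers, the integer interval [i..j] corresponds to Python's range(i, j + 1); the toy function below is our own illustration, not part of the book.

def integer_interval(i, j):
    # The integer interval [i..j]: all integers n with i <= n <= j.
    return list(range(i, j + 1))

print(integer_interval(3, 7))   # [3, 4, 5, 6, 7]
print(integer_interval(4, 3))   # [] -- the empty interval allowed by i <= j + 1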
which is true of the odd integers and false of the even integers. There is also a
natural interpretation of Boolean formulas in terms of predicates. For instance,
one can define a predicate P : {true, false}³ → {true, false} by
1.4.5 Quantifiers
The symbols ∀ and ∃ are pronounced "for all" and "there exists", respectively.
To illustrate this, consider an arbitrary set X and a property P on X. We write
(∀x ∈ X) [P(x)] to mean "every x in X has property P". Similarly,
    (∃x ∈ X) [P(x)]
means "there exists at least one x in X that has property P". Finally, we write
(∃!x ∈ X) [P(x)] to mean "there exists exactly one x in X that has property P".
If X is the empty set, (∀x ∈ X) [P(x)] is always vacuously true-try to find a coun-
terexample if you disagree!-whereas (∃x ∈ X) [P(x)] is always trivially false.
Consider the following three concrete examples.
    (∀n ∈ ℕ) [∑_{i=1}^{n} i = n(n + 1)/2]
    (∃!n ∈ ℕ) [n ≥ 1 and ∑_{i=1}^{n} i = n²]
    (∃m, n ∈ ℕ) [m > 1, n > 1 and mn = 12573]
These examples state that the well-known formula for the sum of the first n integers
is always valid (see Section 1.7.2), that this sum is also equal to n2 for exactly one
positive value of n, and that 12573 is a composite integer, respectively.
An alternation of quantifiers may be used in a single expression. For instance,
    (∀n ∈ ℕ) (∃m ∈ ℕ) [m > n]
says that for every natural number, there exists another natural number larger
still. When using alternation of quantifiers, the order in which the quantifiers are
presented is important. For instance, the statement (∃m ∈ ℕ) (∀n ∈ ℕ) [m > n]
is obviously false: it would mean that there is an integer m that is larger than every
natural number (including m itself!).
Provided the set X is infinite, it is sometimes useful to say that not only is
there an x ∈ X such that property P(x) holds, but that there are infinitely many
of them. The appropriate quantifier in this case is ∃∞. For instance, (∃∞ n ∈ ℕ)
[n is prime]. Note that ∃∞ is stronger than ∃ but weaker than ∀. Another useful
quantifier, stronger than ∃∞ but still weaker than ∀, is ∀∞, which is used when
a property holds in all cases except possibly for a finite number of exceptions.
For instance, (∀∞ n ∈ ℕ) [if n is prime, then n is odd] means that prime numbers
are always odd, except possibly for a finite number of exceptions (in this case there
is exactly one exception: 2 is both prime and even).
When we are interested in properties of the natural numbers, there is an equiv-
alent definition for these quantifiers, and it is often better to think of them accord-
ingly. A property P of the natural numbers holds infinitely often if, no matter how
large m is, there is an n > m such that P(n) holds. Similarly, property P holds on
all natural numbers except possibly for a finite number of exceptions if there is an
integer m such that P(n) holds for all integers n > m. In the latter case, we say
that "property P holdsfor all sufficiently large integers". Formally,
whereas
The duality principle for quantifiers says that "it is not the case that property P
holds for all x ∈ X if and only if there exists at least one x ∈ X for which property
P does not hold". In other words,
    ¬ (∀x ∈ X) [P(x)]  is equivalent to  (∃x ∈ X) [¬P(x)].
Similarly,
    ¬ (∃x ∈ X) [P(x)]  is equivalent to  (∀x ∈ X) [¬P(x)].
1.4.6 Sums and products
The notation
    ∑_{P(i)} f(i)
denotes the sum of f (i) for all the integers i such that P(i) holds. This sum may
not be well-defined if it involves an infinite number of integers. We may also use
a mixed notation, such as
    ∑_{i=1, P(i)}^{n} f(i)
which denotes the sum of the values taken by f on those integers between 1 and n
for which property P holds. If there are no such integers, the sum is 0. For example,
    ∑_{i=1, i odd}^{10} i = 1 + 3 + 5 + 7 + 9 = 25.
The notation ∏_{i=1}^{n} f(i) is pronounced "the product of f(i) as i goes from 1 to n". In the case n = 0, the
product is defined to be 1. This notation is generalized in the same way as the sum
notation.
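These conventions carry over directly to a language such as Python; the short snippet below (our own illustration) reproduces the example above together with the empty-sum and empty-product conventions.

from math import prod

# Sum of i over 1 <= i <= 10 restricted to odd values of i, as in the example above.
print(sum(i for i in range(1, 11) if i % 2 == 1))   # 25

# The sum over an empty range of indices is 0, and the empty product is 1.
print(sum(i for i in range(1, 1)))                  # 0
print(prod(i for i in range(1, 1)))                 # 1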
1.4.7 Miscellaneous
If b ≠ 1 and x are strictly positive real numbers, then log_b x, pronounced "the log-
arithm of x in base b", is defined as the unique real number y such that b^y = x.
For instance, log_10 1000 = 3. Note that although b and x must be positive, there
is no such restriction on y. For instance, log_10 0.001 = -3. When the base b is not
specified, we take it to be e = 2.7182818..., the base of the so-called natural loga-
rithm. (Some authors take the base to be 10 when it is not specified and denote the
natural logarithm by "ln".) In algorithmics, the base most often used for logarithms
is 2, which deserves a notation of its own: lg x is short for log_2 x. Although we
assume that the reader is familiar with logarithms, let us recall the most important
logarithmic identities:
    log_a (xy) = log_a x + log_a y,
    log_a (x^y) = y log_a x,   and
    log_b x = log_a x / log_a b.
Remember too that log log n is the logarithm of the logarithm of n, but log² n is
the square of the logarithm of n.
If x is a real number, ⌊x⌋ denotes the largest integer that is not larger than x,
called the floor of x. For instance, ⌊3½⌋ = 3. When x is positive, ⌊x⌋ is the in-
teger you obtain by discarding the fractional part of x if there is one. When x is
negative and not itself an integer, however, ⌊x⌋ is smaller than this by 1. For in-
stance, ⌊-3½⌋ = -4. Similarly, we define the ceiling of x, denoted by ⌈x⌉, as the
smallest integer that is not smaller than x. Note that x - 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1
for all x.
If m ≥ 0 and n > 0 are integers, m/n denotes as always the result of dividing
m by n, which is not necessarily an integer. For instance, 7/2 = 3½. We denote the
integer quotient by the symbol "÷", so that 7 ÷ 2 = 3. Formally, m ÷ n = ⌊m/n⌋. We also
use mod to denote the "modulo" operator defined by
    m mod n = m - n × (m ÷ n).
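For readers who program in Python, these operators correspond to the following built-ins, at least when m is nonnegative and n is positive (a small sketch of ours).

import math

print(math.floor(3.5), math.ceil(3.5))      # 3 4
print(math.floor(-3.5), math.ceil(-3.5))    # -4 -3

m, n = 7, 2
print(m // n)                               # 3, the quotient 7 ÷ 2 = ⌊7/2⌋
print(m % n)                                # 1
assert m % n == m - n * (m // n)            # m mod n = m - n × (m ÷ n)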
Theorem 1.5.1   There are infinitely many prime numbers.
Proof  Let P denote the set of all prime numbers. Assume for a contradiction that P is a
finite set. The set P is not empty since it contains at least the integer 2. Since P is
finite and nonempty, it makes sense to multiply all its elements. Let x denote that
product, and let y denote x + 1. Consider the smallest integer d that is larger than
1 and that is a divisor of y. Such an integer certainly exists since y is larger than 1
and we do not require that d be different from y. First note that d itself is prime, for
otherwise any proper divisor of d would also divide y and be smaller than d, which
would contradict the definition of d. (Did you notice that the previous sentence is
itself a proof by contradiction, nested in the larger scheme of things?) Therefore,
according to our assumption that P contains each and every prime, d belongs to P.
This shows that d is also a divisor of x since x is the product of a collection of
integers including d. We have reached the conclusion that d exactly divides both
x and y. But recall that y = x + 1. Therefore, we have obtained an integer d larger
than 1 that divides two consecutive integers x and y. This is clearly impossible: if
indeed d divides x, then the division of y by d will necessarily leave 1 as remainder.
The inescapable conclusion is that the original assumption was equally impossible.
But the original assumption was that the set P of all primes is finite, and therefore
its impossibility establishes that the set P is in fact infinite. ■
For the constructively-minded reader (which every algorithmicist should be
at heart!), this proof of Euclid's can be turned into an algorithm, albeit not a very
efficient one, capable of finding a new prime given any finite set of primes.
function Newprime(P)
    x ← the product of the elements of P
    y ← x + 1
    d ← 1
    repeat d ← d + 1 until d divides y
    return d
Euclid's proof establishes that the value returned by Newprime(P) is a prime
number that does not belong to P. But who needs Euclid when writing an algorithm
for this task? What about the following, much simpler algorithm?
function DumpEuclid(P)
    x ← the largest element of P
    repeat x ← x + 1 until x is prime
    return x
The only danger would be that DumpEuclid might loop forever if there were no prime number
larger than the largest element of P. Naturally, this situation cannot occur because there is no such thing as "the
largest prime", but Euclid's proof is needed to establish this. In sum, DumpEuclid
does work, but the proof of its termination is not immediate. In contrast, the fact
that Newprime always terminates is obvious (in the worst case it will terminate
when d reaches the value y), but the fact that it returns a new prime requires
proof.
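To see Newprime at work, here is a direct Python transcription (our own sketch; the sample sets below are merely illustrative).

from math import prod

def newprime(primes):
    # Following Euclid's proof: the smallest divisor d > 1 of
    # (product of the given primes) + 1 is a prime that is not in the set.
    y = prod(primes) + 1
    d = 2
    while y % d != 0:
        d += 1
    return d

print(newprime({2, 3, 5, 7}))           # 211: here 2*3*5*7 + 1 is itself prime
print(newprime({2, 3, 5, 7, 11, 13}))   # 59: 30030 + 1 = 30031 = 59 * 509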
We have just seen that it is sometimes possible to turn a mathematical proof
into an algorithm. Unfortunately, this is not always the case when the proof is by
contradiction. We illustrate this with an elegant example.
Theorem 1.5.2   There exist two irrational numbers x and y such that xʸ is
rational.
Proof  Assume for a contradiction that xʸ is necessarily irrational whenever both x and y
are irrational. It is well known that √2 is irrational, so by this assumption z = (√2)^√2
is irrational, and applying the assumption once more, so is z^√2. But
    z^√2 = ((√2)^√2)^√2 = (√2)^(√2 × √2) = (√2)² = 2.
We have arrived at the conclusion that 2 is irrational, which is clearly false. We must
therefore conclude that our assumption was false: it must be possible to obtain a
rational number when raising an irrational to an irrational power. ■
Now, how would you turn this proof into an algorithm? Clearly, the algorithm's
purpose should be to exhibit two irrationals x and y such that xʸ is rational.
At first, you may be tempted to say that the algorithm should simply output x = z
(as defined in the proof) and y = √2, since it was proven above that z is irrational
and that z^√2 = 2. But beware! The "proof" that z is irrational depends on the false
assumption that we started with, and therefore this proof is not valid. (It is only
the proof that is not valid. It is in fact true that z is irrational, but this is difficult
to establish.) We must always be careful not to use later an intermediate result
"proved" in the middle of a proof by contradiction.
There is no direct way to extract the required pair (x, y) from the proof of the
theorem. The best you can do is to extract two pairs and claim with confidence
that one of them does the trick-but you will not know which. Such a proof is
called nonconstructive and is not unusual among indirect proofs. Although some
mathematicians do not accept nonconstructive proofs, most see them as perfectly
valid. In any case, we shall as much as possible refrain from using them in the
context of algorithmics.
(not counting the solution obtained by multiplying each of these numbers by 2).
Note that 422481⁴ is a 23-figure number.
Pell's equation provides an even more extreme case of compelling but incorrect
inductive reasoning. Consider the polynomial p(n) = 991n² + 1. The question is
whether there is a positive integer n such that p(n) is a perfect square. If you try
various values for n, you will find it increasingly tempting to assume inductively
that the answer is negative. But in fact a perfect square can be obtained with this
polynomial: the smallest solution is obtained when
many prime numbers (Theorem 1.5.1) and that multiplication a la russe is a correct
algorithm (Theorem 1.6.4) can be proved in a rigorous deductive manner, without
any need for experimental data. Inductive reasonings are to be banned from math-
ematics. Right? Wrong! In reality, mathematics is often very much an experimental
science. It is not unusual that a mathematician will discover a mathematical truth
by considering several special cases and inferring from them by induction a general
rule that seems plausible. For instance, if I notice that
    1³ = 1 = 1²
    1³ + 2³ = 9 = 3²
    1³ + 2³ + 3³ = 36 = 6²
    1³ + 2³ + 3³ + 4³ = 100 = 10²
    1³ + 2³ + 3³ + 4³ + 5³ = 225 = 15²,
I may begin to suspect that the sum of the cubes of the first n positive integers is
always a perfect square. It turns out in this case that inductive reasoning yields a
correct law. If I am even more perceptive, I may realize that this sum of cubes is
precisely the square of the sum of the first n positive integers; see Problem 1.21.
However, no matter how compelling the evidence becomes when more and
more values of n are tried, a general rule of this sort cannot be asserted on the
basis of inductive evidence only. The difference between mathematics and the in-
herently experimental sciences is that once a general mathematical law has been
discovered by induction, we may hope to prove it rigorously by applying the deduc-
tive approach. Nevertheless, induction has its place in the mathematical process.
Otherwise, how could you hope to prove rigorously a theorem whose statement
has not even been formulated? To sum up, induction is necessary for formulating
conjectures and deduction is equally necessary for proving them or sometimes dis-
proving them. Neither technique can take the place of the other. Deduction alone
is sufficient for "dead" or frozen mathematics, such as Euclid's Elements (perhaps
history's highest monument to deductive mathematics, although much of its mate-
rial was no doubt discovered by inductive reasoning). But induction is required to
keep mathematics alive. As Pólya once said, "mathematics presented with rigor is
a systematic deductive science but mathematics in the making is an experimental
inductive science".
Finally, the punch line of this digression: one of the most useful deductive
techniques available in mathematics has the misfortune to be called mathematical
induction. This terminology is confusing, but we must live with it.
function sq(n)
    if n = 0 then return 0
    else return 2n + sq(n - 1) - 1
By induction, it seems obvious that sq(n) = n² for all n ≥ 0, but how could this
be proved rigorously? Is it even true? Let us say that the algorithm succeeds on
integer n whenever sq(n) = n², and that it fails otherwise.
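A short experiment in Python (our own sketch) supplies exactly this kind of inductive evidence.

def sq(n):
    # The algorithm above: sq(0) = 0 and sq(n) = 2n + sq(n - 1) - 1.
    if n == 0:
        return 0
    return 2 * n + sq(n - 1) - 1

# The algorithm appears to succeed on every value we try ...
print(all(sq(n) == n * n for n in range(200)))   # True

Of course, no amount of such evidence proves the general statement; that is precisely what mathematical induction is for.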
Consider any integer n ≥ 1 and assume for the moment that the algorithm
succeeds on n - 1. By definition of the algorithm, sq(n) = 2n + sq(n - 1) - 1. By our
assumption sq(n - 1) = (n - 1)². Therefore
    sq(n) = 2n + (n - 1)² - 1 = 2n + (n² - 2n + 1) - 1 = n².
What have we achieved? We have proved that the algorithm must succeed on n
whenever it succeeds on n - 1, provided n ≥ 1. In addition, it clearly succeeds
on n = 0.
The principle of mathematical induction, described below, allows us to infer from
the above that the algorithm succeeds on all n ≥ 0. There are two ways of under-
standing why this conclusion follows: constructively and by contradiction. Con-
sider any positive integer m on which you wish to prove that the algorithm suc-
ceeds. For the sake of argument, assume that m > 9 (smaller values can be proved
easily). We know already that the algorithm succeeds on 4. From the general rule
that it must succeed on n whenever it succeeds on n - 1 for n ≥ 1, we infer that
it also succeeds on 5. Applying this rule again shows that the algorithm succeeds
on 6 as well. Since it succeeds on 6, it must also succeed on 7, and so on. This
reasoning continues as many times as necessary to arrive at the conclusion that the
algorithm succeeds on m - 1. Finally, since it succeeds on m -1, it must succeed
on m as well. It is clear that we could carry out this reasoning explicitly-with no
need for "and so on"-for any fixed positive value of m.
If we prefer a single proof that works for all n > 0 and that does not contain
and-so-on's, we must accept the axiom of the least integer, which says that every
nonempty set of positive integers contains a smallest element; see Problem 1.24.
The axiom allows us to use this smallest number as a foundation from which to
prove theorems.
Now, to prove the correctness of the algorithm, assume for a contradiction that
there exists at least one positive integer on which the algorithm fails. Let n stand
for the smallest such integer, which exists by the axiom of the least integer. Firstly,
n must be greater than or equal to 5 since we have already verified that sq(i) = i²
when i = 1, 2, 3 or 4. Secondly, the algorithm must succeed on n - 1 for otherwise
n would not be the smallest positive integer on which it fails. But this implies
by our general rule that the algorithm also succeeds on n, which contradicts our
assumption about the choice of n. Therefore such an n cannot exist, which means
that the algorithm succeeds on every positive integer. Since we also know that the
algorithm succeeds on 0, we conclude that sq(n) = n² for all integers n ≥ 0.
We now spell out a simple version of the principle of mathematical induction,
which is sufficient in many cases. A more powerful version of the principle is
given in Section 1.6.3. Consider any property P of the integers. For instance, P(n)
could be "sq(n) = n²", or "the sum of the cubes of the first n integers is equal to
the square of the sum of those integers", or "n³ < 2ⁿ". The first two properties
hold for every n ≥ 0, whereas the third holds provided n ≥ 10. Consider also an
integer a, known as the basis. If P(a) holds and if the truth of P(n) follows from
that of P(n - 1) for every n > a,
then property P(n) holds for all integers n ≥ a. Using this principle, we could
assert that sq(n) = n² for all n ≥ 0, immediately after showing that sq(0) = 0 = 0²
and that sq(n) = n² whenever sq(n - 1) = (n - 1)² and n ≥ 1.
Our first example of mathematical induction showed how it can be used to
prove rigorously the correctness of an algorithm. As a second example, let us see
how proofs by mathematical induction can sometimes be turned into algorithms.
This example is also instructive as it makes explicit the proper way to write a proof
by mathematical induction. The discussion that follows stresses the important
points common to all such proofs.
Consider the following tiling problem. You are given a board divided into
equal squares. There are m squares in each row and m squares in each column,
where m is a power of 2. One arbitrary square of the board is distinguished as
special; see Figure 1.5(a).
[Figure 1.5: (a) a board with one special square; (b) a tile; (c) placing the first tile; (d) a tiling of the board]
You are also given a supply of tiles, each of which looks like a 2 x 2 board with one
square removed, as illustrated in Figure 1.5(b). Your puzzle is to cover the board
with these tiles so that each square is covered exactly once, with the exception of
the special square, which is not covered at all. Such a covering is called a tiling.
Figure 1.5(d) gives a solution to the instance given in Figure 1.5(a).
Theorem 1.6.1   The tiling problem can always be solved.
Proof  The proof is by mathematical induction on the integer n such that m = 2ⁿ.
• Basis: The case n = 0 is trivially satisfied. Here m = 1, and the 1 x 1 "board"
is a single square, which is necessarily special. Such a board is tiled by doing
nothing! (If you do not like this argument, check the next simplest case: if n = 1,
then m = 2 and any 2 x 2 board from which you remove one square looks
exactly like a tile by definition.)
• Induction step: Consider any n ≥ 1. Let m = 2ⁿ. Assume the induction hypoth-
esis that the theorem is true for 2ⁿ⁻¹ x 2ⁿ⁻¹ boards. Consider an m x m board,
containing one arbitrarily placed special square. Divide the board into 4 equal
sub-boards by halving it horizontally and vertically. The original special square
now belongs to exactly one of the sub-boards. Place one tile in the middle of the
original board so as to cover exactly one square of each of the other three sub-
boards; see Figure 1.5(c). Call each of the three squares thus covered "special"
for the corresponding sub-board. We are left with four 2ⁿ⁻¹ x 2ⁿ⁻¹ sub-boards,
each containing one special square. By our induction hypothesis, each of these
sub-boards can be tiled. The final solution is obtained by combining the tilings
of the sub-boards together with the tile placed in the middle of the original
board.
Since the theorem is true when m = 2⁰, and since its truth for m = 2ⁿ follows
from its assumed truth for m = 2ⁿ⁻¹ for all n ≥ 1, it follows from the principle of
mathematical induction that the theorem is true for all m provided m is a power
of 2. ■
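The proof is constructive: it tells us how actually to build a tiling. The Python sketch below (the function name, board representation and labelling scheme are our own) follows the induction step literally, returning a matrix in which the three squares of each tile share a label and the special square is marked -1.

def tile_board(n, special):
    # Tile a 2**n x 2**n board, leaving only the square special = (row, col) uncovered.
    m = 2 ** n
    board = [[0] * m for _ in range(m)]
    board[special[0]][special[1]] = -1
    counter = 0

    def solve(top, left, size, sr, sc):
        nonlocal counter
        if size == 1:
            return                        # a 1 x 1 board: its only square is special
        counter += 1
        label = counter
        half = size // 2
        subproblems = []
        for dr in (0, 1):
            for dc in (0, 1):
                r0, c0 = top + dr * half, left + dc * half
                if r0 <= sr < r0 + half and c0 <= sc < c0 + half:
                    subproblems.append((r0, c0, sr, sc))
                else:
                    # cover the square of this sub-board adjacent to the centre:
                    # the three squares so covered form the tile placed in the middle
                    r = r0 + (half - 1 if dr == 0 else 0)
                    c = c0 + (half - 1 if dc == 0 else 0)
                    board[r][c] = label
                    subproblems.append((r0, c0, r, c))
        for r0, c0, srow, scol in subproblems:
            solve(r0, c0, half, srow, scol)

    solve(0, 0, m, special[0], special[1])
    return board

for row in tile_board(2, (1, 2)):
    print(row)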
The basis step is followed by the induction step, which is usually more sub-
stantial. This should start with "consider any n > a" (or equivalently "consider
any n > a + 1"). It should continue with an explicit statement of the induction hy-
pothesis, which essentially states that we assume P(n -1) to hold. At that point,
it remains to prove that we can infer that P(n) holds assuming the induction hy-
pothesis. Finally, an additional sentence such as the one at the end of the proof
of Theorem 1.6.1 can be inserted to conclude the reasoning, but this is generally
unnecessary.
Concerning the induction hypothesis, it is important to understand that we as-
sume that P (n - 1) holds on a provisionalbasis; we do not really know that it holds
until the theorem has been proved. In other words, the point of the induction step
is to prove that the truth of P(n) would follow logically from that of P(n -1), re-
gardless of whether or not P (n -1) actually holds. If in fact P (n -1) does not hold,
the induction step does not allow us to conclude anything about the truth of P (n).
For instance, consider the statement "n³ < 2ⁿ", which we shall denote P(n).
For positive integer n, it is easy to show that n³ < 2 x (n - 1)³ if and only if n ≥ 5.
Consider any n ≥ 5 and provisionally assume that P(n - 1) holds. Now
    n³ < 2 x (n - 1)³     because n ≥ 5
       < 2 x 2ⁿ⁻¹         by the assumption that P(n - 1) holds
       = 2ⁿ.
Thus we see that P(n) follows logically from P(n - 1) whenever n ≥ 5. Never-
theless P(4) does not hold (it would say 4³ < 2⁴, which is 64 < 16) and therefore
nothing can be inferred concerning the truth of P(5). By trial and error, we find
however that P(10) does hold (10³ = 1000 < 2¹⁰ = 1024). Therefore, it is legitimate
to infer that P(11) holds as well, and from the truth of P(11) it follows that P(12)
holds also, and so on. By the principle of mathematical induction, since P(10)
holds and since P(n) follows from P(n - 1) whenever n ≥ 5, we conclude that
n³ < 2ⁿ is true for all n ≥ 10. It is instructive to note that P(n) holds also for n = 0
and n = 1, but that we cannot use these points as the basis of the mathematical
induction because the induction step does not apply for such small values of n.
It may happen that the property to be proved is not concerned with the set of
all integers not smaller than a given basis. Our tiling puzzle, for instance, concerns
only the set of integers that are powers of 2. Sometimes, the property does not
concern integers at all. For instance, it is not unusual in algorithmics to wish to
prove a property of graphs. (It could even be said that our tiling problem is not
really concerned with integers, but rather with boards and tiles, but that would be
hairsplitting.) In such cases, if simple mathematical induction is to be used, the
property to be proved should first be transformed into a property of the set of all
integers not smaller than some basis point. (An alternative approach is given in
Section 1.6.3.) In our tiling example, we proved that P(m) holds for all powers
of 2 by proving that Q(n) holds for all n > 0, where Q(n) is equivalent to P(2").
When this transformation is necessary, it is customary to begin the proof (as we
did) with the words "The proof is by mathematical induction on such-and-such a
parameter". Thus we find proofs on the number of nodes in a graph, on the length
of a character string, on the depth of a tree, and so on.
Proof We shall prove that any set of horses contains only horses of a single colour. In par-
ticular, this will be true of the set of all horses. Let H be an arbitrary set of horses.
Let us prove by mathematical induction on the number n of horses in H that they
are all the same colour.
• Basis: The case n = 0 is trivially true: if there are no horses in H, then surely
they are all the same colour! (If you do not like this argument, check the next
simplest case: if n = 1, then there is only one horse in H, and again it is
vacuously clear that "they" are "all" the same colour.)
• Induction step: Consider any number n of horses in H. Call these horses
h₁, h₂, ..., hₙ. Assume the induction hypothesis that any set of n - 1 horses
contains only horses of a single colour (but of course the horses in one set
could a priori be a different colour from the horses in another). Let H₁ be the
set obtained by removing horse h₁ from H, and let H₂ be defined similarly;
see Figure 1.6.
    H₁: h₂ h₃ h₄ h₅
    H₂: h₁ h₃ h₄ h₅
Figure 1.6. Horses of the same colour (n = 5)
There are n -1 horses in each of these two new sets. Therefore, the induction
hypothesis applies to them. In particular, all the horses in H₁ are of a single
colour, say c₁, and all the horses in H₂ are also of a single (possibly differ-
ent) colour, say c₂. But is it really possible for colour c₁ to be different from
colour c₂? Surely not, since horse hₙ belongs to both sets and therefore both
c₁ and c₂ must be the colour of that horse! Since all the horses in H belong
to either H₁ or H₂ (or both), we conclude that they are all the same colour
c = c₁ = c₂. This completes the induction step and the proof by mathematical
induction. ■
Before you continue, figure out the fallacy in the above "proof". If you think
the problem is that our induction hypothesis ("any set of n - 1 horses must contain
only horses of a single colour") was absurd, think again!
Solution: The problem is that "hₙ belongs to both sets" is not true for n = 2
since h₂ does not belong to H₂! Our reasoning was impeccable for the basis cases
n = 0 and n = 1. Moreover, it is true that our theorem follows for sets of n horses
assuming that it is true for n - 1, but only when n ≥ 3. We can go from 2 to 3,
from 3 to 4, and so on, but not from 1 to 2. Since the basis cases contain only 0
and 1, and since we are not allowed to go from 1 to 2, the induction step cannot get
started. This small missing link in the proof is enough to invalidate it completely.
We encountered a similar situation when we proved that n³ < 2ⁿ: the induction
step did not apply for n < 5, and thus the fact that the statement is true for n = 0
and n = 1 was irrelevant. The important difference was that n³ < 2ⁿ is true for
n = 10, and therefore also for all larger values of n.
Proof The proof is by generalized mathematical induction. In this case, there is no need
for a basis.
• Induction step: Consider any composite integer n ≥ 4. (Note that 4 is the small-
est positive composite integer, hence it would make no sense to consider smaller
values of n.) Assume the induction hypothesis that any positive composite in-
teger smaller than n can be expressed as a product of prime numbers. (In the
smallest case, n = 4, this hypothesis is vacuously true since there are no positive
composite integers smaller than 4.) Since n is composite, it can be written as
n = ab, where a and b are integers strictly between 1 and n. If a and b are both
prime, then n = ab is already expressed as a product of primes. Otherwise, any
composite factor among a and b is smaller than n, and by the induction hypothesis
it can itself be expressed as a product of primes, hence so can n.
In either case, this completes the proof of the induction step and thus of the
theorem. ■
Until now, the induction hypothesis was always concerned with a finite set
of instances (exactly one for simple mathematical induction, usually many but
sometimes none for generalized mathematical induction). In our final example
of proof by generalized mathematical induction, the induction hypothesis covers
an infinity of cases even when proving the induction step on a finite instance!
This time, we shall prove that multiplication a la russe correctly multiplies any
pair of positive integers. The key observation is that the tableau produced when
multiplying 490 by 2468 is almost identical to Figure 1.2, which was used to multiply
981 by 1234. The only differences are that the first line is missing when multiplying
490 by 2468 and that consequently the term 1234 found in that first line is not added
into the final result; see Figure 1.7. What is the relationship between instances
(981, 1234) and (490, 2468)? Of course, it is that 490 = 981 ÷ 2 and 2468 = 2 x 1234.
defined below. The second example shows how the technique can be useful in the
analysis of algorithms.
The sequence named for Fibonacci, an Italian mathematician of the twelfth
century, is traditionally introduced in terms of rabbits (although this time not out
of a hat). This is how Fibonacci himself introduced it in his Liber Abaci, published
in 1202. Suppose that every month a breeding pair of rabbits produce a pair of
offspring. The offspring will in their turn start breeding two months later, and
so on. Thus if you buy a pair of baby rabbits in month 1, you will still have just
one pair in month 2. In month 3 they will start breeding, so you now have two
pairs; in month 4 they will produce a second pair of offspring, so you now have
three pairs; in month 5 both they and their first pair of offspring will produce baby
rabbits, so you now have five pairs; and so on. If no rabbits ever die, the number
of pairs you have each month will be given by the terms of the Fibonacci sequence,
defined more formally by the following recurrence:
    f₀ = 0, f₁ = 1   and
    fₙ = fₙ₋₁ + fₙ₋₂   for n ≥ 2.
The sequence begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34 ... It has numerous applications
in computer science, in mathematics, and in the theory of games. De Moivre
obtained the following formula, which is easy to prove by mathematical induction
(see Problem 1.27):
    fₙ = (1/√5) [φⁿ - (-1/φ)ⁿ],
where φ = (1 + √5)/2 is the golden ratio. Since 0 < 1/φ < 1, the term (-1/φ)ⁿ can
be neglected when n is large. Hence the value of fₙ is roughly φⁿ/√5, which is
exponential in n.
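As a quick check of de Moivre's formula, a few lines of Python (our own sketch) compare the recurrence with the rounded value of φⁿ/√5.

from math import sqrt

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + sqrt(5)) / 2
# Since |(-1/phi)**n / sqrt(5)| < 1/2, f_n is the integer nearest to phi**n / sqrt(5).
assert all(fib(n) == round(phi ** n / sqrt(5)) for n in range(40))
print(fib(100))   # 354224848179261915075, a 21-figure number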
But where does de Moivre's formula come from? In Section 4.7 we shall see a
general technique for solving Fibonacci-like recurrences. In the meantime, assume
you do not know any such techniques, nor do you know de Moivre's formula,
yet you would like to have an idea of the behaviour of the Fibonacci sequence.
If you compute the sequence for a while, you soon discover that it grows quite
rapidly (f₁₀₀ is a 21-figure number). Thus, the conjecture "the Fibonacci sequence
grows exponentially fast" is reasonable. How would you prove it? The difficulty is
that this conjecture is too vague to be proved directly by mathematical induction:
remember that it is often easier to prove a stronger theorem than a weaker one.
Let us therefore guess that there exists a real number x > 1 such that fₙ ≥ xⁿ for
each sufficiently large integer n. (This statement could not possibly be true for
every positive integer n since it obviously fails on n ≤ 2.) In symbols,
    Conjecture: (∃x > 1) (∀∞ n ∈ ℕ) [fₙ ≥ xⁿ].
There are two unknowns in the theorem we wish to prove: the value of x and
the precise meaning of "for each sufficiently large". Let us not worry about the latter
for the time being. Let Px(n) stand for "fₙ ≥ xⁿ". Consider any sufficiently large
integer n. The approach by constructive induction consists of asking ourselves for
which values of x Px(n) follows from the partially specified induction hypothesis that
Px(m) holds for each integer m that is less than n but that is still sufficiently large.
Using the definition of the Fibonacci sequence and this hypothesis, and provided
n - 1 and n - 2 are also "sufficiently large",
    fₙ = fₙ₋₁ + fₙ₋₂ ≥ xⁿ⁻¹ + xⁿ⁻² = (x⁻¹ + x⁻²) xⁿ.
To conclude that fₙ ≥ xⁿ, we need x⁻¹ + x⁻² ≥ 1, or equivalently x² - x - 1 ≤ 0.
By elementary algebra, since we are only interested in the case x > 1, solving this
quadratic equation implies that 1 < x ≤ φ = (1 + √5)/2.
We have established that Px(n) follows from Px(n - 1) and Px(n - 2) provided
1 < x ≤ φ. This corresponds to proving the induction step in a proof by mathe-
matical induction. To apply the principle of mathematical induction and conclude
that the Fibonacci sequence grows exponentially fast, we must also take care of the
basis. In this case, because the truth of Px(n) depends only on that of Px(n - 1)
and Px(n - 2), it is sufficient to verify that property Px holds on two consecutive
positive integers to assert that it holds from that point on.
It turns out that there are no integers n such that fₙ ≥ φⁿ. However, finding
two consecutive integers on which property Px holds is easy for any x strictly
smaller than φ. For instance, both Px(11) and Px(12) hold when x = 3/2. Therefore,
fₙ ≥ (3/2)ⁿ for all n ≥ 11. This completes the proof that the Fibonacci sequence grows
at least exponentially. The same process can be used to prove that it grows no faster
than exponentially: fₙ ≤ yⁿ for every positive integer n provided y ≥ φ. Here
again, the condition on y is not God-given: it is obtained by constructive induction
when trying to find constraints on y that make the induction step go through.
Putting those observations together, we conclude that fₙ grows exponentially;
more precisely, it grows like a power of a number close to φ. The remarkable thing
is that we can reach this conclusion with no need for an explicit formula such as
de Moivre's.
Our second example of constructive induction concerns the analysis of the
obvious algorithm for computing the Fibonacci sequence.
function Fibonacci(n)
    if n < 2 then return n
    else return Fibonacci(n - 1) + Fibonacci(n - 2)     (*)
Let g(n) stand for the number of times instruction (*) is performed when
Fibonacci(n) is called (counting the instructions performed in recursive calls). This
function is interesting because g(n) gives a bound on the time required by a call
on Fibonacci(n).
Clearly, g(0) = g(1) = 0. When n ≥ 2, instruction (*) is executed once at the
top level, and g(n - 1) and g(n - 2) times by the first and second recursive calls,
respectively. Therefore,
    g(n) = g(n - 1) + g(n - 2) + 1   for n ≥ 2.
This formula is similar to the recurrence that defines the Fibonacci sequence
itself. It is therefore reasonable to conjecture the existence of positive real constants
a and b such that a·fₙ ≤ g(n) ≤ b·fₙ for each sufficiently large integer n. Using
constructive induction, it is straightforward to find that a·fₙ ≤ g(n) holds for each
sufficiently large n provided it holds on two consecutive integers, regardless of the
value of a. For instance, taking a = 1, fₙ ≤ g(n) holds for all n ≥ 2.
However when we try to prove the other part of our conjecture, namely that
there exists a b such that g(n) ≤ b·fₙ for each sufficiently large n, we run into
trouble. To see what happens, let Pb(n) stand for "g(n) ≤ b·fₙ", and consider
any sufficiently large integer n (to be made precise later). We wish to determine
conditions on the value of b that make Pb(n) follow from the hypothesis that
Pb(m) holds for each sufficiently large m < n. Using the definition of the Fibonacci
sequence and this partially specified induction hypothesis, and provided n - 1 and
n - 2 are also sufficiently large,
    g(n) = g(n - 1) + g(n - 2) + 1 ≤ b·fₙ₋₁ + b·fₙ₋₂ + 1 = b·fₙ + 1,
where the last equality comes from fₙ = fₙ₋₁ + fₙ₋₂. Thus, we infer that
g(n) ≤ b·fₙ + 1, but not that g(n) ≤ b·fₙ. Regardless of the value of b, we can-
not make the induction step work!
Does this mean the original conjecture was false, or merely that construc-
tive induction is powerless to prove it? The answer is: neither. The trick is to
use constructive induction to prove there exist positive real constants b and c
such that g(n) ≤ b·fₙ - c for each sufficiently large n. This may seem odd, since
g(n) ≤ b·fₙ - c is a stronger statement than g(n) ≤ b·fₙ, which we were unable to
prove. We may hope for success, however, on the ground that if the statement to
be proved is stronger, then so too is the induction hypothesis it allows us to use;
see the end of Section 1.6.1.
Consider any sufficiently large integer n. We must determine for which values
of b and c the truth of g(n) ≤ b·fₙ - c follows from the partially specified induc-
tion hypothesis that g(m) ≤ b·fₘ - c for each sufficiently large m < n. Using the
definition of the Fibonacci sequence and this hypothesis, and provided n - 1 and
n - 2 are also sufficiently large,
    g(n) = g(n - 1) + g(n - 2) + 1
         ≤ (b·fₙ₋₁ - c) + (b·fₙ₋₂ - c) + 1 = b·fₙ - 2c + 1.
To conclude that g(n) ≤ b·fₙ - c, it suffices that -2c + 1 ≤ -c, or equivalently
that c ≥ 1. We have thus established that the truth of our conjecture on any given
integer n follows from its assumed truth on the two previous integers provided
c ≥ 1, regardless of the value of b. Before we can claim the desired theorem, we
still need to determine values of b and c that make it work on two consecutive
integers. For instance, b = 2 and c = 1 make it work on n = 1 and n = 2, and
therefore g(n) ≤ 2fₙ - 1 for all n ≥ 1.
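A few more lines of Python (again our own sketch, not part of the analysis itself) confirm these bounds for small values of n.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def g(n):
    # number of times instruction (*) is executed by a call on Fibonacci(n)
    return 0 if n < 2 else g(n - 1) + g(n - 2) + 1

assert all(fib(n) <= g(n) for n in range(2, 25))            # f_n <= g(n) for n >= 2
assert all(g(n) <= 2 * fib(n) - 1 for n in range(1, 25))    # g(n) <= 2 f_n - 1 for n >= 1
print([g(n) for n in range(10)])    # [0, 0, 1, 2, 4, 7, 12, 20, 33, 54]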
The key idea of strengthening the incompletely specified statement to be proved
when constructive induction fails may again appear to be produced like a rabbit
out of a hat. Nevertheless, this idea comes very naturally with experience. To gain
such experience, work Problems 1.31 and 1.33. Unlike the Fibonacci examples,
which could have been handled easily by the techniques of Section 4.7, the cases
tackled in these problems are best handled by constructive induction.
1.7 Some reminders
1.7.1 Limits
Let f(n) be any function of n. We say that f (n) tends to a limit a as n tends to
infinity if f(n) is nearly equal to a when n is large. The following formal definition
makes this notion more precise.
In other words, however small the positive number δ, we can find a threshold n₀(δ)
corresponding to δ, such that f(n) differs from a by less than δ for all values of n
greater than or equal to n₀(δ). When f(n) tends to a limit a as n tends to infinity,
we write
    lim_{n→∞} f(n) = a.
Once again this means that we can find a threshold n₀(A) corresponding to A,
such that f(n) is greater than A for all values of n greater than or equal to n₀(A).
We write
    lim_{n→∞} f(n) = +∞.
A similar definition takes care of functions such as -n² that take increasingly large
negative values as n tends to infinity. Such functions are said to tend to minus
infinity.
Finally, when f(n) does not tend to a limit, nor to +∞ nor to -∞, we say that
f(n) oscillates as n tends to infinity. If it is possible to find a positive constant K such
that -K < f(n) < K for all values of n, then we say that f(n) oscillates finitely;
otherwise f(n) oscillates infinitely. For example, the function (-1)ⁿ oscillates
finitely; the function (-1)ⁿ n oscillates infinitely.
The following propositions state some general properties of limits.
Proposition 1.7.3   If two functions f(n) and g(n) tend to limits a and b respec-
tively as n tends to infinity, then f(n) + g(n) tends to the limit a + b.
Proposition 1.7.4   If two functions f(n) and g(n) tend to limits a and b respec-
tively as n tends to infinity, then f(n)g(n) tends to the limit ab.
Both these propositions may be extended to the sum or product of any finite number
of functions of n. An important particular case of Proposition 1.7.4 is when g (n)
is constant. The proposition then states that if the limit of f(n) is a, then the
limit of cf (n) is ca, where c is any constant. It is perfectly possible for either
f(n)+g(n) or f(n)g(n) to tend to a limit even though neither f(n) nor g(n)
does so; see Problem 1.34. Finally the following proposition deals with division.
Proposition 1.7.5   If two functions f(n) and g(n) tend to limits a and b respec-
tively as n tends to infinity, and b is not zero, then f(n)/g(n) tends to the limit
a/b.
These propositions, although simple, are surprisingly powerful. For instance, sup-
pose we want to know the behaviour as n tends to infinity of the most general
rational function of n, namely
    S(n) = (a_0 n^p + a_1 n^(p-1) + ··· + a_p) / (b_0 n^q + b_1 n^(q-1) + ··· + b_q),
where neither a_0 nor b_0 is zero. Writing S(n) in the form
    S(n) = n^(p-q) { (a_0 + a_1/n + ··· + a_p/n^p) / (b_0 + b_1/n + ··· + b_q/n^q) },
and applying the above propositions, it is easy to see that the function in braces
tends to the limit a_0/b_0 as n tends to infinity. Furthermore n^(p-q) tends to the limit 0
if p < q; n^(p-q) = 1 and therefore n^(p-q) tends to the limit 1 if p = q; and n^(p-q) tends
to infinity if p > q. Hence
    lim_{n→∞} S(n) = 0          when p < q,
    lim_{n→∞} S(n) = a_0/b_0    when p = q,
and S(n) tends to plus or minus infinity when p > q, depending on the sign of
a_0/b_0.
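As a numerical illustration (our own, with an arbitrarily chosen example), take the rational function with p = q = 2 and a_0/b_0 = 3/2 shown below; its values approach 3/2 as n grows.

def S(n):
    return (3 * n ** 2 + 5 * n + 7) / (2 * n ** 2 + 1)

for n in (10, 1000, 100000):
    print(n, S(n))   # 1.776..., 1.5025..., 1.500025...: tends to 3/2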
or alternatively that both these limits are infinite. Suppose further that the domains
of f and g can be extended to some real interval [n₀, +∞) in such a way that (a) the
corresponding new functions f̂ and ĝ are differentiable on this interval, and also
that (b) ĝ′(x), the derivative of ĝ(x), is never zero for x ∈ [n₀, +∞). Then
    lim_{n→∞} f(n)/g(n) = lim_{x→∞} f̂′(x)/ĝ′(x).
For a simple example, suppose f(n) = log n and g(n) = nᵃ, where a > 0 is an
arbitrary positive constant. Now both f(n) and g(n) tend to infinity as n tends
to infinity, so we cannot use Proposition 1.7.5. However if we extend f(n) to
f̂(x) = log x and g(n) to ĝ(x) = xᵃ, de l'Hôpital's rule allows us to conclude that
    lim_{n→∞} (log n)/nᵃ = lim_{x→∞} (1/x)/(a xᵃ⁻¹) = lim_{x→∞} 1/(a xᵃ) = 0.
Proposition 1.7.8   If two functions f(n) and g(n) tend to limits a and b respec-
tively as n tends to infinity, and if f(n) ≤ g(n) for all sufficiently large n, then
a ≤ b.
1.7.2 Simple series
Suppose that u₁, u₂, u₃, ... is a sequence of real numbers, and consider the sum of its first n terms,
s_n = u_1 + u_2 + \cdots + u_n,
or simply
s_n = \sum_{i=1}^{n} u_i,
which we read as "the sum of ui as i goes from 1 to n". Now if sn tends to a limit
s when n tends to infinity, we have
s = \lim_{n \to \infty} \sum_{i=1}^{n} u_i,
which we may also write as
s = \sum_{i=1}^{\infty} u_i
or as
s = u_1 + u_2 + \cdots,
where the dots show that the series is continued indefinitely. In this case we say
that the series is convergent, and we call s the sum of the series.
If on the other hand s_n does not tend to a limit, but s_n tends to +∞ or to −∞,
then we say that the series diverges, to +∞ or −∞ as the case may be. Finally if s_n
does not tend to a limit, nor to +∞ nor to −∞, then we say that the series oscillates
(finitely or infinitely, as the case may be). It is evident that a series with no negative
terms must either converge, or else diverge to +∞: it cannot oscillate.
Two particularly simple kinds of series are arithmetic series and geometric series.
In an arithmetic series the difference between successive terms is constant, so we
may represent the first n terms of the series as
a, a + d, a + 2d, ..., a + (n − 1)d,
where a, the first term, and d, the difference between successive terms, are suitable
constants. In a geometric series, the ratio of successive terms is constant, so that
here the first n terms of the series may be represented as
a, ar, ar^2, \ldots, ar^{n-1}.
Proposition 1.7.9 (Arithmetic series) Let s_n be the sum of the first n terms of the arithmetic series a, a + d, a + 2d, ... Then s_n = an + n(n − 1)d/2.
The series diverges unless a = d = 0, in which case s_n = 0 for all n. The proposition
is easily proved. First write the sum both forwards and backwards and add the two expressions term by term, to obtain
2 s_n = [2a + (n-1)d] + [2a + (n-1)d] + \cdots + [2a + (n-1)d],
where there are n equal terms on the right. The result follows immediately.
Proposition 1.7.10 (Geometric series) Let s_n be the sum of the first n terms of the geometric series a, ar, ar², ... Then s_n = a(1 − rⁿ)/(1 − r), except in the special case in which r = 1, when s_n = an.
To prove this, write
s_n = a + ar + ar^2 + \cdots + ar^{n-1},
so that
r s_n = ar + ar^2 + ar^3 + \cdots + ar^n.
Subtracting the second equation from the first, we obtain immediately
(1 - r) s_n = a(1 - r^n).
In the general case (that is, when r ≠ 1) the sum s_n of a geometric series tends to a
limit if and only if rⁿ does so, which happens exactly when −1 < r < 1. This gives us the following result: the infinite geometric series a + ar + ar² + ⋯ converges if and only if −1 < r < 1, in which case its sum is a/(1 − r).
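Both closed forms are easy to check numerically; the following Python sketch compares them with direct summation (the values of a, d, r and n below are arbitrary).

# Check the arithmetic and geometric series formulas against direct summation.
a, d, r, n = 5.0, 3.0, 0.5, 20            # arbitrary example values

arith_direct = sum(a + i * d for i in range(n))
arith_formula = a * n + n * (n - 1) * d / 2

geom_direct = sum(a * r ** i for i in range(n))
geom_formula = a * (1 - r ** n) / (1 - r)

print(arith_direct, arith_formula)        # both 670.0
print(geom_direct, geom_formula)          # both close to a/(1 - r) = 10 for large n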
A similar technique can be used to obtain a useful result concerning yet another
series. If we write
s_n = r + 2r^2 + 3r^3 + \cdots + (n-1) r^{n-1},
we have that
r s_n = r^2 + 2r^3 + 3r^4 + \cdots + (n-1) r^n.
Subtracting the second equation from the first, we obtain
(1 - r) s_n = r + r^2 + r^3 + \cdots + r^{n-1} - (n-1) r^n
            = r (1 + r + r^2 + \cdots + r^{n-2}) - (n-1) r^n
            = r (1 - r^{n-1})/(1 - r) - (n-1) r^n.
The right-hand side tends to a limit as n tends to infinity if -1 < r < 1 (use
Proposition 1.7.6), giving us the following result: when −1 < r < 1, the infinite
series r + 2r² + 3r³ + ⋯ converges, and its sum is r/(1 − r)².
Sums of products of consecutive integers can also be evaluated exactly: for every fixed integer k ≥ 0,
\sum_{i=1}^{n} i(i+1)\cdots(i+k) = \frac{n(n+1)\cdots(n+k+1)}{k+2}.
Since i(i+1)\cdots(i+r-1) = i^r + p(i), where p(i) is a polynomial of degree not more than r − 1, it follows that
\sum_{i=1}^{n} i^r = \frac{n(n+1)\cdots(n+r)}{r+1} + p'(n) = \frac{n^{r+1}}{r+1} + p''(n),
where p'(n) and p''(n) are polynomials of degree not more than r. We leave the reader to fill in the details
of the argument.
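A hedged numerical check of these two facts, for arbitrarily chosen values of n, k and r:

# Verify sum_{i=1}^{n} i(i+1)...(i+k) = n(n+1)...(n+k+1)/(k+2)
# and that sum_{i=1}^{n} i^r is n^(r+1)/(r+1) plus lower-order terms.
from math import prod

n, k, r = 100, 3, 4                        # arbitrary example values

lhs = sum(prod(range(i, i + k + 1)) for i in range(1, n + 1))
rhs = prod(range(n, n + k + 2)) // (k + 2)
print(lhs == rhs)                          # True

power_sum = sum(i ** r for i in range(1, n + 1))
leading = n ** (r + 1) / (r + 1)
print(power_sum, leading)                  # difference is a polynomial of degree <= r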
Finally we consider briefly series of the form 1^{-r}, 2^{-r}, 3^{-r}, \ldots, where r is a positive
integer. When r = 1 we obtain the series 1,1/2,1/3,... known as the harmonic
series. It is easy to show that this series diverges. The following gives
us a better idea of its behaviour: if H_n = 1 + 1/2 + 1/3 + ⋯ + 1/n denotes the sum of the first n terms of the harmonic series, then
\ln(n+1) \le H_n \le 1 + \ln n.
To see this, consider Figure 1.8. The area under the "staircase" gives the sum of the
harmonic series; the area under the lower curve, y = / (x + 1), is less than this
sum, while the area under the upper curve, which is y = 1 for x < 1 and y = 1 /x
thereafter, is greater than or equal to the sum. Hence
\int_0^n \frac{dx}{x+1} \;\le\; H_n \;\le\; 1 + \int_1^n \frac{dx}{x},
that is, \ln(n+1) \le H_n \le 1 + \ln n. In fact H_n = \ln n + \gamma + o(1),
where γ ≈ 0.57721... is Euler's constant, but the proof of this is beyond the scope
of our book.
[Figure 1.8 appears here; its x-axis is marked 0, 1, 2, 3, ..., n−1, n.]
It is easy to show that series of the form 1, 1/2^r, 1/3^r, \ldots with r > 1 are all con-
vergent, and that the sum of such a series is less than r/(r − 1); see Problem 1.39.
For example
\lim_{n \to \infty}\left(1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{n^2}\right) = \frac{\pi^2}{6} \approx 1.64493\ldots
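The following Python sketch illustrates both statements numerically; the value used for Euler's constant is the standard approximation, and the cut-off points are arbitrary.

# Compare the harmonic sum H_n with ln n + gamma, and the partial sums of
# 1/i^2 with pi^2/6.
from math import log, pi

gamma = 0.5772156649                       # Euler's constant (approximate)
for n in (10, 1000, 100000):
    H = sum(1.0 / i for i in range(1, n + 1))
    print(n, H, log(n) + gamma)            # the two values agree ever more closely

basel = sum(1.0 / i ** 2 for i in range(1, 10**6 + 1))
print(basel, pi ** 2 / 6)                  # about 1.64493 in both cases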
1.7.3 Basic combinatorics
Suppose we have n distinct objects; to fix ideas, suppose they are labelled a, b, and so on, with each object having a distinct label.
From now on we shall simply refer to a, for example, when we mean "the object
labelled a".
Our first definition concerns the number of ways we can arrange these n objects
in order: any such ordered arrangement is called a permutation of the n objects.
For example, if we have four objects a, b, c and d, we can arrange them in order
in 24 different ways:
abcd abdc acbd acdb adbc adcb
bacd badc bcad bcda bdac bdca
cabd cadb cbad cbda cdab cdba
dabc dacb dbac dbca dcab dcba
The first object in the permutation may be chosen in n ways; once the first object
has been chosen, the second may be chosen in n - 1 different ways; once the first
and second have been chosen, the third may be chosen in n - 2 different ways, and
so on. There are two possibilities when we choose the last but one object, and there is
only one way of choosing the last object. The total number of permutations of n
objects is therefore
n(n − 1)(n − 2) ⋯ 2 · 1 = n!
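A quick confirmation with Python's standard library (the objects a, b, c and d are those of the example above):

# All orderings of four distinct objects: there should be 4! = 24 of them.
from itertools import permutations
from math import factorial

objs = "abcd"
perms = ["".join(p) for p in permutations(objs)]
print(len(perms), factorial(len(objs)))    # 24 24
print(perms[:6])                           # ['abcd', 'abdc', 'acbd', 'acdb', 'adbc', 'adcb']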
Next we consider the number of ways of choosing a certain number of these
objects, without regard to the order in which we make our choices.
For example, if we have five objects a, b, c, d and e, we can choose three of them
in 10 different ways if order is not taken into account:
abc abd abe acd ace
ade bcd bce bde cde
A choice such as eba does not appear in this list, since it is the same as abe when the
order of the objects is disregarded.
When we make our choice of r objects from among n, there are n ways of
making the first choice. When the first object has been chosen, there remain n - 1
ways of choosing the second, and so on. When we choose the last of the r objects
we want, there remain n − r + 1 possibilities. Hence there are
n(n − 1)(n − 2) ⋯ (n − r + 1)
ways of choosing r objects from n when order is taken into account. However when
we do not take order into account, we can permute the r chosen objects any way
we like, and it still counts as the same combination. In the example above, for
instance, the six ordered choices abc, acb, bac, bca, cab and cba all count as the same
combination. Since there are r! ways of permuting r objects, the number of ways
of choosing r objects from n when order is not taken into account is
\binom{n}{r} = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!}     (1.1)
Several alternative notations are used for the number of combinations of n objects
taken r at a time: among others, you may encounter nCr and C_n^r. This accounts
for the common, but illogical, habit of writing \binom{n}{r} but reading this symbol aloud
as "n C r".
When r > n, Equation 1.1 gives \binom{n}{r} = 0, which is sensible: there are no ways of
choosing more than n objects from n. It is convenient to take \binom{n}{0} = 1 (there is just
one way of not choosing any objects), and when r < 0, which has no combinatorial
meaning, we define \binom{n}{r} = 0. When 0 ≤ r ≤ n, Equation 1.1 can conveniently be
written in the form
\binom{n}{r} = \frac{n!}{r!\,(n-r)!}.
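The different ways of counting combinations can be compared directly in Python; the sketch below uses the earlier example of choosing 3 objects from 5.

# Number of combinations of n objects taken r at a time, computed three ways.
from itertools import combinations
from math import comb, factorial, prod

n, r = 5, 3
by_listing = len(list(combinations("abcde", r)))           # enumerate the choices
by_eq_1_1 = prod(range(n, n - r, -1)) // factorial(r)      # n(n-1)...(n-r+1)/r!
by_factorials = factorial(n) // (factorial(r) * factorial(n - r))
print(by_listing, by_eq_1_1, by_factorials, comb(n, r))    # 10 10 10 10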
A simple argument allows us to obtain an important relation. Pick any one of
the n objects, and say that this object is "special". Now when we choose r objects
from the n available, we can distinguish those choices that include the special
object, and those that do not. For instance, if we are to choose three objects among
a, b, c, d and e, and the special object is b, then the choices that include the special
object are abc, abd, abe, bcd, bce and bde, while those that do not include the special
object are acd, ace, ade and cde. To make a selection of the first kind, we can first
choose the special object (since the order of our choices is not important), and then
complete our selection by picking r − 1 objects among the n − 1 that are left; this
can be done in \binom{n-1}{r-1} ways. To make a selection of the second kind, we must choose
our r objects among the n − 1 that are not special; this can be done in \binom{n-1}{r} ways.
Since every selection of r objects from among the n available is of one kind or the
other, we must have
\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}.
n\r 0 1 2 3 4 5
0 1
1 1 1
2 1 2 1
3 1 3 3 1
4 1 4 6 4 1
5 1 5 10 10 5 1
Figure 1.9. Combinations of n objects taken r at a time
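The relation just obtained is precisely the rule used to build Figure 1.9 row by row, as the following Python sketch illustrates.

# Build Pascal's triangle with the identity C(n, r) = C(n-1, r-1) + C(n-1, r).
def pascal(rows):
    triangle = [[1]]
    for n in range(1, rows):
        prev = triangle[-1]
        row = [1] + [prev[r - 1] + prev[r] for r in range(1, n)] + [1]
        triangle.append(row)
    return triangle

for row in pascal(6):
    print(row)            # reproduces the six rows of Figure 1.9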
Theorem 1.7.20 (Binomial theorem) For any integer n ≥ 0,
(1 + x)^n = 1 + \binom{n}{1} x + \binom{n}{2} x^2 + \cdots + \binom{n}{n-1} x^{n-1} + x^n.
Using this theorem it is easy to obtain interesting results concerning the binomial
coefficients. For example, on setting x = 1 we obtain immediately
\binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{n} = 2^n.
In combinatorial terms, the sum on the left is the number of ways of choosing an
arbitrary number of objects (including 0 objects) from n when order is not taken
into account. Since there are 2 possibilities for each of the n objects-we can take
it or leave it-there are 2^n ways this can be done. Similarly, on setting x = −1 in
Theorem 1.7.20 we find
\sum_{r\ \text{odd}} \binom{n}{r} = \sum_{r\ \text{even}} \binom{n}{r}.
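Both consequences are easy to verify numerically; the choice n = 10 in the sketch below is arbitrary.

# Sum of binomial coefficients is 2^n; odd-index and even-index sums are equal.
from math import comb

n = 10
coeffs = [comb(n, r) for r in range(n + 1)]
print(sum(coeffs), 2 ** n)                   # 1024 1024
print(sum(coeffs[1::2]), sum(coeffs[0::2]))  # 512 512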
1.7.4 Elementary probability
A random experiment is an experiment whose outcome cannot be predicted with certainty in advance. The set of all its possible outcomes is called the sample space, denoted by S, and the individual outcomes are called sample points. For the random experiment that consists of throwing a dice, for instance, the sample space is S = {1, 2, 3, 4, 5, 6}; for the experiment that consists of counting the cars
passing a given point, the sample space is S = {0, 1, 2, ...}. For the random ex-
periment that consists of measuring the response time of a computer system, the
sample space is S = {t | t > 0}.
A sample space can be finite or infinite, and it can be discrete or continuous.
A sample space is said to be discrete if the number of sample points is finite, or
if they can be labelled 1, 2, 3, and so on using the positive integers; otherwise the
sample space is continuous. In the examples above, S is finite and therefore discrete
for the random experiment of throwing a dice; S is infinite but discrete for the
experiment of counting cars, for the possible outcomes can be made to correspond
to the positive integers; and S is continuous for the experiment of measuring a
response time. In this book we shall be concerned almost entirely with random
experiments whose sample space is finite, and therefore discrete.
An event is now defined as a collection of sample points, that is, as a subset
of the sample space. An event A is said to occur if the random experiment is
performed and the observed outcome is an element of the set A. For example,
when we throw a dice, the event described by the statement "The number shown
on the dice is odd" corresponds to the subset A = {1,3, 5} of the sample space.
When we count cars, the event described by "The observed number of cars is less
than 20" corresponds to the subset A = {0, 1, 2,...,18,191, and so on. Informally,
we use the word "event" to refer either to the statement describing it, or to the
corresponding subset of the sample space. In particular, the entire sample space
S is an event called the universal event, and the empty set ∅ is an event called the
impossible event.
Since a sample space S is a set and an event A is a subset of S, we can form new
events by the usual operations of set theory. Thus to any event A there corresponds
an event Ā consisting of all the sample points of S that are not in A. Clearly Ā is the
event "A does not occur". Similarly the event A ∪ B corresponds to the statement
"Either A or B occurs", while the event A ∩ B corresponds to "Both A and B occur".
Two events A and B are said to be mutually exclusive if A ∩ B = ∅.
Finally a probability measure is a function that assigns a numerical value Pr[A]
to every event A of the sample space. Obviously the probability of an event is
supposed to measure in some sense the relative likelihood that the event will occur
if the underlying random experiment is performed. The philosophical bases of
the notion of probability are controversial (What precisely does it mean to say that
the probability of rain tomorrow is 0.25?), but there is agreement that any sensible
probability measure must satisfy the following three axioms:
1. Pr[A] ≥ 0 for any event A;
2. Pr[S] = 1; and
3. Pr[A_1 ∪ A_2 ∪ ⋯ ∪ A_n] = Pr[A_1] + Pr[A_2] + ⋯ + Pr[A_n]
for any finite collection A_1, A_2, ..., A_n of mutually exclusive events. (A modified
form of axiom 3 is necessary if the sample space is infinite, but that need not concern
us here.) The axioms lead to a number of consequences, among which are
4. Pr[Ā] = 1 − Pr[A] for any event A; and
5. Pr[A u B] = Pr[A] + Pr[B] - Pr[A n B] for any events A and B.
The basic procedure for solving problems in probability can now be outlined:
first, identify the sample space S; second, assign probabilities to the elements in S;
third, identify the events of interest; and finally, compute the desired probabilities.
For instance, suppose we want to know the probability that a random number
generator will produce a value that is prime. We have first to identify the sample
space. Suppose we know that the generator can produce any integer value between
0 and 9999 inclusive. The sample space (that is, the set of possible outcomes) is
therefore {0, 1, 2, ..., 9999}. Next, we must assign probabilities to the elements
of S. If the generator produces each possible elementary event with equal prob-
ability, it follows that Pr[0] = Pr[1] = ⋯ = Pr[9999] = 1/10000. Third, the inter-
esting event is "the generator produces a prime", which corresponds to the subset
A = {2, 3, 5, ..., 9967, 9973}. Finally the interesting probability is Pr[A], which can
be computed as \sum_{e \in A} Pr[e], where the sum is over the elementary events that com-
pose A. Since the probability of each elementary event is 1/ 10000, and there are
1229 elementary events in A, we find Pr[A]= 0.1229.
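The count of 1229 primes below 10000 can be reproduced with a few lines of Python; the sketch below uses a simple sieve of Eratosthenes and is not part of the original text.

# Count the primes among 0..9999 and hence Pr[A] for a uniform generator.
def sieve(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit, i):
                is_prime[j] = False
    return is_prime

primes_below_10000 = sum(sieve(10000))
print(primes_below_10000, primes_below_10000 / 10000)      # 1229 0.1229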
So far we have assumed that all we know about the outcome of some random
experiment is that it must correspond to some sample point in the sample space S.
However it is often useful to calculate the probability that an event A occurs when
it is known that the outcome of the experiment is contained in some subset B of
the sample space. For example, we might wish to calculate the probability that the
random number generator of the previous paragraph has produced a prime when
we know that it has produced an odd number. The symbol for this probability is
Pr[A|B], called the conditional probability of A given B. Obviously this conditional
probability is only interesting when Pr[B] ≠ 0.
In essence, the extra information tells us that the outcome of the experiment
lies in a new sample space, namely B. Since the sum of the probabilities of the
elementary events in the sample space must be 1, we scale up the original values
to fulfill this condition. Furthermore we are only interested now in that part of A
that lies in B, namely A ∩ B. Thus the conditional probability is given by
Pr[A|B] = Pr[A ∩ B] / Pr[B].
For our example, A is the event "the generator produces a prime" and B is the
event "the generator produces an odd number". Now B includes 5000 elementary
events, so Pr[B] = 0.5. There are 1228 odd primes less than 10000 (only the even
prime 2 drops out), so Pr[A ∩ B] = 0.1228. Hence the probability that the generator
has produced a prime, given that it has produced an odd number, is
Pr[A|B] = 0.1228/0.5 = 0.2456.
Two events A and B are said to be independent if Pr[A ∩ B] = Pr[A] Pr[B]. When this is so and Pr[B] ≠ 0, we have
Pr[A|B] = Pr[A ∩ B]/Pr[B] = Pr[A] Pr[B]/Pr[B] = Pr[A],
and hence the knowledge that event B has occurred does not change the probability
that event A will occur. In fact this condition can be used as an alternative definition
of independence.
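A self-contained Python sketch of the same calculation (the trial-division primality test below is only for illustration):

# Conditional probability that the generator produced a prime, given an odd value.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

odd = [x for x in range(10000) if x % 2 == 1]
pr_B = len(odd) / 10000                                    # 0.5
pr_A_and_B = sum(1 for x in odd if is_prime(x)) / 10000    # 0.1228
print(pr_A_and_B / pr_B)                                   # 0.2456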
In Chapter 10 we shall be looking at probabilistic algorithms for determining
whether or not a given integer is prime. In this context the following approximations
are useful. Suppose the sample space S contains a large number n of consecutive
integers, and that the probability of each of these outcomes is the same, namely
1/ n. For example, S might be the set {1, 2, . . ., n }. Let Di be the event "the outcome
is divisible by i". Since the number of outcomes in S that are divisible by i is ap-
proximately n/i, we have that Pr[D_i] ≈ (n/i) × (1/n) = 1/i. Furthermore, if p and
q are two different primes, the events D_p and D_q are approximately independent,
so that
Pr[D_p ∩ D_q] ≈ Pr[D_p] Pr[D_q] ≈ 1/pq.
Of course this may not be true if either p or q is not prime: clearly the events D2 and
D4 are not independent; also when S contains only a small number of elements,
the approximations do not work very well, as Problem 1.46 illustrates. Now the
Prime Number Theorem (whose proof is well beyond the scope of this book) tells
us that the number of primes less than n is approximately n/log n, so that if S is
indeed {1, 2, . . ., n}, with equal probability for each outcome, and A is the event
"the outcome is prime", we have that Pr[A] I/logn.
Consider for instance the following problem. A random number generator
produces the value 12262409. We would like to know whether this number is prime,
but we don't have the computing power to compute the answer in a deterministic
way. (Of course this is unrealistic for such a small example.) So what can we say
with the help of a little probability theory?
The first thing to note is that a question such as "What is the probability that
12262409 is prime?" is meaningless. No random experiment is involved here, so we
have no sample space to talk about, and we cannot even begin to assign probabilities
to outcomes. On the other hand, a question such as "What is the probability that
our random number generator has produced a prime?" is meaningful and can be
answered quite easily.
As before, the first step is to identify our sample space, and the second to
assign probabilities to the elements of S. Suppose we know that the genera-
tor produces each of the values from 0 to 99999999 with equal probability: then
S = {0, 1, ..., 99999999}, and the probability of each elementary event in S is 1/10^8.
The Prime Number Theorem tells us that approximately 10^8 / log 10^8 ≈ 5.43 × 10^6
elements of S are prime. (The correct value is actually 5761455, but the approxima-
tion is good enough for our purposes.) So if A is the event "our generator produces
a prime", we have that Pr[A] 0.0543.
Now let the event Dp be "our generator produces a number that is divisible
by the prime p". Since Pr[Dp l/p, the probability of the complementary event
"our generator produces a number that is not divisible by the prime p " is Pr[1Jp -]
1 - 1/ p. Suppose we test 12262409 by trying to divide it by 2. The attempt fails, but
now we have some additional information, and we can ask "What is the probability
that our generator produces a prime, given that it produces a number not divisible
by 2?" In symbols,
Pr[A | D̄_2] = Pr[A ∩ D̄_2] / Pr[D̄_2]
            ≈ 2 Pr[A] ≈ 0.109.
Here Pr[A ∩ D̄_2] is essentially the same as Pr[A] because all the primes save one
are not divisible by 2. If we next try dividing 12262409 by 3, the attempt again fails,
so now we can ask "What is the probability that our generator produces a prime,
given that it produces a number divisible neither by 2 nor by 3?" This probability
is
Pr[A | D̄_2 ∩ D̄_3] = Pr[A ∩ D̄_2 ∩ D̄_3] / Pr[D̄_2 ∩ D̄_3]
                   ≈ Pr[A] / (Pr[D̄_2] Pr[D̄_3])
                   = 3 Pr[A] ≈ 0.163.
Continuing in this way, each successive failure to divide 12262409 by a new prime
allows us to ask a more precise question, and to be a little more confident that our
generator has indeed produced a prime. Suppose, however, that at some stage we
try to divide 12262409 by 3121. Now the trial division succeeds, so our next question
would be "What is the probability that our generator has produced a prime, given
that it has produced a number divisible by none of 2, 3, .. ., but divisible by 3121?"
The answer is of course 0: once we have found a divisor of the number produced
by the generator, we are sure it is not prime. Symbolically, this answer is obtained
because
Pr[A ∩ D̄_2 ∩ ⋯ ∩ D_3121] = 0.
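The bookkeeping behind this sequence of questions is easy to sketch in Python, using the same approximations as the text (Pr[D̄_p] ≈ 1 − 1/p, approximate independence, and Pr[A] ≈ 0.0543); the list of trial divisors shown is arbitrary.

# Evidence accumulated by successive failed trial divisions, using the
# approximations made in the text.
pr_A = 0.0543                      # unconditional probability of producing a prime
pr_not_divisible = 1.0             # running estimate of Pr[no prime tried so far divides it]
for p in (2, 3, 5, 7, 11, 13):     # the first few trial divisors
    pr_not_divisible *= 1.0 - 1.0 / p
    print(p, pr_A / pr_not_divisible)   # conditional probability after this failure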
Notice that we cannot start this process of calculating a new probability after
each trial division if we are unable to estimate Pr[A], the unconditional probability
that our generator has produced a prime. In the example we did this using the
Prime Number Theorem plus our knowledge that the generator produces each
value from 0 to 99999999 with equal probability. Suppose on the other hand we are
presented with the number 12262409 and simply told that it has been selected for the
purposes of the example. Then the first question to ask is "What is the probability
that a number selected in some unspecified way for the purposes of an example
is prime?" Clearly this is impossible to answer. The sample space of the random
experiment consisting of choosing a number to serve as an example is unknown,
as are the probabilities associated with the elementary events in this sample space.
We can therefore make no meaningful statement about the probabilities associated
with a number selected in this way.
In many random experiments we are interested less in the outcome itself, than
in some number associated with this outcome. In a horse-race, for instance, we may
be less interested in the name of the winning horse than in the amount we win or
lose on the race. This idea is captured by the notion of a random variable. Formally,
a random variable is a function (and not a variable at all, despite its name) that
assigns a real number to each sample point of some sample space S.
If X is a random variable defined on a sample space S, and x is a real number,
we define the event A_x to be the subset of S consisting of all the sample points to
which the random variable X assigns the value x. Thus
A_x = {s ∈ S | X(s) = x}.
The notation X = x is a convenient way to denote the event A_x. Hence we can
write
Pr[X = x] = Pr[A_x] = \sum_{s \in S : X(s) = x} \Pr[s].
If we define p(x) by
p(x)= Pr[X = x]
then p(x) is a new function associated with the random variable X, called the
probability mass function of X. We define the expectation E[X] of X (also called the
mean or the average) by
E[X] = \sum_{x} x\, p(x),
where the sum is over the values x that X can take. Suppose, for example, that we are betting on a horse-race with five possible outcomes, and that the probability of each outcome is as shown in Figure 1.10.
    Probability   Winnings
       0.10          50
       0.05         100
       0.25         -30
       0.50         -30
       0.10          15
Figure 1.10. Probability of each outcome
(Remember that the probabilities must sum to 1.) We have made a number of
wagers on the race so that, depending on the outcome, we will win or lose some
money. The amount we win or lose is also shown in Figure 1.10. This amount is a
function of the outcome, that is, a random variable. Call this random variable W;
then the table shows that W(Ariel) = 50, W(Bonbon) = 100, and so on. The random
variable W can take the values -30, 15, 50, or 100, and for instance
p(-30) = Pr[W = -30] = \sum_{s \in S : W(s) = -30} \Pr[s] = 0.25 + 0.50 = 0.75.
Similarly p(15) = Pr[W = 15] = 0.10, p(50) = Pr[W = 50] = 0.10 and p(100) =
Pr[W = 100] = 0.05. Our expected winnings can be calculated as
E[W] = \sum_{x} x\, p(x)
     = -30\,p(-30) + 15\,p(15) + 50\,p(50) + 100\,p(100)
     = -11.
Once we have obtained E[X], we can calculate a second useful measure called
the variance of X, denoted by Var[X] or σ²_X. This is defined by
Var[X] = E[(X - E[X])^2] = \sum_{x} (x - E[X])^2\, p(x).
In words, it is the expected value of the square of the difference between X and its
expectation E[X]. The standard deviation of X, denoted by σ_X, is the square root of
the variance. For the horse-race example above, we have
Var[W] = (-30+11)^2 (0.75) + (15+11)^2 (0.10) + (50+11)^2 (0.10) + (100+11)^2 (0.05) = 1326.5,
and hence σ_W = \sqrt{1326.5} ≈ 36.42.
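A short Python check of E[W], Var[W] and the standard deviation, computed directly from the table in Figure 1.10:

# Expectation, variance and standard deviation of the winnings W of Figure 1.10.
outcomes = [(0.10, 50), (0.05, 100), (0.25, -30), (0.50, -30), (0.10, 15)]

mean = sum(p * w for p, w in outcomes)
var = sum(p * (w - mean) ** 2 for p, w in outcomes)
print(mean, var, var ** 0.5)       # -11.0  1326.5  about 36.42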
Suppose now that the underlying random experiment is repeated n times independently, and that x_i is the value observed for X on the i-th repetition. Under quite
general conditions, the Central Limit Theorem suggests that when n is large, the
average observed value of the x_i will have a distribution that is approximately normal
with mean E[X] and variance Var[X]/n. To take advantage of this, all we need is
a table of the normal distribution. These are widely available. Such a table tells
us, among other things, that a normal deviate lies between plus or minus 1.960
standard deviations of its mean 95% of the time; 99% of the time it lies within plus
or minus 2.576 standard deviations of its mean.
For example, suppose that by some magic the horse-race described above
could be run 50 times under identical conditions. Suppose we win w_i on the
i-th repetition, and let our average winnings be
\bar{w} = \frac{1}{50} \sum_{i=1}^{50} w_i.
Then we can expect that w̄ will be approximately E[W] = -11. The variance of w̄ will be
Var[W]/50 = 1326.5/50 = 26.53, and its standard deviation will be the square
root of this, or 5.15, while the Central Limit Theorem tells us that the distribution
of w̄ will be approximately normal. Therefore 95% of the time we can expect our
average winnings w̄ to lie between -11 - 1.960 × 5.15 and -11 + 1.960 × 5.15, that
is between -21.1 and -0.9. A similar calculation shows that 99% of the time w̄
will lie between -24.3 and +2.3; see Problem 1.47.
It is usually safe to use the Central Limit Theorem when n is greater than about
25 or so.
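As an illustration (not from the original text), the following Python sketch simulates many batches of 50 races with the probabilities of Figure 1.10 and checks how often the average winnings fall inside the 95% interval computed above.

# Simulate averaging the winnings over 50 independent races many times and
# check how often the average falls within the 95% interval of the text.
import random

outcomes = [(0.10, 50), (0.05, 100), (0.25, -30), (0.50, -30), (0.10, 15)]
probs = [p for p, _ in outcomes]
wins = [w for _, w in outcomes]

inside = 0
trials = 10000
for _ in range(trials):
    avg = sum(random.choices(wins, weights=probs, k=50)) / 50
    if -11 - 1.960 * 5.15 <= avg <= -11 + 1.960 * 5.15:
        inside += 1
print(inside / trials)             # close to 0.95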
1.8 Problems
Problem 1.1. The word "algebra" is also connected with the mathematician
al-Khowarizmi, who gave his name to algorithms. What is the connection?
Problem 1.2. Easter Sunday is in principle the first Sunday after the first full moon
after the spring equinox. Is this rule sufficiently precise to be called an algorithm?
Justify your answer.
Problem 1.3. While executing an algorithm by hand, you have to make a random
choice. To help you, you have a fair coin, which gives the values heads and tails
with equal probability, and a fair dice, which gives each of the values 1 to 6 with
equal probability. You are required to choose each of the values red, yellow and blue
with equal probability. Give at least three different ways of doing this.
Repeat the problem with five colours instead of three.
Problem 1.4. Is it possible that there exists an algorithm for playing a perfect game
of billiards? Justify your answer.
Problem 1.5. Use multiplication a la russe to multiply (a) 63 by 123, and (b) 64
by 123.
Problem 1.6. Find a pocket calculator accurate to at least eight figures, that is,
which can multiply a four-figure number by a four-figure number and get the
correct eight-figure answer. You are required to multiply 31415975 by 8182818.
Show how the divide-and-conquer multiplication algorithm of Section 1.2 can be
used to reduce the problem to a small number of calculations that you can do on
your calculator, followed by a simple paper-and-pencil addition. Carry out the
calculation. Hint: Don't do it recursively!
Problem 1.7. You are required to multiply two numbers given in Roman figures.
For instance, XIX times XXXIV is DCXLVI. You may not use a method that in-
volves translating the numbers into Arabic notation, multiplying them, and then
translating them back again. Devise an algorithm for this problem.
Hint: Find ways to translate back and forth between true Roman notation and
something similar that does not involve any subtractions. For instance, XIX might
become XVIIII in this "pseudo-Roman" notation. Next find easy ways to double,
halve and add figures in pseudo-Roman notation. Finally adapt multiplication a
la russe to complete the problem.
Problem 1.8. As in Problem 1.6, suppose you have available a pocket calculator
that can multiply a four-figure number by a four-figure number and get the cor-
rect eight-figure answer. Devise an algorithm for multiplying two large numbers
based on the classic algorithm, but using blocks of four figures at a time instead
of just one. (If you like, think of it as doing your calculation in base 10000 arith-
metic.) For instance, when multiplying 1234567 by 9876543, you might obtain the
arrangement shown in Figure 1.11.
0123 4567
0987 6543
0080 7777 1881
0012 1851 7629
0012 1932 5406 1881
Figure 1.11. Multiplication in base 10000
Here the first line of the calculation is obtained, from right to left, as 4567 × 6543 = 29881881
(that is, a result of 1881 and a carry of 2988), followed by 0123 × 6543 + 2988 = 807777
(that is, a result of 7777 and a carry of 0080). The second line of the calculation is
obtained similarly, and the final result is found by adding the columns. All the
necessary arithmetic can be done with your calculator.
Use your algorithm to multiply 31415975 by 8182818. Check that your answer is
the same as the one you found in Problem 1.6.
Problem 1.9. Figure 1.12 shows yet another method of multiplying two positive
integers, sometimes called Arabic multiplication.
In the figure, as before, 981 is multiplied by 1234. To use this method, draw a
rectangle with as many columns as there are figures in the multiplicand (here 3) and
as many rows as there are figures in the multiplier (here 4). Write the multiplicand
above the columns, and the multiplier down the right-hand side of the rectangle.
Draw diagonal lines as shown. Next fill in each cell of the rectangle with the product
of the figure at the top of the column and the figure at the right-hand end of the
row. The tens figure of the result (which may be 0) goes above the diagonal line,
[Figure 1.12. Multiplication of 981 by 1234 by the Arabic method]
and the units figure below it. Finally add the diagonals of the rectangle starting at
the bottom right. Here the first diagonal gives 4; the second gives 2 + 0 + 3 = 5; the
third gives 6 + 3 + 4 + 0 + 2 = 15, so write down 5 and carry 1; and so on. Now the
result 1210554 can be read off down the left-hand side and along the bottom of the
rectangle.
Once again, use this algorithm to multiply 31415975 by 8182818, checking your
result against the answers to previous problems.
Problem 1.10. Are the two sets X = {1, 2, 3} and Y = {2, 1, 3} equal?
Problem 1.11. Which of the following sets are finite: ∅, {∅}, ℕ, {ℕ}? What is
the cardinality of those among the above sets that are finite?
Problem 1.12. For which values of Boolean variables p, q and r is the Boolean
formula (p ∧ q) ∨ (¬q ∧ r) true?
Problem 1.16. Prove that x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1 for every real number x.
Problem 1.17. An alternative proof of Theorem 1.5.1, to the effect that there are
infinitely many primes, begins as follows. Assume for a contradiction that the
set of prime numbers is finite. Let p be the largest prime. Consider x = p! and
y = x + 1. Your problem is to complete the proof from here and to distill from your
proof an algorithm Biggerprime(p) that finds a prime larger than p. The proof of
termination for your algorithm must be obvious, as well as the fact that it returns
a value larger than p.
Problem 1.18. Modify the proof of Theorem 1.5.1 to prove that there are infinitely
many primes of the form 4k -1, where k is an integer.
Hint: Define x as in the proof of Theorem 1.5.1, but then set y = 4x - 1 rather than
y = x + 1. Even though y itself may not be prime and the smallest integer d larger
than 1 that divides y may not be of the form 4k - 1, prove by contradictionthat y
has at least one prime divisor of the required form.
It is also true that there are infinitely many primes of the form 4k + 1, but this is
more involved to prove. Where does your reasoning for the case 4k - 1 break down
in trying to use the same idea to prove the case 4k + 1?
Problem 1.19. Let n be a positive integer. Draw a circle and mark n points regu-
larly spaced around the circumference. Now, draw a chord inside the circle between
each pair of these points. In the case n = 1, there are no pairs of points and thus
no chords are drawn; see Figure 1.13.
Finally, denote by c(n) the number of sections thus carved inside the circle. You
should find that c(1) = 1, c(2) = 2, c (3) = 4 and c(4) = 8. By induction, what do you
think the general formula for c(n) is? Determine c(5) by drawing and counting.
Was your inductively found formula correct? Try again with c(6). What if you
allow the points to be spaced irregularly? (Optional and much harder: determine
the correct formula for c(n), and prove that it is correct.)
Problem 1.20. Why do you think that mathematical induction received this name
even though it really is a deductive technique?
Problem 1.21. Prove by mathematical induction that the sum of the cubes of the
first n positive integers is equal to the square of the sum of these integers.
Problem 1.22. Following Problem 1.21, prove that the sum of the cubes of the first
n positive integers is equal to the square of the sum of these integers, but now
use Proposition 1.7.13 rather than mathematical induction. (Of course, Proposi-
tion 1.7.13 was proved by mathematical induction, too!)
Problem 1.23. Determine by induction all the positive integer values of n for
which n³ > 2ⁿ. Prove your claim by mathematical induction.
Problem 1.24. The axiom of the least integer says that every nonempty set of positive
integers contains a smallest element. (This is not true in general for sets of positive
real numbers-consider for instance the set of all reals strictly between 0 and 1.)
Using this axiom, give a rigorous proof by contradiction that the simple principle
of mathematical induction is valid. More precisely, consider any integer a and
any property P of the integers such that P(a) holds, and such that P(n) holds
whenever P(n -1) holds for any integer n > a. Assume furthermore that it is not
the case that P(n) holds for all n > a. Use the axiom of the least integer to derive
a contradiction.
Problem 1.25. Problem 1.24 asked you to prove the validity of the principle of
mathematical induction from the axiom of the least integer. In fact the principle
and the axiom are equivalent: prove the axiom of the least integer by mathematical
induction!
Hint: As a first step, prove that any nonemptyfinite set of positive integers contains
a smallest element by mathematical induction on the number of elements in the
set. Note that your proof would hold just as well for any finite set of real numbers,
which shows clearly that it does not apply directly to infinite sets. To generalize
the result to infinite sets of positive integers, consider any such set X. Let m be any
element in X (we do not need an axiom for this: any infinite set contains at least one
element by definition). Let Y be the set of elements of X that are not larger than m.
Show that Y is a nonempty finite set of positive integers, and therefore your proof
by mathematical induction applies to conclude that Y contains a smallest element,
say n. Finish the proof by arguing that n is also the smallest element in X.
Problem 1.26. Give a rigorous proof that the generalized principle of mathematical
induction is valid.
Hint: Prove it by simple mathematical induction.
Problem 1.27. Recall that the Fibonacci sequence is defined as
f_0 = 0; f_1 = 1 and
f_n = f_{n-1} + f_{n-2} for n ≥ 2.
Prove by generalized mathematical induction that de Moivre's formula
f_n = \frac{\phi^n - (-1/\phi)^n}{\sqrt{5}},
where \phi = (1 + \sqrt{5})/2 is the golden ratio, holds for each positive integer n.
Consider also the function t defined by the recurrence
t(1) = a and
t(n) = bn + n t(n-1) for n ≥ 2.
Using constructive induction, prove the existence
of a real positive constant c such that t(n) ≤ c n log n for each sufficiently large
integer n.
Problem 1.34. Give two functions f(n) and g (n) such that neither f(n) nor g (n)
tends to a limit as n tends to infinity, but both f (n) +g (n) and f (n) /g (n) do tend
to a limit.
Problem 1.35. Determine the behaviour as n tends to infinity of f (n) = n-rxn,
where r is any positive integer.
Problem 1.36. Use de l'Hôpital's rule to find the limit as n tends to infinity of
(log log n)^a / log n, where a > 0 is an arbitrary positive constant.
Problem 1.38. Give a simple proof, not using integrals, that the harmonic series
diverges.
Problem 1.39. Use a technique similar to the one illustrated in Figure 1.8 to show
that for r > 1 the sum
s_n = 1 + \frac{1}{2^r} + \frac{1}{3^r} + \cdots + \frac{1}{n^r}
tends to a limit less than r/(r − 1) as n tends to infinity.
Show further that the error we make if we approximate the sum of the whole series
by the sum of the first n terms is less than f (n + 1).
Problem 1.41. Show that \binom{n}{r} ≤ 2^{n-1} for n > 0 and 0 ≤ r ≤ n.
Problem 1.45. Show that two mutually exclusive events are not independent ex-
cept in the trivial case that at least one of them has probability zero.
Problem 1.46. Consider a random experiment whose sample space S is {1, 2,3,
4, 5} . Let A and B be the events "the outcome is divisible by 2" and "the outcome is
divisible by 3", respectively. What are Pr[A], Pr[B] and Pr[A n B]? Are the events
A and B independent?
Problem 1.47. Show that for the horse-race of Section 1.7.4, 99% of the time our
winnings w̄ averaged over 50 races lie between -24.3 and +2.3.
1.9 References and further reading
Leonardo Pisano (c. 1170-c. 1240), or Leonardo Fibonacci, was the first great
western mathematician of the Middle Ages. There is a brief account of his life and
an excerpt from Liber abaci in Calinger (1982). The proof that
\lim_{n \to \infty}\left(1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{n^2}\right) = \frac{\pi^2}{6}
is due to Euler; see Eves (1983) or Scharlau and Opolka (1985). These references
also give the history of the binomial theorem.
Chapter 2
Elementary Algorithmics
2.1 Introduction
In this chapter we begin our detailed study of algorithms. First we define some
terms: we shall see that a problem, such as multiplying two positive integers, will
normally have many-usually infinitely many-instances, such as multiplying the
particular integers 981 and 1234. An algorithm must work correctly on every in-
stance of the problem it claims to solve.
Next we explain what we mean by the efficiency of an algorithm, and discuss
different ways for choosing the most efficient algorithm to solve a problem when
several competing techniques are available. We shall see that it is crucial to know
how the efficiency of an algorithm changes as the problem instances get bigger
and therefore (usually) harder to solve. We also distinguish between the average
efficiency of an algorithm when it is used on many instances of a problem and its
efficiency in the worst possible case. The pessimistic, worst-case estimate is often
appropriate when we have to be sure of solving a problem in a limited amount of
time, for instance.
Once we have defined what we mean by efficiency, we can begin to investigate
the methods used to analyse algorithms. Our line of attack is to try to count the
number of elementary operations, such as additions and multiplications, that an
algorithm performs. However we shall see that even such commonplace operations
as these are not straightforward: both addition and multiplication get slower as the
size of their operands increases. We also try to convey some notion of the practical
difference between a good and a bad algorithm in terms of computing time.
One topic we shall not cover, either in this chapter or elsewhere, is how to prove
rigorously that the programs we use to represent algorithms are correct. Such an
approach requires a formal definition of programming language semantics well
beyond what we consider necessary; an adequate treatment of this subject would
deserve a book to itself. For our purposes we shall be content to rely on informal
proofs using common-sense arguments.
The chapter concludes with a number of examples of algorithms from different
areas, some good and some poor, to show how the principles we put forward apply
in practice.
Because a new algorithm may prove better than its predecessor only when they are both used on large instances, this last point
is particularly important.
It is also possible to analyse algorithms using a hybrid approach, where the form
of the function describing the algorithm's efficiency is determined theoretically, and
then any required numerical parameters are determined empirically for a particular
program and machine, usually by some kind of regression. Using this approach we
can predict the time an actual implementation will take to solve an instance much
larger than those used in the tests. Beware however of making such an extrapolation
solely on the basis of a small number of empirical tests, ignoring all theoretical
considerations. Predictions made without theoretical support are likely to be very
imprecise, if not plain wrong.
If we want to measure the amount of storage an algorithm uses as a function
of the size of the instances, there is a natural unit available to us, namely the bit.
Regardless of the machine being used, the notion of one bit of storage is well
defined. If on the other hand, as is more often the case, we want to measure the
efficiency of an algorithm in terms of the time it takes to arrive at an answer, there
is no such obvious choice. Clearly there can be no question of expressing this
efficiency in seconds, say, since we do not have a standard computer to which all
measurements might refer.
An answer to this problem is given by the principle of invariance, which states
that two different implementations of the same algorithm will not differ in efficiency
by more than some multiplicative constant. If this constant happens to be 5, for
example, then we know that, if the first implementation takes 1 second to solve
instances of a particular size, then the second implementation (maybe on a different
machine, or written in a different programming language) will not take longer than
5 seconds to solve the same instances. More precisely, if two implementations of
the same algorithm take t_1(n) and t_2(n) seconds, respectively, to solve an instance
of size n, then there always exist positive constants c and d such that t_1(n) ≤ c t_2(n)
and t_2(n) ≤ d t_1(n) whenever n is sufficiently large. In other words, the running
time of either implementation is bounded by a constant multiple of the running
time of the other; the choice of which implementation we call the first, and which
we call the second, is irrelevant. The condition that n be sufficiently large is not
really necessary: see the "threshold rule" in Section 3.2. However by including it
we can often find smaller constants c and d than would otherwise be the case. This
is useful if we are trying to calculate good bounds on the running time of one
implementation when we know the running time of the other.
This principle is not something we can prove: it simply states a fact that can be
confirmed by observation. Moreover it has very wide application. The principle
remains true whatever the computer used to implement an algorithm (provided it is
of conventional design), regardless of the programming language and the compiler
employed, and regardless even of the skill of the programmer (provided he or she
does not actually modify the algorithm!). Thus a change of machine may allow us
to solve a problem 10 times or 100 times faster, giving an increase in speed by a
constant factor. A change of algorithm, on the other hand-and only a change of
algorithm-may give us an improvement that gets more and more marked as the
size of the instances increases.
Returning to the question of the unit to be used to express the theoretical ef-
ficiency of an algorithm, the principle of invariance allows us to decide that there
will be no such unit. Instead, we only express the time taken by an algorithm to
within a multiplicative constant. We say that an algorithm for some problem takes
a time in the order of t (n), for a given function t, if there exist a positive constant c
and an implementation of the algorithm capable of solving every instance of size n
in not more than c t (n) seconds. (For numerical problems, as we remarked earlier,
n may sometimes be the value rather than the size of the instance.)
The use of seconds in this definition is obviously arbitrary: we only need to
change the constant to bound the time by at(n) years or bt(n) microseconds.
By the principle of invariance, if any one implementation of the algorithm has the
required property, then so do all the others, although the multiplicative constant
may change from one implementation to another. In the following chapter we
give a more rigorous treatment of this important concept known as the asymptotic
notation. It will be clear from the formal definition why we say "in the order of"
rather than the more usual "of the order of."
Certain orders occur so frequently that it is worth giving them a name. For ex-
ample, suppose the time taken by an algorithm to solve an instance of size n is
never more than cn seconds, where c is some suitable constant. Then we say that
the algorithm takes a time in the order of n, or more simply that it takes linear
time. In this case we also talk about a linear algorithm. If an algorithm never takes
more than cn² seconds to solve an instance of size n, then we say it takes a time in
the order of n², or quadratic time, and we call it a quadratic algorithm. Similarly an
algorithm is cubic, polynomial or exponential if it takes a time in the order of n³, n^k
or c^n, respectively, where k and c are appropriate constants. Section 2.6 illustrates
the important differences between these orders of magnitude.
the important differences between these orders of magnitude.
Do not fall into the trap of completely forgetting the hidden constants, as the
multiplicative constants used in these definitions are often called. We commonly
ignore the exact values of these constants and assume that they are all of about
the same order of magnitude. This lets us say, for instance, that a linear algorithm
is faster than a quadratic one without worrying whether our statement is true in
every case. Nevertheless it is sometimes necessary to be more careful.
Consider, for example, two algorithms whose implementations on a given ma-
chine take respectively n² days and n³ seconds to solve an instance of size n. It is
only on instances requiring more than 20 million years to solve that the quadratic
algorithm outperforms the cubic algorithm! (See Problem 2.7.) From a theoretical
point of view, the former is asymptotically better than the latter; that is, its perfor-
mance is better on all sufficiently large instances. From a practical point of view,
however, we will certainly prefer the cubic algorithm. Although the quadratic al-
gorithm may be asymptotically better, its hidden constant is so large as to rule it
out of consideration for normal-sized instances.
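The crossover point is easy to compute: the quadratic algorithm wins only once n² days is smaller than n³ seconds, that is, once n exceeds the number of seconds in a day. A short check (a sketch, taking 86400 seconds per day):

# Where does an n^2-day algorithm overtake an n^3-second one?
SECONDS_PER_DAY = 86400
n = SECONDS_PER_DAY                          # crossover: n^2 days == n^3 seconds
print(n**2 * SECONDS_PER_DAY == n**3)        # True
years = n**2 / 365.25                        # running time at the crossover, in years
print(years)                                 # roughly 2e7, i.e. about 20 million years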
procedure insert(T[1..n])
    for i ← 2 to n do
        x ← T[i]; j ← i − 1
        while j > 0 and x < T[j] do
            T[j + 1] ← T[j]
            j ← j − 1
        T[j + 1] ← x
procedure select(T[1..n])
    for i ← 1 to n − 1 do
        minj ← i; minx ← T[i]
        for j ← i + 1 to n do
            if T[j] < minx then minj ← j
                                minx ← T[j]
        T[minj] ← T[i]
        T[i] ← minx
Simulate the operation of these two algorithms on a few small arrays to make
sure you understand how they work. The main loop in insertion sorting looks suc-
cessively at each element of the array from the second to the n-th, and inserts it in
the appropriate place among its predecessors in the array. Selection sorting works
by picking out the smallest element in the array and bringing it to the beginning;
then it picks out the next smallest, and puts it in the second position in the array;
and so on.
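For readers who prefer runnable code, here is a direct Python transcription of the two procedures (a sketch only; the pseudocode above uses 1-based arrays, whereas Python lists are 0-based):

# Insertion sort: insert each element among its (already sorted) predecessors.
def insert_sort(T):
    for i in range(1, len(T)):
        x = T[i]
        j = i - 1
        while j >= 0 and x < T[j]:
            T[j + 1] = T[j]
            j -= 1
        T[j + 1] = x

# Selection sort: repeatedly bring the smallest remaining element to the front.
def select_sort(T):
    for i in range(len(T) - 1):
        minj = i
        for j in range(i + 1, len(T)):
            if T[j] < T[minj]:
                minj = j
        T[i], T[minj] = T[minj], T[i]

U = [1, 2, 3, 4, 5, 6]        # already sorted: insert_sort does little work
V = [6, 5, 4, 3, 2, 1]        # reverse order: the worst case for both
for sort in (insert_sort, select_sort):
    for arr in (U[:], V[:]):
        sort(arr)
        assert arr == [1, 2, 3, 4, 5, 6]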
Let U and V be two arrays of n elements, such that U is already sorted in
ascending order, whereas V is sorted in descending order. Problem 2.9 shows that
both these algorithms take more time on V than on U. In fact, array V represents the
worst possible case for these two algorithms: no array of n elements requires more
work. Nonetheless, the time required by the selection sort algorithm is not very
sensitive to the original order of the array to be sorted: the test "if T[j] < minx" is
executed exactly the same number of times in every case. The variation in execution
time is only due to the number of times the assignments in the then part of this test
are executed. When we programmed this algorithm and tested it on a machine,
we found that the time required to sort a given number of elements did not vary
by more than 15% whatever the initial order of the elements to be sorted. As we
will show in Section 4.4, the time required by select(T) is quadratic, regardless of
the initial order of the elements.
The situation is different if we compare the times taken by the insertion sort
algorithm on the same two arrays. Because the condition controlling the while
loop is always false at the outset, insert(U) is very fast, taking linear time. On the
other hand, insert(V) takes quadratic time because the while loop is executed i - 1
times for each value of i; see Section 4.4 again. The variation in time between
these two instances is therefore considerable. Moreover, this variation increases
with the number of elements to be sorted. When we implemented the insertion
sort algorithm, we found that it took less than one-fifth of a second to sort an
array of 5000 elements already in ascending order, whereas it took three and a half
minutes-that is, a thousand times longer-to sort an array with the same number
of elements, this time initially in descending order.
If such large variations can occur, how can we talk about the time taken by
an algorithm solely in terms of the size of the instance to be solved? We usually
consider the worst case of the algorithm, that is, for each size of instance we only
consider those on which the algorithm requires the most time. This is why we said
in the preceding section that an algorithm must be able to solve every instance of
size n in not more than c t (n) seconds, for an appropriate constant c that depends
on the implementation, if it is to run in a time in the order of t (n): we implicitly
had the worst case in mind.
Worst-case analysis is appropriate for an algorithm whose response time is
critical. For example, if it is a question of controlling a nuclear power plant, it is
crucial to know an upper limit on the system's response time, regardless of the
particular instance to be solved. On the other hand, if an algorithm is to be used
many times on many different instances, it may be more important to know the
average execution time on instances of size n. We saw that the time taken by the
insertion sort algorithm varies between the order of n and the order of n2 . If we
can calculate the average time taken by the algorithm on the n! different ways
of initially ordering n distinct elements, we shall have an idea of the likely time
taken to sort an array initially in random order. We shall see in Section 4.5 that if
the n! initial permutations are equally likely, then this average time is also in the
order of n 2 . Insertion sorting thus takes quadratic time both on the average and in
the worst case, although for some instances it can be much faster. In Section 7.4.2
we shall see another sorting algorithm that also takes quadratic time in the worst
case, but that requires only a time in the order of n log n on the average. Even
though this algorithm has a bad worst case-quadratic performance is slow for a
sorting algorithm-it is probably the fastest algorithm known on the average for
an in-place sorting method, that is, one that does not require additional storage.
It is usually harder to analyse the average behaviour of an algorithm than to
analyse its behaviour in the worst case. Furthermore, such an analysis of average
behaviour can be misleading if in fact the instances to be solved are not chosen
randomly when the algorithm is used in practice. For example, we stated above that
insertion sorting takes quadratic time on the average when all the n! possible initial
arrangements of the elements are equally probable. However in many applications
this condition may be unrealistic. If a sorting program is used to update a file, for
instance, it might mostly be asked to sort arrays whose elements are already nearly
in order, with just a few new elements out of place. In this case its average behaviour
on randomly chosen instances will be a poor guide to its real performance.
A useful analysis of the average behaviour of an algorithm therefore requires
some a priori knowledge of the distribution of the instances to be solved. This
is normally an unrealistic requirement. Especially when an algorithm is used as
an internal procedure in some more complex algorithm, it may be impractical
to estimate which instances it is most likely to encounter, and which will only
occur rarely. In Section 10.7, however, we shall see how this difficulty can be
circumvented for certain algorithms, and their behaviour made independent of
the specific instances to be solved.
In what follows we shall only be concerned with worst-case analyses unless
stated otherwise.
2.5 What is an elementary operation?
x ← T[1]
for i ← 2 to n do
    if T[i] < x then x ← T[i]
The example at the beginning of this section suggested that we can consider
addition and multiplication to be unit cost operations, since it assumed that the
time required for these operations could be bounded by a constant. In theory,
however, these operations are not elementary since the time needed to execute
them increases with the length of the operands. In practice, on the other hand, it
may be sensible to consider them as elementary operations so long as the operands
concerned are of a reasonable size in the instances we expect to encounter. Two
examples will illustrate what we mean.
function Sum(n)
{Calculates the sum of the integers from 1 to n}
sum ← 0
for i ← 1 to n do sum ← sum + i
return sum
function Fibonacci(n)
{Calculates the n-th term of the Fibonacci sequence;
see Section 1.6.4}
i ← 1; j ← 0
for k ← 1 to n do j ← i + j
                  i ← j − i
return j
In the algorithm called Sum the value of sum stays reasonable for all the in-
stances that the algorithm can realistically be expected to meet in practice. If we
are using a machine with 32-bit words, all the additions can be executed directly
provided n is no greater than 65 535. In theory, however, the algorithm should
work for all possible values of n. No real machine can in fact execute these ad-
ditions at unit cost if n is chosen sufficiently large. The analysis of the algorithm
must therefore depend on its intended domain of application.
The situation is different in the case of Fibonacci. Here it suffices to take n = 47
to have the last addition "j - i + j " cause arithmetic overflow on a 32-bit machine.
To hold the result corresponding to n = 65 535 we would need 45 496 bits, or more
than 1420 computer words. It is therefore not realistic, as a practical matter, to
consider that these additions can be carried out at unit cost. Rather, we must
attribute to them a cost proportional to the length of the operands concerned.
In Section 4.2.2 this algorithm is shown to take quadratic time, even though at
first glance its execution time appears to be linear.
In the case of multiplication it may still be reasonable to consider this an ele-
mentary operation for sufficiently small operands. However it is easier to produce
large operands by repeated multiplication than by addition, so it is even more im-
portant to ensure that arithmetic operations do not overflow. Furthermore, when
the operands do start to get large, the time required to perform an addition grows
linearly with the size of the operands, but the time required to perform a multipli-
cation is believed to grow faster than this.
A similar problem can arise when we analyse algorithms involving real num-
bers if the required precision increases with the size of the instances to be solved.
One typical example of this phenomenon is the use of de Moivre's formula (see Prob-
lem 1.27) to calculate values in the Fibonacci sequence. This formula tells us
that f_n, the n-th term in the sequence, is approximately equal to φⁿ/√5, where
φ = (1 + √5)/2 is the golden ratio. The approximation is good enough that we
can in principle obtain the exact value of fn by simply taking the nearest integer;
see Problem 2.23. However we saw above that 45 496 bits are required to represent
f65535 accurately. This means that we would have to calculate the approximation
with the same degree of accuracy to obtain the exact answer. Ordinary single
or double precision floating-point arithmetic, using one or two computer words,
would certainly not be accurate enough. In most practical situations, however,
the use of single or double precision floating-point arithmetic proves satisfactory,
despite the inevitable loss of precision. When this is so, it is reasonable to count
such arithmetic operations at unit cost.
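A sketch of the precision problem just described: evaluating de Moivre's formula in ordinary double-precision floating point and rounding to the nearest integer is exact only for fairly small n; the exact integer values produced by the iterative algorithm given earlier serve as the reference.

# De Moivre's formula in double precision versus exact integer Fibonacci numbers.
from math import sqrt

def fib_exact(n):
    i, j = 1, 0
    for _ in range(n):
        j = i + j
        i = j - i
    return j

phi = (1 + sqrt(5)) / 2
for n in (10, 40, 70, 80):
    approx = round(phi ** n / sqrt(5))     # nearest integer to phi^n / sqrt(5)
    print(n, approx == fib_exact(n))       # True for small n, False once precision runs out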
To sum up, even deciding whether an instruction as apparently innocuous as
"j ← i + j" can be considered as elementary or not calls for the use of judgement.
an instance of size 10 will take you 10 seconds, and an instance of size 20 will still
require between one and two minutes. But now an instance of size 30 can be solved
in four and a half minutes, and in one day you can solve instances whose size is
greater than 200; with one year's computation you can almost reach size 1500. This
is illustrated by Figure 2.1.
Not only does the new algorithm offer a much greater improvement than the pur-
chase of new hardware, it will also, supposing you are able to afford both, make
such a purchase much more profitable. If you can use both your new algorithm
and a machine one hundred times faster than the old one, then you will be able to
solve instances four or five times bigger than with the new algorithm alone, in the
same length of time-the exact factor is the cube root of 100, about 4.64. Compare this to the situation with
the old algorithm, where you could add 7 to the size of the instance; here you can
multiply the size of the instance by four or five. Nevertheless the new algorithm
should not be used uncritically on all instances of the problem, in particular on
rather small ones. We saw above that on the original machine the new algorithm
takes 10 seconds to solve an instance of size 10, which is one hundred times slower
than the old algorithm. The new algorithm is faster only for instances of size 20 or
greater. Naturally, it is possible to combine the two algorithms into a third one that
looks at the size of the instance to be solved before deciding which method to use.
2.7.2 Sorting
The sorting problem is of major importance in computer science, and in particular
in algorithmics. We are required to arrange in order a collection of n objects on
which a total ordering is defined. By this we mean that when we compare any two
objects in the collection, we know which should come first. For many kinds of
objects, this requirement is trivial: obviously 123 comes before 456 in numerical
order, and 1 August 1991 comes before 25 December 1995 in calendar order, so that
both integers and dates are totally ordered. For other, equally common objects,
though, defining a total order may not be so easy. For example, how do you
order two complex numbers? Does "general" come before or after "General" in
alphabetical order, or are they the same word (see Problem 2.16)? Neither of these
questions has an obvious answer, but until an answer is found the corresponding
objects cannot be sorted.
Sorting problems are often found inside more complex algorithms. We have
already seen two standard sorting algorithms in Section 2.4: insertion sorting and
selection sorting. Both these algorithms, as we saw, take quadratic time both in the
worst case and on the average. Although they are excellent when n is small, other
sorting algorithms are more efficient when n is large. Among others, we might
use Williams's heapsort algorithm (see Section 5.7), mergesort (see Section 7.4.1), or
Hoare's quicksort algorithm (see Section 7.4.2). All these algorithms take a time in
the order of n log n on the average; the first two take a time in the same order even
in the worst case.
To have a clearer idea of the practical difference between a time in the order of
n² and a time in the order of n log n, we programmed the insertion sort algorithm
and quicksort on our local machine. The difference in efficiency between the two
algorithms is marginal when the number of elements to be sorted is small. Quicksort
is already almost twice as fast as insertion when sorting 50 elements, and three times
as fast when sorting 100 elements. To sort 1000 elements, insertion takes more than
three seconds, whereas quicksort requires less than one-fifth of a second. When we
have 5000 elements to sort, the inefficiency of insertion sorting becomes still more
pronounced: one and a half minutes are needed on average, compared to little more
than one second for quicksort. In 30 seconds, quicksort can handle 100 000 elements;
we estimate it would take nine and a half hours to do the same job using insertion
sorting.
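A rough way to reproduce this kind of comparison is sketched below (not the authors' experiment: absolute times depend on the machine, and Python's built-in sort merely stands in for quicksort).

    import random
    import time

    def insertion_sort(t):
        # straightforward insertion sort, quadratic on average and in the worst case
        for i in range(1, len(t)):
            x = t[i]
            j = i - 1
            while j >= 0 and x < t[j]:
                t[j + 1] = t[j]
                j -= 1
            t[j + 1] = x

    def measure(sort, n):
        data = [random.randint(0, 10**6) for _ in range(n)]
        start = time.perf_counter()
        sort(data)
        return time.perf_counter() - start

    for n in (50, 100, 1000, 5000):
        print(n, measure(insertion_sort, n), measure(sorted, n))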
We shall see in Chapter 12 that no sorting algorithm that proceeds by comparing
the elements to be sorted can be faster than the order of n log n, so that in this sense
heapsort, mergesort and quicksort are as fast as an algorithm can be (although quicksort
has a bad worst case). Of course their actual running times depend on the hidden
multiplicative constants in the definition of "the order of." Other, faster sorting
algorithms can be found in special cases, however. Suppose for instance that the
elements to be sorted are integers known to lie between 1 and 10 000. Then the
following algorithm can be used.
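In Python, such a pigeon-hole (counting) sort might look as follows; this is an illustrative sketch, not the book's listing, assuming as stated that the values are integers between 1 and 10 000. A loop of the same shape, emptying the pigeon-holes in order, is analysed in detail in Chapter 4.

    def pigeonhole_sort(t, bound=10000):
        # u[k] counts how many times the value k appears in t
        u = [0] * (bound + 1)
        for x in t:
            u[x] += 1
        # empty the pigeon-holes in increasing order of k
        result = []
        for k in range(1, bound + 1):
            result.extend([k] * u[k])
        return result

    print(pigeonhole_sort([9731, 12, 9731, 4, 10000]))   # [4, 12, 9731, 9731, 10000]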
Parallel sorting methods, which use many processors to carry out several com-
parisons simultaneously, allow us to go faster still. An example of a parallel sorting
algorithm is outlined in Chapter 11.
algorithm is some three times more efficient than the classic algorithm: they take
about 15 seconds and 40 seconds, respectively. The gain in efficiency continues to
increase as the size of the operands goes up.
More sophisticated algorithms exist, the fastest at present taking a time in the
order of n log n log log n to multiply two integers of size n. However these more
sophisticated algorithms are largely of theoretical interest; the hidden constants
involved are such that they only become competitive for much larger operands.
For "small" instances involving operands with only a few thousand decimal digits,
they are considerably slower than the algorithms mentioned above.
function gcd(m, n)
    i ← min(m, n) + 1
    repeat i ← i − 1 until i divides both m and n exactly
    return i
The time taken by this algorithm is in the order of the difference between the
smaller of the two arguments and their greatest common divisor. When m and n
are of similar size and coprime, it therefore takes a time in the order of n. (Notice
that this is the value of the operand, not its size.)
A classic algorithm for calculating gcd ( m, n) consists of first factorizing m and
n, and then taking the product of the prime factors common to m and n, each prime
factor being raised to the lower of its powers in the two arguments. For example,
to calculate gcd(120, 700) we first factorize 120 = 2³ × 3 × 5 and 700 = 2² × 5² × 7.
The common factors of 120 and 700 are therefore 2 and 5, and their lower powers
are 2 and 1, respectively. The greatest common divisor of 120 and 700 is therefore
2² × 5 = 20.
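For illustration only (not from the text), a direct transcription of this method, using trial division for the factorization, which is precisely the step that becomes hopeless for large operands:

    def factorize(n):
        # prime factorization by trial division, returned as {prime: exponent}
        factors = {}
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def gcd_by_factoring(m, n):
        fm, fn = factorize(m), factorize(n)
        g = 1
        for p in fm:
            if p in fn:
                g *= p ** min(fm[p], fn[p])   # each common prime to its lower power
        return g

    print(gcd_by_factoring(120, 700))   # 20, as computed above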
Even though this algorithm is better than the one given previously, it requires
us to factorize m and n, an operation nobody knows how to do efficiently when
m and n are large; see Section 10.7.4. In fact there exists a much more efficient
algorithm for calculating greatest common divisors, known as Euclid's algorithm,
even though it can be traced back well before the time of the ancient Greeks.
function Euclid(m, n)
    while m > 0 do
        t ← m
        m ← n mod m
        n ← t
    return n
If we consider the arithmetic operations involved to have unit cost, this algo-
rithm takes a time in the order of the logarithm of its arguments-that is, in the
order of their size-even in the worst case; see Section 4.4. To be historically ex-
act, Euclid's original algorithm works using successive subtractions rather than by
calculating a modulo. In this form it is more than 3500 years old.
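A direct Python transcription (a sketch; the subtraction-only variant mentioned above is included for comparison):

    def euclid(m, n):
        # gcd by repeated division, mirroring the pseudocode above
        while m > 0:
            m, n = n % m, m
        return n

    def euclid_by_subtraction(m, n):
        # the historical form: repeated subtraction instead of the modulo operation
        while m != n:
            if m > n:
                m -= n
            else:
                n -= m
        return m

    print(euclid(120, 700), euclid_by_subtraction(120, 700))   # 20 20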
2.7.5 Calculating the Fibonacci sequence

Recall that the Fibonacci sequence is defined by f₀ = 0, f₁ = 1 and
fₙ = fₙ₋₁ + fₙ₋₂ for n ≥ 2.
As we saw, the sequence begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34 ... We also saw
de Moivre's formula
fₙ = [φⁿ − (−1/φ)ⁿ]/√5,

where φ = (1 + √5)/2 is the golden ratio, and we pointed out that the term (−1/φ)ⁿ
can be neglected when n is large. Hence the value of fₙ is in the order of φⁿ, and
therefore the size of fₙ is in the order of n. However, de Moivre's formula is of little
immediate help in calculating fₙ exactly, since the larger n becomes, the greater is
the degree of precision required in the values of √5 and φ; see Section 2.5. On our
local machine, a single-precision computation produces an error for the first time
when calculating f₆₆.
The recursive algorithm obtained directly from the definition of the Fibonacci
sequence was given in Section 1.6.4 under the name Fibonacci.
function Fibrec(n)
    if n < 2 then return n
    else return Fibrec(n − 1) + Fibrec(n − 2)
This algorithm is very inefficient because it recalculates the same values many
times. For instance, to calculate Fibrec(5) we need the values of Fibrec(4) and
Fibrec(3); but Fibrec(4) also calls for the calculation of Fibrec(3). It is simple to check
that Fibrec(3) will be calculated twice, Fibrec(2) three times, Fibrec(1) five times,
and Fibrec(0) three times. (The number of calls of Fibrec(5), Fibrec(4), ..., Fibrec(1)
are thus 1, 1, 2, 3 and 5 respectively. It is no coincidence that this is the beginning
of the Fibonacci sequence; see Problem 2.24.)
In fact, the time required to calculate fₙ using this algorithm is in the order of
the value of fₙ itself, that is, in the order of φⁿ. To see this, note that the recursive
calls only stop when Fibrec returns a value 0 or 1. Adding these intermediate results
to obtain the final result fₙ must take at least fₙ operations, and hence the complete
algorithm certainly takes a number of elementary operations at least in the order
of fₙ. This was proven formally by constructive induction in Section 1.6.4, together
with a proof that the number of operations required is not more than the order of
fₙ, provided the additions are counted at unit cost. The case when additions are
not counted at unit cost yields the same conclusion, as the more precise analysis
given in Section 4.2.3 shows.
To avoid wastefully calculating the same values over and over, it is natural to
proceed as in Section 2.5, where a different algorithm Fibonacci was introduced,
which we rename Fibiter for comparison with Fibrec.
function Fibiter(n)
    i ← 1; j ← 0
    for k ← 1 to n do
        j ← i + j
        i ← j − i
    return j
This second algorithm takes a time in the order of n, assuming we count each
addition as an elementary operation. Figure 2.2, which shows some computation
times we observed in practice, illustrates the difference. To avoid the problems
caused by ever-longer operands, the computations reported in this figure were
carried out modulo 10⁷, which is to say that we only computed the seven least
significant figures of the answer. The times for Fibrec when n > 50 were estimated
using the hybrid approach.
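The two algorithms can be transcribed as follows (a sketch of the experiment, not the authors' code; as in Figure 2.2, all arithmetic is done modulo 10⁷ so the operands stay short):

    MOD = 10**7   # keep only the seven least significant figures

    def fibrec(n):
        if n < 2:
            return n
        return (fibrec(n - 1) + fibrec(n - 2)) % MOD

    def fibiter(n):
        i, j = 1, 0
        for _ in range(n):
            j = (i + j) % MOD   # j <- i + j
            i = j - i           # i <- j - i
        return j

    print(fibrec(30), fibiter(30))   # both print 832040
    # fibrec(100) is hopeless (see Figure 2.2); fibiter(100) returns instantly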
n          10         20         30         50         100
Fibrec     8 msec     1 sec      2 min      21 days    10⁹ years
Fibiter    1/6 msec   1/3 msec   1/2 msec   3/4 msec   1 1/2 msec

Figure 2.2. Comparison of modulo 10⁷ Fibonacci algorithms
classic algorithm took more than 26 minutes of computation, the "new" algorithm
was able to perform the same task in less than two and a half seconds.
Ironically it turned out that an efficient algorithm had already been published
in 1942 by Danielson and Lanczos, and all the necessary theoretical groundwork
for Danielson and Lanczos's algorithm had been published by Runge and Konig
in 1924. And if that were not sufficient, Gauss describes a similar algorithm in a
paper written around 1805 and published posthumously in 1866!
2.9 Problems
Problem 2.1. Find a more practical algorithm for calculating the date of Easter
than the one given in Problem 1.2. What will be the date of Easter in the year 2000?
What is the domain of definition of your algorithm?
Problem 2.2. In the game of chess, your opponent's pieces and moves are all
visible. We say that chess is a game with complete information. In games such as
bridge or poker, however, you do not know how the cards have been dealt. Does
this make it impossible to define an algorithm for playing good bridge or poker?
Would such an algorithm necessarily be probabilistic?
Problem 2.3. What about backgammon? Here you can see your opponent's pieces and moves,
but you cannot predict how the dice will fall on future throws. Does this alter the
situation?
Problem 2.4. Using the technique called "virtual memory", it is possible to free
a programmer from most worries about the actual size of the storage available on
his machine. Does this mean that the quantity of storage used by an algorithm is
never of interest in practice? Justify your answer.
Problem 2.5. Suppose you measure the performance of a program, perhaps using
some kind of run-time trace, and then you optimize the heavily-used parts of the
code. However, you are careful not to change the underlying algorithm. Would
you expect to obtain (a) a gain in efficiency by a constant factor, whatever the
problem being solved, or (b) a gain in efficiency that gets proportionally greater as
the problem size increases? Justify your answer.
Problem 2.6. A sorting algorithm takes 1 second to sort 1000 items on your local
machine. How long would you expect it to take to sort 10 000 items (a) if you
believe that the algorithm takes a time roughly proportional to n², and (b) if you
believe that the algorithm takes a time roughly proportional to n log n?
Problem 2.7. Two algorithms take n² days and n³ seconds respectively to solve an
instance of size n. Show that it is only on instances requiring more than 20 million
years to solve that the quadratic algorithm outperforms the cubic algorithm.
Problem 2.8. Two algorithms take n² days and 2ⁿ seconds respectively to solve
an instance of size n. What is the size of the smallest instance on which the former
algorithm outperforms the latter? Approximately how long does such an instance
take to solve?
Problem 2.9. Simulate both the insertion sort and the selection sort algorithms of
Section 2.4 on the following two arrays: U = [1, 2,3,4,5,6] and V = [6,5,4,3,2,1].
Does insertion sorting run faster on the array U or the array V? And selection
sorting? Justify your answers.
Problem 2.10. Suppose you try to "sort" an array W = [1, 1, 1, 1, 1, 1] all of whose
elements are equal, using (a) insertion sorting and (b) selection sorting. How does
this compare to sorting the arrays U and V of the previous problem?
Problem 2.11. You are required to sort a file containing integers between 0 and
999 999. You cannot afford to use one million pigeon-holes, so you decide instead
to use one thousand pigeon-holes numbered from 0 to 999. You begin the sort by
putting each integer into the pigeon-hole corresponding to its first three figures.
Next you use insertion sorting one thousand times to sort the contents of each
pigeon-hole separately, and finally you empty the pigeon-holes in order to obtain
a completely sorted sequence.
Would you expect this technique to be faster, slower or the same as simply using
insertion sorting on the whole sequence (a) on the average, and (b) in the worst
case? Justify your answers.
Problem 2.12. Is it reasonable, as a practical matter, to consider division as an
elementary operation (a) always, (b) sometimes, or (c) never? Justify your answer.
If you think it necessary, you may treat the division of integers and the division of
real numbers separately.
Problem 2.13. Suppose n is an integer variable in a program you are writing.
Consider the instruction x ← sin(n), where n may be supposed to be in radians.
As a practical matter, would you regard the execution of this instruction as an
elementary operation (a) always, (b) sometimes, or (c) never? Justify your answer.
What about the instruction x ← sin(nπ)?
Problem 2.14. In Section 2.5, we saw that Wilson's theorem could be used to
test any number for primality in constant time if factorials and tests for integer
divisibility were counted at unit cost, regardless of the size of the numbers involved.
Clearly this would be unreasonable.
Use Wilson's theorem together with Newton's binomial theorem to design an algo-
rithm capable of deciding in a time in the order of log n whether or not an integer
n is prime, provided additions, multiplications and tests for integer divisibility are
counted at unit cost, but factorials and exponentials are not. The point of this exer-
cise is not to provide a useful algorithm, but to demonstrate that it is unreasonable
to consider multiplications as elementary operations in general.
Problem 2.17. Show that pigeon-hole sorting takes a time in the order of n to sort
n elements that are within bounds.
Problem 2.18. In Section 2.7.3 we said that the analysis of algorithms for large
integers is not affected by the choice of a measure for the size of the operands: the
number of computer words needed, or the length of their representation in decimal
or binary will do equally well. Show that this remark would in fact be false were
we considering exponential-time algorithms.
Problem 2.19. How much time is required to add or subtract two large integers
of size m and n respectively? Sketch the appropriate algorithm.
Problem 2.20. How much time is required to multiply two large integers of size
m and n, respectively, using multiplication à la russe (a) if the smaller operand is
in the left-hand column, and (b) if the larger operand is in the left-hand column?
Of course you should not take addition, doubling and halving to be elementary
operations in this problem.
Problem 2.21. How much time is required to multiply two large integers of size
m and n, respectively, using multiplication round a rectangle (see Problem 1.9).
Problem 2.22. Calculate gcd(606, 979) (a) by factorizing 606 and 979, and pick-
ing out the common factors to the appropriate power, and (b) using Euclid's
algorithm.
Problem 2.23. Use de Moivre's formula for fₙ to show that fₙ is the nearest
integer to φⁿ/√5 for all n ≥ 1.
Problem 2.24. Show that when calculating fₙ using Fibrec from Section 2.7.5, there
are in all fₙ₋ᵢ₊₁ calls of Fibrec(i) for i = 1, 2, ..., n, and fₙ₋₁ calls of Fibrec(0).
Problem 2.25. Let g(n) be the number of ways to write a string of n zeros and
ones so that there are never two successive zeros. For example, when n = 1 the
possible strings are 0 and 1, so g(1)= 2; when n = 2 the possible strings are 01, 11
and 10, so g(2) = 3; when n = 3 the possible strings are 010, 011, 101, 110 and 111,
so g(3) = 5. Show that g(n) = fₙ₊₂.
Asymptotic Notation
3.1 Introduction
An important aspect of this book concerns determining the efficiency of algorithms.
In Section 2.3, we saw that this may, for instance, help us choose among several
competing algorithms. Recall that we wish to determine mathematically the quan-
tity of resources needed by an algorithm as a function of the size (or occasionally
of the value) of the instances considered. Because there is no standard computer to
which all measurements of computing time might refer, we saw also in Section 2.3
that we shall be content to express the time taken by an algorithm to within a mul-
tiplicative constant. To this end, we now introduce formally the asymptotic notation
used throughout the book. In addition, this notation permits substantial simplifi-
cations even when we are interested in measuring something more tangible than
computing time, such as the number of times a given instruction in a program is
executed.
This notation is called "asymptotic" because it deals with the behaviour of func-
tions in the limit, that is for sufficiently large values of the parameter. Accordingly,
arguments based on asymptotic notation may fail to have practical value when
the parameter takes "real-life" values. Nevertheless, the teachings of asymptotic
notation are usually of significant relevance. This is because, as a rule of thumb, an
asymptotically superior algorithm is very often (albeit not always) preferable even
on instances of moderate size.
42 16/113 n² = 42 16/113 f(n).
Even though it is natural to use the set-theoretic symbol "∈" to denote the
fact that n² is in the order of n³, as in "n² ∈ O(n³)", be warned that the traditional
notation for this is n² = O(n³). Therefore, do not be surprised if you find these so-
called "one-way equalities" (for one would never write O(n³) = n²) in other books
or scientific papers. Those using one-way equalities say that n² is of (or sometimes
on) the order of n³, or simply n² is O(n³). Another significant difference you may
encounter in the definition of the O notation is that some writers allow O(f(n))
to include functions from the natural numbers to the entire set of reals-including
negative reals-and define a function to be in O(f(n)) if its absolute value is in what
we call O(f(n)).
For convenience, we allow ourselves to misuse the notation from time to time
(as well as other notation introduced later in this chapter). For instance, we may
say that t(n) is in the order of f(n) even if t(n) is negative or undefined for a
finite number of values of n. Similarly, we may talk about the order of f(n) even
if f(n) is negative or undefined for a finite number of values of n. We shall say
that t(n) ∈ O(f(n)) if there is a positive real constant c and an integer threshold n₀
such that both t(n) and f(n) are well-defined and 0 ≤ t(n) ≤ cf(n) whenever
n ≥ n₀, regardless of what happens to these functions when n < n₀. For example,
it is allowable to talk about the order of n/log n, even though this function is not
defined when n = 0 or n = 1, and it is correct to write n³ − 3n² − n − 8 ∈ O(n³)
even though n³ − 3n² − n − 8 < 0 when n ≤ 3.
The threshold n₀ is often useful to simplify arguments, but it is never neces-
sary when we consider strictly positive functions. Let f, t : N → ℝ⁺ be two func-
tions from the natural numbers to the strictly positive reals. The threshold rule
states that t(n) ∈ O(f(n)) if and only if there exists a positive real constant c such
that t(n) ≤ cf(n) for each natural number n. One direction of the rule is obvious
since any property that is true for each natural number is also true for each suffi-
ciently large integer (simply take n₀ = 0 as the threshold). Assume for the other
direction that t(n) ∈ O(f(n)). Let c and n₀ be the relevant constants such that
t(n) ≤ cf(n) whenever n ≥ n₀. Assume n₀ > 0 since otherwise there is nothing
to prove. Let b = max{t(n)/f(n) | 0 ≤ n < n₀} be the largest value taken by the
ratio of t and f on natural numbers smaller than n₀ (this definition makes sense
precisely because f(n) cannot be zero and n₀ > 0). By definition of the maxi-
mum, b ≥ t(n)/f(n), and therefore t(n) ≤ bf(n), whenever 0 ≤ n < n₀. We al-
ready know that t(n) ≤ cf(n) whenever n ≥ n₀. Therefore t(n) ≤ af(n) for each
natural number n, as we had to prove, provided we choose a at least as large as
both b and c, such as a = max(b, c). The threshold rule can be generalized to say
that if t(n) ∈ O(f(n)) and if f(n) is strictly positive for all n ≥ n₀ for some n₀,
then this n₀ can be used as the threshold for the O notation: there exists a positive
real constant c such that t(n) ≤ cf(n) for all n ≥ n₀.
A useful tool for proving that one function is in the order of another is the
maximum rule. Let f, g : N → ℝ≥0 be two arbitrary functions from the natural
numbers to the nonnegative reals. The maximum rule says that O(f(n) + g(n)) =
O(max(f(n), g(n))). More specifically, let p, q : N → ℝ≥0 be defined for each nat-
ural number n by p(n) = f(n) + g(n) and q(n) = max(f(n), g(n)), and consider
an arbitrary function t : N → ℝ≥0. The maximum rule says that t(n) ∈ O(p(n)) if
and only if t(n) ∈ O(q(n)). This rule generalizes to any finite constant number
of functions. Before proving it, we illustrate a natural interpretation of this rule.
Consider an algorithm that proceeds in three steps: initialization, processing and
finalization. Assume for the sake of argument that these steps take time in O(n²),
O(n³) and O(n log n), respectively. It is therefore clear (see Section 4.2) that the
complete algorithm takes a time in O(n² + n³ + n log n). Although it would not
be hard to prove directly that this is the same as O(n³), it is immediate from the
maximum rule.

O(n² + n³ + n log n) = O(max(n², n³, n log n)) = O(n³)
In other words, even though the time taken by an algorithm is logically the sum of
the times taken by its disjoint parts, it is in the order of the time taken by its most
time-consuming part, provided the number of parts is a constant, independent of
input size.
We now prove the maximum rule for the case of two functions. The general
case of any fixed number of functions is left as an exercise. Observe that

f(n) + g(n) = min(f(n), g(n)) + max(f(n), g(n))

and

0 ≤ min(f(n), g(n)) ≤ max(f(n), g(n)).

It follows that

max(f(n), g(n)) ≤ f(n) + g(n) ≤ 2 max(f(n), g(n)).
where the second line is obtained from the maximum rule. This does not use the
maximum rule properly since the function −5n² is negative. Nevertheless, the
following reasoning is correct:
Even though n³ log n − 5n² is negative and 36 is larger than ½ n³ log n for small
values of n, all is well since this does not occur whenever n is sufficiently large.
Another useful observation is that it is usually unnecessary to specify the base of
a logarithm inside an asymptotic notation. This is because log_a n = log_a b × log_b n
for all positive reals a, b and n such that neither a nor b is equal to 1. The point
is that log_a b is a positive constant when a and b are constants larger than 1.
Therefore log_a n and log_b n differ only by a constant multiplicative factor. From
this, it is elementary to prove that O(log_a n) = O(log_b n), which we usually sim-
plify as O(log n). This observation also applies to more complicated functions,
such as positive polynomials involving n and log n, and ratios of such polyno-
mials. For instance, O(n lg n) is the same as the more usual O(n log n), and
O(n²/(log₃ n √(n lg n))) is the same as O((n/log n)^1.5). However, the base of
the logarithm cannot be ignored when it is smaller than 1, when it is not a con-
stant, as in O(log_n n) ≠ O(log n), or when the logarithm is in the exponent, as in
O(2^lg n) ≠ O(2^log n).
It is easy to prove that the notation "∈ O" is reflexive and transitive. In
other words, f(n) ∈ O(f(n)) for any function f : N → ℝ≥0, and it follows
from f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)) that f(n) ∈ O(h(n)) for any functions
f, g, h : N → ℝ≥0; see Problems 3.9 and 3.10. As a result, this notation provides a
way to define a partial order on functions and consequently on the relative efficiency
of different algorithms to solve a given problem; see Problem 3.21. However, the
induced order is not total as there exist functions f, g : N → ℝ≥0 such that neither
f(n) ∈ O(g(n)) nor g(n) ∈ O(f(n)); see Problem 3.11.
We have seen several examples of functions t(n) and f(n) for which it is easy
to prove that t(n) ∈ O(f(n)). For this, it suffices to exhibit the appropriate con-
stants c and n₀ and show that the desired relation holds. How could we go about
proving that a given function t(n) is not in the order of another function f(n)? The
simplest way is to use a proof by contradiction. The following example makes the
case. Let t(n) = n³/1000 and f(n) = 1000n². If you try several values of n smaller
than one million, you find that t(n) < f(n), which may lead you to believe by in-
duction that t(n) ∈ O(f(n)), taking 1 as the multiplicative constant. If you attempt
to prove this, however, you are likely to end up with the following proof by contra-
diction that it is false. To prove that t(n) ∉ O(f(n)), assume for a contradiction that
t(n) ∈ O(f(n)). Using the generalized threshold rule, this implies the existence of
a positive real constant c such that t(n) ≤ cf(n) for all n ≥ 1. But t(n) ≤ cf(n)
means n³/1000 ≤ 1000cn², which implies that n ≤ 10⁶c. In other words, assuming
t(n) ∈ O(f(n)) implies that every positive integer n is smaller than some fixed
constant, which is clearly false.
The most powerful and versatile tool both to prove that some functions are in
the order of others and to prove the opposite is the limit rule, which states that,
given arbitrary functions f and g : N → ℝ≥0,

1. if lim_{n→∞} f(n)/g(n) ∈ ℝ⁺ then f(n) ∈ O(g(n)) and g(n) ∈ O(f(n));

2. if lim_{n→∞} f(n)/g(n) = 0 then f(n) ∈ O(g(n)) but g(n) ∉ O(f(n)); and

3. if lim_{n→∞} f(n)/g(n) = +∞ then f(n) ∉ O(g(n)) but g(n) ∈ O(f(n)).
We illustrate the use of this rule before proving it. Consider the two functions
f(n) = log n and g(n) = √n. We wish to determine the relative order of these
functions. Since both f(n) and g(n) tend to infinity as n tends to infinity, we use
de l'Hôpital's rule to compute

lim_{n→∞} log n / √n = lim_{n→∞} (1/n) / (1/(2√n)) = lim_{n→∞} 2/√n = 0.

Now the limit rule immediately shows that log n ∈ O(√n) whereas √n ∉ O(log n).
In other words, √n grows asymptotically faster than log n. We now prove the limit
rule.
g(n) ≤ cf(n),

and therefore 1/c ≤ f(n)/g(n), for all sufficiently large n. Since

lim_{n→∞} f(n)/g(n)

exists by assumption that it equals 0 and lim_{n→∞} 1/c = 1/c exists as well,
Proposition 1.7.8 applies to conclude that 1/c ≤ lim_{n→∞} f(n)/g(n) = 0, which
is a contradiction since c > 0. We have contradicted the assumption that
g(n) ∈ O(f(n)) and therefore established as required that g(n) ∉ O(f(n)).

3. Assume lim_{n→∞} f(n)/g(n) = +∞. This implies that

lim_{n→∞} g(n)/f(n) = 0,

and therefore the previous case applies mutatis mutandis with f(n) and g(n)
interchanged.
The converse of the limit rule does not necessarily apply: it is not always the case
that lim_{n→∞} f(n)/g(n) ∈ ℝ⁺ when f(n) ∈ O(g(n)) and g(n) ∈ O(f(n)). Although it
does follow that the limit is strictly positive if it exists, the problem is that it may
not exist. Consider for instance f(n) = n and g(n) = 2^⌊lg n⌋. It is easy to see that
g(n) ≤ f(n) < 2g(n) for all n ≥ 1, and thus f(n) and g(n) are each in the order
of the other. However, it is equally easy to see that f(n)/g(n) oscillates between
1 and 2, and thus the limit of that ratio does not exist.
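This oscillation is easy to observe numerically; the following small check (not from the text) prints the ratio f(n)/g(n) for the first few values of n.

    def g(n):
        # 2 to the power floor(lg n): the largest power of 2 not exceeding n
        return 1 << (n.bit_length() - 1)

    for n in range(1, 17):
        print(n, n / g(n))   # the ratio sweeps through [1, 2) and never converges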
Consider again two functions f, t : N → ℝ≥0 from the natural numbers to the
nonnegative reals. We say that t(n) is in Omega of f(n), denoted t(n) ∈ Ω(f(n)),
if t(n) is bounded below by a positive real multiple of f(n) for all sufficiently
large n. Mathematically, this means that there exist a positive real constant d and
an integer threshold n₀ such that t(n) ≥ df(n) whenever n ≥ n₀:

Ω(f(n)) = {t : N → ℝ≥0 | (∃d ∈ ℝ⁺)(∃n₀ ∈ ℕ)(∀n ≥ n₀) [t(n) ≥ df(n)]}.

It is easy to see the duality rule: t(n) ∈ Ω(f(n)) if and only if f(n) ∈ O(t(n)),
because t(n) ≥ df(n) if and only if f(n) ≤ (1/d) t(n). You may therefore question the
usefulness of introducing a notation for Ω when the O notation seems to have the
same expressive power. The reason is that it is more natural to say that a given
algorithm takes a time in Ω(n²) than to say the mathematically equivalent but
clumsy "n² is in O of the time taken by the algorithm".
Thanks to the duality rule, we know from the previous section that √n ∈
Ω(log n) whereas log n ∉ Ω(√n), among many examples. More importantly,
the duality rule can be used in the obvious way to turn the limit rule, the max-
imum rule and the threshold rule into rules about the Ω notation.
Despite strong similarity between the O and the Ω notation, there is one aspect
in which their duality fails. Recall that we are most often interested in the worst-
case performance of algorithms. Therefore, when we say that an implementation
of an algorithm takes t(n) microseconds, we mean that t(n) is the maximum time
taken by the implementation on all instances of size n. Let f(n) be such that
t(n) ∈ O(f(n)). This means that there exists a real positive constant c such that
t(n) < cf(n) for all sufficiently large n. Because no instance of size n can take
more time than the maximum time taken by instances of that size, it follows that
the implementation takes a time bounded by cf (n) microseconds on all sufficiently
large instances. Assuming only a finite number of instances of each size exist, there
can thus be only a finite number of instances, all of size less than the threshold, on
which the implementation takes a time greater than cf (n) microseconds. Assum-
ing f (n) is never zero, these can all be taken care of by using a bigger multiplicative
constant, as in the proof of the threshold rule.
In contrast, let us also assume t(n) ∈ Ω(f(n)). Again, this means that there
exists a real positive constant d such that t(n) ≥ df(n) for all sufficiently large n.
But because t (n) denotes the worst-case behaviour of the implementation, we may
infer only that, for each sufficiently large n, there exists at least one instance of size n
such that the implementation takes at least df (n) microseconds on that instance.
This does not rule out the possibility of much faster behaviour on other instances
of the same size. Thus, there may exist infinitely many instances on which the
implementation takes less than df(n) microseconds. Insertion sort provides a
typical example of this behaviour. We saw in Section 2.4 that it takes quadratic
time in the worst case, yet there are infinitely many instances on which it runs in
linear time. We are therefore entitled to claim that its worst-case running time is
both in O(n²) and in Ω(n²). Yet the first claim says that every sufficiently large
instance can be sorted in quadratic time, whereas the second merely says that at
least one instance of each sufficiently large size genuinely requires quadratic time:
the algorithm may be much faster on other instances of the same size.
Some authors define the Ω notation in a way that is subtly but importantly
different. They say that t(n) ∈ Ω(f(n)) if there exists a real positive constant d
such that t(n) ≥ df(n) for an infinite number of values of n, whereas we require
the relation to hold for all but finitely many values of n. With this definition,
an algorithm that takes a time in Ω(f(n)) in the worst case is such that there
are infinitely many instances on which it takes at least df(n) microseconds for the
appropriate real positive constant d. This corresponds more closely to our intuitive
idea of what a lower bound on the performance of an algorithm should look like.
It is more natural than what we mean by "taking a time in Ω(f(n))". Nevertheless,
we prefer our definition because it is easier to work with. In particular, the modified
definition of Ω is not transitive and the duality rule breaks down.
In this book, we use the Ω notation mainly to give lower bounds on the running
time (or other resources) of algorithms. However, this notation is often used to give
lower bounds on the intrinsic difficulty of solving problems. For instance, we shall
see in Section 12.2.1 that any algorithm that successfully sorts n elements must take
a time in Ω(n log n), provided the only operation carried out on the elements to be
sorted is to compare them pairwise to determine whether they are equal and, if not,
which is the greater. As a result, we say that the problem of sorting by comparisons
has running time complexity in Ω(n log n). It is in general much harder to determine
the complexity of a problem than to determine a lower bound on the running time
of a given algorithm that solves it. We elaborate on this topic in Chapter 12.
The threshold rule and the maximum rule, which we formulated in the context
of the O notation, apply mutatis mutandis to the Θ notation. More interestingly,
for the Θ notation the limit rule is reformulated as follows. Consider arbitrary
functions f and g : N → ℝ≥0.
1. if lim_{n→∞} f(n)/g(n) ∈ ℝ⁺ then f(n) ∈ Θ(g(n));

2. if lim_{n→∞} f(n)/g(n) = 0 then f(n) ∈ O(g(n)) but f(n) ∉ Θ(g(n)); and

3. if lim_{n→∞} f(n)/g(n) = +∞ then f(n) ∈ Ω(g(n)) but f(n) ∉ Θ(g(n)).
As an exercise in manipulating asymptotic notation, let us now prove a useful fact:
∑_{i=1}^{n} iᵏ ∈ Θ(nᵏ⁺¹),
for any fixed integer k > 0, where the left-hand side summation is considered as
a function of n. Of course, this is immediate from Proposition 1.7.16, but it is
instructive to prove it directly.
The "0" direction is easy to prove. For this, simply notice that ik < nk when-
ever 1 < i < n. Therefore ,n§1 ik < nlk = nk+l for all n > 1, which proves
y I ik E Of nk+1) using 1 as the multiplicative constant.
To prove the "Ω" direction, notice that iᵏ ≥ (n/2)ᵏ whenever i ≥ ⌈n/2⌉ and
that the number of integers between ⌈n/2⌉ and n inclusive is greater than n/2.
Therefore, provided n ≥ 1 (which implies that ⌈n/2⌉ ≥ 1),

∑_{i=1}^{n} iᵏ ≥ ∑_{i=⌈n/2⌉}^{n} iᵏ ≥ (n/2) × (n/2)ᵏ = nᵏ⁺¹/2ᵏ⁺¹.
3.4 Conditional asymptotic notation

t(n) = a                         if n = 1
t(n) = 4t(⌈n/2⌉) + bn            otherwise                  (3.2)
We study techniques for solving recurrences in Section 4.7, but unfortunately Equa-
tion 3.2 cannot be handled directly by those techniques because the ceiling func-
tion ⌈n/2⌉ is troublesome. Nevertheless, our recurrence is easy to solve provided
we consider only the case when n is a power of 2: in this case ⌈n/2⌉ = n/2 and the
ceiling vanishes. The techniques of Section 4.7 yield

t(n) = (a + b)n² − bn

provided n is a power of 2. Since the lower-order term "bn" can be neglected, it
follows that t(n) is in the exact order of n², still provided n is a power of 2. This
is denoted by t(n) ∈ Θ(n² | n is a power of 2).
More generally, let f, t : N → ℝ≥0 be two functions from the natural numbers
to the nonnegative reals, and let P : N → {true, false} be a property of the integers.
We say that t(n) is in O(f(n) | P(n)) if t(n) is bounded above by a positive
real multiple of f(n) for all sufficiently large n such that P(n) holds. Formally,
O(f(n) | P(n)) is defined as

{t : N → ℝ≥0 | (∃c ∈ ℝ⁺)(∃n₀ ∈ ℕ)(∀n ≥ n₀) [P(n) ⟹ t(n) ≤ cf(n)]}.

The sets Ω(f(n) | P(n)) and Θ(f(n) | P(n)) are defined similarly. Abusing nota-
tion in a familiar way, we write t(n) ∈ O(f(n) | P(n)) even if t(n) and f(n) are
negative or undefined on an arbitrary-perhaps infinite-number of values of n
on which P(n) does not hold.
Conditional asymptotic notation is more than a mere notational convenience:
its main interest is that it can generally be eliminated once it has been used to
facilitate the analysis of an algorithm. For this, we need a few definitions. A func-
tion f : N → ℝ≥0 is eventually nondecreasing if there exists an integer threshold n₀
such that f(n) ≤ f(n + 1) for all n ≥ n₀. This implies by mathematical induction
that f(n) ≤ f(m) whenever m ≥ n ≥ n₀. Let b ≥ 2 be any integer. Function f is
b-smooth if, in addition to being eventually nondecreasing, it satisfies the condition
f(bn) ∈ O(f(n)). In other words, there must exist a constant c (depending on b)
such that f(bn) ≤ cf(n) for all n ≥ n₀. (There is no loss of generality in using the
same threshold n₀ for both purposes.) A function is smooth if it is b-smooth for
every integer b ≥ 2.
Most functions you are likely to encounter in the analysis of algorithms are
smooth, such as log n, n log n, n², or any polynomial whose leading coefficient
is positive. However, functions that grow too fast, such as n^lg n, 2ⁿ or n!, are not
smooth because the ratio f(2n)/f(n) is unbounded. For example, if f(n) = 2ⁿ then
f(2n) = 4ⁿ, which is not in O(2ⁿ).
The smoothness rule states that if f(n) is a smooth function and t(n) is an
eventually nondecreasing function such that t(n) ∈ Θ(f(n) | n is a power of b)
for some integer b ≥ 2, then t(n) ∈ Θ(f(n)). For our earlier example, t(n) ∈
Θ(n² | n is a power of 2) follows easily
from Equation 3.2, whereas it is harder to carry out the analysis of t(n) when n is
not a power of 2. The smoothness rule allows us to infer directly from Equation 3.3
that t(n) ∈ Θ(n²), provided we verify that n² is a smooth function and t(n) is
eventually nondecreasing. The first condition is immediate since n² is obviously
nondecreasing and (2n)² = 4n². The second is easily demonstrated by mathemat-
ical induction from Equation 3.2; see Problem 3.28. Thus the use of conditional
asymptotic notation as a stepping stone yields the final result that t(n)e 0(n 2 )
unconditionally.
We now prove the smoothness rule. Let f(n) be a smooth function and let t(n)
be an eventually nondecreasing function such that t(n) ∈ Θ(f(n) | n is a power
of b) for some integer b ≥ 2. Let n₀ be the largest of the thresholds implied by the
above conditions: f(m) ≤ f(m + 1), t(m) ≤ t(m + 1) and f(bm) ≤ cf(m) when-
ever m ≥ n₀, and df(m) ≤ t(m) ≤ af(m) whenever m ≥ n₀ is a power of b, for
appropriate constants a, c and d. For any positive integer n, let n̲ denote the largest
power of b not larger than n (formally, n̲ = b^⌊log_b n⌋) and let n̄ = bn̲. By definition,
n/b < n̲ ≤ n < n̄ and n̄ is a power of b. Consider any n ≥ max(1, bn₀).

t(n) ≤ t(n̄) ≤ af(n̄) = af(bn̲) ≤ acf(n̲) ≤ acf(n)

This equation uses successively the facts that t is eventually nondecreasing (and
n̄ ≥ n ≥ n₀), t(m) is in the order of f(m) when m is a power of b (and n̄ is a
power of b no smaller than n₀), n̄ = bn̲, f is b-smooth (and n̲ > n/b ≥ n₀), and
f is eventually nondecreasing (and n ≥ n̲ ≥ n₀). This proves that t(n) ≤ acf(n)
for all values of n ≥ max(1, bn₀), and therefore t(n) ∈ O(f(n)). The proof that
t(n) ∈ Ω(f(n)) is similar.
(There is no need for two distinct thresholds for m and n in "∀m, n ≥ n₀".) Gener-
alization to more than two parameters, of the conditional asymptotic notation and
of the Ω and Θ notation, is done in the same spirit.
Asymptotic notation with several parameters is similar to what we have seen
so far, except for one essential difference: the threshold rule is no longer valid.
Indeed, the threshold is sometimes indispensable. This is explained by the fact that
while there are never more than a finite number of nonnegative integers smaller
than any given threshold, there are in general an infinite number of pairs (m, n)
of nonnegative integers such that at least one of m or n is below the threshold;
see Problem 3.32. For this reason, O(f(m, n)) may make sense even if f(m, n) is
negative or undefined on an infinite set of points, provided all these points can be
ruled out by an appropriate choice of threshold.
More formally, if op is any binary operator and if X and Y are sets of functions
from N into ℝ≥0, in particular sets described by asymptotic notation, then "X op Y"
denotes the set of functions that can be obtained by choosing one function from
X and one function from Y, and by "oping" them together pointwise. In keeping
with the spirit of asymptotic notation, we only require the resulting function to be
the correct oped value beyond some threshold. Formally, X op Y denotes

{t : N → ℝ≥0 | (∃f ∈ X)(∃g ∈ Y)(∃n₀ ∈ ℕ)(∀n ≥ n₀) [t(n) = f(n) op g(n)]}.
3.7 Problems
Problem 3.1. Consider an implementation of an algorithm that takes a time that
is bounded above by the unlikely function
to solve an instance of size n. Find the simplest possible function f : N → ℝ≥0 such
that the algorithm takes a time in the order of f(n).
Problem 3.2. Consider two algorithms A and B that take time in O(n²) and O(n³),
respectively, to solve the same problem. If other resources such as storage and
programming time are of no concern, is it necessarily the case that algorithm A is
always preferable to algorithm B? Justify your answer.
Problem 3.3. Consider two algorithms A and B that take time in O(n²) and O(n³),
respectively. Could there exist an implementation of algorithm B that would be
more efficient (in terms of computing time) than an implementation of algorithm
A on all instances? Justify your answer.
Problem 3.5. Which of the following statements are true? Prove your answers.
1. n² ∈ O(n³)
2. n² ∈ Ω(n)
3. 2ⁿ ∈ Θ(2ⁿ⁺¹)
4. n! ∈ O((n + 1)!)
Problem 3.7. In contrast with Problem 3.6, prove that 2^f(n) ∈ O(2ⁿ) does not
necessarily follow from f(n) ∈ O(n).
Problem 3.8. Consider an algorithm that takes a time in Θ(n^lg 3) to solve instances
of size n. Is it correct to say that it takes a time in O(n^1.59)? In Ω(n^1.59)? In Θ(n^1.59)?
Justify your answers. (Note: lg 3 ≈ 1.58496....)
Problem 3.9. Prove that the O notation is reflexive: f(n) ∈ O(f(n)) for any func-
tion f : N → ℝ≥0.
Problem 3.11. Prove that the ordering on functions induced by the O notation
is not total: give explicitly two functions f, g : N → ℝ≥0 such that f(n) ∉ O(g(n))
and g(n) ∉ O(f(n)). Prove your answer.
Problem 3.12. Prove that the Ω notation is reflexive and transitive: for any func-
tions f, g, h : N → ℝ≥0,
1. f(n) ∈ Ω(f(n))
2. if f(n) ∈ Ω(g(n)) and g(n) ∈ Ω(h(n)) then f(n) ∈ Ω(h(n)).
Rather than proving this directly (which would be easier!), use the duality rule and
the results of Problems 3.9 and 3.10.
Problem 3.13. As we explained, some authors give a different definition for the
Ω notation. For the purpose of this problem, let us say that t(n) ∈ Ω(f(n)) if there
exists a real positive constant d such that t(n) ≥ df(n) for an infinite number of
values of n. Formally,

Ω(f(n)) = {t : N → ℝ≥0 | (∃d ∈ ℝ⁺)(∀n₀ ∈ ℕ)(∃n ≥ n₀) [t(n) ≥ df(n)]}.

Prove that this notation is not transitive. Specifically, give an explicit example of
three functions f, g, h : N → ℝ≥0 such that f(n) ∈ Ω(g(n)) and g(n) ∈ Ω(h(n)), yet
f(n) ∉ Ω(h(n)).
Problem 3.14. Let f(n) = n². Find the error in the following "proof" by mathe-
matical induction that f(n) ∈ O(n).
• Basis: The case n = 1 is trivially satisfied since f(1) = 1 ≤ cn, where c = 1.
• Induction step: Consider any n > 1. Assume by the induction hypothesis the
existence of a positive constant c such that f(n − 1) ≤ c(n − 1).

f(n) = n² = (n − 1)² + 2n − 1 = f(n − 1) + 2n − 1
     ≤ c(n − 1) + 2n − 1 = (c + 2)n − c − 1 ≤ (c + 2)n
Problem 3.15. Find the error in the following "proof" that O(n) = O(n²). Let
f(n) = n², g(n) = n and h(n) = g(n) − f(n). It is clear that h(n) ≤ g(n) ≤ f(n)
for all n ≥ 0. Therefore, f(n) = max(f(n), h(n)). Using the maximum rule, we
conclude

O(n²) = O(f(n)) = O(max(f(n), h(n))) = O(f(n) + h(n)) = O(g(n)) = O(n).
Problem 3.16. Prove by mathematical induction that the maximum rule can be
applied to more than two functions. More precisely, let k be an integer and let
f₁, f₂, ..., f_k be functions from N to ℝ≥0. Define g(n) = max(f₁(n), f₂(n), ...,
f_k(n)) and h(n) = f₁(n) + f₂(n) + ··· + f_k(n) for all n ≥ 0. Prove that O(g(n)) =
O(h(n)).
Problem 3.17. Find the error in the following proof that O(n) = O(n²).

O(n) = O(max(n, n, ..., n)) = O(n + n + ··· + n) = O(n²),
              (n times)            (n times)
where the middle equality comes from the generalized maximum rule that you
proved in Problem 3.16.
Problem 3.18. Prove that the Θ notation is reflexive, symmetric and transitive: for
any functions f, g, h : N → ℝ≥0,
1. f(n) ∈ Θ(f(n))
2. if f(n) ∈ Θ(g(n)) then g(n) ∈ Θ(f(n))
3. if f(n) ∈ Θ(g(n)) and g(n) ∈ Θ(h(n)) then f(n) ∈ Θ(h(n)).
Problem 3.19. For any functions f, g : N → ℝ≥0, prove that
Problem 3.23. We saw at the end of Section 3.3 that ∑_{i=1}^{n} iᵏ ∈ Ω(nᵏ⁺¹) for any fixed
integer k > 0 because ∑_{i=1}^{n} iᵏ ≥ nᵏ⁺¹/2ᵏ⁺¹ for all n. Use the idea behind Figure 1.8
in Section 1.7.2 to derive a tighter constant for the inequality: find a constant d
(depending on k) much larger than 1/2ᵏ⁺¹ such that ∑_{i=1}^{n} iᵏ ≥ dnᵏ⁺¹ holds for
all n. Do not use Proposition 1.7.16.
Problem 3.24. Prove that log(n!) ∈ Θ(n log n). Do not use Stirling's formula.
Hint: Mimic the proof that ∑_{i=1}^{n} iᵏ ∈ Θ(nᵏ⁺¹) given at the end of Section 3.3. Resist
the temptation to improve your reasoning along the lines of Problem 3.23.
Problem 3.27. Consider any b-smooth function f : N → ℝ≥0. Let c and n₀ be
constants such that f(bn) ≤ cf(n) for all n ≥ n₀. Consider any positive integer i.
Prove by mathematical induction that f(bⁱn) ≤ cⁱf(n) for all n ≥ n₀.
Problem 3.35. Find a function f : N → ℝ≥0 that is bounded above by some poly-
nomial, yet f(n) ∉ n^O(1).
Analysis of Algorithms
4.1 Introduction
The primary purpose of this book is to teach you how to design your own efficient
algorithms. However, if you are faced with several different algorithms to solve
the same problem, you have to decide which one is best suited for your applica-
tion. An essential tool for this purpose is the analysis of algorithms. Only after you
have determined the efficiency of the various algorithms will you be able to make a
well-informed decision. But there is no magic formula for analysing the efficiency
of algorithms. It is largely a matter of judgement, intuition and experience. Nev-
ertheless, there are some basic techniques that are often useful, such as knowing
how to deal with control structures and recurrence equations. This chapter covers
the most commonly used techniques and illustrates them with examples. More
examples are found throughout the book.
4.2 Analysing control structures

4.2.1 Sequencing
Let PI and P2 be two fragments of an algorithm. They may be single instructions
or complicated subalgorithms. Let t1 and t2 be the times taken by P1 and P2,
respectively. These times may depend on various parameters, such as the instance
size. The sequencing rule says that the time required to compute "P1; P2", that
is first P1 and then P2, is simply t1 + t2. By the maximum rule, this time is in
Θ(max(t1, t2)).
Despite its simplicity, applying this rule is sometimes less obvious than it may
appear. For example, it could happen that one of the parameters that control t2
depends on the result of the computation performed by PI. Thus, the analysis of
"P1 ; P2" cannot always be performed by considering P1 and P2 independently.
4.2.2 "For" loops

for i ← 1 to m do P(i)

Here and throughout the book, we adopt the convention that when m = 0 this is
not an error; it simply means that the controlled statement P(i) is not executed
at all. Suppose this loop is part of a larger algorithm, working on an instance
of size n. (Be careful not to confuse m and n.) The easiest case is when the
time taken by P(i) does not actually depend on i, although it could depend on
the instance size or, more generally, on the instance itself. Let t denote the time
required to compute P(i). In this case, the obvious analysis of the loop is that P (i)
is performed m times, each time at a cost of t, and thus the total time required by
the loop is simply ℓ = mt. Although this approach is usually adequate, there is a
potential pitfall: we did not take account of the time needed for loop control. After
all, our for loop is shorthand for something like the following while loop.
i ← 1
while i ≤ m do
    P(i)
    i ← i + 1
In most situations, it is reasonable to count at unit cost the test i ≤ m, the in-
structions i ← 1 and i ← i + 1, and the sequencing operations (go to) implicit in
the while loop. Let c be an upper bound on the time required by each of these
operations. The time ℓ taken by the loop is thus bounded above by

ℓ ≤ c               for i ← 1
  + (m + 1)c        for the tests i ≤ m
  + mt              for the executions of P(i)
  + mc              for the executions of i ← i + 1
  + mc              for the sequencing operations
  ≤ (t + 3c)m + 2c.
Moreover this time is clearly bounded below by mt. If c is negligible compared
to t, our previous estimate that ℓ is roughly equal to mt was therefore justified,
except for one crucial case: ℓ ≈ mt is completely wrong when m = 0 (it is even
worse if m is negative!). We shall see in Section 4.3 that neglecting the time required
for loop control can lead to serious errors in such circumstances.
Resist the temptation to say that the time taken by the loop is in Θ(mt) on
the pretext that the Θ notation is only asked to be effective beyond some threshold
such as m ≥ 1. The problem with this argument is that if we are in fact analysing
the entire algorithm rather than simply the for loop, the threshold implied by the
Θ notation concerns n, the instance size, rather than m, the number of times we go
round the loop, and m = 0 could happen for arbitrarily large values of n. On the
other hand, provided t is bounded below by some constant (which is always the case
in practice), and provided there exists a threshold n₀ such that m ≥ 1 whenever
n ≥ n₀, Problem 4.3 asks you to show that ℓ is indeed in Θ(mt) when ℓ, m and t
are considered as functions of n.
The analysis of for loops is more interesting when the time t(i) required for
P(i) varies as a function of i. (In general, the time required for P(i) could depend
not only on i but also on the instance size n or even on the instance itself.) If we
neglect the time taken by the loop control, which is usually adequate provided
m > 1, the same for loop
for i ← 1 to m do P(i)
takes a time given not by a multiplication but rather by a sum: it is ∑_{i=1}^{m} t(i).
The techniques of Section 1.7.2 are often useful to transform such sums into simpler
asymptotic notation.
We illustrate the analysis of for loops with a simple algorithm for computing
the Fibonacci sequence that we evaluated empirically in Section 2.7.5. We repeat
the algorithm below.
function Fibiter(n)
    i ← 1; j ← 0
    for k ← 1 to n do
        j ← i + j
        i ← j − i
    return j
If we count all arithmetic operations at unit cost, the instructions inside the for loop
take constant time. Let the time taken by these instructions be bounded above by
some constant c. Not taking loop control into account, the time taken by the for loop
is bounded above by n times this constant: nc. Since the instructions before and
after the loop take negligible time, we conclude that the algorithm takes a time in
O(n). Similar reasoning yields that this time is also in Ω(n), hence it is in Θ(n).
We saw however in Section 2.5 that it is not reasonable to count the additions
involved in the computation of the Fibonacci sequence at unit cost unless n is very
small. Therefore, we should take account of the fact that an instruction as simple
as "j - i + j " is increasingly expensive each time round the loop. It is easy to
program long-integer additions and subtractions so that the time needed to add
or subtract two integers is in the exact order of the number of figures in the larger
operand. To determine the time taken by the k-th trip round the loop, we need
to know the length of the integers involved. Problem 4.4 asks you to prove by
mathematical induction that the values of i and j at the end of the k-th iteration
are fₖ₋₁ and fₖ, respectively. This is precisely why the algorithm works: it returns
the value of j at the end of the n-th iteration, which is therefore fₙ as required.
Moreover, we saw in Section 2.7.5 that de Moivre's formula tells us that the size of
fₖ is in Θ(k). Therefore, the k-th iteration takes a time in Θ(k − 1) + Θ(k), which is the
same as Θ(k); see Problem 3.34. Let c be a constant such that this time is bounded
above by ck for all k ≥ 1. If we neglect the time required for the loop control and
for the instructions before and after the loop, we conclude that the time taken by
the algorithm is bounded above by

∑_{k=1}^{n} ck = c ∑_{k=1}^{n} k = c n(n + 1)/2 ∈ O(n²).
Similar reasoning yields that this time is in Ω(n²), and therefore it is in Θ(n²).
Thus it makes a crucial difference in the analysis of Fibiter whether or not we count
arithmetic operations at unit cost.
The analysis of for loops that start at a value other than 1 or proceed by larger
steps should be obvious at this point. Consider the following loop for example.

for i ← 5 to m step 2 do P(i)

Here, P(i) is executed ((m − 5) ÷ 2) + 1 times provided m ≥ 3. (For a for loop to
make sense, the endpoint should always be at least as large as the starting point
minus the step.)
4.2.3 Recursive calls

function Fibrec(n)
    if n < 2 then return n
    else return Fibrec(n − 1) + Fibrec(n − 2)
Let T(n) be the time taken by a call on Fibrec(n). If n < 2, the algorithm simply
returns n, which takes some constant time a. Otherwise, most of the work is spent
in the two recursive calls, which take time T(n -1) and T(n - 2), respectively.
Moreover, one addition involving fₙ₋₁ and fₙ₋₂ (which are the values returned
by the recursive calls) must be performed, as well as the control of the recursion
and the test "if n < 2". Let h(n) stand for the work involved in this addition and
control, that is the time required by a call on Fibrec(n) ignoring the time spent inside
the two recursive calls. By definition of T(n) and h(n) we obtain the following
recurrence.

T(n) = a                                if n < 2
T(n) = T(n − 1) + T(n − 2) + h(n)       otherwise              (4.1)
If we count the additions at unit cost, h(n) is bounded by a constant and the
recurrence equation for T (n) is very similar to that already encountered for g (n)
in Section 1.6.4. Constructive induction applies equally well to reach the same
conclusion: T(n) ∈ O(fₙ). However, it is easier in this case to apply the techniques
of Section 4.7 to solve recurrence 4.1. Similar reasoning shows that T(n) ∈ Ω(fₙ),
hence T(n) ∈ Θ(fₙ). Using de Moivre's formula, we conclude that Fibrec(n) takes
a time exponential in n. This is double exponential in the size of the instance since
the value of n is exponential in the size of n.
If we do not count the additions at unit cost, h(n) is no longer bounded by
a constant. Instead h(n) is dominated by the time required for the addition of
fₙ₋₁ and fₙ₋₂ for sufficiently large n. We have already seen that this addition
takes a time in the exact order of n. Therefore h(n) ∈ Θ(n). The techniques of
Section 4.7 apply again to solve recurrence 4.1. Surprisingly, the result is the same
regardless of whether h(n) is constant or linear: it is still the case that T(n) ∈ Θ(fₙ).
In conclusion, Fibrec(n) takes a time exponential in n whether or not we count
additions at unit cost! The only difference lies in the multiplicative constant hidden
in the Θ notation.
the lower half. We obtain the following algorithm. (A slightly better algorithm is
given in Section 7.3; see Problem 7.11.)
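In Python, the algorithm might be sketched as follows; this is an illustrative version of binary search consistent with the analysis below, not the book's listing (0-based indices are used, with i and j delimiting the section of the array still under consideration).

    def binary_search(t, x):
        # return an index k with t[k] == x, assuming t is sorted; -1 if x is absent
        if not t:
            return -1
        i, j = 0, len(t) - 1
        while i < j:                 # d = j - i + 1 is at least halved per trip
            k = (i + j) // 2
            if x < t[k]:
                j = k - 1
            elif x > t[k]:
                i = k + 1
            else:
                i = j = k            # x found: the interval collapses
        return i if t[i] == x else -1

    print(binary_search([1, 3, 5, 7, 11], 7))   # 3
    print(binary_search([1, 3, 5, 7, 11], 4))   # -1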
(Here d = j − i + 1 measures the number of elements still under consideration before
a trip round the loop, and d̂, î and ĵ denote the corresponding values after that trip.)
If x > T[k], then î = ⌊(i + j) ÷ 2⌋ + 1 and ĵ = j. Therefore,

d̂ = ĵ − î + 1 = j − ⌊(i + j) ÷ 2⌋ ≤ j − (i + j − 1)/2 = (j − i + 1)/2 = d/2.

Finally, if x = T[k], then i and j are set to the same value and thus d̂ = 1; but d was
at least 2 since otherwise the loop would not have been reentered. We conclude
that d̂ ≤ d/2 whichever case happens, which means that the value of d is at least
halved each time round the loop. Since we stop when d ≤ 1, the process must
eventually stop, but how much time does it take?
To determine an upper bound on the running time of binary search, let d_ℓ
denote the value of j − i + 1 at the end of the ℓ-th trip round the loop for ℓ ≥ 1 and
let d₀ = n. Since d_{ℓ−1} is the value of j − i + 1 before starting the ℓ-th iteration, we
have proved that d_ℓ ≤ d_{ℓ−1}/2 for all ℓ ≥ 1. It follows immediately by mathematical
induction that d_ℓ ≤ n/2^ℓ. But the loop terminates when d ≤ 1, which happens at
the latest when ℓ = ⌈lg n⌉. We conclude that the loop is entered at most ⌈lg n⌉
times. Since each trip round the loop takes constant time, binary search takes a
time in O(log n).
i ← 0
for k ← 1 to s do
    while U[k] ≠ 0 do
        i ← i + 1
        T[i] ← k
        U[k] ← U[k] − 1
To analyse the time required by this process, we use "U [ k]" to denote the value
originally stored in U[k] since all these values are set to 0 during the process. It is
tempting to choose any of the instructions in the inner loop as a barometer. For each
value of k, these instructions are executed U[k] times. The total number of times
they are executed is therefore ∑_{k=1}^{s} U[k]. But this sum is equal to n, the number
of integers to sort, since the sum of the number of times that each element appears
gives the total number of elements. If indeed these instructions could serve as a
barometer, we would conclude that this process takes a time in the exact order
of n. A simple example is sufficient to convince us that this is not necessarily
the case. Suppose U[k]= 1 when k is a perfect square and U[k]= 0 otherwise.
This would correspond to sorting an array T containing exactly once each perfect
square between 1 and n², using s = n² pigeon-holes. In this case, the process clearly
takes a time in Ω(n²) since the outer loop is executed s times. Therefore, it cannot
be that the time taken is in Θ(n). This proves that the choice of the instructions in
the inner loop as a barometer was incorrect. The problem arises because we can
only neglect the time spent initializing and controlling loops provided we make
sure to include something even if the loop is executed zero times.
The correct and detailed analysis of the process is as follows. Let a be the
time needed for the test U[k] ≠ 0 each time round the inner loop and let b be
the time taken by one execution of the instructions in the inner loop, including
the implicit sequencing operation to go back to the test at the beginning of the
loop. To execute the inner loop completely for a given value of k takes a time
t_k = (1 + U[k])a + U[k]b, where we add 1 to U[k] before multiplying by a to take
account of the fact that the test is performed each time round the loop and one
more time to determine that the loop has been completed. The crucial thing is
that this time is not zero even when U[k] = 0. The complete process takes a time
c + ∑_{k=1}^{s} (d + t_k), where c and d are new constants to take account of the time
needed to initialize and control the outer loop, respectively. When simplified, this
expression yields c + (a + d)s + (a + b)n. We conclude that the process takes a
time in Θ(n + s). Thus the time depends on two independent parameters n and s;
it cannot be expressed as a function of just one of them.
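In Python, the distribution loop just analysed might be rendered as follows (our own sketch; the function and parameter names are ours, and counts[k] holds the number of occurrences of the value k for 1 ≤ k ≤ s).

    # Sketch of the pigeon-hole distribution loop analysed above.
    def pigeonhole_output(counts, s, n):
        T = [0] * n                      # the sorted output
        i = 0
        for k in range(1, s + 1):
            while counts[k] != 0:        # the test runs 1 + counts[k] times for each k
                T[i] = k
                i += 1
                counts[k] -= 1
        return T

    # Example: sort the values 3, 1, 3, 2 with s = 3 pigeon-holes.
    counts = [0, 1, 1, 2]                # counts[0] unused; counts[k] for k = 1..3
    print(pigeonhole_output(counts, s=3, n=4))   # [1, 2, 3, 3]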
Selection sort
Selection sorting, encountered in Section 2.4, provides a good example of the anal-
ysis of nested loops.
Although the time spent by each trip round the inner loop is not constant-it takes
longer when T[j] < minx-this time is bounded above by some constant c (that
takes the loop control into account). For each value of i, the instructions in the
inner loop are executed n - (i + 1) + 1 = n - i times, and therefore the time taken
by the inner loop is t(i) ≤ (n - i)c. The time taken for the i-th trip round the outer
loop is bounded above by b + t (i) for an appropriate constant b that takes account
of the elementary operations before and after the inner loop and of the loop control
for the outer loop. Therefore, the total time spent by the algorithm is bounded
above by
    ∑_{i=1}^{n-1} (b + (n - i)c) = (b + cn)(n - 1) - c ∑_{i=1}^{n-1} i
                                 = ½ c n² + (b - ½ c) n - b,
which is in O(n²). Similar reasoning shows that this time is also in Ω(n²) in all
cases, and therefore selection sort takes a time in Θ(n²) to sort n items.
The above argument can be simplified, obviating the need to introduce explicit
constants such as b and c, once we are comfortable with the notion of a barometer
instruction. Here, it is natural to take the innermost test "if T[j]< minx" as a
barometer and count the exact number of times it is executed. This is a good
measure of the total running time of the algorithm because none of the loops can
be executed zero times (in which case loop control could have been more time-
consuming than our barometer). The number of times that the test is executed is
easily seen to be
    ∑_{i=1}^{n-1} ∑_{j=i+1}^{n} 1 = ∑_{k=1}^{n-1} k = n(n - 1)/2.
Thus the number of times the barometer instruction is executed is in Θ(n²), which
automatically gives the running time of the algorithm itself.
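The following Python sketch (ours, not the book's) instruments selection sort with a counter for the barometer test and confirms that it is executed exactly n(n - 1)/2 times.

    # Selection sort instrumented with the barometer "if T[j] < minx" (a sketch).
    import random

    def selection_sort_tests(T):
        n = len(T)
        tests = 0
        for i in range(n - 1):
            minj, minx = i, T[i]
            for j in range(i + 1, n):
                tests += 1               # the barometer instruction
                if T[j] < minx:
                    minj, minx = j, T[j]
            T[i], T[minj] = T[minj], T[i]
        return tests

    n = 10
    data = [random.randint(0, 99) for _ in range(n)]
    print(selection_sort_tests(data), n * (n - 1) // 2)   # both print 45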
Insertion sort
We encountered insertion sorting also in Section 2.4; its inner while loop shifts
larger elements one position to the right, after which the assignment T[j + 1] ← x
places x in its final position.
Unlike selection sorting, we saw in Section 2.4 that the time taken to sort n items
by insertion depends significantly on the original order of the elements. Here,
we analyse this algorithm in the worst case; the average-case analysis is given in
Section 4.5. To analyse the running time of this algorithm, we choose as barometer
the number of times the while loop condition (j > 0 and x < T[j]) is tested.
Suppose for a moment that i is fixed. Let x = T[i], as in the algorithm.
The worst case arises when x is less than T[j] for every j between 1 and i - 1,
since in this case we have to compare x to T[i-1], T[i-2], ..., T[1] before we
leave the while loop because j = 0. Thus the while loop test is performed i times
in the worst case. This worst case happens for every value of i from 2 to n when
the array is initially sorted into descending order. The barometer test is thus per-
formed ∑_{i=2}^{n} i = n(n + 1)/2 - 1 times in total, which is in Θ(n²). This shows that
insertion sort also takes a time in Θ(n²) to sort n items in the worst case.
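A Python sketch (ours) of insertion sort that counts the while-loop tests; on an array sorted into descending order the count is n(n + 1)/2 - 1, as computed above.

    # Insertion sort instrumented with the barometer: the number of times
    # the while-loop condition is tested (a sketch, not the book's code).
    def insertion_sort_tests(T):
        tests = 0
        for i in range(1, len(T)):
            x = T[i]
            j = i - 1
            while True:
                tests += 1               # one evaluation of "j >= 0 and x < T[j]"
                if j >= 0 and x < T[j]:
                    T[j + 1] = T[j]
                    j -= 1
                else:
                    break
            T[j + 1] = x
        return tests

    n = 10
    print(insertion_sort_tests(list(range(n, 0, -1))), n * (n + 1) // 2 - 1)   # both 54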
Euclid's algorithm
Recall from Section 2.7.4 that Euclid's algorithm is used to compute the greatest
common divisor of two integers.
function Euclid(m, n)
  while m > 0 do
    t ← m
    m ← n mod m
    n ← t
  return n
The analysis of this loop is slightly more subtle than those we have seen so far
because clearly measurable progress occurs not every time round the loop but
rather every other time. To see this, we first show that for any two integers m and
n such that n > m, it is always true that n mod m < n/2.
• If m > n/2, then 1 ≤ n/m < 2, so ⌊n/m⌋ = 1, which implies that
    n mod m = n - m⌊n/m⌋ = n - m < n - n/2 = n/2.
• If m ≤ n/2, then n mod m < m ≤ n/2.
Assume without loss of generality that n ≥ m since otherwise the first trip round
the loop swaps m and n (because n mod m = n when n < m). This condition
is preserved each time round the loop because n mod m is never larger than m.
If we assume that arithmetic operations are elementary, which is reasonable in
many situations, the total time taken by the algorithm is in the exact order of the
number of trips round the loop. Let us determine an upper bound on this number
as a function of n. Consider what happens to m and n after we go round the
loop twice, assuming the algorithm does not stop earlier. Let m_0 and n_0 denote the
original value of the parameters. After the first trip round the loop, m becomes
n_0 mod m_0. After the second trip round the loop, n takes up that value. By the
observation above, n has become smaller than n_0/2. In other words, the value of
n is at least halved after going round the loop twice. By then it is still the case that
n > m and therefore the same reasoning applies again: if the algorithm has not
stopped earlier, two additional trips round the loop will make the value of n at
least twice as small again. With some experience, the conclusion is now immediate:
the loop is entered at most roughly 2 lg n times.
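The informal bound of roughly 2 lg n trips can be checked with a small Python sketch (ours); the consecutive-Fibonacci pair is a classic near-worst-case input.

    # Euclid's algorithm with a counter for trips round the loop (a sketch).
    from math import log2

    def euclid_trips(m, n):
        trips = 0
        while m > 0:
            trips += 1
            m, n = n % m, m
        return n, trips

    for m, n in [(610, 987), (12345, 67890), (10**9 + 7, 10**9 + 9)]:
        g, trips = euclid_trips(m, n)
        print(g, trips, round(2 * log2(n), 1))   # trips stays below about 2 lg n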
Formally, it is best to complete the analysis of Euclid's algorithm by treating the
while loop as if it were a recursive algorithm. Let t(ℓ) be the maximum number
of times the algorithm goes round the loop on inputs m and n when m ≤ n ≤ ℓ.
If n < 2, we go round the loop either zero times (if m = 0) or one time. Otherwise,
either we go round the loop less than twice (if m = 0 or m divides n exactly),
or at least twice. In the latter case, the value of n is at least halved, and thus it
becomes at most ℓ ÷ 2, and that of m becomes no larger than the new value of n.
Therefore it takes no more than t(ℓ ÷ 2) additional trips round the loop to complete
the calculation. This yields the following recurrence.
To analyse the execution time of this algorithm, we use the instruction write as
a barometer. Let t(m) denote the number of times it is executed on a call of
Hanoi(m, ·, ·). By inspection of the algorithm, we obtain the following recurrence:
    t(m) =  0                 if m = 0          (4.3)
            2t(m-1) + 1       otherwise,
from which the technique of Section 4.7 yields t(m) = 2^m - 1; see Example 4.7.6.
Since the number of executions of the write instruction is a good measure of the
time taken by the algorithm, we conclude that it takes a time in Θ(2^n) to solve
the problem with n rings. In fact, it can be proved that the problem with n rings
cannot be solved in less than 2^n - 1 moves and therefore this algorithm is optimal
if one insists on printing the entire sequence of necessary moves.
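A Python sketch of the standard recursive solution (ours; the peg numbering 1, 2, 3 and the spare-peg formula 6 - i - j are our choices) in which the print statement plays the role of the write barometer counted above:

    # Move the m smallest rings from peg i to peg j (a sketch).
    def hanoi(m, i, j):
        moves = 0
        if m > 0:
            moves += hanoi(m - 1, i, 6 - i - j)   # park m-1 rings on the spare peg
            print(i, "->", j)                      # the barometer instruction
            moves += 1
            moves += hanoi(m - 1, 6 - i - j, j)
        return moves

    print(hanoi(3, 1, 3))    # prints the 7 moves, then 7 = 2^3 - 1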
Computing determinants
Yet another example of analysis of recursion concerns the recursive algorithm for
calculating a determinant. Recall that the determinant of an n x n matrix can be
computed from the determinants of n smaller (n - 1) x (n - 1) matrices obtained
by deleting the first row and some column of the original matrix. Once the n sub-
determinants are calculated, the determinant of the original matrix is obtained very
quickly. In addition to the recursive calls, the dominant operation needed consists
of creating the n submatrices whose determinants are to be calculated. This takes a
time in Θ(n³) if it is implemented in a straightforward way, but a time in Θ(n) suf-
fices if pointers are used instead of copying elements. Therefore, the total time t(n)
needed to calculate the determinant of an n × n matrix by the recursive algorithm is
given by the recurrence t(n) = nt(n-1) + h(n) for n ≥ 2, where h(n) ∈ Θ(n). This
recurrence cannot be handled by the techniques of Section 4.7. However we saw
in Problem 1.31 that constructive induction applies to conclude that t(n) ∈ Θ(n!),
which shows that this algorithm is very inefficient. The same conclusion holds if
we are less clever and need h(n) ∈ Θ(n³). Recall from Section 2.7.1 that the deter-
minant can be calculated in a time in Θ(n³) by Gauss-Jordan elimination, and even
faster by another recursive algorithm of the divide-and-conquer family. More work
is needed to analyse these algorithms if the time taken for arithmetic operations is
taken into account.
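A straightforward Python sketch (ours, not the book's code) of the recursive algorithm, with each submatrix created by copying elements:

    # Recursive determinant by expansion along the first row (a sketch).
    # Copying each of the n submatrices costs Theta(n^2) here, so the work
    # outside the recursive calls is in Theta(n^3).
    def det(M):
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0
        for col in range(n):
            sub = [[M[r][c] for c in range(n) if c != col] for r in range(1, n)]
            sign = 1 if col % 2 == 0 else -1
            total += sign * M[0][col] * det(sub)
        return total

    print(det([[1, 2], [3, 4]]))                    # -2
    print(det([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))   # 0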
Let k be this partial rank. We choose again as barometer the number of times
the while loop condition (j > 0 and x < T[j]) is tested. By definition of partial
rank, and since T[1..i-1] is sorted, this test is performed exactly i - k + 1 times.
Because each value of k between 1 and i has probability 1/i of occurring, the
average number of times the barometer test is performed for any given value of
i is
    (1/i) ∑_{k=1}^{i} (i - k + 1) = (i + 1)/2.
These events are independent for different values of i. The total average number
of times the barometer test is performed when sorting n items is therefore
    ∑_{i=2}^{n} (i + 1)/2 = (n - 1)(n + 4)/4,
which is in Θ(n²).
    for i ← 1 to n do P
If P takes a time in O(log n) in the worst case, it is correct to conclude that the loop
takes a time in O(n log n), but it may be that it is much faster even in the worst case.
This could happen if P cannot take a long time (Ω(log n)) unless it has been called
many times previously, each time at small cost. It could be for instance that P takes
constant time on the average, in which case the entire loop would be performed in
linear time.
The meaning of "on the average" here is entirely different from what we en-
countered in Section 4.5. Rather than taking the average over all possible inputs,
which requires an assumption on the probability distribution of instances, we take
the average over successive calls. Here the times taken by the various calls are
highly dependent, whereas we implicitly assumed in Section 4.5 that each call was
independent from the others. To prevent confusion, we shall say in this context
that each call on P takes amortized constant time rather than saying that it takes
constant time on the average.
Saying that a process takes amortized constant time means that there exists
a constant c (depending only on the implementation) such that for any positive
n and any sequence of n calls on the process, the total time for those calls is
bounded above by cn. Therefore, excessive time is allowed for one call only if
very short times have been registered previously, not merely if further calls would
go quickly. Indeed, if a call were allowed to be expensive on the ground that it
prepares for much quicker later calls, the expense would be wasted should that
call be the final one.
Consider for instance the time needed to get a cup of coffee in a common coffee
room. Most of the time, you simply walk into the room, grab the pot, and pour
coffee into your cup. Perhaps you spill a few drops on the table. Once in a while,
however, you have the bad luck to find the pot empty, and you must start a fresh
brew, which is considerably more time-consuming. While you are at it, you may
as well clean up the table. Thus, the algorithm for getting a cup of coffee takes
substantial time in the worst case, yet it is quick in the amortized sense because a
long time is required only after several cups have been obtained quickly. (For this
analogy to work properly, we must assume somewhat unrealistically that the pot
is full when the first person walks in; otherwise the very first cup would consume
too much time.)
A classic example of this behaviour in computer science concerns storage allo-
cation with occasional need for "garbage collection". A simpler example concerns
updating a binary counter. Suppose we wish to manage the counter as an array
of bits representing the value of the counter in binary notation: array C[1..m]
represents ∑_{j=1}^{m} 2^{m-j} C[j]. For instance, array [0,1,1,0,1,1] represents 27. Since
such a counter can only count up to 2^m - 1, we shall assume that we are happy to
count modulo 2^m. (Alternatively, we could produce an error message in case of
overflow.) Here is the algorithm for adding 1 to the counter.
procedure count(C[1..m])
  {This procedure assumes m ≥ 1
   and C[j] ∈ {0,1} for each 1 ≤ j ≤ m}
  j ← m + 1
  repeat
    j ← j - 1
    C[j] ← 1 - C[j]
  until C[j] = 1 or j = 1
Called on our example [0,1,1,0,1,1], the array becomes [0,1,1,0,1,0] the first time
round the loop, [0,1,1,0,0,0] the second time, and [0,1,1,1,0,0] the third time
(which indeed represents the value 28 in binary); the loop then terminates with
j = 4 since C[4] is now equal to 1. Clearly, the algorithm's worst case occurs when
C[j] = 1 for all j, in which case it goes round the loop m times. Therefore, n calls
on count starting from an all-zero array take total time in O(nm). But do they take
a time in Θ(nm)? The answer is negative, as we are about to show that count takes
constant amortized time. This implies that our n calls on count collectively take a
time in O(n), with a hidden constant that does not depend on m. In particular,
counting from 0 to n = 2^m - 1 can be achieved in a time linear in n, whereas careless
worst-case analysis of count would yield the correct but pessimistic conclusion that
it takes a time in O(n log n).
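The claim can be checked empirically with a Python sketch of the counter (ours; the function name is an assumption, and the array is 0-indexed with the rightmost bit last).

    # Sketch of the m-bit counter; returns the number of trips round the loop.
    def count(C):
        j = len(C)
        flips = 0
        while True:
            j -= 1
            C[j] = 1 - C[j]
            flips += 1
            if C[j] == 1 or j == 0:
                return flips

    m, n = 20, 1000
    C = [0] * m
    total = sum(count(C) for _ in range(n))
    print(total, 2 * n)            # total stays below 2n, far below n*m = 20000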
There are two main techniques to establish amortized analysis results: the
potential function approach and the accounting trick. Both techniques apply best
to analyse the number of times a barometer instruction is executed.
Potential functions
Suppose the process to be analysed modifies a database and its efficiency each time
it is called depends on the current state of that database. We associate with the
database a notion of "cleanliness", known as the potential function of the database.
Calls on the process are allowed to take more time than average provided they
clean up the database. Conversely, quick calls are allowed to mess it up. This is
precisely what happens in the coffee room! The analogy holds even further: the
faster you fill up your cup, the more likely you will spill coffee, which in turn means
that it will take longer when the time comes to clean up. Similarly, the faster the
process goes when it goes fast, the more it messes up the database, which in turn
requires more time when cleaning up becomes necessary.
Formally, we introduce an integer-valued potential function Φ of the state of
the database. Larger values of Φ correspond to dirtier states. Let Φ_0 be the value
of Φ on the initial state; it represents our standard of cleanliness. Let Φ_i be the
value of Φ on the database after the i-th call on the process, and let t_i be the time
needed by that call (or the number of times the barometer instruction is performed).
We define the amortized time taken by that call as
    t̂_i = t_i + Φ_i - Φ_{i-1}.
Thus, t̂_i is the actual time required to carry out the i-th call on the process plus
the increase in potential caused by that call. It is sometimes better to think of it as
the actual time minus the decrease in potential, as this shows that operations that
clean up the database will be allowed to run longer without incurring a penalty in
terms of their amortized time.
Let T_n denote the total time required for the first n calls on the process, and
denote the total amortized time by T̂_n. Then
    T̂_n = ∑_{i=1}^{n} t̂_i = ∑_{i=1}^{n} (t_i + Φ_i - Φ_{i-1})
        = ∑_{i=1}^{n} t_i + ∑_{i=1}^{n} Φ_i - ∑_{i=1}^{n} Φ_{i-1}
        = T_n + Φ_n - Φ_0.
Therefore
    T_n = T̂_n - (Φ_n - Φ_0).
The significance of this is that T_n ≤ T̂_n holds for all n provided Φ_n never becomes
smaller than Φ_0. In other words, the total amortized time is always an upper bound
on the total actual time needed to perform a sequence of operations, as long as the
database is never allowed to become "cleaner" than it was initially. (This shows
that overcleaning can be harmful!) This approach is interesting when the actual
time varies significantly from one call to the next, whereas the amortized time is
nearly invariant. For instance, a sequence of operations takes linear time when the
amortized time per operation is constant, regardless of the actual time required for
each operation.
The challenge in applying this technique is to figure out the proper potential
function. We illustrate this with our example of the binary counter. A call on
count is increasingly expensive as the rightmost zero in the counter is farther to the
left. Therefore the potential function that immediately comes to mind would be
m minus the largest j such that C[j] = 0. It turns out, however, that this choice of
potential function is not appropriate because a single operation can mess up the
counter arbitrarily (adding 1 to the counter representing 2^k - 2 causes this potential
function to jump from 0 to k). Fortunately, a simpler potential function works well:
define Φ(C) as the number of bits equal to 1 in C. Clearly, our condition that the
potential never be allowed to decrease below the initial potential holds since the
initial potential is zero.
What is the amortized cost of adding 1 to the counter, in terms of the number
of times we go round the loop? There are three cases to consider.
• If the counter represents an even integer, we go round the loop once only as we
flip the rightmost bit from 0 to 1. As a result, there is one more bit set to 1 than
there was before. Therefore, the actual cost is 1 trip round the loop, and the
increase in potential is also 1. By definition, the amortized cost of the operation
is 1 + 1 = 2.
• If all the bits in the counter are equal to 1, we go round the loop m times, flipping
all those bits to 0. As a result, the potential drops from m to 0. Therefore, the
amortized cost is m - m = 0.
• In all other cases, each time we go round the loop we decrease the potential by
1 since we flip a bit from 1 to 0, except for the last trip round the loop when we
increase the potential by 1 since we flip a bit from 0 to 1. Thus, if we go round
the loop k times, we decrease the potential k - 1 times and we increase it once,
for a net decrease of k - 2. Therefore, the amortized cost is k - (k - 2)= 2.
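The case analysis above can be verified mechanically: the following sketch (ours) computes the amortized cost t_i + Φ_i - Φ_{i-1} of each call, with Φ equal to the number of 1 bits, and checks that it never exceeds 2.

    # Verify the case analysis with potential = number of 1 bits (a sketch).
    def count(C):                            # the counter; returns trips round the loop
        j, trips = len(C), 0
        while True:
            j -= 1
            C[j] = 1 - C[j]
            trips += 1
            if C[j] == 1 or j == 0:
                return trips

    C = [0] * 8
    phi_prev = 0
    for _ in range(300):                     # includes an overflow wrap-around
        actual = count(C)
        phi = sum(C)                         # potential after the call
        assert actual + phi - phi_prev <= 2  # amortized cost t_i + Phi_i - Phi_{i-1}
        phi_prev = phi
    print("every amortized cost is at most 2")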
One of the first lessons experience will teach you if you try solving recurrences is
that discontinuous functions such as the floor function (implicit in n ÷ 2) are hard
to analyse. Our first step is to replace n ÷ 2 with the better-behaved n/2, with a
suitable restriction on the set of values of n that we consider initially. It is tempting
to restrict n to being even since in that case n ÷ 2 = n/2, but recursively dividing
an even number by 2 may produce an odd number larger than 1. Therefore, it
is a better idea to restrict n to being an exact power of 2. Once this special case
is handled, the general case follows painlessly in asymptotic notation from the
smoothness rule of Section 3.4.
First, we tabulate the value of the recurrence on the first few powers of 2.
n 1 2 4 8 16 32
T(n) 1 5 19 65 211 665
Each term in this table but the first is computed from the previous term. For in-
stance, T(16)= 3 x T(8)+16 = 3 x 65 + 16 = 211. But is this table useful? There is
certainly no obvious pattern in this sequence! What regularity is there to look for?
The solution becomes apparent if we keep more "history" about the value
of T(n). Instead of writing T(2)= 5, it is more useful to write T(2)= 3 x 1 + 2.
Then,
    T(4) = 3×T(2) + 4 = 3×(3×1 + 2) + 4 = 3²×1 + 3×2 + 2².
We continue in this way, writing n as an explicit power of 2.
    n      T(n)
    1      1
    2      3×1 + 2
    2²     3²×1 + 3×2 + 2²
    2³     3³×1 + 3²×2 + 3×2² + 2³
    2⁴     3⁴×1 + 3³×2 + 3²×2² + 3×2³ + 2⁴
    2⁵     3⁵×1 + 3⁴×2 + 3³×2² + 3²×2³ + 3×2⁴ + 2⁵
The pattern is now obvious.
    T(2^i) = 3^i + 3^{i-1}×2 + 3^{i-2}×2² + ··· + 3×2^{i-1} + 2^i
           = ∑_{j=0}^{i} 3^{i-j} 2^j = 3^{i+1} - 2^{i+1}    (4.5)
It is easy to check this formula against our earlier tabulation. By induction (not math-
ematical induction), we are now convinced that Equation 4.5 is correct.
    n            1    2    4    8    16    32
    T(n) - 2n   -1    1   11   49   179   601
    T(n) - n     0    3   15   57   195   633
    T(n)         1    5   19   65   211   665
    T(n) + n     2    7   23   73   227   697
    T(n) + 2n    3    9   27   81   243   729
The last row consists of the successive powers of 3: when n = 2^i it is 3^{i+1} = 3 × 3^i = 3n^{lg 3}, whence
    T(n) = 3n^{lg 3} - 2n    (4.6)
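A few lines of Python (ours) confirm the guess against the recurrence for the first several powers of 2, writing n = 2^i so that n^{lg 3} = 3^i and no floating point is needed.

    # Check T(n) = 3 n^{lg 3} - 2n against T(1) = 1, T(n) = 3T(n/2) + n (a sketch).
    T = {1: 1}
    for i in range(1, 11):
        n = 2 ** i
        T[n] = 3 * T[n // 2] + n
    for i in range(0, 11):
        n = 2 ** i
        assert T[n] == 3 * 3 ** i - 2 * n     # n^{lg 3} = 3^i when n = 2^i
    print("formula verified up to n = 1024")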
    a_0 t_n + a_1 t_{n-1} + ··· + a_k t_{n-k} = 0    (4.7)
where the t_i are the values we are looking for. In addition to Equation 4.7, the values
of t_i on k values of i (usually 0 ≤ i ≤ k - 1 or 1 ≤ i ≤ k) are needed to determine the
sequence. These initial conditions will be considered later. Until then, Equation 4.7
typically has infinitely many solutions. This recurrence is
• linear because it does not contain terms of the form t_{n-i} t_{n-j}, t²_{n-i}, and so on;
• homogeneous because the linear combination of the t_{n-i} is equal to zero; and
• with constant coefficients because the a_i are constants.
Consider for instance our now familiar recurrence for the Fibonacci sequence.
    f_n = f_{n-1} + f_{n-2}
This recurrence easily fits the mould of Equation 4.7 after obvious rewriting.
    f_n - f_{n-1} - f_{n-2} = 0
    p(x) = ∏_{i=1}^{k} (x - r_i)
where the r_i may be complex numbers. Moreover, these r_i are the only solutions
of the equation p(x) = 0.
Consider any root r_i of the characteristic polynomial. Since p(r_i) = 0, it follows
that x = r_i is a solution to the characteristic equation and therefore r_i^n is a solution
to the recurrence. Since any linear combination of solutions is also a solution, we
conclude that
    t_n = ∑_{i=1}^{k} c_i r_i^n    (4.8)
satisfies the recurrence for any choice of constants c_1, c_2, ..., c_k. The remarkable
fact, which we do not prove here, is that Equation 4.7 has only solutions of this form
provided all the ri are distinct. In this case, the k constants can be determined from
k initial conditions by solving a system of k linear equations in k unknowns.
Example 4.7.1. (Fibonacci) Consider the recurrence
    f_n =  n                   if n = 0 or n = 1
           f_{n-1} + f_{n-2}   otherwise
First we rewrite this recurrence to fit the mould of Equation 4.7.
    f_n - f_{n-1} - f_{n-2} = 0
The characteristic polynomial is
    x² - x - 1,
whose roots are
    r_1 = (1 + √5)/2   and   r_2 = (1 - √5)/2.
The general solution is therefore of the form
    f_n = c_1 r_1^n + c_2 r_2^n.    (4.9)
It remains to use the initial conditions to determine the constants c_1 and c_2. When
n = 0, Equation 4.9 yields 0 = c_1 + c_2. But we know that f_0 = 0. Therefore,
c_2 = -c_1. When n = 1, Equation 4.9 yields 1 = c_1 r_1 + c_2 r_2; together these give
    c_1 = 1/√5   and   c_2 = -1/√5.
Thus
    f_n = (1/√5) [((1 + √5)/2)^n - ((1 - √5)/2)^n],
which is de Moivre's famous formula for the Fibonacci sequence. Notice how much
easier the technique of the characteristic equation is than the approach by construc-
tive induction that we saw in Section 1.6.4. It is also more precise since all we were
able to discover with constructive induction was that f_n grows exponentially in
a number close to φ^n, where φ = (1 + √5)/2; now we have an exact formula. □
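Numerically, the formula can be checked in a few lines of Python (ours); rounding absorbs the floating-point error for moderate n.

    # Check de Moivre's formula against the recurrence (a sketch).
    from math import sqrt

    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    f = [0, 1]
    for n in range(2, 40):
        f.append(f[-1] + f[-2])
    for n in range(40):
        assert f[n] == round((phi ** n - psi ** n) / sqrt(5))
    print("de Moivre's formula verified for n < 40")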
If it surprises you that the solution of a recurrence with integer coefficients and
initial conditions involves irrational numbers, try Problem 4.31 for an even bigger
surprise!
Example 4.7.2. Consider the recurrence
    t_n =  0                       if n = 0
           5                       if n = 1
           3t_{n-1} + 4t_{n-2}     otherwise
First we rewrite the recurrence.
    t_n - 3t_{n-1} - 4t_{n-2} = 0
The characteristic polynomial is
    x² - 3x - 4 = (x + 1)(x - 4),
whose roots are r_1 = -1 and r_2 = 4. The general solution is therefore of the form
    t_n = c_1 (-1)^n + c_2 4^n.
The initial conditions t_0 = 0 and t_1 = 5 give c_1 + c_2 = 0 and -c_1 + 4c_2 = 5,
whence c_1 = -1 and c_2 = 1. Therefore
    t_n = 4^n - (-1)^n.    □
The situation becomes slightly more complicated when the characteristic poly-
nomial has multiple roots, that is when the k roots are not all distinct. It is still true
that Equation 4.8 satisfies the recurrence for any values of the constants ci, but this
is no longer the most general solution. To find other solutions, let
be the characteristic polynomial of our recurrence, and let r be a multiple root. By def-
inition of multiple roots, there exists a polynomial q (x) of degree k - 2 such that
p(x) = (x - r)² q(x). For every n ≥ k consider the n-th degree polynomials
    u_n(x) = a_0 x^n + a_1 x^{n-1} + ··· + a_k x^{n-k}   and
    v_n(x) = a_0 n x^n + a_1 (n-1) x^{n-1} + ··· + a_k (n-k) x^{n-k}.
Observe that v_n(x) = x u_n'(x), where u_n'(x) denotes the derivative of u_n(x)
with respect to x. But u_n(x) can be rewritten as x^{n-k} p(x) = x^{n-k} (x - r)² q(x).
Using the rule for computing the derivative of a product of functions, we see that
every term of u_n'(x) retains a factor (x - r), so u_n'(r) = 0 and hence
v_n(r) = r u_n'(r) = 0: the sequence n r^n also satisfies the recurrence.
More generally, if the distinct roots of the characteristic polynomial are
r_1, r_2, ..., r_ℓ with multiplicities m_1, m_2, ..., m_ℓ, then
    t_n = ∑_{i=1}^{ℓ} ∑_{j=0}^{m_i - 1} c_{ij} n^j r_i^n
is the general solution to Equation 4.7. Again, the constants c_{ij}, 1 ≤ i ≤ ℓ and
0 ≤ j ≤ m_i - 1, are to be determined by the k initial conditions. There are k such
constants because ∑_{i=1}^{ℓ} m_i = k (the sum of the multiplicities of the distinct roots
is equal to the total number of roots). For simplicity, we shall normally label the
constants c_1, c_2, ..., c_k rather than using two indexes.
    t_n =  n                                 if n = 0, 1 or 2
           5t_{n-1} - 8t_{n-2} + 4t_{n-3}    otherwise
The characteristic polynomial is
    x³ - 5x² + 8x - 4 = (x - 1)(x - 2)².
The roots are 1 (of multiplicity 1) and 2 (of multiplicity 2), so all solutions are of the form
    t_n = c_1 1^n + c_2 2^n + c_3 n 2^n.
The initial conditions t_0 = 0, t_1 = 1 and t_2 = 2 determine the constants, yielding
    t_n = 2^{n+1} - n 2^{n-1} - 2.
Consider now recurrences of the form
    a_0 t_n + a_1 t_{n-1} + ··· + a_k t_{n-k} = b^n p(n).    (4.10)
The left-hand side is the same as before, but on the right-hand side we have b^n p(n),
where
• b is a constant; and
• p(n) is a polynomial in n of degree d.
    t_n - 2t_{n-1} = 3^n    (4.11)
Looking back on Examples 4.7.4 and 4.7.5, we see that part of the characteristic
polynomial comes from the left-hand side of Equation 4.10 and the rest from the
right-hand side. The part that comes from the left-hand side is exactly as if the
equation had been homogeneous: (x - 2) for both examples. The part that comes
from the right-hand side is a result of our manipulation.
Generalizing, we can show that to solve Equation 4.10 it is sufficient to use the
following characteristic polynomial.
    (a_0 x^k + a_1 x^{k-1} + ··· + a_k)(x - b)^{d+1}
(Recall that d is the degree of polynomial p (n).) Once this polynomial is obtained,
proceed as in the homogeneous case, except that some of the equations needed to
determine the constants are obtained not from the initial conditions but from the
recurrence itself.
Example 4.7.6. The number of movements of a ring required in the Towers of
Hanoi problem (see Section 4.4) is given by Equation 4.3.
    t(m) =  0                 if m = 0
            2t(m-1) + 1       otherwise
This can be written as
    t(m) - 2t(m - 1) = 1,    (4.18)
which is of the form of Equation 4.10 with b = 1 and p(n)= 1, a polynomial of
degree 0. The characteristic polynomial is therefore
(x - 2)(x - 1)
where the factor (x - 2) comes from the left-hand side of Equation 4.18 and the
factor (x -1) from its right-hand side. The roots of this polynomial are 1 and 2,
both of multiplicity 1, so all solutions of this recurrence are of the form
    t(m) = c_1 1^m + c_2 2^m.    (4.19)
We need two initial conditions to determine the constants c_1 and c_2. We know that
t(0) = 0; to find the second initial condition we use the recurrence itself to calculate
    t(1) = 2t(0) + 1 = 1.
This gives us two linear equations in the unknown constants.
    c_1 + c_2 = 0      (m = 0)
    c_1 + 2c_2 = 1     (m = 1)
From this, we obtain the solution c_1 = -1 and c_2 = 1 and therefore
    t(m) = 2^m - 1.
If all we want is to determine the exact order of t (m), there is no need to cal-
culate the constants in Equation 4.19. This time we do not even need to substitute
Equation 4.19 into the original recurrence. Knowing that t(m) = c_1 + c_2 2^m is suf-
ficient to conclude that c_2 > 0 and thus t(m) ∈ Θ(2^m). For this, note that t(m),
the number of movements of a ring required, is certainly neither negative nor a
constant since clearly t(m) ≥ m. □
    t_n = 2t_{n-1} + n,
which is of the form of Equation 4.10 with b = 1 and p(n) = n, a polynomial of
degree 1. The characteristic polynomial is therefore
    (x - 2)(x - 1)²
with roots 2 (multiplicity 1) and 1 (multiplicity 2). All solutions are therefore of the
form
    t_n = c_1 2^n + c_2 1^n + c_3 n 1^n.    (4.20)
Provided t_0 ≥ 0, and therefore t_n > 0 for all n, we conclude immediately that
t_n ∈ O(2^n). Further analysis is required to assert that t_n ∈ Θ(2^n).
If we substitute Equation 4.20 into the original recurrence, we obtain
    n = t_n - 2t_{n-1}
      = (c_1 2^n + c_2 + c_3 n) - 2(c_1 2^{n-1} + c_2 + c_3 (n - 1))
      = (2c_3 - c_2) - c_3 n,
from which we read directly that 2c_3 - c_2 = 0 and -c_3 = 1, regardless of the initial
condition. This implies that c_3 = -1 and c_2 = -2. At first we are disappointed
because it is c_1 that is relevant to determining the exact order of t_n as given by
Equation 4.20, and we obtained the other two constants instead. However, those
constants turn Equation 4.20 into
    t_n = c_1 2^n - n - 2.    (4.21)
Provided t_0 ≥ 0, and therefore t_n ≥ 0 for all n, Equation 4.21 implies that c_1 must
be strictly positive. Therefore, we are entitled to conclude that t_n ∈ Θ(2^n) with no
need to solve explicitly for c_1. Of course, c_1 can now be obtained easily from the
initial condition if so desired.
Alternatively, all three constants can be determined as functions of t_0 by setting
up and solving the appropriate system of linear equations obtained from Equa-
tion 4.20 and the values of t_1 and t_2 computed from the original recurrence. □
By now, you may be convinced that, for all practical purposes, there is no
need to worry about the constants: the exact order of tn can always be read off
directly from the general solution. Wrong! Or perhaps you think that the constants
obtained by the simpler technique of substituting the general solution into the
original recurrence are always sufficient to determine its exact order. Wrong again!
Consider the following example.
    t_n =  1                   if n = 0
           4t_{n-1} - 2^n      otherwise
First we rewrite the recurrence.
    t_n - 4t_{n-1} = -2^n,
which is of the form of Equation 4.10 with b = 2 and p(n)= -1, a polynomial of
degree 0. The characteristic polynomial is thus
(x - 4) (x - 2)
with roots 4 and 2, both of multiplicity 1. All solutions are therefore of the form
    t_n = c_1 4^n + c_2 2^n.    (4.22)
You may be tempted to assert without further ado that t_n ∈ Θ(4^n) since that is
clearly the dominant term in Equation 4.22.
If you are in less of a hurry, you may wish to substitute Equation 4.22 into the
original recurrence to see what comes out.
    -2^n = t_n - 4t_{n-1}
         = c_1 4^n + c_2 2^n - 4(c_1 4^{n-1} + c_2 2^{n-1})
         = -c_2 2^n,
so that c_2 = 1, regardless of the initial condition; this tells us nothing about c_1.
But with the initial condition t_0 = 1 we get c_1 + c_2 = 1, hence c_1 = 0: in fact
t_n = 2^n, which is in Θ(2^n) and not in Θ(4^n), so the hasty conclusion was wrong.
    T(n) =  1                  if n = 1
            3T(n/2) + n        if n is a power of 2, n > 1
To transform this into a form that we know how to solve, we replace n by 2^i.
This is achieved by introducing a new recurrence t_i, defined by t_i = T(2^i). This
transformation is useful because n/2 becomes (2^i)/2 = 2^{i-1}. In other words, our
original recurrence in which T(n) is defined as a function of T(n/2) gives way to
one in which t_i is defined as a function of t_{i-1}, precisely the type of recurrence we
have learned to solve.
    t_i = T(2^i) = 3T(2^{i-1}) + 2^i
        = 3t_{i-1} + 2^i
Once rewritten as
    t_i - 3t_{i-1} = 2^i,
this recurrence is of the form of Equation 4.10. The characteristic polynomial is
    (x - 3)(x - 2)
and hence all solutions are of the form t_i = c_1 3^i + c_2 2^i.
We use the fact that T(2^i) = t_i and thus T(n) = t_{lg n} when n = 2^i to obtain
    T(n) = c_1 3^{lg n} + c_2 2^{lg n} = c_1 n^{lg 3} + c_2 n.    (4.25)
However, we need to show that cl is strictly positive before we can assert something
about the exact order of T(n).
We are now familiar with two techniques to determine the constants. For the
sake of the exercise, let us apply each of them to this situation. The more direct
approach, which does not always provide the desired information, is to substitute
the solution provided by Equation 4.25 into the original recurrence. Noting that
(1/2)^{lg 3} = 1/3, this yields
    n = T(n) - 3T(n/2)
      = (c_1 n^{lg 3} + c_2 n) - 3(c_1 (n/2)^{lg 3} + c_2 (n/2))
      = -c_2 n/2
and therefore c_2 = -2. Even though we did not obtain the value of c_1, which is
the more relevant constant, we are nevertheless in a position to assert that it must
be strictly positive, for otherwise Equation 4.25 would falsely imply that T(n) is
negative. The fact that
    T(n) ∈ Θ(n^{lg 3} | n is a power of 2)    (4.26)
is thus established. Of course, the value of c_1 would now be easy to obtain from
Equation 4.25, the fact that c_2 = -2, and the initial condition T(1) = 1, but this is not
necessary if we are satisfied to solve the recurrence in asymptotic notation. More-
over we have learned that Equation 4.26 holds regardless of the initial condition,
provided T(n) is positive.
The alternative approach consists of setting up two linear equations in the two
unknowns cl and c2. It is guaranteed to yield the value of both constants. For this,
we need the value of T (n) on two points. We already know that T (1) = 1. To obtain
another point, we use the recurrence itself: T(2)= 3T(1 ) +2 = 5. Substituting n = 1
and n = 2 in Equation 4.25 yields the following system.
    c_1 + c_2 = 1       (n = 1)
    3c_1 + 2c_2 = 5     (n = 2)
Solving these equations, we obtain c_1 = 3 and c_2 = -2. Therefore
    T(n) = 3n^{lg 3} - 2n
when n is a power of 2, which agrees with Equation 4.6.
    T(n) = 4T(n/2) + n²
becomes, with the usual change of variable n = 2^i,
    t_i - 4t_{i-1} = 4^i.
The characteristic polynomial is (x - 4)² and hence all solutions are of the form
    t_i = c_1 4^i + c_2 i 4^i.
In terms of T(n) this is T(n) = c_1 n² + c_2 n² lg n, and substituting into the recurrence gives
    n² = T(n) - 4T(n/2) = c_2 n²,
so c_2 = 1 and T(n) ∈ Θ(n² log n | n is a power of 2).
    T(n) = 2T(n/2) + n lg n
is handled similarly: the change of variable gives
    t_i - 2t_{i-1} = i 2^i,
whose characteristic polynomial is (x - 2)³, leading to T(n) ∈ Θ(n log² n | n is a power of 2).
Remark: In the preceding examples, the recurrence given for T(n) only applies
when n is a power of 2. It is therefore inevitable that the solution obtained should
be in conditional asymptotic notation. In each case, however, it is sufficient to add
the condition that T(n) is eventually nondecreasing to be able to conclude that
the asymptotic results obtained apply unconditionally for all values of n. This
follows from the smoothness rule (Section 3.4) since the functions n^{lg 3}, n² log n
and n log2 n are smooth.
Example 4.7.13. We are now ready to solve one of the most important recurrences
for algorithmic purposes. This recurrence is particularly useful for the analysis
of divide-and-conquer algorithms, as we shall see in Chapter 7. The constants
n_0 ≥ 1, ℓ ≥ 1, b ≥ 2 and k ≥ 0 are integers, whereas c is a strictly positive real
number. Let T : ℕ → ℝ⁺ be an eventually nondecreasing function such that
    T(n) = ℓ T(n/b) + c n^k    (4.29)
when n/n_0 is an exact power of b, that is when n ∈ {bn_0, b²n_0, b³n_0, ...}. This
time, the appropriate change of variable is n = b^i n_0, which yields
    t_i = T(b^i n_0) = ℓ t_{i-1} + c n_0^k (b^k)^i.
The right-hand side is of the required form a^i p(i) where p(i) = c n_0^k is a con-
stant polynomial (of degree 0) and a = b^k. Thus, the characteristic polynomial is
    (x - ℓ)(x - b^k),
whose roots are ℓ and b^k. From this, it is tempting (but false in
general!) to conclude that all solutions are of the form
    t_i = c_1 ℓ^i + c_2 (b^k)^i.    (4.30)
To write this in terms of T(n), note that i = log_b(n/n_0) when n is of the proper
form, and thus d^i = (n/n_0)^{log_b d} for arbitrary positive values of d. Therefore,
    T(n) = c_3 n^{log_b ℓ} + c_4 n^k    (4.31)
for appropriate new constants c_3 and c_4. To learn about these constants, we sub-
stitute Equation 4.31 into the original recurrence.
• If ℓ < b^k then c_4 > 0 and log_b ℓ < k, so the term c_4 n^k dominates Equation 4.31.
Since n^k is a smooth function and T(n) is eventually nondecreasing, T(n) ∈ Θ(n^k).
• If ℓ > b^k then c_4 < 0 and log_b ℓ > k. The fact that c_4 is negative implies that
c_3 is positive, for otherwise Equation 4.31 would imply that T(n) is nega-
tive, contrary to the specification that T : ℕ → ℝ⁺. Therefore the term c_3 n^{log_b ℓ}
dominates Equation 4.31. Furthermore n^{log_b ℓ} is a smooth function and T(n)
is eventually nondecreasing. Therefore T(n) ∈ Θ(n^{log_b ℓ}).
• If ℓ = b^k, however, we are in trouble because the formula for c_4 involves a divi-
sion by zero! What went wrong is that in this case the characteristic polynomial
has a single root of multiplicity 2 rather than two distinct roots. Therefore Equa-
tion 4.30 does not provide the general solution to the recurrence. Rather, the
general solution in this case is
    t_i = c_5 (b^k)^i + c_6 i (b^k)^i,
which translates into
    T(n) = c_7 n^k + c_8 n^k log_b n    (4.32)
for appropriate constants c_7 and c_8. Substituting this into the original recur-
rence, our usual manipulation yields a surprisingly simple c_8 = c. Therefore,
c n^k log_b n is the dominant term in Equation 4.32 because c was assumed to
be strictly positive at the beginning of this example. Since n^k log n is smooth
and T(n) is eventually nondecreasing, we conclude that T(n) ∈ Θ(n^k log n).
Putting it all together,
            Θ(n^k)            if ℓ < b^k
    T(n) ∈  Θ(n^k log n)      if ℓ = b^k        (4.33)
            Θ(n^{log_b ℓ})    if ℓ > b^k.
Problem 4.44 gives a generalization of this example. □
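Equation 4.33 is easy to package as a small helper; the Python sketch below (ours; the function and parameter names are assumptions) simply evaluates the trichotomy for given ℓ, b and k.

    # Return the asymptotic class given by Equation 4.33 for
    # T(n) = l*T(n/b) + c*n^k (a sketch; names are ours).
    from math import log

    def divide_and_conquer_class(l, b, k):
        if l < b ** k:
            return f"Theta(n^{k})"
        if l == b ** k:
            return f"Theta(n^{k} log n)"
        return f"Theta(n^{log(l, b):.4g})"           # exponent log_b(l)

    print(divide_and_conquer_class(3, 2, 1))   # three half-size subproblems: Theta(n^1.585)
    print(divide_and_conquer_class(2, 2, 1))   # Theta(n^1 log n)
    print(divide_and_conquer_class(1, 2, 1))   # Theta(n^1)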
Remark: It often happens in the analysis of algorithms that we derive a recurrence
in the form of an inequality. For instance, we may get
    T(n) ≤ ℓ T(n/b) + c n^k
when n/n_0 is an exact power of b, instead of Equation 4.29. What can we say
about the asymptotic behaviour of such a recurrence? First note that we do not
have enough information to determine the exact order of T(n) because we are given
only an upper bound on its value. (For all we know, it could be that T(n)= 1 for
all n.) The best we can do in this case is to analyse the recurrence in terms of
the O notation. For this we introduce an auxiliary recurrence patterned after the
original but defined in terms of an equation (not an inequation). In this case
    T̄(n) =  T(n_0)                   if n = n_0
            ℓ T̄(n/b) + c n^k          if n/n_0 is a power of b, n > n_0.
This new recurrence falls under the scope of Example 4.7.13, except that we
have no evidence that T̄(n) is eventually nondecreasing. Therefore Equation 4.33
holds for T̄(n), provided we use conditional asymptotic notation to restrict n/n_0
to being a power of b. Now, it is easy to prove by mathematical induction that
T(n) ≤ T̄(n) for all n ≥ n_0 such that n/n_0 is a power of b. But clearly any asymptotic
upper bound on T̄(n) is also an upper bound on T(n), and therefore
            O(n^k)            if ℓ < b^k
    T(n) ∈  O(n^k log n)      if ℓ = b^k
            O(n^{log_b ℓ})    if ℓ > b^k.
We shall study further recurrences involving inequalities in Section 4.7.6.
So far, the changes of variable we have used have all been of the same logarith-
mic nature. Rather different changes of variable are sometimes useful. We illustrate
this with one example that comes from the analysis of the divide-and-conquer al-
gorithm for multiplying large integers (see Section 7.1).
Example 4.7.14. Consider an eventually nondecreasing function T(n) such that
for all sufficiently large n, where c is some positive real constant. As explained in
the remark following the previous example, we have to be content here to analyse
the recurrence in terms of the 0 notation rather than the 8 notation.
Let n_0 ≥ 1 be large enough that T(m) ≥ T(n) for all m ≥ n ≥ n_0/2 and Equa-
tion 4.34 holds for all n ≥ n_0. Now make a change of variable by introducing a
new function T̃ such that T̃(n) = T(n + 2) for all n. Considering any n ≥ n_0, one
finds that
    T̃(n) ≤ 3 T̃(n/2) + dn,        n ≥ n_0,
when n/n_0 is a power of 2, where d = 2c. This is a special case of the recurrence
analysed in the remark following Example 4.7.13, with ℓ = 3, b = 2 and k = 1. Since
ℓ > b^k, we obtain T̃(n) ∈ O(n^{lg 3}). Finally, we use one last time the fact that T(n)
is eventually nondecreasing: T(n) ≤ T(n + 2) = T̃(n) for any sufficiently large n.
Therefore any asymptotic upper bound on T̃(n) applies equally to T(n), which
concludes the proof that T(n) ∈ O(n^{lg 3}). □
    t_i = T(2^i) = 2^i T²(2^{i-1}) = 2^i t²_{i-1}
At first glance, none of the techniques we have seen applies to this recurrence since
it is not linear; furthermore the coefficient 2^i is not a constant. To transform the
range, we create yet another recurrence by using u_i to denote lg t_i.
    u_i = lg t_i = i + 2 lg t_{i-1} = i + 2u_{i-1}
Once rewritten as u_i - 2u_{i-1} = i, the characteristic polynomial is
    (x - 2)(x - 1)²
and thus all solutions are of the form
    u_i = c_1 2^i + c_2 1^i + c_3 i 1^i.
Substituting into the recurrence,
    i = u_i - 2u_{i-1}
      = c_1 2^i + c_2 + c_3 i - 2(c_1 2^{i-1} + c_2 + c_3 (i - 1))
      = (2c_3 - c_2) - c_3 i
and thus c_3 = -1 and c_2 = 2c_3 = -2. Therefore, the general solution for u_i, if the
initial condition is not taken into account, is u_i = c_1 2^i - i - 2. This gives us the
general solution for t_i and T(n).
    t_i = 2^{u_i} = 2^{c_1 2^i - i - 2}
    T(n) = t_{lg n} = 2^{c_1 n - lg n - 2} = 2^{c_1 n} / (4n)
With the initial condition t_0 = T(1) = 1/3 we have u_0 = -lg 3, so c_1 = 2 - lg 3 and finally
    T(n) = 2^{2n} / (4n 3^n).
when n is sufficiently large, where all we know about f (n) is that it is in the exact
order of n, and we know nothing specific about the initial condition that defines
T(n) except that it is positive for all n. Such an equation is called an asymptotic re-
currence. Fortunately, the asymptotic solution of a recurrence such as Equation 4.36
is virtually always identical to that of the simpler Equation 4.35. The general tech-
nique to solve an asymptotic recurrence is to "sandwich" the function it defines
between two recurrences of the simpler type. When both simpler recurrences have
the same asymptotic solution, the asymptotic recurrence must have the same solu-
tion as well. We illustrate this with our example.
For arbitrary positive real constants a and b, define the recurrence
    T_{a,b}(n) =  a                         if n = 1
                  4 T_{a,b}(n ÷ 2) + bn     otherwise.
When n is a power of 2, the techniques of this section yield T_{a,b}(n) = (a + b)n² - bn.
The idea is to find positive real constants r, s, u and v such that
    T_{r,s}(n) ≤ T(n) ≤ T_{u,v}(n)    (4.37)
for all n ≥ 1. Since both T_{r,s}(n) and T_{u,v}(n) are in the exact order of n², it will
follow that T(n) ∈ Θ(n²) as well. One interesting benefit of this approach is that
T_{a,b}(n) is nondecreasing for any fixed positive values of a and b, which makes it
possible to apply the smoothness rule. On the other hand, this rule could not have
been invoked to simplify directly the analysis of T(n) by restricting our attention
to the case when n is a power of 2 because Equation 4.36 does not provide enough
information to be able to assert that T(n) is nondecreasing.
We still have to prove the existence of r, s, u and v. For this, note that
T_{a,a}(n) = a T_{1,1}(n) for all a, and T_{a,b}(n) ≤ T_{a',b'}(n) when a ≤ a' and b ≤ b' (two
more easy proofs by mathematical induction). Let c and d be positive real constants
and let n_0 be an integer sufficiently large that cn ≤ f(n) ≤ dn and
    T(n) ≥ r T_{1,1}(n) = T_{r,r}(n) ≥ T_{r,s}(n)
and
    T(n) ≤ u T_{1,1}(n) = T_{u,u}(n) ≤ T_{u,v}(n)
for all n ≤ n_0. This forms the basis of the proof by mathematical induction that
Equation 4.37 holds. For the induction step, consider an arbitrary n > n_0 and
Equation 4.37 holds. For the induction step, consider an arbitrary n > no and
assume by the induction hypothesis that T_{r,s}(m) ≤ T(m) ≤ T_{u,v}(m) for all m < n.
Then,
    T(n) = 4T(n ÷ 2) + f(n)
         ≤ 4T(n ÷ 2) + dn
         ≤ 4T_{u,v}(n ÷ 2) + dn     by the induction hypothesis
         ≤ 4T_{u,v}(n ÷ 2) + vn     since d ≤ v
         = T_{u,v}(n).
The proof that Tr,s(n) < T(n) is similar, which completes the argument.
Example 4.7.16. The important class of recurrences solved in Example 4.7.13 can
be generalized in a similar way. Consider a function T : ℕ → ℝ⁺ such that
    T(n) = ℓ T(n ÷ b) + f(n)
for all sufficiently large n, where ℓ ≥ 1 and b ≥ 2 are constants, and f(n) ∈ Θ(n^k)
for some k ≥ 0. For arbitrary positive real constants x and y, define
    T_{x,y}(n) =  x n^k                        if n = 1
                  ℓ T_{x,y}(n ÷ b) + y n^k     if n > 1.
Sandwiching T between two such recurrences as above shows that the trichotomy of
Equation 4.33 then holds unconditionally for all n.
4.8 Problems
Problem 4.1. How much time does the following "algorithm" require as a function
of n?
    ℓ ← 0
    for i ← 1 to n do
      for j ← 1 to n² do
        for k ← 1 to n³ do
          ℓ ← ℓ + 1
Express your answer in Θ notation in the simplest possible form. You may consider
that each individual instruction (including loop control) is elementary.
Problem 4.2. How much time does the following "algorithm" require as a function
of n?
    ℓ ← 0
    for i ← 1 to n do
      for j ← 1 to i do
        for k ← j to n do
          ℓ ← ℓ + 1
Express your answer in Θ notation in the simplest possible form. You may consider
that each individual instruction (including loop control) is elementary.
    for i ← 1 to m do P
which is part of a larger algorithm that works on an instance of size n. Let t be the
time needed for each execution of P, which we assume independent of i for the
sake of this problem (but t could depend on n). Prove that this loop takes a time
in Θ(mt) provided t is bounded below by some constant and provided there exists a
threshold no such that m > 1 whenever n > no. (Recall that we saw in Section 4.2.2
that the desired conclusion would not hold without those restrictions.)
Problem 4.4. Prove by mathematical induction that the values of i and j at the
end of the k-th iteration of Fibiter in Section 4.2.2 are f_{k-1} and f_k, respectively,
where f_n denotes the n-th Fibonacci number.
function C(n, k)
if k = 0 or k = n then return 1
else return C(n -1, k - 1)+C(n -1, k)
Analyse the time taken by this algorithm under the (unreasonable) assumption
that the addition C(n - 1, k - 1)+C(n -1, k) can be carried out in constant time
once both C(n - 1, k - 1) and C(n - 1, k) have been obtained recursively. Let t(n)
denote the worst time that a call on C(n, k) may take for all possible values of k,
0 ≤ k ≤ n. Express t(n) in the simplest possible form in Θ notation.
Problem 4.6. Consider the following "algorithm".
procedure DC(n)
  if n < 1 then return
  for i ← 1 to 8 do DC(n ÷ 2)
  for i ← 1 to n³ do dummy ← 0
Write an asymptotic recurrence equation that gives the time T(n) taken by a call
of DC(n). Use the result of Example 4.7.16 to determine the exact order of T(n) in
the simplest possible form. Do not reinvent the wheel here: apply Example 4.7.16
directly. The complete solution of this problem should not take more than 2 or
3 lines!
Note: This is how easy it is to analyse the running time of most divide-and-conquer
algorithms; see Chapter 7.
Problem 4.7. Rework Problem 4.6 if the constant 8 that appears in the middle line
of algorithm DC is replaced by 9.
Problem 4.8. Rework Problem 4.6 if the constant 8 that appears in the middle line
of algorithm DC is replaced by 7.
Problem 4.9. Consider the following "algorithm".
procedure waste(n)
  for i ← 1 to n do
    for j ← 1 to i do
      write i, j, n
  if n > 0 then
    for i ← 1 to 4 do
      waste(n ÷ 2)
Let T(n) stand for the number of lines of output generated by a call of waste(n).
Provide a recurrence equation for T(n) and use the result of Example 4.7.16 to
determine the exact order of T(n) in the simplest possible form. (We are not asking
you to solve the recurrence exactly.)
Problem 4.10. Prove by mathematical induction that if d_0 = n and d_ℓ ≤ d_{ℓ-1}/2
for all ℓ ≥ 1, then d_ℓ ≤ n/2^ℓ for all ℓ ≥ 0. (This is relevant to the analysis of the
time taken by binary search; see Section 4.2.4.)
Problem 4.11. Consider the following recurrence for n > 1.
    t(n) =  c                  if n = 1
            t(n ÷ 2) + b       otherwise
Use the technique of the characteristic equation to solve this recurrence when n is
a power of 2. Prove by mathematical induction that t(n) is an eventually nonde-
creasing function. Use the smoothness rule to show that t(n) ∈ Θ(log n). Finally,
conclude that t(n) ∈ O(log n), where t(n) is given by Equation 4.2. Can we con-
clude from Equation 4.2 that t(n) ∈ Θ(log n)? Why or why not?
Problem 4.12. Prove that the initialization phase of pigeon-hole sorting (Sec-
tion 2.7.2) takes a time in O(n + s).
Problem 4.13. We saw in Section 4.2.4 that binary search can find an item in a
sorted array of size n in a time in O(log n). Prove that in the worst case a time in
Ω(log n) is required. On the other hand, what is the time required in the best case?
Problem 4.14. How much time does insertion sorting take to sort n distinct items
in the best case? State your answer in asymptotic notation.
Problem 4.15. We saw in Section 4.4 that a good barometer for the worst-case
analysis of insertion sorting is the number of times the while loop condition is
tested. Show that this barometer is also appropriate if we are concerned with the
best-case behaviour of the algorithm (see Problem 4.14).
Problem 4.16. Prove that Euclid's algorithm performs least well on inputs of any
given size when computing the greatest common divisor of two consecutive num-
bers from the Fibonacci sequence.
Problem 4.17. Give a nonrecursive algorithm to solve the Towers of Hanoi prob-
lem (see Section 4.4). It is cheating simply to rewrite the recursive algorithm using
an explicit stack to simulate the recursive calls!
Problem 4.18. Prove that the Towers of Hanoi problem with n rings cannot be
solved with fewer than 2n - 1 movements of rings.
Problem 4.19. Give a procedure similar to algorithm count from Section 4.6to
increase an m-bit binary counter. This time, however, the counter should remain
all ones instead of cycling back to zero when an overflow occurs. In other words,
if the current value represented by the counter is v, the new value after a call on
your algorithm should be min(v + 1, 2m - 1). Give the amortized analysis of your
algorithm. It should still take constant amortized time for each call.
Problem 4.20. Prove Equation 4.6 from Section 4.7.1 by mathematical induction
when n is a power of 2. Prove also by mathematical induction that the function
T (n) defined by Equation 4.4 is nondecreasing (for all n, not just when n is a power
of 2).
Problem 4.21. Consider arbitrary positive real constants a and b. Use intelligent
guesswork to solve the following recurrence when n > 1.
    t(n) =  a                  if n = 1
            nt(n-1) + bn       otherwise.
You are allowed a term of the form ∑ 1/i! in your solution. Prove your an-
swer by mathematical induction. Express t(n) in Θ notation in the simplest
possible form. What is the value of lim_{n→∞} t(n)/n! as a function of a and b?
(Note: ∑_{i=1}^{∞} 1/i! = e - 1, where e = 2.7182818... is the base of the natural loga-
rithm.) Although we determined the asymptotic behaviour of this recurrence in
Problem 1.31 using constructive induction, note that this time we have obtained a
more precise formula for t(n). In particular, we have obtained the limit of t(n)/n!
as n tends to infinity. Recall that this problem is relevant to the analysis of the
recursive algorithm for calculating determinants.
Problem 4.22. Solve the recurrence of Example 4.7.2 by intelligent guesswork.
Resist the temptation to "cheat" by looking at the solution before working out this
problem!
Problem 4.23. Prove that Equation 4.7 (Section 4.7.2) has only solutions of the
form
    t_n = ∑_{i=1}^{k} c_i r_i^n
provided the roots r_1, r_2, ..., r_k of the characteristic polynomial are distinct.
Problem 4.24. Complete the solution of Example 4.7.7 by determining the value
of cl as function of to.
Problem 4.25. Complete the solution of Example 4.7.11 by determining the value
of cl as function of to.
Problem 4.26. Complete the solution of Example 4.7.12 by determining the value
of cl as function of to.
Problem 4.27. Complete the solution of Example 4.7.13 by showing that c_8 = c.
Problem 4.28. Complete the remark that follows Example 4.7.13 by proving by
mathematical induction that T(n) ≤ T̄(n) for all n ≥ n_0 such that n/n_0 is a power
of b.
Problem 4.29. Solve the following recurrence exactly.
    t_n =  n                          if n = 0 or n = 1
           5t_{n-1} - 6t_{n-2}        otherwise
    t_n =  n                          if n = 0 or n = 1
           2t_{n-1} - 2t_{n-2}        otherwise
Prove that t_n = 2^{n/2} sin(nπ/4), not by mathematical induction but by using the
technique of the characteristic equation.
    t_n =  n                                   if n = 0, 1, 2 or 3
           t_{n-1} + t_{n-3} - t_{n-4}         otherwise
    t_n =  n + 1                                     if n = 0 or n = 1
           3t_{n-1} - 2t_{n-2} + 3 × 2^{n-2}         otherwise
    T(n) =  a                           if n = 0 or n = 1
            T(n-1) + T(n-2) + c         otherwise
Express your answer as simply as possible using the Θ notation and the golden
ratio φ = (1 + √5)/2. Note that this is Recurrence 4.1 from Section 4.2.3 if h(n) = c,
which represents the time taken by a call on Fibrec(n) if we count the additions at
unit cost.
Problem 4.35. Solve the following recurrence exactly.
    T(n) =  a                           if n = 0 or n = 1
            T(n-1) + T(n-2) + cn        otherwise
Express your answer as simply as possible using the Θ notation and the golden ratio
φ = (1 + √5)/2. Note that this is Recurrence 4.1 from Section 4.2.3 if h(n) = cn,
which represents the time taken by a call on Fibrec(n) if we do not count the
additions at unit cost. Compare your answer to that of Problem 4.34, in which
additions were counted at unit cost.
Problem 4.36. Solve the following recurrence exactly for n a power of 2.
    T(n) =  1                  if n = 1
            4T(n/2) + n        otherwise
    T(n) =  1                    if n = 1
            2T(n/2) + lg n       otherwise
Problem 4.39. Solve the following recurrence exactly for n of the form 2^{2^k}.
    T(n) =  1                   if n = 2
            2T(√n) + lg n       otherwise
[n if n=Oorn 1
T(n)==
T l2T2(n -1)+VT2(n - 2)+n otherwise
    T(n) =  1                            if n = 1
            3/2                          if n = 2
            2T(n/2) - T(n/4) - 1/n       otherwise
    t_n =  0                     if n = 0
           1/(4 - t_{n-1})       otherwise
Problem 4.43. Solve the following recurrence exactly as a function of the initial
conditions a and b.
    T(n) =  a                            if n = 0
            b                            if n = 1
            (1 + T(n-1))/T(n-2)          otherwise
    T(n) = ∑_{i=0}^{log_b(n/n_0)} ℓ^i f(n/b^i).
            Θ(n^p)                        if f(n) ∈ O(n^p / (log n)^{1+ε})
    T(n) ∈  Θ(f(n) log n log log n)       if f(n) ∈ Θ(n^p / log n)
            Θ(f(n) log n)                 if f(n) ∈ Θ(n^p (log n)^{ε-1})
            Θ(f(n))                       if f(n) ∈ Θ(n^{p+ε}).
3. As a special case of the first alternative, T(n) ∈ Θ(n^p) whenever f(n) ∈ O(n^r)
for some real constant r < p.
4. The first alternative can be generalized to include cases such as
The use of well-chosen data structures is often a crucial factor in the design of
efficient algorithms. Nevertheless, this book is not intended to be a manual on
data structures. We assume the reader already has a good working knowledge of
such basic notions as arrays, records, and the various structured data types ob-
tained using pointers. We also suppose that the mathematical concepts of directed
and undirected graphs are reasonably familiar, and that the reader knows how to
represent such objects efficiently on a computer.
The chapter begins with a brief review of the more important aspects of these
elementary data structures. The review includes a summary of their essential
properties from the point of view of algorithmics. For this reason even readers
who know the basic material well should skim the first few sections. The last three
sections of the chapter introduce the less elementary notions of heaps and disjoint
sets. Chosen because they will be used in subsequent chapters, these structures
also offer interesting examples of the analysis of algorithms. Most readers will
probably need to read the sections concerning these less familiar data structures
quite thoroughly.
Here tab is an array of 50 integers indexed from 1 to 50; tab[1] refers to the first item
of the array, tab[50] to the last. It is natural to think of the items as being arranged
from left to right, so we may also refer to tab[1] as the left-hand item, and so on.
We often omit to specify the type of the items in the array when the context makes
this obvious.
From the point of view that interests us in this book, the essential property of
an array is that we can calculate the address of any given item in constant time.
For example, if we know that the array tab above starts at address 5000, and that
integer variables occupy 4 bytes of storage each, then the address of the item with
index k is easily seen to be 4996 + 4k. Even if we think it worthwhile to check that
k does indeed lie between 1 and 50, the time required to compute the address can
still be bounded by a constant. It follows that the time required to read the value
of a single item, or to change such a value, is in 0 (1): in other words, we can treat
such operations as elementary.
On the other hand, any operation that involves all the items of an array will
tend to take longer as the size of the array increases. Suppose we are dealing with
an array of some variable size n; that is, the array consists of n items. Then an
operation such as initializing every item, or finding the largest item, usually takes
a time proportional to the number of items to be examined. In other words, such
operations take a time in Θ(n). Another common situation is where we want to
keep the values of successive items of the array in order-numerical, alphabetic,
or whatever. Now whenever we decide to insert a new value we have to open up a
space in the correct position, either by copying all the higher values one position to
the right, or else by copying all the lower values one position to the left. Whichever
tactic we adopt (and even if we sometimes do one thing, sometimes the other), in
the worst case we may have to shift n/2 items. Similarly, deleting an element may
require us to move all or most of the remaining items in the array. Again, therefore,
such operations take a time in Θ(n).
A one-dimensional array provides an efficient way to implement the data struc-
ture called a stack. Here items are added to the structure, and subsequently re-
moved, on a last-in-first-out (LIFO) basis. The situation can be represented using an
array called stack, say, whose index runs from 1 to the maximum required size of
the stack, and whose items are of the required type, along with a counter. To set
the stack empty, the counter is given the value zero; to add an item to the stack,
the counter is incremented, and then the new item is written into stack[counter];
to remove an item, the value of stack[counter] is read out, and then the counter is
decremented. Tests can be added to ensure that no more items are placed on the
stack than were provided for, and that items are not removed from an empty stack.
Adding an item to a stack is usually called a push operation, while removing one
is called pop.
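A minimal Python sketch (ours; the class and method names are assumptions) of the array-plus-counter implementation just described:

    # Array-based stack with a counter, as described above (a sketch).
    class ArrayStack:
        def __init__(self, max_size):
            self.stack = [None] * max_size    # space allocated at the outset
            self.counter = 0                  # number of items currently on the stack

        def push(self, item):
            if self.counter == len(self.stack):
                raise OverflowError("stack is full")
            self.stack[self.counter] = item
            self.counter += 1

        def pop(self):
            if self.counter == 0:
                raise IndexError("stack is empty")
            self.counter -= 1
            return self.stack[self.counter]

    s = ArrayStack(10)
    s.push(3); s.push(7)
    print(s.pop(), s.pop())    # 7 3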
The data structure called a queue can also be implemented quite efficiently in
a one-dimensional array. Here items are added and removed on a first-in-first-out
(FIFO) basis; see Problem 5.2. Adding an item is called an enqueue operation, while
removing one is called dequeue. For both stacks and queues, one disadvantage
of using an implementation in an array is that space usually has to be allocated at
the outset for the maximum number of items envisaged; if ever this space is not
sufficient, it is difficult to add more, while if too much space is allocated, waste
results.
The items of an array can be of any fixed-length type: this is so that the address
of any particular item can be easily calculated. The index is almost always an integer.
However, other so-called ordinal types can be used. For instance,
is one possible way of declaring an array of 26 values, indexed by the letters from 'a'
to 'z'. It is not permissible to index an array using real numbers, nor do we allow an
array to be indexed by structures such as strings or sets. If such things are allowed,
accessing an array item can no longer be considered an elementary operation.
However a more general data structure called an associative table, described in
Section 5.6, does allow such indexes.
The examples given so far have all involved one-dimensional arrays, that is,
arrays whose items are accessed using just one index. Arrays with two or more
indexes can be declared in a similar way. For instance,
is one possible way to declare an array containing 400 items of type complex. A ref-
erence to any particular item, such as matrix[5, 7], now requires two indexes. The
essential point remains, however, that we can calculate the address of any given
item in constant time, so reading or modifying its value can be taken to be an
elementary operation. Obviously if both dimensions of a two-dimensional array
depend on some parameter n, as in the case of an n x n matrix, then operations
such as initializing every item of the array, or finding the largest item, now take a
time in Θ(n²).
We stated above that the time needed to initialize all the items of an array of size
n is in Θ(n). Suppose however we do not need to initialize each item, but simply to
know whether it has been initialized or not, and if so to obtain its value. Provided
we are willing to use rather more space, the technique called virtual initialization
allows us to avoid the time spent setting all the entries in the array. Suppose the
array to be virtually initialized is T[1..n]. Then we also need two auxiliary arrays
of integers the same size as T, and an integer counter. Call these auxiliary arrays
a[1..n] and b[1..n], say, and the counter ctr. At the outset we simply set ctr
to zero, leaving the arrays a, b and T holding whatever values they happen to
contain.
Subsequently ctr tells us how many elements of T have been initialized, while
the values a[1] to a[ctr] tell us which these elements are: a[1] points to the element
initialized first, a[2] to the element initialized second, and so on; see Figure 5.1,
where three items of the array T have been initialized. Furthermore, if T[i] was
the k-th element to be initialized, then b[i]= k. Thus the values in a point to T
and the values in b point to a, as in the figure.
To test whether T[i] has been assigned a value, we first check that 1 ≤ b[i] ≤ ctr.
If not, we can be sure that T[i] has not been initialized. Otherwise we are not sure
whether T[i] has been initialized or not: it could be just an accident that b[i] has a
plausible value. However if T[i] really has been assigned a value, then it was the
b[i]-th element of the array to be initialized. We can check this by testing whether
[Figure 5.1. A virtually initialized array T[1..8] in which three items have been initialized (ctr = 3); the arrays a and b record which items were initialized, and in what order.]
a[b[i]] = i. Since the first ctr values of the array a certainly have been initialized,
and 1 ≤ b[i] ≤ ctr, it cannot be an accident if a[b[i]] = i, so this test is conclusive:
if it is satisfied, then T[i] has been initialized, and if not, not.
To assign a value to T[i] for the first time, increment the counter ctr, set a[ctr]
to i, set b[i] to ctr, and load the required value into T[i]. On subsequent assign-
ments to T[i], of course, none of this is necessary. Problem 5.3 invites you to fill in
the details. The extra space needed to use this technique is a constant multiple of
the size of T.
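The technique can be sketched in Python as below, using 0-based indexes. Note that Python cannot really leave storage uninitialized, so only the constant-time bookkeeping is illustrated; the default value returned for uninitialized entries is an assumption of this sketch.

class VirtualArray:
    """Virtual initialization with auxiliary arrays a and b and a counter ctr."""

    def __init__(self, n, default=None):
        self.T = [None] * n     # stands in for the uninitialized array T
        self.a = [0] * n        # a[0..ctr-1] lists the initialized positions, in order
        self.b = [0] * n        # if T[i] was the k-th position initialized, b[i] = k - 1
        self.ctr = 0
        self.default = default  # value reported for uninitialized entries

    def _initialized(self, i):
        # conclusive test: b[i] points into the used part of a, and a points back to i
        k = self.b[i]
        return 0 <= k < self.ctr and self.a[k] == i

    def store(self, i, v):
        if not self._initialized(i):
            self.a[self.ctr] = i
            self.b[i] = self.ctr
            self.ctr += 1
        self.T[i] = v

    def val(self, i):
        return self.T[i] if self._initialized(i) else self.default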
5.2 Records and pointers
A record is a data structure that groups a fixed number of items, often of different types, into fields that are accessed by name. If, for example, class is declared to be an array of 50 records of type person, each with fields such as name and age, then the array class holds 50 records. The attributes of the seventh member of
the class are referred to as class[7]. name, class[7]. age, and so on. As with arrays,
provided a record holds only fixed-length items, the address of any particular item
can be calculated in constant time, so consulting or modifying the value of a field
may be considered as an elementary operation.
A declaration such as boss : ↑ person says that boss is a pointer to a record whose type is person. Such a record can be
created dynamically as the program runs, and boss set to point to it.
Now boss↑ (note that here the arrow follows the name) means "the record that boss
points to". To refer to the fields of this record, we use boss↑.name, boss↑.age, and
so on. If a pointer has the special value nil, then it is not currently pointing to any
record.
5.3 Lists
A list is a collection of items of information arranged in a certain order. Unlike
arrays and records, the number of items in a list is generally not fixed, nor is it
usually bounded in advance. The corresponding data structure should allow us
to determine, for example, which is the first item in the structure, which is the
last, and which are the predecessor and the successor (if they exist) of any given
item. On a machine, the storage corresponding to any given item of information
is often called a node. Besides the information in question, a node may contain one
or more pointers. Such a structure is frequently represented graphically by boxes
and arrows, as in Figure 5.2. The information attached to a node is shown inside
the corresponding box, and the arrows show links from a node to its successor.
In the simplest implementation, an array along with a counter of the kind we used in Section 5.1 will serve: the items of a list occupy the slots value[1] to value[counter], and the order of the
items is the same as the order of their indexes in the array. Using this implemen-
tation, we can find the first and the last items of the list rapidly, as we can the
predecessor and the successor of a given item. On the other hand, as we saw in
Section 5.1, inserting a new item or deleting one of the existing items requires a
worst-case number of operations in the order of the current size of the list. It was
noted there, however, that this implementation is particularly efficient for the im-
portant structure known as the stack; and a stack can be considered as a kind of
list where addition and deletion of items are allowed only at one designated end
of the list. Despite this, such an implementation of a stack may present the ma-
jor disadvantage of requiring that all the storage potentially required be reserved
throughout the life of the program.
On the other hand, if pointers are used to implement a list structure, the nodes
are usually records with a form similar to the following:
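A minimal Python counterpart of such a node, holding an information field and a pointer to its successor, might look like this (the field names are illustrative, not the book's).

class ListNode:
    """One node of a singly linked list: a value plus a pointer to the
    successor, or None at the end of the list."""

    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Building the short list 1 -> 2 -> 3:
head = ListNode(1, ListNode(2, ListNode(3)))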
5.4 Graphs
Intuitively, a graph is a set of nodes joined by a set of connections. In a directed graph the connections, often called arrows, go from one node to another in a definite direction; in an undirected graph the nodes are simply joined by lines with no direction
indicated, also called edges. In both directed and undirected graphs, sequences
of edges may form paths and cycles. A graph is connected if you can get from any
node to any other by following a sequence of edges; in the case of a directed graph,
you are allowed to go the wrong way along an arrow. A directed graph is strongly
connected if you can get from any node to any other by following a sequence of
edges, but this time respecting the direction of the arrows.
There are never more than two arrows joining any two given nodes of a directed
graph, and if there are two arrows, they must go in opposite directions; there is
never more than one line joining any two given nodes of an undirected graph.
Formally, a graph is therefore a pair G = (N, A), where N is a set of nodes and A is
a set of edges. An edge from node a to node b of a directed graph is denoted by
the ordered pair (a, b), whereas an edge joining nodes a and b in an undirected
graph is denoted by the set {a, b}. (Remember that a set is an unordered collection of
elements; see Section 1.4.2.) For example, Figure 5.3 gives an informal representation
of such a graph G = (N, A).
One natural way to represent a graph on a computer is the adjgraph type of representation (see Problem 5.6), which includes a Boolean matrix adjacent with one row and one column for each node of the graph.
If the graph includes an edge from node i to node j, then adjacent[i, j] = true;
otherwise adjacent[i, j]= false. In the case of an undirected graph, the matrix is
necessarily symmetric.
With this representation it is easy to see whether or not there is an edge between
two given nodes: to look up a value in an array takes constant time. On the other
hand, should we wish to examine all the nodes connected to some given node, we
have to scan a complete row of the matrix, in the case of an undirected graph, or
both a complete row and a complete column, in the case of a directed graph. This
takes a time in O (nbnodes), the number of nodes in the graph, independent of the
number of edges that enter or leave this particular node. The space required to
represent a graph in this fashion is quadratic in the number of nodes.
A second possible representation is the following.
Here we attach to each node i a list of its neighbours, that is, of those nodes j
such that an edge from i to j (in the case of a directed graph), or between i and j
(in the case of an undirected graph), exists. If the number of edges in the graph is
small, this representation uses less storage than the one given previously. It may
also be possible in this case to examine all the neighbours of a given node in less
than nbnodes operations on the average. On the other hand, determining whether
a direct connection exists between two given nodes i and j requires us to scan the
list of neighbours of node i (and possibly of node j too, in the case of a directed
graph), which is less efficient than looking up a Boolean value in an array.
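The two representations can be sketched in Python as follows, in the spirit of the adjgraph and lisgraph types mentioned in Problem 5.6; nodes are numbered from 0 and the function names are illustrative assumptions of this sketch.

def adjacency_matrix(nbnodes, edges, directed=True):
    """adjgraph-style representation: adjacent[i][j] is True iff there is an
    edge from i to j.  Space is quadratic in the number of nodes."""
    adjacent = [[False] * nbnodes for _ in range(nbnodes)]
    for (i, j) in edges:
        adjacent[i][j] = True
        if not directed:
            adjacent[j][i] = True
    return adjacent

def adjacency_lists(nbnodes, edges, directed=True):
    """lisgraph-style representation: neighbours[i] lists the nodes j such
    that there is an edge from i to j.  Space is proportional to the number
    of nodes plus the number of edges."""
    neighbours = [[] for _ in range(nbnodes)]
    for (i, j) in edges:
        neighbours[i].append(j)
        if not directed:
            neighbours[j].append(i)
    return neighbours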
5.5 Trees
A tree (strictly speaking, a free tree) is an acyclic, connected, undirected graph.
Equivalently, a tree may be defined as an undirected graph in which there exists
exactly one path between any given pair of nodes. Since a tree is a kind of graph,
the same representations used to implement graphs can be used to implement
trees. Figure 5.4(a) shows two trees, each of which has four nodes. You can easily
verify that these are the only distinct trees with four nodes; see Problem 5.7. Trees
have a number of simple properties, of which the following are perhaps the most
important:
• A tree with n nodes has exactly n - 1 edges.
• If a single edge is added to a tree, then the resulting graph contains exactly one
cycle.
• If a single edge is removed from a tree, then the resulting graph is no longer
connected.
In this book we shall most often be concerned with rooted trees. These are trees
in which one node, called the root, is special. When drawing a rooted tree, it is
customary to put the root at the top, like a family tree, with the other edges coming
down from it. Figure 5.4(b) illustrates four different rooted trees, each with four
nodes. Again, you can easily verify that these are the only rooted trees with four
nodes that exist. When there is no danger of confusion, we shall use the simple
term "tree" instead of the more correct "rooted tree", since almost all our examples
are of this kind.
Figure 5.4. (a) Trees, and (b) Rooted trees with 4 nodes
Extending the analogy with a family tree, it is customary to use such terms as
"parent" and "child" to describe the relationship between adjacent nodes. Thus
in Figure 5.5, alpha, the root of the tree, is the parent of beta and gamma; beta is the
parent of delta, epsilon and zeta, and the child of alpha; while epsilon and zeta are the
siblings of delta. An ancestor of a node is either the node itself (this is not the same
as the everyday definition), or its parent, its parent's parent, and so on. Thus both
alpha and zeta are ancestors of zeta. A descendant of a node is defined analogously,
again including the node itself.
A leaf of a rooted tree is a node with no children; the other nodes are called internal
nodes. Although nothing in the definition indicates this, the branches of a rooted
tree are often considered to be ordered from left to right: in the previous example
beta is situated to the left of gamma, and-by analogy with a family tree once again-
delta is called the eldest sibling of epsilon and zeta. The two trees in Figure 5.6 may
therefore be considered distinct.
On a computer, any rooted tree may be represented using nodes of the following
type.
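A Python sketch of a node of this kind, with a value and pointers to its eldest child and to its next sibling (compare Problems 5.9 and 5.10), might look like this; the field names are illustrative.

class TreeNode1:
    """Rooted-tree node in the eldest-child / next-sibling representation:
    every node uses the same three fields, however many children it has."""

    def __init__(self, value):
        self.value = value
        self.eldest_child = None   # leftmost child, or None if the node is a leaf
        self.next_sibling = None   # next sibling to the right, or None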
The rooted tree shown in Figure 5.5 would be represented as in Figure 5.7, where
the arrows show the direction of the pointers used in the computer representation,
not the direction of edges in the tree (which is, of course, an undirected graph).
We emphasize that this representation can be used for any rooted tree; it has the
advantage that all the nodes can be represented using the same record structure,
no matter how many children or siblings they have. However many operations
are inefficient using this minimal representation: it is not obvious how to find the
parent of a given node, for example (but see Problem 5.10).
Another representation suitable for any rooted tree uses nodes of the type
where now each node contains only a single pointer leading to its parent. This
representation is about as economical with storage space as one can hope to be, but
it is inefficient unless all the operations on the tree involve starting from a node and
going up, never down. (For an application where this is exactly what we need, see
Section 5.9.) Moreover it does not represent the order of siblings.
A suitable representation for a particular application can usually be designed
by starting from one of these general representations and adding supplementary
pointers, for example to the parent or to the eldest sibling of a given node. In this
way we can speed up the operations we want to perform efficiently at the price of
an increase in the storage needed.
We shall often have occasion to use binary trees. In such a tree, each node can
have 0, 1, or 2 children. In fact, we almost always assume that a node has two
pointers, one to its left and one to its right, either of which can be nil. When we
do this, although the metaphor becomes somewhat strained, we naturally tend to
talk about the left child and the right child, and the position occupied by a child
is significant: a node with a left child but no right child can never be the same as
a node with a right child but no left child. For instance, the two binary trees in
Figure 5.8 are not the same: in the first case b is the left child of a and the right
child is missing, whereas in the second case b is the right child of a and the left
child is missing.
If each node of a rooted tree can have no more than k children, we say it is a k-
ary tree. There are several ways of representing a k-ary tree on a computer. One
obvious representation uses nodes of the following type.
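A Python sketch of such a k-ary node, holding a value together with an array of k pointers to its children, might be the following (names are illustrative).

class KaryNode:
    """One node of a k-ary tree: a value plus an array of k child pointers,
    with None wherever a child is absent."""

    def __init__(self, value, k):
        self.value = value
        self.child = [None] * k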
One important use of binary trees is the search tree, in which the value stored at each node is greater than the values stored in its left subtree and less than those stored in its right subtree. This structure is interesting because, as
the name implies, it allows efficient searches for values in the tree. In the example,
although the tree contains 7 items, we can find 27, say, with only 3 comparisons.
The first, with the value 20 stored at the root, tells us that 27 is in the right subtree
(if it is anywhere); the second, with the value 34 stored at the root of the right
subtree, tells us to look down to the left; and the third finds the value we seek.
The search procedure sketched above can be described more formally as follows.
function search(x, r)
{The pointer r points to the root of a search tree.
The function searches for the value x in this tree
and returns a pointer to the node containing x.
If x is missing, the function returns nil.}
if r = nil then {x is not in the tree}
return nil
else if x = r↑.value then return r
else if x < r↑.value then return search(x, r↑.left-child)
else return search(x, r↑.right-child)
Here we suppose that the search tree is composed of nodes of the type binary-node
defined above. For efficiency it may be better to rewrite the algorithm avoiding the
recursive calls, although some compilers automatically remove this so-called tail
recursion. Problem 5.11 invites you to do this.
It is simple to update a search tree, that is, to delete a node or to add a new value,
without destroying the search tree property. However, if this is done carelessly, the
resulting tree can become unbalanced. By this we mean that many of the nodes
in the tree have only one child, not two, so its branches become long and stringy.
When this happens, searching the tree is no longer efficient. In the worst case,
every node in the tree may have exactly one child, except for a single leaf that
has no children. With such an unbalanced tree, finding an item in a tree with n
elements may involve comparing it with the contents of all n nodes.
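To make this concrete, here is a Python sketch of a search-tree node together with the recursive search and a naive insertion; the insertion routine illustrates the careless updating just mentioned (inserting already-sorted values produces exactly the long, stringy trees described above) and is not presented as the book's algorithm.

class BinaryNode:
    """A search-tree node: a value plus pointers to the left and right children."""

    def __init__(self, value):
        self.value = value
        self.left_child = None
        self.right_child = None

def search(x, r):
    """Return the node of the tree rooted at r that contains x, or None."""
    if r is None:
        return None
    if x == r.value:
        return r
    if x < r.value:
        return search(x, r.left_child)
    return search(x, r.right_child)

def insert(x, r):
    """Naive insertion preserving the search-tree property; duplicates are
    simply ignored in this sketch.  Returns the (possibly new) root."""
    if r is None:
        return BinaryNode(x)
    if x < r.value:
        r.left_child = insert(x, r.left_child)
    elif x > r.value:
        r.right_child = insert(x, r.right_child)
    return r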
A variety of methods are available to keep the tree balanced, and hence to
guarantee that such operations as searches or the addition and deletion of nodes
take a time in 0 (log n) in the worst case, where n is the number of nodes in the tree.
These methods may also allow the efficient implementation of several additional
operations. Among the older techniques are the use of AVL trees and 2-3 trees; more
recent suggestions include red-black trees and splay trees. Since these concepts are
not used in the rest of the book, here we only mention their existence.
The height of a node is the number of edges on the longest path from that node down to a leaf, its depth is the number of edges on the path from the root down to the node, and its level is the height of the root minus its depth. For example, Figure 5.10 gives the height, depth and level for each node of the tree
illustrated in Figure 5.5. Informally, if the tree is drawn with successive generations
of nodes in neat layers, then the depth of a node is found by numbering the layers
downwards from 0 at the root; the level of a node is found by numbering the layers
upwards from 0 at the bottom; only the height is a little more complicated.
Finally we define the height of the tree to be the height of its root; this is also the
depth of the deepest leaf and the level of the root.
Collisions are likely unless N ≫ m², where m is the number of different indexes actually used; see Prob-
lem 5.14. Many solutions for this difficulty have been proposed. The simplest is list
hashing or chaining. Each entry of the array A[0..N - 1] is of type list: A[i] con-
tains the list of all indexes that hash to value i, together with their relevant in-
formation. Figure 5.11 illustrates the situation after the following four requests to
associative table T.
T["Laurel"]- 3
T["Chaplin"]- 1
T["Hardy"]- 4
Tf"Keaton"]- 1
[Figure 5.11. The lists in the hash table after these four requests.]
The load factor of the table is m/N, where m is the number of distinct indexes used
in the table and N is the size of the array used to implement it. If we suppose that
every index, every value stored in the table and every pointer occupy a constant
amount of space, the table takes space in 0 (N + m) and the average length of the
lists is equal to the load factor. Thus increasing N reduces the average list length
but increases the space occupied by the table. If the load factor is kept between 1/2
and 1, the table occupies a space in Θ(m), which is optimal up to a small constant
factor, and the average list length is less than 1, which is likely to imply efficient
access to the table. It is tempting to improve the scheme by replacing the N collision
lists by balanced trees, but this is not worthwhile if the load factor is kept small,
unless it is essential to improve the worst-case performance.
The load factor can be kept small by rehashing. With the compiler application in
mind, for instance, the initial value of N is chosen so we expect small programs to
use less than N different identifiers. We allow the load factor to be smaller than 1/2
when the number of identifiers is small. Whenever more than N different identifiers
are encountered, causing the load factor to exceed 1, it is time to double the size
of the array used to implement the hash table. At that point, the hash function
must be changed to double its range and every entry already in the table must be
rehashed to its new position in a list of the larger array. Rehashing is expensive,
but so infrequent that it does not cause a dramatic increase in the amortized time
required per access to the table. Rehashing is repeated each time the load factor
exceeds 1; after rehashing the load factor drops back to 1/2.
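An associative table using chaining and rehashing along these lines can be sketched in Python as follows; Python's built-in hash function stands in for the hash function, and the initial number of buckets is an arbitrary choice of this sketch.

class ChainedTable:
    """Hash table with chaining.  The array of buckets is doubled, and every
    entry rehashed, whenever the load factor m/N exceeds 1."""

    def __init__(self, initial_buckets=8):
        self.buckets = [[] for _ in range(initial_buckets)]
        self.m = 0                                    # number of distinct indexes stored

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def __setitem__(self, key, value):
        bucket = self._bucket(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value                       # index already present
                return
        bucket.append([key, value])
        self.m += 1
        if self.m > len(self.buckets):                # load factor exceeds 1: rehash
            self._rehash(2 * len(self.buckets))

    def __getitem__(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

    def _rehash(self, new_size):
        old = self.buckets
        self.buckets = [[] for _ in range(new_size)]
        for bucket in old:
            for k, v in bucket:
                self.buckets[hash(k) % new_size].append([k, v])

# T = ChainedTable(); T["Laurel"] = 3; T["Chaplin"] = 1; T["Hardy"] = 4; T["Keaton"] = 1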
Unfortunately, a small average list length does not guarantee a small average
access time. The problem is that the longer a list is, the more likely it is that one of
its elements will be accessed. Thus bad cases are more likely to happen than good
ones. In the extreme scenario, there could be one list of length N and N - 1 lists of
length zero. Even though the average list length is 1, the situation is no better than
when we used the simple table list approach. If the table is used in a compiler, this
would occur if all the identifiers of a program happened to hash to the same value.
Although this is unlikely, it cannot be ruled out. Nevertheless, it can be proved
that each access takes constant expected time in the amortized sense, provided
rehashing is performed each time the load factor exceeds 1 and provided we make
the unnatural assumption that every possible identifier is equally likely to be used.
In practice, hashing works well most of the time even though identifiers are not
chosen at random. Moreover, we shall see in Section 10.7.3 how to remove this
assumption about the probability distribution of the instances to be handled and
still have provably good expected performance.
5.7 Heaps
A heap is a special kind of rooted tree that can be implemented efficiently in an array
without any explicit pointers. This interesting structure lends itself to numerous
applications, including a remarkable sorting technique called heapsort, presented
later in this section. It can also be used for the efficient representation of certain
dynamic priority lists, such as the event list in a simulation or the list of tasks to be
scheduled by an operating system.
A binary tree is essentially complete if each internal node, with the possible
exception of one special node, has exactly two children. The special node, if there
is one, is situated on level 1; it has a left child but no right child. Moreover, either all
the leaves are on level 0, or else they are on levels 0 and 1, and no leaf on level 1 is
to the left of an internal node at the same level. Intuitively, an essentially complete
tree is one where the internal nodes are pushed up the tree as high as possible,
the internal nodes on the last level being pushed over to the left; the leaves fill the
last level containing internal nodes, if there is still any room, and then spill over
onto the left of level 0. For example, Figure 5.12 illustrates an essentially complete
binary tree containing 10 nodes. The five internal nodes occupy level 3 (the root),
level 2, and the left side of level 1; the five leaves fill the right side of level 1 and
then continue at the left of level 0.
If an essentially complete binary tree has height k, then there is one node
(the root) on level k, there are two nodes on level k - 1, and so on; there are 2^(k-1)
nodes on level 1, and at least 1 and not more than 2^k on level 0. If the tree contains n
nodes in all, counting both internal nodes and leaves, it follows that 2^k ≤ n < 2^(k+1).
Equivalently, the height of a tree containing n nodes is k = ⌊lg n⌋, a result we shall
use later.
This kind of tree can be represented in an array T by putting the nodes of
depth k, from left to right, in the positions T[2^k], T[2^k + 1], ..., T[2^(k+1) - 1], with the
possible exception of level 0, which may be incomplete. Figure 5.12 indicates which
array element corresponds to each node of the tree. Using this representation, the
parent of the node represented in T[i] is found in T[i ÷ 2] for i > 1 (the root T[1]
does not have a parent), and the children of the node represented in T[i] are found
in T[2i] and T[2i + 1], whenever they exist. The subtree whose root is in T[i] is
also easy to identify.
Now a heap is an essentially complete binary tree, each of whose nodes includes
an element of information called the value of the node, and which has the property
that the value of each internal node is greater than or equal to the values of its
children. This is called the heap property. Figure 5.13 shows an example of a heap
with 10 nodes. The underlying tree is of course the one shown in Figure 5.12, but
now we have marked each node with its value. The heap property can be easily
checked. For instance, the node whose value is 9 has two children whose values
are 5 and 2: both children have a value less than the value of their parent. This
same heap can be represented by the following array.
10 7 9 4 7 5 2 2 1 6
Since the value of each internal node is greater than or equal to the values of its
children, which in turn have values greater than or equal to the values of their
children, and so on, the heap property ensures that the value of each internal node
is greater than or equal to the values of all the nodes that lie in the subtrees below it.
In particular, the value of the root is greater than or equal to the values of all the
other nodes in the heap.
The crucial characteristic of this data structure is that the heap property can be
restored efficiently if the value of a node is modified. If the value of a node increases
to the extent that it becomes greater than the value of its parent, it suffices to
exchange these two values, and then to continue the same process upwards in
the tree if necessary until the heap property is restored. We say that the modified
value has been percolated up to its new position in the heap. (This operation is often
called sifting up, a curiously upside-down metaphor.) For example, if the value 1
in Figure 5.13 is modified so that it becomes 8, we can restore the heap property
by exchanging the 8 with its parent 4, and then exchanging it again with its new
parent 7, obtaining the result shown in Figure 5.14.
If on the contrary the value of a node is decreased so that it becomes less than the
value of at least one of its children, it suffices to exchange the modified value with
the larger of the values in the children, and then to continue this process downwards
in the tree if necessary until the heap property is restored. We say that the modified
value has been sifted down to its new position. For example, if the 10 in the root of
Figure 5.14 is modified to 3, we can restore the heap property by exchanging the 3
with its larger child, namely 9, and then exchanging it again with the larger of its
new children, namely 5. The result we obtain is shown in Figure 5.15.
The following procedures describe more formally the basic processes for manipu-
lating a heap. For clarity, they are written so as to reflect the preceding discussion
as closely as possible. If you intend to use heaps for a "real" application, we en-
courage you to figure out how to avoid the inefficiency caused by our use of the
"exchange" instruction.
procedure alter-heap(T[1..n], i, v)
{T[1..n] is a heap. The value of T[i] is set to v and the
heap property is re-established. We suppose that 1 ≤ i ≤ n.}
x ← T[i]
T[i] ← v
if v < x then sift-down(T, i)
else percolate(T, i)

procedure sift-down(T[1..n], i)
{This procedure sifts node i down so as to re-establish the heap
property in T[1..n]. We suppose that T would be a heap if T[i]
were sufficiently large. We also suppose that 1 ≤ i ≤ n.}
k ← i
repeat
    j ← k
    {find the larger child of node j}
    if 2j ≤ n and T[2j] > T[k] then k ← 2j
    if 2j < n and T[2j + 1] > T[k] then k ← 2j + 1
    exchange T[j] and T[k]
    {if j = k, then the node has arrived at its final position}
until j = k
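The percolate operation referred to above moves a value up towards the root. The following is a sketch in Python of both operations; it uses 0-based indexes, so the children of node i sit at 2i + 1 and 2i + 2 and its parent at (i - 1) // 2, instead of 2i, 2i + 1 and i ÷ 2 as in the 1-based pseudocode.

def sift_down(T, i, n=None):
    """Sift T[i] down within T[0:n] until the heap property is restored.
    Assumes T would be a heap if T[i] were sufficiently large."""
    if n is None:
        n = len(T)
    k = i
    while True:
        j = k
        # find the larger child of node j, if it is larger than T[k]
        if 2 * j + 1 < n and T[2 * j + 1] > T[k]:
            k = 2 * j + 1
        if 2 * j + 2 < n and T[2 * j + 2] > T[k]:
            k = 2 * j + 2
        if j == k:                      # the node has arrived at its final position
            return
        T[j], T[k] = T[k], T[j]

def percolate(T, i):
    """Move T[i] up towards the root until the heap property is restored."""
    while i > 0:
        parent = (i - 1) // 2
        if T[parent] >= T[i]:
            break
        T[parent], T[i] = T[i], T[parent]
        i = parent

def alter_heap(T, i, v):
    """Set T[i] to v and re-establish the heap property (cf. alter-heap above)."""
    old = T[i]
    T[i] = v
    if v < old:
        sift_down(T, i)
    else:
        percolate(T, i)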
The heap is an ideal data structure for finding the largest element of a set, removing
it, adding a new node, or modifying a node. These are exactly the operations we
need to implement dynamic priority lists efficiently: the value of a node gives the
priority of the corresponding event, the event with highest priority is always found
at the root of the heap, and the priority of an event can be changed dynamically at
any time. This is particularly useful in computer simulations and in the design of
schedulers for an operating system. Some typical procedures are illustrated below.
function find-max(T[1..n])
{Returns the largest element of the heap T[1..n]}
return T[1]

procedure insert-node(T[1..n], v)
{Adds an element whose value is v to the heap T[1..n]
and restores the heap property in T[1..n + 1]}
T[n + 1] ← v
percolate(T[1..n + 1], n + 1)
It remains to be seen how to create a heap starting from an array T[L..n] of elements
in an undefined order. The obvious solution is to start with an empty heap and to
add elements one by one.
However this approach is not particularly efficient; see Problem 5.19. There exists
a cleverer algorithm for making a heap. Suppose, for example, that our starting
point is an array of ten values, represented by the tree in Figure 5.16a. We first make each of the subtrees whose
roots are at level 1 into a heap; this is done by sifting down these roots, as illustrated
in Figure 5.16b. The subtrees at the next higher level are then transformed into
heaps, again by sifting down their roots. Figure 5.16c shows the process for the left
subtree. The other subtree at level 2 is already a heap. This results in an essentially
complete binary tree corresponding to the array:
1 10 9 7 7 5 2 2 4 6
It only remains to sift down its root to obtain the desired heap. The final process
thus goes as follows:
1 10 9 7 7 5 2 2 4 6
10 1 9 7 7 5 2 2 4 6
10 7 9 1 7 5 2 2 4 6
10 7 9 4 7 5 2 2 1 6
The tree representation of the final form of the array is shown previously as Fig-
ure 5.13.
Here is a formal description of the algorithm.
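One way to write this linear-time make-heap is sketched below in Python, again with 0-based indexes (the function name and the indexing convention are choices of this sketch): the roots of ever larger subtrees are sifted down, deepest internal nodes first.

def make_heap(T):
    """Turn the list T into a max-heap in place, in linear time."""
    n = len(T)
    for i in range(n // 2 - 1, -1, -1):    # internal nodes are T[0 .. n//2 - 1]
        k = i
        while True:                        # sift T[k] down, as in sift-down
            j = k
            if 2 * j + 1 < n and T[2 * j + 1] > T[k]:
                k = 2 * j + 1
            if 2 * j + 2 < n and T[2 * j + 2] > T[k]:
                k = 2 * j + 2
            if j == k:
                break
            T[j], T[k] = T[k], T[j]

# Applied to the array 1 10 9 7 7 5 2 2 4 6 shown above, make_heap produces
# 10 7 9 4 7 5 2 2 1 6, the heap of Figure 5.13.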
= 2^(k+2) - 2k < 4n
[Figure 5.16. (b) The level 1 subtrees are made into heaps. (c) One level 2 subtree is made into a heap (the other already is a heap).]
where we used Proposition 1.7.12 to sum the infinite series. The required work
is therefore in 0(n).
2. Let t(k) be the time needed in the worst case to build a heap of height at
most k. Assume that k ≥ 2. To construct the heap, the algorithm first transforms
each of the two subtrees attached to the root into heaps of height at most k - 1.
(The right subtree could be of height k - 2.) The algorithm then sifts the root down
a path whose length is at most k, which takes a time s(k) ∈ O(k) in the worst case.
We thus obtain the asymptotic recurrence t(k) ∈ 2t(k - 1) + O(k).
This is similar to Example 4.7.7, which yields t(k) ∈ O(2^k). But a heap containing
n elements is of height ⌊lg n⌋, hence it can be built in at most t(⌊lg n⌋) steps, which
is in O(n) because 2^⌊lg n⌋ ≤ n.
Williams invented the heap to serve as the underlying data structure for the
following sorting algorithm.
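A self-contained sketch of heapsort in Python (0-based indexes; the inner helper is local so that the sketch stands on its own) is as follows: build a max-heap, then repeatedly exchange the root with the last element of the still-unsorted part and sift the new root down.

def heapsort(T):
    """Sort the list T in place into ascending order."""
    def sift_down(i, n):
        # restore the heap property in T[0:n] by sifting T[i] down
        k = i
        while True:
            j = k
            if 2 * j + 1 < n and T[2 * j + 1] > T[k]:
                k = 2 * j + 1
            if 2 * j + 2 < n and T[2 * j + 2] > T[k]:
                k = 2 * j + 2
            if j == k:
                return
            T[j], T[k] = T[k], T[j]

    n = len(T)
    for i in range(n // 2 - 1, -1, -1):     # make-heap, in linear time
        sift_down(i, n)
    for last in range(n - 1, 0, -1):        # the for loop is executed n - 1 times
        T[0], T[last] = T[last], T[0]       # largest remaining element moves into place
        sift_down(0, last)

# For example, heapsort([3, 1, 4, 1, 5]) leaves the list as [1, 1, 3, 4, 5].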
Proof Let t(n) be the time taken to sort an array of n elements in the worst case. The
make-heap operation takes a time linear in n, and the height of the heap that is
produced is ⌊lg n⌋. The for loop is executed n - 1 times. Each time round the
loop, the "exchange" instruction takes constant time, and the sift-down operation
then sifts the root down a path whose length is at most lg n, which takes a time in
the order of log n in the worst case. Hence t(n) is in O(n) + (n - 1)O(log n) ⊆ O(n log n).
It can be shown that t(n) ∈ Θ(n log n): the exact order of t(n) is n log n,
both in the worst case and on the average, supposing all initial permutations of the
objects to be sorted to be equally likely. However this is more difficult to prove.
The basic concept of a heap can be improved in several ways. For applications
that need percolate more often than sift-down (see for example Problem 6.16), it pays
to have more than two children per internal node. This speeds up percolate (because
the heap is shallower) at the cost of slowing down operations such as sift-down that
must consider every child at each level. It is still possible to represent such heaps
in an array without explicit pointers, but a little care is needed to do it correctly;
see Problem 5.23.
For applications that tend to sift down an updated root node almost to the
bottom level, it pays to ignore temporarily the new value stored at the root, choosing
rather to treat this node as if it were empty, that is, as if it contained a value smaller
than any other value in the tree. The empty node will therefore be sifted all the
way down to a leaf. At this point, put the relevant value back into the empty leaf,
and percolate it to its proper position. The advantage of this procedure is that it
requires only one comparison at each level while the empty node is being sifted
down, rather than two with the usual procedure. (This is because we must compare
the children to each other, but we do not need to compare the greater child with its
parent.) Experiments have shown that this approach yields an improvement over
the classic heapsort algorithm.
We shall sometimes have occasion to use an inverted heap. By this we mean an
essentially complete binary tree where the value of each internal node is less than
or equal to the values of its children, and not greater than or equal as in the ordinary
heap. In an inverted heap the smallest item is at the root. All the properties of an
ordinary heap apply mutatis mutandis.
Although heaps can implement efficiently most of the operations needed to
handle dynamic priority lists, there are some operations for which they are not
suited. For example, there is no good way of searching for a particular item in a
heap. In procedures such as sift-down and percolate above, we provided the address
of the node concerned as one of the parameters of the procedure. Furthermore there
is no efficient way of merging two heaps of the kind we have described. The best
we can do is to put the contents of the two heaps side by side in an array, and then
call procedure make-heap on the combined array.
As the following section will show, it is not hard to produce mergeable heaps at
the cost of some complication in the data structures used. However in any kind of
heap searching for a particular item is inefficient.
5.8 Binomial heaps

The i-th binomial tree Bi, i ≥ 0, is defined recursively: B0 consists of a single node, and for i > 0 the tree Bi is obtained by making the root of one copy of Bi-1 into the i-th child of the root of a second copy; thus Bi contains 2^i nodes (see Problem 5.25).

[Figure 5.17. The binomial trees B0, B1, B2 and B3.]

A binomial heap is a collection of binomial trees of different sizes, each of which has the heap property, together with a pointer to the root
containing the largest element in the heap. Figure 5.18 illustrates a binomial heap
with 11 items. It is easy to see that a binomial heap containing n items comprises
not more than ⌊lg n⌋ + 1 binomial trees.
[Figure 5.18. A binomial heap containing 11 items; max points to the root holding the largest item.]
Suppose we have two binomial trees Bi of the same size but possibly containing
different values. Assume they both have the heap property. Then it is easy to
combine them into a single binomial tree Bi+1, still with the heap property. Make
the root with the smaller value into the (i + 1)-st child of the root with the larger
value, and we are done. Figure 5.19 shows how two B2's can be combined into a B3
in this way. We shall call this operation linking two binomial trees. Clearly it takes
a time in O(1).
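The linking operation can be sketched in Python as follows; the node representation, with an explicit list of children and an order field, is an assumption of this sketch rather than the book's declaration.

class BinomialTree:
    """A binomial tree with the heap property; 'order' is i for a tree Bi."""

    def __init__(self, value):
        self.value = value
        self.order = 0
        self.children = []

def link(t1, t2):
    """Link two binomial trees of the same order i into a single tree of
    order i + 1: the root with the smaller value becomes one more child of
    the root with the larger value, preserving the heap property."""
    assert t1.order == t2.order
    if t1.value < t2.value:
        t1, t2 = t2, t1
    t1.children.append(t2)
    t1.order += 1
    return t1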
Next we describe how to merge two binomial heaps H1 and H2. Each consists
of a collection of binomial trees arranged in increasing order of size. Begin by
looking at any B0's that may be present. If neither H1 nor H2 includes a B0, there is
nothing to do at this stage. If just one of H1 and H2 includes a B0, keep it to form
part of the result. Otherwise link the two B0's from the two heaps into a B1. Next
look at any B1's that may be present. We may have as many as three of them to
deal with, one from each of H1 and H2, and one "carried" from the previous stage.
If there are none, there is nothing to do at this stage. If there is just one, keep it
to form part of the result. Otherwise link any two of the B1's to form a B2; should
there be a third one, keep it to form part of the result. Next look at any B2's that
may be present, and so on.
In general, at stage i we have up to three Bi's on hand, one from each of H1
and H2, and one "carried" from the previous stage. If there are none, there is
nothing to do at this stage; if there is just one, we keep it to form part of the result;
otherwise we link any two of the Bi's to form a Bi+1, keeping the third, if there is
one, to form part of the result. If a Bi+1 is formed at this stage, it is "carried" to
the next stage. As we proceed, we join the roots of those binomial trees that are to
be kept, and we also keep track of the root with the largest value so as to set the
corresponding pointer in the result. Figure 5.20 illustrates how a binomial heap
with 6 elements and another with 7 elements might be merged to form a binomial
heap with 13 elements.
The analogy with binary addition is close. There at each stage we have up to
three 1s, one carried from the previous position, and one from each of the operands.
If there are none, the result contains 0 in this position; if there is just one, the result
contains a 1; otherwise two of the 1s generate a carry to the next position, and the
result in this position is 0 or 1 depending on whether two or three 1s were present
initially.
If the result of a merge operation is a binomial heap comprising n items, it can
be constructed in at most ⌊lg n⌋ + 1 stages. Each stage requires at most one linking
operation, plus the adjustment of a few pointers. Thus one stage can be done in a
time in 0(1), and the complete merge takes a time in 0 (log n).
Finding the largest item in a binomial heap is simply a matter of returning
the item indicated by the appropriate pointer. Clearly this can be done in a time
in 0 (1). Deleting the largest item of a binomial heap H is done as follows.
(i) Let B be the binomial tree whose root contains the largest item of H. Remove B
[Figure 5.20. A binomial heap with 6 items merged with one with 7 items yields a binomial heap with 13 items.]
By putting off some housekeeping operations until they are absolutely nec-
essary, two heaps can be merged in an amortized time in O(1). The Fibonacci heap is
another data structure that allows us to merge priority lists in constant amortized
time. In addition, the value of a node in a Fibonacci heap can be increased and the
node in question percolated to its new place in constant amortized time. We shall
see in Problem 6.17 how useful this can be. Those heaps are based on Fibonacci
trees; see Problem 5.29. The structure called a double-ended heap, or deap, allows
both the largest and the smallest member of a set to be found efficiently.
5.9 Disjoint set structures

Suppose we have N objects numbered from 1 to N, grouped into disjoint sets; each set is identified by a label, which is one of its own members, and initially each object is in a set by itself. We need to be able to find which set a given object belongs to, and to merge two distinct sets. In the simplest implementation an array set[1..N] gives, for each object, the label of the set containing it.

function find1(x)
{Finds the label of the set containing x}
return set[x]
procedure merge1(a, b)
{Merges the sets labelled a and b; we assume a ≠ b}
i ← min(a, b)
j ← max(a, b)
for k ← 1 to N do
    if set[k] = j then set[k] ← i
A sequence of n find operations therefore takes a time in Θ(n), while N - 1 merge operations take a time
in Θ(N²). If n and N are comparable, the whole sequence of operations takes a
time in Θ(n²).
Let us try to do better than this. Still using a single array, we can represent each
set as a rooted tree, where each node contains a single pointer to its parent (as for
the type treenode2 in Section 5.5). We adopt the following scheme: if set[i]= i, then
i is both the label of its set and the root of the corresponding tree; if set[i] = j ≠ i,
then j is the parent of i in some tree. The array
1 2 3 2 1 3 4 3 3 4
therefore represents the trees shown in Figure 5.21, which in turn represent the sets
{1, 5}, {2, 4, 7, 10} and {3, 6, 8, 9}. To merge two sets, we need now to change only
a single value in the array; on the other hand, it is harder to find the set to which
an object belongs.
procedure merge2(a, b)
{Merges the sets labelled a and b; we assume a ≠ b}
if a < b then set[b] ← a
else set[a] ← b
The height of the trees can be kept small if, whenever two trees are merged, the root of the tree whose height is least is made to point to the root of the other; the procedure merge3 below implements this tactic, using an array height[1..N] that records the height of each tree. With this strategy the following result holds.

Theorem 5.9.1 After an arbitrary sequence of merge operations starting from the initial situation, a tree containing k nodes has a height at most ⌊lg k⌋.

Proof The proof is by generalized mathematical induction on k, the number of nodes in the tree.
• Basis: The theorem is clearly true when k = 1, for a tree with only 1 node
has height 0, and 0 ≤ ⌊lg 1⌋.
• Induction step: Consider any k > 1. Assume the induction hypothesis that
the theorem is true for all m such that 1 ≤ m < k. A tree containing k
nodes can be obtained only by merging two smaller trees. Suppose these
two smaller trees contain respectively a and b nodes, where we may as-
sume without loss of generality that a ≤ b. Now a ≥ 1 since there is no
way of obtaining a tree with 0 nodes starting from the initial situation,
and k = a + b. It follows that a ≤ k/2 and b ≤ k - 1. Since k > 1, both
k/2 < k and k - 1 < k, and hence a < k and b < k. Let the heights of
the two smaller trees be ha and hb respectively, and let the height of the
resulting merged tree be hk. Two cases arise.
- If ha ≠ hb, then hk = max(ha, hb) ≤ max(⌊lg a⌋, ⌊lg b⌋), where we
used the induction hypothesis twice to obtain the inequality. Since
both a and b are less than k, it follows that hk ≤ ⌊lg k⌋.
- If ha = hb, then hk = ha + 1 ≤ ⌊lg a⌋ + 1, again using the induction
hypothesis. Now ⌊lg a⌋ ≤ ⌊lg(k/2)⌋ = ⌊lg(k) - 1⌋ = ⌊lg k⌋ - 1, and so
hk ≤ ⌊lg k⌋.
Thus the theorem is true when k = 1, and its truth for k = n > 1 follows from its
assumed truth for k = 1, 2, ..., n - 1. By the principle of generalized mathematical
induction, the theorem is therefore true for all k ≥ 1.
procedure merge3(a, b)
{Merges the sets labelled a and b; we assume a ≠ b}
if height[a] = height[b]
    then
        height[a] ← height[a] + 1
        set[b] ← a
    else
        if height[a] > height[b]
            then set[b] ← a
            else set[a] ← b
[Figure 5.22. A tree (a) before and (b) after path compression.]
The technique called path compression consists in traversing the path from a node to the root of its tree a second time during a find operation, making every node met along the way point directly to the root; this speeds up future find operations. Compressing a path in this way changes the shape of the tree but not the
contents of the root node. However path compression can only reduce the height
upper bound on the height of the tree; see Problem 5.31. To avoid confusion we
call this value the rank of the tree; the name of the array used in merge3 should be
changed accordingly. The find function is now as follows.
function find3(x)
{Finds the label of the set containing object x}
r ← x
while set[r] ≠ r do r ← set[r]
{r is the root of the tree}
i ← x
while i ≠ r do
    j ← set[i]
    set[i] ← r
    i ← j
return r
From now on, when we use this combination of two arrays and of the procedures find3
and merge3 to deal with disjoint sets of objects, we say we are using a disjoint set
structure; see also Problems 5.32 and 5.33 for variations on the theme.
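Such a disjoint set structure can be sketched in Python as follows, combining merging by rank with path compression. Objects are numbered from 0 here, the class and method names are illustrative, and merge accepts any two members (finding their roots first), a small deviation from merge3, which expects the labels themselves.

class DisjointSets:
    """Disjoint set structure with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))   # initially each object is in a set by itself
        self.rank = [0] * n            # upper bound on the height of each tree

    def find(self, x):
        """Return the label (root) of the set containing x, compressing the path."""
        r = x
        while self.parent[r] != r:
            r = self.parent[r]
        while x != r:                  # second traversal: point every node at the root
            nxt = self.parent[x]
            self.parent[x] = r
            x = nxt
        return r

    def merge(self, a, b):
        """Merge the sets containing a and b (no effect if they coincide)."""
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        if self.rank[a] == self.rank[b]:
            self.rank[a] += 1
            self.parent[b] = a
        elif self.rank[a] > self.rank[b]:
            self.parent[b] = a
        else:
            self.parent[a] = b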
It is not easy to analyse the time needed for an arbitrary sequence of find and
merge operations when path compression is used. In this book we content ourselves
with giving the result. First we need to define two new functions A(i, j) and α(i, j).
The function A(i, j) is a slight variant of Ackermann's function; see Problem 5.38.
It is defined for i ≥ 0 and j ≥ 1.

    A(i, j) = 2j                          if i = 0
              2                           if j = 1
              A(i - 1, A(i, j - 1))       otherwise

The facts below follow easily from the definition:

• A(1, 1) = 2 and A(1, j + 1) = A(0, A(1, j)) = 2A(1, j) for j ≥ 1. It follows that
A(1, j) = 2^j for all j.
• A(2, 1) = 2 and A(2, j + 1) = A(1, A(2, j)) = 2^A(2, j) for j ≥ 1. Therefore
A(2, 1) = 2, A(2, 2) = 2^A(2, 1) = 2^2 = 4, A(2, 3) = 2^A(2, 2) = 2^4 = 16,
A(2, 4) = 2^A(2, 3) = 2^16 = 65536, and in general A(2, j) is a tower of j 2's,
2^2^...^2 with j levels;
and so on.
It is evident that the function A grows extremely fast.
Now the function α(i, j) is defined as a kind of inverse of A:

    α(i, j) = min{k | k ≥ 1 and A(k, 4⌈i/j⌉) > lg j}.

Whereas A grows very rapidly, α grows extremely slowly. To see this, observe that
for any fixed j, α(i, j) is maximized when i ≤ j, in which case 4⌈i/j⌉ = 4, so

    α(i, j) ≤ min{k | k ≥ 1 and A(k, 4) > lg j}.

Therefore α(i, j) > 3 only when lg j is at least A(3, 4),
which is huge. Thus for all except astronomical values of j, α(i, j) ≤ 3.
With a universe of N objects and the given initial situation, consider an arbitrary
sequence of n calls of find3 and m ≤ N - 1 calls of merge3. Let c = n + m. Using
the functions above, Tarjan was able to show that such a sequence can be executed
in a time in O(c α(c, N)) in the worst case. (We continue to suppose, of course,
that each consultation or modification of an array element counts as an elementary
operation.) Since for all practical purposes we may suppose that α(c, N) ≤ 3, the
time taken by the sequence of operations is essentially linear in c. However no
known algorithm allows us to carry out the sequence in a time that is truly linear
in c.
5.10 Problems
Problem 5.1. You are to implement a stack of items of a given type. Give the nec-
essary declarations and write three procedures respectively to initialize the stack,
to add a new item, and to remove an item. Include tests in your procedures to
prevent adding or removing too many items. Also write a function that returns
the item currently at the top of the stack. Make sure this function behaves sensibly
when the stack is empty.
Problem 5.2. A queue can be represented using an array of items of the required
type, along with two pointers. One gives the index of the item at the head of
the queue (the next to leave), and the other gives the index of the item at the
end of the queue (the last to arrive). Give the necessary declarations and write
three procedures that initialize the queue, add a new item, and remove an item,
respectively. If your array has space for n items, what is the maximum number
of items that can be in the queue? How do you know when it is full, and when
it is empty? Include tests in your procedures to prevent adding or removing too
many items. Also write a function that returns the item currently at the head of the
queue. Make sure this function behaves sensibly when the queue is empty.
Hint: As items are added and removed, the queue tends to drift along the array.
When a pointer runs off the array, do not copy all the items to a new position, but
rather let the pointer wrap around to the other end of the array.
Problem 5.3. Fill in the details of the technique called virtual initialization de-
scribed in Section 5.1 and illustrated in Figure 5.1. You should write three algo-
rithms.
procedure init
{Virtually initializes T[1..n]}
procedure store(i, v)
{Sets T[i] to the value v}
function val(i)
{Returns the value of T[i] if this has been assigned;
returns a default value (such as -1) otherwise}
A call on any of these (including init!) should take constant time in the worst case.
Problem 5.4. What changes in your solution to Problem 5.3 if the index to the
array T, instead of running from 1 to n, goes from n1 to n2, say?
Problem 5.5. Without writing the detailed algorithms, sketch how to adapt virtual
initialization to a two-dimensional array.
Problem 5.6. Show in some detail how the directed graph of Figure 5.3 could be
represented on a machine using (a) the adjgraph type of representation, and (b) the
lisgraph type of representation.
Problem 5.7. Draw the three different trees with five nodes, and the six different
trees with six nodes. Repeat the problem for rooted trees. For this problem the
order of the branches of a rooted tree is immaterial. You should find nine rooted
trees with five nodes, and twenty with six nodes.
Problem 5.8. Following Problem 5.7, how many rooted trees are there with six
nodes if the order of the branches is taken into account?
Problem 5.9. Show how the four rooted trees illustrated in Figure 5.4b would be
represented using pointers from a node to its eldest child and to its next sibling.
Problem 5.10. At the cost of one extra bit of storage in records of type treenodel,
we can make it possible to find the parent of any given node. The idea is to use
the next-sibling field of the rightmost member of a set of siblings to point to their
common parent. Give the details of this approach. In a tree containing n nodes,
how much time does it take to find the parent of a given node in the worst case?
Does your answer change if you know that a node of the tree can never have more
than k children?
Problem 5.11. Rewrite the algorithm search of Section 5.5 avoiding the recursive
calls.
function init(T, x)
function val (T, x)
procedure set (T, x, y)
These determine if T[x] has been initialized, access its current value if it has one,
and set T[x] to y (either by creating entry T[x] or by changing its value if it already
exists), respectively.
Problem 5.14. Prove that even if the array used to implement a hash table is of
size N = m², where m is the number of elements to be stored in the table, the
probability of collision is significant. Assume that the hash function sends each
element to a random location in the array.
Problem 5.15. Prove that the cost of rehashing can be neglected even in the worst
case, provided we perform an amortized analysis. In other words, show that ac-
cesses to the table can put enough tokens in the bank account to pay the complete
cost of rehashing each time the load factor exceeds 1. Assume that both the choice
of a new hash function and rehashing one entry in the table take constant time.
Problem 5.16. Prove that if hashing is used to implement the symbol table of a
compiler, if the load factor is kept below 1, and if we make the unnatural assump-
tion that every possible identifier is equally likely to be used, the probability that
any identifier collides with more than t others is less than 1/t, for any integer t.
Conclude that the average time needed for a sequence of n accesses to the table is
in 0(n).
Problem 5.17. Propose strategies other than chaining for handling collisions in a
hash table.
Problem 5.18. Sketch an essentially complete binary tree with (a) 15 nodes and
(b) 16 nodes.
Problem 5.19. In Section 5.7 we saw an algorithm for making a heap (slow-make-
heap) that we described as "rather inefficient". Analyse the worst case for this
algorithm, and compare it to the linear-time algorithm make-heap.
Problem 5.20. Let T[1..12] be an array such that T[i] = i for each i ≤ 12. Exhibit
the state of the array after each of the following procedure calls. The calls are
performed one after the other, each one except the first working on the array left
by its predecessor.
make-heap(T)
alter-heap(T, 12, 10)
alter-heap(T, 1, 6)
alter-heap(T, 5, 8)
Problem 5.21. Exhibit a heap T containing n distinct values, such that the follow-
ing sequence results in a different heap.
m ← find-max(T[1..n])
delete-max(T[1..n])
insert-node(T[1..n - 1], m)
Draw the heap after each operation. You may choose n to suit yourself.
Problem 5.23. (k-ary heaps) In Section 5.7 we defined heaps in terms of an es-
sentially complete binary tree. It should be clear that the idea can be generalized
to essentially complete k-ary trees, for any k > 2. Show that we can map the
nodes of a k-ary tree containing n nodes to the elements T[0] to T[n - 1] of an
array in such a way that the parent of the node represented in T[i] is found in
T[(i - 1) ÷ k] for i > 0, and the children of the node represented in T[i] are found
in T[ik + 1], T[ik + 2],..., T[(i +1)k]. Note that for binary trees, this is not the
mapping we used in Section 5.7; there we used a mapping onto T[1..n], not onto
T[0..n - 1].
Write procedures sift-down (T, k, i) and percolate( T, k, i) for these generalized heaps.
What are the advantages and disadvantages of such generalized heaps? For an
application where they may be useful, see Problem 6.16.
Problem 5.24. For heapsort,what are the best and the worst initial arrangements of
the elements to be sorted, as far as the execution time of the algorithm is concerned?
Justify your answer.
Problem 5.25. Prove that the binomial tree Bi defined in Section 5.8 contains 2^i
nodes, of which (i choose k) are at depth k, 0 ≤ k ≤ i.
Problem 5.26. Prove that a binomial heap containing n items comprises at most
⌊lg n⌋ + 1 binomial trees, the largest of which contains 2^⌊lg n⌋ items.
Problem 5.27. Consider the algorithm for inserting a new item into a binomial
heap H given in Section 5.8. A simpler method would be to create a binomial tree
Bo as in step (i) of the algorithm, make it into a binomial heap, and merge this new
heap with H. Why did we prefer the more complicated algorithm?
Problem 5.28. Using the accounting trick described in Section 5.8, what is the
amortized cost of deleting the largest item from a binomial heap?
Problem 5.29. (Fibonacci trees) It is convenient to define the Fibonacci tree F-1 to
consist of a single node. Then the i-th Fibonacci tree Fi, i ≥ 0, is defined recursively
to consist of a root node with i children, where the j-th child, 1 ≤ j ≤ i, is in turn
the root of a Fibonacci tree Fj-2. Figure 5.23 shows F0 to F5. Prove that the Fibonacci
tree Fi, i ≥ 0, has fi+2 nodes, where fk is the k-th member of the Fibonacci sequence;
see Section 1.6.4.
[Figure 5.23. The Fibonacci trees F0 to F5.]
Problem 5.31. When using path compression, we are content to use an array rank
that gives us an upper bound on the height of a tree, rather than the exact height.
Estimate how much time it would take to recompute the exact height of the tree
after each path compression.
Problem 5.32. In Section 5.9 we discussed merging two trees so the tree whose
height is least becomes a subtree of the other. A second possible tactic is to ensure
that the tree containing the smaller number of nodes always becomes a subtree of
the other. Path compression does not change the number of nodes in a tree, so
it is easy to store this value exactly, whereas we could not keep track efficiently
of the exact height of a tree after path compression. Write a procedure merge4 to
implement this tactic, and prove a result corresponding to Theorem 5.9.1.
Problem 5.33. The root of a tree has no parent, and we never use the value of rank
for a node that is not a root. Use this to implement a disjoint set structure with just
one array of length N rather than two (set and rank).
Problem 5.34. Let A be the variant of Ackermann's function defined in Section 5.9.
Show that A(i, 2) = 4 for all i.
Problem 5.35. Let A be the variant of Ackermann's function defined in Section 5.9.
Show that A(i + 1, j) ≥ A(i, j) and A(i, j + 1) ≥ A(i, j) for all i and j.
Problem 5.36. Let A be the variant of Ackermann's function defined in Section 5.9,
and define a(i, n) by

    a(i, n) = min{j | A(i, j) > lg n}.

Problem 5.37. Define lg* n to be the number of times the function lg must be applied, starting from n, before the result is no greater than 1. The function lg* n increases very slowly: lg* n is 4 or less for every n ≤ 65536.
Let a(i, n) be the function defined in the previous problem. Show that a(2, n) is
in O(lg* n).
Problem 5.38. Ackermann's function (the genuine thing this time) is defined by

    A(i, j) = j + 1                       if i = 0
              A(i - 1, 1)                 if i > 0, j = 0
              A(i - 1, A(i, j - 1))       otherwise.
Greedy Algorithms
If greedy algorithms are the first family of algorithms that we examine in detail in
this book, the reason is simple: they are usually the most straightforward. As the
name suggests, they are shortsighted in their approach, taking decisions on the
basis of information immediately at hand without worrying about the effect these
decisions may have in the future. Thus they are easy to invent, easy to implement,
and-when they work-efficient. However, since the world is rarely that simple,
many problems cannot be solved correctly by such a crude approach.
Greedy algorithms are typically used to solve optimization problems. Ex-
amples later in this chapter include finding the shortest route from one node to
another through a network, or finding the best order to execute a set of jobs on a
computer. In such a context a greedy algorithm works by choosing the arc, or the
job, that seems most promising at any instant; it never reconsiders this decision,
whatever situation may arise later. There is no need to evaluate alternatives, nor
to employ elaborate book-keeping procedures allowing previous decisions to be
undone. We begin the chapter with an everyday example where this tactic works
well.
6.1 Making change (1)
Suppose we live in a country where the following coins are available: dollars
(100 cents), quarters (25 cents), dimes (10 cents), nickels (5 cents) and pennies
(1 cent). Our problem is to devise an algorithm for paying a given amount to a
customer using the smallest possible number of coins. For instance, if we must
pay $2.89 (289 cents), the best solution is to give the customer 10 coins: 2 dollars,
3 quarters, 1 dime and 4 pennies. Most of us solve this kind of problem every
day without thinking twice, unconsciously using an obvious greedy algorithm:
starting with nothing, at every stage we add to the coins already chosen a coin of
the largest value available that does not take us past the amount to be paid.
Greedy algorithms and the problems they can solve are characterized by most of the following features.
• There is a set of candidates from which the solution is built up.
• A solution function checks whether a particular set of candidates provides a solution to the problem.
• A feasibility function checks whether a set of candidates can still be extended to a solution.
• An objective function gives the value of a solution; this is the value we are trying to optimize.
• Yet another function, the selection function, indicates at any time which of the
remaining candidates, that have neither been chosen nor rejected, is the most
promising.
To solve our problem, we look for a set of candidates that constitutes a solution,
and that optimizes (minimizes or maximizes, as the case may be) the value of the
objective function. A greedy algorithm proceeds step by step. Initially the set
of chosen candidates is empty. Then at each step we consider adding to this set
the best remaining untried candidate, our choice being guided by the selection
function. If the enlarged set of chosen candidates would no longer be feasible, we
reject the candidate we are currently considering. In this case the candidate that
has been tried and rejected is never considered again. However if the enlarged set
is still feasible, then we add the current candidate to the set of chosen candidates,
where it will stay from now on. Each time we enlarge the set of chosen candidates,
we check whether it now constitutes a solution to our problem. When a greedy
algorithm works correctly, the first solution found in this way is always optimal.
It is clear why such algorithms are called "greedy": at every step, the procedure
chooses the best morsel it can swallow, without worrying about the future. It never
changes its mind: once a candidate is included in the solution, it is there for good;
once a candidate is excluded from the solution, it is never reconsidered.
The selection function is usually related to the objective function. For example,
if we are trying to maximize our profit, we are likely to choose whichever remaining
candidate has the highest individual value. If we are trying to minimize cost, then
we may select the cheapest remaining candidate, and so on. However, we shall
see that at times there may be several plausible selection functions, so we have to
choose the right one if we want our algorithm to work properly.
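The step-by-step procedure just described can be sketched as a generic schema in Python; the four function parameters are supplied by the caller and their names are illustrative assumptions of this sketch.

def greedy(candidates, is_solution, is_feasible, select):
    """Generic greedy schema: is_solution(S) tests whether the set S is a
    complete solution, is_feasible(S) whether S can still be extended to one,
    and select(C) returns the most promising remaining candidate."""
    S = set()                       # the set of chosen candidates, initially empty
    C = set(candidates)             # the candidates not yet considered
    while C and not is_solution(S):
        x = select(C)               # best remaining candidate
        C.remove(x)                 # once rejected or chosen, never reconsidered
        if is_feasible(S | {x}):
            S.add(x)                # x stays in the solution for good
    return S if is_solution(S) else None   # None: no solution was found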
Returning for a moment to the example of making change, here is one way in
which the general features of greedy algorithms can be equated to the particular
features of this problem.
• The candidates are a set of coins, representing in our example 100, 25, 10, 5 and
1 units, with sufficient coins of each value that we never run out. (However
the set of candidates must be finite.)
• The solution function checks whether the value of the coins chosen so far is
exactly the amount to be paid.
• A set of coins is feasible if its total value does not exceed the amount to be paid.
• The selection function chooses the highest-valued coin remaining in the set of
candidates.
• The objective function counts the number of coins used in the solution.
It is obviously more efficient to reject all the remaining 100-unit coins (say) at
once when the remaining amount to be represented falls below this value. Using
integer division to calculate how many of a particular value of coin to choose is also
more efficient than proceeding by successive subtraction. If either of these tactics
is adopted, then we can relax the condition that the available set of coins must be
finite.
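
By way of illustration, here is a minimal Python sketch of this greedy procedure using integer division; the function name and the default list of coin values are illustrative assumptions rather than part of the text.

def greedy_change(amount, coins=(100, 25, 10, 5, 1)):
    """Pay `amount` (in cents) using as few coins as possible, greedily.
    Assumes an unlimited supply of each denomination and a 1-unit coin,
    so a solution always exists."""
    chosen = {}
    for coin in sorted(coins, reverse=True):      # largest value first
        count, amount = divmod(amount, coin)      # integer division, not repeated subtraction
        if count:
            chosen[coin] = count
    return chosen

print(greedy_change(289))   # {100: 2, 25: 3, 10: 1, 1: 4}: ten coins in all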
6.3 Graphs: Minimum spanning trees

Let G = (N, A) be a connected, undirected graph where N is the set of nodes and
A is the set of edges. Each edge has a given nonnegative length. The problem is to
find a subset T of the edges of G such that all the nodes remain connected when
only the edges in T are used, and the sum of the lengths of the edges in T is as
small as possible. Since G is connected, at least one solution must exist. If G has
edges of length 0, then there may exist several solutions whose total length is the
same but that involve different numbers of edges. In this case, given two solutions
with equal total length, we prefer the one with the fewest edges. Even with this proviso,
the problem may have several different solutions of equal value. Instead of talking
about length, we can associate a cost to each edge. The problem is then to find a
subset T of the edges whose total cost is as small as possible. Obviously this change
of terminology does not affect the way we solve the problem.
Let G' = (N, T) be the partial graph formed by the nodes of G and the edges
in T, and suppose there are n nodes in N. A connected graph with n nodes must
have at least n - 1 edges, so this is the minimum number of edges there can be in T.
On the other hand, a graph with n nodes and more than n - 1 edges contains at
least one cycle; see Problem 6.7. Hence if G' is connected and T has more than n -1
edges, we can remove at least one of these without disconnecting G', provided we
choose an edge that is part of a cycle. This will either decrease the total length of
the edges in T, or else leave the total length the same (if we have removed an edge
with length 0) while decreasing the number of edges in T. In either case the new
solution is preferable to the old one. Thus a set T with n or more edges cannot be
optimal. It follows that T must have exactly n -1 edges, and since G' is connected,
it must therefore be a tree.
The graph G' is called a minimum spanning tree for the graph G. This problem
has many applications. For instance, suppose the nodes of G represent towns, and
let the cost of an edge {a, b} be the cost of laying a telephone line from a to b. Then a
minimum spanning tree of G corresponds to the cheapest possible network serving
all the towns in question, provided only direct links between towns can be used
(in other words, provided we are not allowed to build telephone exchanges out in
the country between the towns). Relaxing this condition is equivalent to allowing
the addition of extra, auxiliary nodes to G. This may allow cheaper solutions to be
obtained: see Problem 6.8.
At first sight, at least two lines of attack seem possible if we hope to find a
greedy algorithm for this problem. Clearly our set of candidates must be the set A
of edges in G. One possible tactic is to start with an empty set T, and to select at
every stage the shortest edge that has not yet been chosen or rejected, regardless
of where this edge is situated in G. Another line of attack involves choosing a
node and building a tree from there, selecting at every stage the shortest available
edge that can extend the tree to an additional node. Unusually, for this particular
problem both approaches work! Before presenting the algorithms, we show how
the general schema of a greedy algorithm applies in this case, and present a lemma
for later use.
• The objective function to minimize is the total length of the edges in the solution.
Lemma 6.3.1 Let G = (N, A) be a connected undirected graph where the length
of each edge is given. Let B ⊂ N be a strict subset of the nodes of G. Let T ⊆ A be
a promising set of edges such that no edge in T leaves B. Let v be the shortest edge
that leaves B (or one of the shortest if ties exist). Then T ∪ {v} is promising.
Proof Let U be a minimum spanning tree of G such that T ⊆ U. Such a U must exist since
T is promising by assumption. If v ∈ U, there is nothing to prove. Otherwise, when
we add the edge v to U, we create exactly one cycle. (This is one of the properties
of a tree: see Section 5.5.) In this cycle, since v leaves B, there necessarily exists
at least one other edge, u say, that also leaves B, or the cycle could not close; see
Figure 6.1. If we now remove u, the cycle disappears and we obtain a new tree V
that spans G. However the length of v is by definition no greater than the length of
u, and therefore the total length of the edges in V does not exceed the total length of
the edges in U. Therefore V is also a minimum spanning tree of G, and it includes
v. To complete the proof, it remains to remark that T ⊆ V because the edge u that
was removed leaves B, and therefore it could not have been an edge of T. ∎
[Figure 6.1: the edge u ∈ U and the edge v of minimal length both leave B ⊂ N]
6.3.1 Kruskal's algorithm

The set T of edges is initially empty. As the algorithm progresses, edges are added
to T. So long as it has not found a solution, the partial graph formed by the nodes of
G and the edges in T consists of several connected components. (Initially when T is
empty, each node of G forms a distinct trivial connected component.) The elements
of T included in a given connected component form a minimum spanning tree for
the nodes in this component. At the end of the algorithm only one connected
component remains, so T is then a minimum spanning tree for all the nodes of G.
To build bigger and bigger connected components, we examine the edges of
G in order of increasing length. If an edge joins two nodes in different connected
components, we add it to T. Consequently, the two connected components now
form only one component. Otherwise the edge is rejected: it joins two nodes in the
same connected component, and therefore cannot be added to T without forming
a cycle (because the edges in T form a tree for each component). The algorithm
stops when only one connected component remains.
To illustrate how this algorithm works, consider the graph in Figure 6.2. In in-
creasing order of length the edges are: {1, 2}, {2, 3}, {4, 5}, {6, 7}, {1, 4}, {2, 5},
{4, 7}, {3, 5}, {2, 4}, {3, 6}, {5, 7} and {5, 6}. The algorithm proceeds as follows.
When the algorithm stops, T contains the chosen edges {1, 2}, {2, 3}, {4, 5},
{6, 7}, {1, 4} and {4, 7}. This minimum spanning tree is shown by the heavy lines
in Figure 6.2; its total length is 17.
Proof The proof is by mathematical induction on the number of edges in the set T. We
shall show that if T is promising at any stage of the algorithm, then it is still
promising when an extra edge has been added. When the algorithm stops, T gives
a solution to our problem; since it is also promising, this solution is optimal.
Hence the conditions of Lemma 6.3.1 are fulfilled, and we conclude that the set
T ∪ {e} is also promising.
This completes the proof by mathematical induction that the set T is promising at
every stage of the algorithm, and hence that when the algorithm stops, T gives not
merely a solution to our problem, but an optimal solution. ∎
To handle the connected components efficiently, we use disjoint set structures; see
Section 5.9. For this algorithm it is preferable to represent
the graph as a vector of edges with their associated lengths rather than as a matrix
of distances; see Problem 6.9. Here is the algorithm.
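
A minimal Python sketch of the procedure just described is given below, with a rudimentary disjoint set (union-find) structure standing in for the structures of Section 5.9; the representation of the graph as a list of (length, u, v) triples and all the names are illustrative assumptions.

def kruskal(n, edges):
    """n: number of nodes, labelled 0 .. n-1.
    edges: list of (length, u, v) triples describing an undirected graph.
    Returns the edges of a minimum spanning tree as (u, v, length) triples."""
    parent = list(range(n))                 # disjoint set structure: one set per node

    def find(x):                            # root of the component containing x
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    T = []
    for length, u, v in sorted(edges):      # edges in order of increasing length
        ru, rv = find(u), find(v)
        if ru != rv:                        # u and v lie in different components
            parent[ru] = rv                 # merge the two components
            T.append((u, v, length))
            if len(T) == n - 1:             # a spanning tree has exactly n - 1 edges
                break
    return T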
We can evaluate the execution time of the algorithm as follows. On a graph with
n nodes and a edges, the number of operations is in
• O(2a α(2a, n)) for all the find and merge operations, where α is the slow-
growing function defined in Section 5.9 (this follows from the results in Sec-
tion 5.9 since there are at most 2a find operations and n − 1 merge operations
on a universe containing n elements); and
• Θ(a log a) = Θ(a log n) to sort the edges by increasing length, since
n − 1 ≤ a ≤ n(n − 1)/2 for a connected graph.
We conclude that the total time for the algorithm is in O(a log n) because
O(α(2a, n)) ⊆ O(log n). Although this does not change the worst-case analysis,
it is preferable to keep the edges in an inverted heap (see Section 5.7): thus the
shortest edge is at the root of the heap. This allows the initialization to be carried
out in a time in Θ(a), although each search for a minimum in the repeat loop now
takes a time in Θ(log a) = Θ(log n). This is particularly advantageous if the min-
imum spanning tree is found at a moment when a considerable number of edges
remain to be tried. In such cases, the original algorithm wastes time sorting these
useless edges.
6.3.2 Prim's algorithm

To illustrate the algorithm, consider once again the graph in Figure 6.2. We arbi-
trarily choose node 1 as the starting node. Now the algorithm might progress as
follows.
Step {u,v} B
Initialization - {1}
1 {1,2} {1,2}
2 {2,3} {1,2,3}
3 {1,4} {1,2,3,4}
4 {4,5} {1,2,3,4,5}
5 {4,7} {1,2,3,4,5,7}
6 {7,6} {1,2,3,4,5,6,7}
When the algorithm stops, T contains the chosen edges {1,2}, {2,3}, {1,4}, {4,5},
{4,7} and {7,6}. The proof that the algorithm works is similar to the proof of
Kruskal's algorithm.
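
Before the proof, here is a minimal array-based Python sketch in the spirit of Prim's algorithm as traced above; it grows the tree from node 0 and assumes the graph is given as a matrix of edge lengths, with float('inf') marking absent edges. These representation choices and the names are illustrative.

def prim(L):
    """L: symmetric matrix of edge lengths (list of lists), with
    L[i][j] == float('inf') when there is no edge {i, j}.
    Returns the edges of a minimum spanning tree, growing it from node 0."""
    n = len(L)
    in_tree = [False] * n
    in_tree[0] = True
    min_dist = list(L[0])            # shortest known edge from the tree to each node
    nearest = [0] * n                # tree node at the other end of that edge
    T = []
    for _ in range(n - 1):
        # pick the node outside the tree that is closest to the tree
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: min_dist[u])
        T.append((nearest[v], v))
        in_tree[v] = True
        for w in range(n):           # the tree has grown: update the distances to it
            if not in_tree[w] and L[v][w] < min_dist[w]:
                min_dist[w] = L[v][w]
                nearest[w] = v
    return T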
Proof The proof is by mathematical induction on the number of edges in the set T.
We shall show that if T is promising at any stage of the algorithm, then it is still
promising when an extra edge has been added. When the algorithm stops, T gives
a solution to our problem; since it is also promising, this solution is optimal.
• Basis: The empty set is promising.
• Induction step: Assume that T is promising just before the algorithm adds a
new edge e = {u, v}. Now B is a strict subset of N (for the algorithm stops
when B = N), T is a promising set of edges by the induction hypothesis, and
e is by definition one of the shortest edges that leaves B. Hence the conditions
of Lemma 6.3.1 are fulfilled, and T ∪ {e} is also promising.
This completes the proof by mathematical induction that the set T is promising
at every stage of the algorithm. When the algorithm stops, T therefore gives an
optimal solution to our problem. ∎
The main loop of the algorithm is executed n − 1 times; at each iteration the enclosed
for loops take a time in O(n). Thus Prim's algorithm takes a time in O(n²).
We saw that Kruskal's algorithm takes a time in O(a log n), where a is the
number of edges in the graph. For a dense graph, a tends towards n(n − 1)/2.
In this case, Kruskal's algorithm takes a time in O(n² log n), and Prim's algorithm
is probably better. For a sparse graph, a tends towards n. In this case, Kruskal's
algorithm takes a time in Θ(n log n), and Prim's algorithm as presented here is
probably less efficient. However Prim's algorithm, like Kruskal's, can be imple-
mented using heaps. In this case, again like Kruskal's algorithm, it takes a time
in O(a log n). There exist other algorithms more efficient than either Prim's or
Kruskal's; see Section 6.8.
6.4 Graphs: Shortest paths

Consider now a directed graph G = (N, A) where N is the set of nodes of G and
A is the set of directed edges. Each edge has a nonnegative length. One of the
nodes is designated as the source node. The problem is to determine the length of
the shortest path from the source to each of the other nodes of the graph. As in
Section 6.3 we could equally well talk about the cost of an edge instead of its length,
and pose the problem of finding the cheapest path from the source to each other
node.
This problem can be solved by a greedy algorithm often called Dijkstra's algo-
rithm. The algorithm uses two sets of nodes, S and C. At every moment the set S
contains those nodes that have already been chosen; as we shall see, the minimal
distance from the source is already known for every node in S. The set C contains
all the other nodes, whose minimal distance from the source is not yet known, and
which are candidates to be chosen at some later stage. Hence we have the invari-
ant property N = S ∪ C. At the outset, S contains only the source itself; when the
algorithm stops, S contains all the nodes of the graph and our problem is solved.
At each step we choose the node in C whose distance to the source is least, and add
it to S.
We shall say that a path from the source to some other node is special if all the
intermediate nodes along the path belong to S. At each step of the algorithm, an
array D holds the length of the shortest special path to each node of the graph. At
the moment when we add a new node v to S, the shortest special path to v is also
the shortest of all the paths to v. (We shall prove this later.) When the algorithm
stops, all the nodes of the graph are in S, and so all the paths from the source to
some other node are special. Consequently the values in D give the solution to the
shortest path problem.
For simplicity, we again assume that the nodes of G are numbered from 1 to
n, so N = {1, 2, ..., n}. We can suppose without loss of generality that node 1 is
the source. Suppose also that a matrix L gives the length of each directed edge:
L[i, j] ≥ 0 if the edge (i, j) ∈ A, and L[i, j] = ∞ otherwise. Here is the algorithm.
function Dijkstra(L[1..n, 1..n]): array [2..n]
    array D[2..n]
    {initialization}
    C ← {2, 3, ..., n}    {S = N \ C exists only implicitly}
    for i ← 2 to n do D[i] ← L[1, i]
    {greedy loop}
    repeat n − 2 times
        v ← some element of C minimizing D[v]
        C ← C \ {v}    {and implicitly S ← S ∪ {v}}
        for each w ∈ C do
            D[w] ← min(D[w], D[v] + L[v, w])
    return D
Step V C D
Initialization - {2,3,4,5} [50,30,100,10]
1 5 {2,3,4} [50,30,20,10]
2 4 {2,3} [40,30,20,10]
3 3 {2} [35,30,20,10]
Clearly D would not change if we did one more iteration to remove the last element
of C. This is why the main loop is repeated only n - 2 times.
To determine not only the length of the shortest paths, but also where they pass,
add a second array P[2..n], where P[v] contains the number of the node that
precedes v in the shortest path. To find the complete path, follow the pointers P
backwards from a destination to the source. The necessary modifications to the
algorithm are simple:
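
As a sketch only, here is the algorithm with this modification transcribed into Python; nodes are numbered 0 .. n−1 with node 0 as the source, L is a matrix of lengths as in the pseudocode, and the names are otherwise illustrative.

def dijkstra_with_paths(L):
    """L: n x n matrix of edge lengths, L[i][j] = float('inf') if no edge.
    Returns (D, P): D[i] is the length of the shortest path from node 0 to i,
    and P[i] is the node preceding i on such a path."""
    n = len(L)
    C = set(range(1, n))
    D = list(L[0])                        # shortest special paths found so far
    D[0] = 0
    P = [0] * n                           # every special path initially leaves the source directly
    for _ in range(n - 2):
        v = min(C, key=lambda u: D[u])    # element of C minimizing D[v]
        C.remove(v)                       # implicitly S <- S U {v}
        for w in C:
            if D[v] + L[v][w] < D[w]:     # a shorter special path to w, passing through v
                D[w] = D[v] + L[v][w]
                P[w] = v                  # remember where the path to w comes from
    return D, P

# To recover a shortest path, follow P backwards from the destination to node 0.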
To prove that the algorithm works, we show by mathematical induction that the
following two conditions hold at every step of the algorithm:
(a) if a node i ≠ 1 is in S, then D[i] gives the length of the shortest path from the
source to i; and
(b) if a node i is not in S, then D[i] gives the length of the shortest special path
from the source to i.
• Basis: Initially only node 1, the source, is in S, so condition (a) is vacuously
true. For the other nodes, the only special path from the source is the direct
path, and D is initialized accordingly. Hence condition (b) also holds when the
algorithm begins.
• Induction hypothesis: The induction hypothesis is that both conditions (a) and
(b) hold just before we add a new node v to S. We detail separately the induc-
tion steps for conditions (a) and (b).
• Induction step for condition (a): For every node already in S before the addition
of v, nothing changes, so condition (a) is still true. As for node v, it will now
belong to S. Before adding it to S, we must check that D[v] gives the length
of the shortest path from the source to v. By the induction hypothesis, D[v]
certainly gives the length of the shortest special path. We therefore have to
verify that the shortest path from the source to v does not pass through any of
the nodes that do not belong to S.
Suppose the contrary; that is, suppose that when we follow the shortest path
from the source to v, we encounter one or more nodes (not counting v itself)
that do not belong to S. Let x be the first such node encountered; see Figure 6.4.
Now the initial segment of this path, as far as x, is a special path, so the distance
to x is D[x], by part (b) of the induction hypothesis. The total distance to v via
x is certainly no shorter than this, since edge lengths are nonnegative. Finally
D[x] is not less than D[v], since the algorithm chose v before x. Therefore the
total distance to v via x is at least D[v], and the path via x cannot be shorter
than the shortest special path leading to v.
[Figure 6.4: the shortest path from the source to v compared with the shortest special path to v]
We have thus verified that when v is added to S, part (a) of the induction
remains true.
• Induction step for condition (b): Consider now a node w, different from v, which
is not in S. When v is added to S, there are two possibilities for the shortest
special path from the source to w: either it does not change, or else it now
passes through v (and possibly through other nodes in S as well). In the
second case, let x be the last node of S visited before arriving at w. The length
of such a path is D[x] + L[x, w]. It seems at first glance that to compute the
new value of D[w] we should compare the old value of D[w] with the values
of D[x] + L[x, w] for every node x in S (including v). However for every node
x in S except v, this comparison was made when x was added to S, and D[x]
has not changed since then. Thus the new value of D[w] can be computed
simply by comparing the old value with D[v] + L[v, w].
Since the algorithm does this explicitly, it ensures that part (b) of the induction
also remains true whenever a new node v is added to S.
To complete the proof that the algorithm works, note that when the algorithm stops,
all the nodes but one are in S (although the set S is not constructed explicitly). At this
point it is clear that the shortest path from the source to the remaining node is a
special path. ∎
If the graph is sparse, it is preferable to represent it by lists of adjacent nodes (the
type lisgraph of Section 5.4). This allows us to save time in the inner for loop, since
we only have to consider those nodes w adjacent to v; but how are we to avoid
taking a time in Ω(n²) to determine in succession the n − 2 values taken by v?
The answer is to use an inverted heap containing one node for each element v
of C, ordered by the value of D[v]. Thus the element v of C that minimizes D[v]
will always be found at the root. Initialization of the heap takes a time in Θ(n).
The instruction "C ← C \ {v}" consists of eliminating the root from the heap, which
takes a time in O(log n). As for the inner for loop, it now consists of looking, for
each element w of C adjacent to v, to see whether D[v] + L[v, w] is less than D[w].
If so, we must modify D[w] and percolate w up the heap, which again takes a time
in O(log n). This does not happen more than once for each edge of the graph.
To sum up, we have to remove the root of the heap exactly n − 2 times and to
percolate at most a nodes, giving a total time in O((a + n) log n). If the graph is
connected, a ≥ n − 1, and the time is in O(a log n). The straightforward imple-
mentation is therefore preferable if the graph is dense, whereas it is preferable to
use a heap if the graph is sparse. If a ∈ Θ(n²/ log n), the choice of representation
may depend on the specific implementation. Problem 6.16 suggests a way to speed
up the algorithm by using a k-ary heap with a well-chosen value of k; other, still
faster algorithms are known; see Problem 6.17 and Section 6.8.
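
The following Python sketch gives the flavour of a heap-based implementation, using the standard heapq module: instead of percolating an existing entry up the heap, it pushes a new (distance, node) pair and discards stale entries when they reach the root, which yields essentially the same O((a + n) log n) bound. The adjacency-list representation, and the edge lengths in the example, chosen to be consistent with the trace shown earlier, are illustrative assumptions.

import heapq

def dijkstra(graph, source):
    """graph: dict mapping each node to a list of (neighbour, length) pairs,
    with nonnegative lengths.  Returns a dict of shortest distances from source."""
    D = {source: 0}
    heap = [(0, source)]                       # (distance, node) pairs
    S = set()
    while heap:
        d, v = heapq.heappop(heap)             # candidate node minimizing D[v]
        if v in S:
            continue                           # stale entry: v was already chosen
        S.add(v)
        for w, length in graph.get(v, ()):
            new = d + length
            if new < D.get(w, float('inf')):   # shorter special path to w via v
                D[w] = new
                heapq.heappush(heap, (new, w))
    return D

g = {1: [(2, 50), (3, 30), (4, 100), (5, 10)],
     5: [(4, 10)], 4: [(2, 20), (3, 50)], 3: [(2, 5)], 2: []}
print(dijkstra(g, 1))   # distances 0, 35, 30, 20, 10 to nodes 1, 2, 3, 4, 5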
6.5 The knapsack problem (1)

We are given n objects and a knapsack that can carry a weight not exceeding W.
Object i has weight wi and value vi, and we may place a fraction xi of it in the
knapsack. The problem is to maximize Σ xivi subject to Σ xiwi ≤ W,
where vi > 0, wi > 0 and 0 ≤ xi ≤ 1 for 1 ≤ i ≤ n. Here the conditions on vi and wi
are constraints on the instance; those on xi are constraints on the solution. We shall
use a greedy algorithm to solve the problem. In terms of our general schema, the
candidates are the different objects, and a solution is a vector (xi, . . ., x ) telling us
what fraction of each object to include. A feasible solution is one that respects the
constraints given above, and the objective function is the total value of the objects
in the knapsack. What we should take as the selection function remains to be seen.
If Σ wi ≤ W, it is clearly optimal to pack all the objects in the knapsack.
We can therefore assume that in any interesting instance of the problem Σ wi > W.
It is also clear that an optimal solution must fill the knapsack exactly, for otherwise
we could add a fraction of one of the remaining objects and increase the value
of the load. Thus in an optimal solution Σ xiwi = W. Since we are hoping
to find a greedy algorithm that works, our general strategy will be to select each
object in turn in some suitable order, to put as large a fraction as possible of the
selected object into the knapsack, and to stop when the knapsack is full. Here is
the algorithm.
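
A minimal Python sketch of this generic greedy loop follows; the order in which the objects are considered is passed in explicitly, since several possible selection functions are discussed next, and the names are illustrative assumptions.

def greedy_knapsack(w, v, W, order):
    """w, v: lists of weights and values; W: capacity of the knapsack.
    order: indices of the objects in the order the greedy algorithm considers them.
    Returns the vector x of fractions placed in the knapsack."""
    x = [0.0] * len(w)
    remaining = W
    for i in order:
        if w[i] <= remaining:          # the whole object fits
            x[i] = 1.0
            remaining -= w[i]
        else:                          # fill the knapsack with a fraction of it and stop
            x[i] = remaining / w[i]
            break
    return x

# The instance of Figure 6.5 below, selecting by decreasing v[i]/w[i]:
w, v = [10, 20, 30, 40, 50], [20, 30, 66, 40, 60]
order = sorted(range(5), key=lambda i: v[i] / w[i], reverse=True)
x = greedy_knapsack(w, v, 100, order)
print(x, sum(x[i] * v[i] for i in range(5)))   # value 164, as in Figure 6.6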
There are at least three plausible selection functions for this problem: at each stage
we might choose the most valuable remaining object, arguing that this increases
the value of the load as quickly as possible; we might choose the lightest remaining
object, on the grounds that this uses up capacity as slowly as possible; or we might
avoid these extremes by choosing the object whose value per unit weight is as high
as possible. Figures 6.5 and 6.6 show how these three different tactics work in one
particular instance. Here we have five objects, and W = 100. If we select the objects
in order of decreasing value, then we choose first object 3, then object 5, and finally
we fill the knapsack with half of object 4. The value of the solution obtained in this
way is 66 + 60 + 40/2 = 146. If we select the objects in order of increasing weight,
then we choose objects 1,2, 3 and 4 in that order, and now the knapsack is full. The
value of this solution is 20 + 30 + 66 + 40 = 156. Finally if we select the objects in
order of decreasing vi /wi, we choose first object 3, then object 1, next object 2, and
finally we fill the knapsack with four-fifths of object 5. Using this tactic, the value
of the solution is 20 + 30 + 66 + 0.8 x 60 = 164.
n = 5, W = 100
w 10 20 30 40 50
v 20 30 66 40 60
v/w 2.0 1.5 2.2 1.0 1.2
Figure 6.5. An instance of the knapsack problem
This example shows that the solution obtained by a greedy algorithm that maxi-
mizes the value of each object it selects is not necessarily optimal, nor is the solution
obtained by minimizing the weight of each object that is chosen. Fortunately the
Select:       x1    x2    x3    x4    x5    Value
Max vi        0     0     1     0.5   1     146
Min wi        1     1     1     1     0     156
Max vi/wi     1     1     1     0     0.8   164
Figure 6.6. Three greedy approaches to the instance in Figure 6.5
following proof shows that the third possibility, selecting the object that maximizes
the value per unit weight, does lead to an optimal solution.
Theorem 6.5.1 If objects are selected in order of decreasing vi/wi, then algorithm
knapsack finds an optimal solution.
Proof Suppose without loss of generality that the available objects are numbered in order
of decreasing value per unit weight, that is, that
v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn.
Let X = (x1, ..., xn) be the solution found by the greedy algorithm. If all the xi are
equal to 1, this solution is clearly optimal. Otherwise, let j be the smallest index
such that xj < 1. Looking at the way the algorithm works, it is clear that xi = 1
when i < j, that xi = 0 when i > j, and that Σ xiwi = W. Let the value of the
solution X be V(X) = Σ xivi.
Now let Y = (y1, ..., yn) be any feasible solution. Since Y is feasible,
Σ yiwi ≤ W, and hence Σ (xi − yi)wi ≥ 0. Let the value of the solution
Y be V(Y) = Σ yivi. Now
V(X) − V(Y) = Σ (xi − yi)vi = Σ (xi − yi)wi (vi/wi) ≥ (vj/wj) Σ (xi − yi)wi ≥ 0.
The middle inequality holds because (xi − yi)(vi/wi) ≥ (xi − yi)(vj/wj) in every case:
when i < j we have xi = 1, so xi − yi ≥ 0 and vi/wi ≥ vj/wj, while when i > j we
have xi = 0, so xi − yi ≤ 0 and vi/wi ≤ vj/wj.
We have thus proved that no feasible solution can have a value greater than V(X),
so the solution X is optimal. ∎
Implementation of the algorithm is straightforward. If the objects are already
sorted into decreasing order of vi/wi, then the greedy loop clearly takes a time in
O(n); the total time including the sort is therefore in O(n log n). As in Section 6.3.1,
it may be worthwhile to keep the objects in a heap with the largest value of vi/wi at
the root. Creating the heap takes a time in O(n), while each trip round the greedy
loop now takes a time in O(log n) since the heap property must be restored after
the root is removed. Although this does not alter the worst-case analysis, it may
be faster if only a few objects are needed to fill the knapsack.
6.6 Scheduling
In this section we present two problems concerning the optimal way to schedule
jobs on a single machine. In the first, the problem is to minimize the average time
that a job spends in the system. In the second, the jobs have deadlines, and a
job brings in a certain profit only if it is completed by its deadline: our aim is to
maximize profitability. Both these problems can be solved using greedy algorithms.
6.6.1 Minimizing time in the system
A single server, such as a processor, a petrol pump, or a cashier in a bank, has
n customers to serve. The service time required by each customer is known in
advance: customer i will take time ti, 1 ≤ i ≤ n. We want to minimize the average
time that a customer spends in the system. Since n, the number of customers, is
fixed, this is the same as minimizing the total time spent in the system by all the
customers. In other words, we want to minimize
T = Σ_{i=1}^{n} (time in system for customer i).
Proof Let P = p1 p2 ... pn be any permutation of the integers from 1 to n, and let si = t_{p_i}.
If customers are served in the order P, then the service time required by the i-th
customer to be served is si, and the total time passed in the system by all the
customers is
T(P) = s1 + (s1 + s2) + (s1 + s2 + s3) + ···
     = n s1 + (n − 1) s2 + (n − 2) s3 + ···
     = Σ_{k=1}^{n} (n − k + 1) sk.
Suppose now that P does not arrange the customers in order of increasing service
time. Then we can find two integers a and b with a < b and sa > sb. In other
words, the a-th customer is served before the b-th customer even though the
former needs more service time than the latter; see Figure 6.7. If we exchange the
positions of these two customers, we obtain a new order of service P', which is
simply P with the integers pa and pb interchanged. The total time passed in the
system by all the customers if schedule P' is used is
T(P') = (n − a + 1) sb + (n − b + 1) sa + Σ_{k≠a,b} (n − k + 1) sk.
The new schedule is better than the old one because
T(P) − T(P') = (n − a + 1)(sa − sb) + (n − b + 1)(sb − sa) = (b − a)(sa − sb) > 0.
The same result can be obtained less formally from Figure 6.7. Comparing sched-
ules P and P', we see that the first a - 1 customers leave the system at exactly the
same time in both schedules. The same is true of the last n - b customers. Cus-
tomer a now leaves when customer b used to, while customer b leaves earlier than
customer a used to, because sb < sa. Finally those customers served in positions
a + 1 to b − 1 also leave the system earlier, for the same reason. Overall, P' is
therefore better than P.
Thus we can improve any schedule in which a customer is served before some-
one else who requires less service. The only schedules that remain are those ob-
tained by putting the customers in order of nondecreasing service time. All such
schedules are clearly equivalent, and therefore all optimal. ∎
Implementing the algorithm is so straightforward that we omit the details.
In essence all that is necessary is to sort the customers into order of nondecreasing
service time, which takes a time in O(n log n). The problem can be generalized to
a system with s servers, as can the algorithm: see Problem 6.20.
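
A minimal Python sketch (the names are illustrative): sort the customers by nondecreasing service time and accumulate the total time spent in the system.

def serve_in_order(t):
    """t: list of service times.  Returns the order of service (as indices into t)
    and the total time spent in the system by all the customers."""
    order = sorted(range(len(t)), key=lambda i: t[i])   # nondecreasing service time
    total = elapsed = 0
    for i in order:
        elapsed += t[i]       # moment at which this customer leaves the system
        total += elapsed
    return order, total

print(serve_in_order([5, 10, 3]))   # ([2, 0, 1], 29): times in system are 3, 8 and 18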
[Figure 6.7: schedules P and P', which differ only in that the customers in positions a and b are exchanged]
6.6.2 Scheduling with deadlines

We have a set of n jobs to execute, each of which takes unit time. At any instant
t = 1, 2, ... we can execute exactly one job. Job i earns us a profit gi > 0 if and only
if it is executed no later than time di. For example, with n = 4 and the following
values of gi and di,

i    1   2   3   4
gi   50  10  15  30
di   2   1   2   1
the schedules to consider and the corresponding profits are
Sequence Profit Sequence Profit
1 50 2,1 60
2 10 2,3 25
3 15 3,1 65
4 30 4,1 80 - optimum
1,3 65 4,3 45
The sequence 3,2 for instance is not considered because job 2 would be executed at
time t = 2, after its deadline d2 = 1. To maximize our profit in this example, we
should execute the schedule 4,1.
A set of jobs is feasible if there exists at least one sequence (also called feasible)
that allows all the jobs in the set to be executed no later than their respective dead-
lines. An obvious greedy algorithm consists of constructing the schedule step by
step, adding at each step the job with the highest value of gi among those not yet
considered, provided that the chosen set of jobs remains feasible.
In the preceding example we first choose job 1. Next, we choose job 4; the set
{1, 4} is feasible because it can be executed in the order 4,1. Next we try the set
{1, 3, 4}, which turns out not to be feasible; job 3 is therefore rejected. Finally we try
{1, 2, 4}, which is also infeasible, so job 2 is also rejected. Our solution-optimal in
this case-is therefore to execute the set of jobs {1, 4}, which can only be done in
the order 4,1. It remains to be proved that this algorithm always finds an optimal
schedule and to find an efficient way of implementing it.
Let J be a set of k jobs. At first glance it seems we might have to try all the k!
permutations of these jobs to see whether J is feasible. Happily this is not the case.
Lemma 6.6.2 Let J be a set of k jobs. Suppose without loss of generality that the
jobs are numbered so that d1 ≤ d2 ≤ ... ≤ dk. Then the set J is feasible if and only
if the sequence 1, 2, ..., k is feasible.
Proof The "if" is obvious. For the "only if", suppose the sequence 1, 2, ..., k is not feasible.
Then at least one job in this sequence is scheduled after its deadline. Let r be any
such job, so dr ≤ r − 1. Since the jobs are scheduled in order of nondecreasing
deadline, this means that at least r jobs have deadlines r − 1 or earlier. However
these are scheduled, the last one will always be late. ∎
Theorem 6.6.3 The greedy algorithm outlined earlier always finds an optimal
schedule.
Proof Suppose the greedy algorithm chooses to execute a set of jobs I, and suppose the
set J is optimal. Let SI and SJ be feasible sequences, possibly including gaps, for
the two sets of jobs in question. By rearranging the jobs in SI and those in SJ, we
can obtain two feasible sequences S'I and S'J, which also may include gaps, such
that every job common to I and J is scheduled at the same time in both sequences;
see Figure 6.8.
[Figure 6.8: rearranging the schedules SI and SJ into S'I and S'J so that the jobs common to I and J are scheduled at the same time in both]
To see this, imagine that some job a occurs in both the feasible sequences SI and
SJ, where it is scheduled at times tI and tJ respectively. If tI = tJ there is nothing
to do. Otherwise, suppose tI < tJ. Since the sequence SJ is feasible, it follows that
the deadline for job a is no earlier than tJ. Modify sequence SI as follows: if there
is a gap in sequence SI at time tJ, move job a back from time tI into the gap at
time tJ; if there is some job b scheduled in SI at time tJ, exchange jobs a and b in
sequence SI. The resulting sequence is still feasible, since in either case a will be
executed by its deadline, and in the second case moving job b to an earlier time
can certainly do no harm. Now job a is scheduled at the same time tJ in both the
modified sequence SI and in SJ. A similar argument applies when tI > tJ, except
that in this case it is SJ that has to be modified.
Once job a has been treated in this way, it is clear that we never need to move
it again. If sequences SI and SJ have m jobs in common, therefore, after at most m
modifications of either SI or SJ we can ensure that all the jobs common to I and J
are scheduled at the same time in both sequences. The resulting sequences S'I and
S'J may not be the same if I ≠ J. So suppose there is a time when the job scheduled
in S'I is different from that scheduled in S'J.
• If some job a is scheduled in S'I opposite a gap in S'J, then a does not belong
to J, and the set J ∪ {a} would be feasible and more profitable than J. This is
impossible since J is optimal.
• If some job b is scheduled in S'J opposite a gap in S'I, the set I ∪ {b} would
be feasible, so the greedy algorithm would have included b in I. This is also
impossible since it did not do so.
• The only remaining possibility is that some job a is scheduled in S'I opposite a
different job b in S'J. In this case a does not appear in J and b does not appear
in I. There are apparently three possibilities.
- If ga > gb, one could substitute a for b in J and improve it. This is
impossible because J is optimal.
- If ga < gb, the greedy algorithm would have considered b before a; at that
moment the jobs already chosen, together with b, could all have been scheduled
(as in S'I, with b taking the slot that a occupies there, a slot that b's deadline
allows since S'J is feasible), so the algorithm would have included b in I. This
is impossible since it did not do so.
- The only remaining possibility is therefore that ga = gb.
We conclude that for each time slot the sequences S'I and S'J either schedule no
jobs, or the same job, or two different jobs yielding the same profit. The total profit
from I is therefore equal to the profit from the optimal set J, so I is optimal too. ∎
For our first implementation of the algorithm, suppose without loss of gen-
erality that the jobs are numbered so that g1 ≥ g2 ≥ ... ≥ gn. The algorithm
can be implemented more efficiently (and more easily) if we suppose further that
n > 0 and di > 0, 1 ≤ i ≤ n, and that additional storage is available at the front of
the arrays d (that holds the deadlines) and j (in which we construct the solution).
These additional cells are known as "sentinels". By storing an appropriate value
in the sentinels we avoid repeated time-consuming range checks.
The k jobs in the array j are in order of increasing deadline. When job i is being
considered, the algorithm checks whether it can be inserted into j at the appropriate
place without pushing some job already in j past its deadline. If so, i is accepted;
otherwise i is rejected. The exact values of the gi are unnecessary provided the
jobs are correctly numbered in order of decreasing profit. Figure 6.9 gives gi and di
for an example with six jobs, and Figure 6.10 illustrates how the algorithm works
on this example. (Figure 6.10 calls it the "slow" algorithm since we shall shortly
describe a better one.)
i 1 2 3 4 5 6
gi 20 15 10 7 5 3
di 3 1 1 3 1 3
Figure 6.9. An example with six jobs
Analysis of the algorithm is straightforward. Sorting the jobs into order of decreas-
ing profit takes a time in Θ(n log n). The worst case for the algorithm is when
this procedure turns out also to sort the jobs by order of decreasing deadline, and
when they can all fit into the schedule. In this case, when job i is being considered
the algorithm looks at each of the k = i − 1 jobs already in the schedule to find a
place for the newcomer, and then moves them all along one position. In terms of
the program above, the total numbers of trips round the while loop and round the
inner for loop are both in Θ(n²). The algorithm therefore takes a time in Ω(n²).
A more efficient algorithm is obtained if we use a different technique to verify
whether a given set of jobs is feasible. The new technique depends on the following
lemma.
[Figure 6.10: the slow algorithm applied to the instance of Figure 6.9; jobs 1, 2 and 4 are accepted in turn, while jobs 3, 5 and 6 are rejected]
Lemma 6.6.4 A set of n jobs J is feasible if and only if we can construct a feasible
sequence including all the jobs in J as follows. Start with an empty schedule of
length n. Then for each job i ∈ J in turn, schedule i at time t, where t is the largest
integer such that 1 ≤ t ≤ min(n, di) and the job to be executed at time t is not yet
decided.
In other words, starting with an empty schedule, consider each job in turn, and
add it to the schedule being built as late as possible, but no later than its deadline.
If a job cannot be scheduled in time to meet its deadline, then the set J is infeasible.
Proof The "if" is obvious. For the "only if", note first that if a feasible sequence exists at
all, then there exists a feasible sequence of length n. Since there are only n jobs to
schedule, any longer sequence must contain gaps, and we can always move a job
into an earlier gap without affecting the feasibility of the sequence.
When we try to add a new job, the sequence being built always contains at least
one gap. Suppose we are unable to add a job whose deadline is d. This can happen
only if all the slots from t = 1 to t = r are already allocated, where r = min(n, d).
Let s > r be the smallest integer such that the slot t = s is empty. The schedule
already built therefore includes s − 1 jobs whose deadlines are earlier than s, no
job with deadline exactly s, and possibly others with deadlines later than s. The
job we are trying to add also has a deadline less than s. Hence J includes at least s
jobs whose deadline is s − 1 or earlier. However these are scheduled, the last one
is sure to be late. ∎
The lemma suggests that we should consider an algorithm that tries to fill
one by one the positions in a sequence of length n. For any position t, define
nt = max{k ≤ t : position k is free}. Also define certain sets of positions as follows:
two positions i and j are in the same set if ni = nj; see Figure 6.11. For a given
set K of positions, let F(K) be the smallest member of K. Finally define a fictitious
position 0 that is always free.
[Figure 6.11: positions i and j are in the same set when ni = nj; free and occupied positions]
As we assign new jobs to vacant positions, these sets merge to form larger sets;
disjoint set structures are intended for just this purpose. We obtain an algorithm
whose essential steps, each time a job with deadline d is considered, are the following:
- Find the set K that contains position min(n, d). If F(K) = 0, reject the job;
otherwise accept it and assign it to position F(K).
- Find the set that contains F(K) − 1. Call this set L (it cannot be the same
as K).
- Merge K and L. The value of F for this new set is the old value of F(L).
Figure 6.12 illustrates the working of this algorithm on the example given in
Figure 6.9.
[Figure 6.12: the fast algorithm applied to the instance of Figure 6.9; job 1 (d1 = 3) is assigned to position 3, job 2 (d2 = 1) to position 1, and so on, the sets of positions and their F values being merged as jobs are assigned]
Here is a more precise statement of the fast algorithm. To simplify the description,
we assume that the label of the set produced by a merge operation is necessarily the
label of one of the sets that were merged. The schedule first produced may contain
gaps; the algorithm ends by moving jobs forward to fill these.
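
The following Python sketch captures these steps, with a rudimentary disjoint set structure in place of the structures of Section 5.9; jobs are again assumed to be numbered in order of decreasing profit, and the names are illustrative.

def fast_sequence(d):
    """d[1..n]: deadlines of jobs 1..n, numbered by decreasing profit (d[0] unused).
    Returns a feasible sequence of the accepted jobs, gaps removed."""
    n = len(d) - 1
    limit = min(n, max(d[1:]))          # no job need be scheduled after this time
    parent = list(range(limit + 1))     # one set per position 0 .. limit
    F = list(range(limit + 1))          # F[root]: smallest free position in the set

    def find(p):                        # root of the set containing position p
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    schedule = [0] * (limit + 1)        # schedule[t]: job executed at time t (0 = gap)
    for i in range(1, n + 1):
        k = find(min(d[i], limit))      # set containing the latest position allowed for job i
        m = F[k]                        # latest free position no later than the deadline
        if m != 0:                      # m == 0 means no free position: reject job i
            schedule[m] = i
            left = find(m - 1)          # the set L containing position m - 1
            parent[k] = left            # merge K and L; F of the new set is the old F(L)
    return [job for job in schedule[1:] if job != 0]   # move jobs forward to close the gaps

print(fast_sequence([0, 3, 1, 1, 3, 1, 3]))   # [2, 4, 1]: jobs 1, 2 and 4 accepted, in another feasible order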
If the instance is given with the jobs already ordered by decreasing profit, so that an
optimal sequence can be obtained merely by calling the preceding algorithm, most
of the time will be spent manipulating disjoint sets. Since there are at most 2n find
operations and n merge operations to execute, the required time is in O(n α(2n, n)),
where α is the slow-growing function of Section 5.9. This is essentially linear. If, on
the other hand, the jobs are given in arbitrary order, then we have to begin by sorting
them, and obtaining the initial sequence takes a time in O(n log n).
6.7 Problems
Problem 6.1. Is selection sort (see Section 2.4) a greedy algorithm? If so, what
are the various functions involved (the function to check feasibility, the selection
function, and so on)?
Problem 6.3. The Portuguese coinage includes coins for 1, 2½, 5, 10, 20, 25 and 50
escudos. However prices are always for an integer number of escudos. Prove or
give a counterexample: when an unlimited supply of coins of each denomination
is available, the greedy algorithm of Section 6.1 always finds an optimal solution.
Problem 6.4. Suppose the coinage includes the values given in Section 6.1, but you
have run out of nickels. Show that using the greedy algorithm with the remaining
values does not necessarily produce an optimal solution.
Problem 6.5. Prove or give a counter-example: provided that each coin in the
series is worth at least twice the next lower denomination, that the series includes a
1-unit coin, and that an unlimited supply of coins of each denomination is available,
the greedy algorithm of Section 6.1 always finds an optimal solution.
Problem 6.7. Prove that a graph with n nodes and more than n -1 edges must
contain at least one cycle.
Problem 6.8. Suppose the cost of laying a telephone cable from point a to point b
is proportional to the Euclidean distance from a to b. A certain number of towns
are to be connected at minimum cost. Find an example where it costs less to lay
the cables via an exchange situated in between the towns than to use only direct
links.
Problem 6.9. What can you say about the time required by Kruskal's algorithm if,
instead of providing a list of edges, the user supplies a matrix of distances, leaving
to the algorithm the job of working out which edges exist?
Problem 6.10. Suppose Kruskal's algorithm and Prim's algorithm are implemented
as shown in Sections 6.3.1 and 6.3.2 respectively. What happens (a) in the case of
Kruskal's algorithm (b) in the case of Prim's algorithm if by mistake we run the
algorithm on a graph that is not connected?
Problem 6.11. A graph may have several different minimum spanning trees.
Is this the case for the graph in Figure 6.2? If so, where is this possibility reflected
in the algorithms explained in Sections 6.3.1 and 6.3.2?
Problem 6.12. The problem of finding a subset T of the edges of a connected graph
G such that all the nodes remain connected when only the edges in T are used, and
the sum of the lengths of the edges in T is as small as possible, still makes sense
even if G may have edges with negative lengths. However, the solution may no
longer be a tree. Adapt either Kruskal's algorithm or Prim's algorithm to work on
a graph that may include edges of negative length.
Problem 6.13. Show that Prim's algorithm can, like Kruskal's algorithm, be im-
plemented using heaps. Show that it then takes a time in O(a log n).
Problem 6.15. Show by giving an explicit example that if the edge lengths can
be negative, then Dijkstra's algorithm does not always work correctly. Is it still
sensible to talk about shortest paths if negative distances are allowed?
Problem 6.16. In the analysis of the implementation of Dijkstra's algorithm that
uses a heap, we saw that up to a nodes can be percolated, whereas fewer than n
roots are eliminated. Eliminating the root has the effect of sifting down the node
that takes its place. In general, percolating up is somewhat quicker than sifting
down, since at each level we compare the value of a node to the value of its parent,
rather than making comparisons with both children. Using a k-ary heap (see
Section 5.7 and Problem 5.23) may make percolation run faster still, at the cost of
slowing down sifting. Let k = max(2, ⌈a/n⌉). Show how to use a k-ary heap to
calculate the shortest paths from a source to all the other nodes of a graph in a
time in O(a log_k n). Note that this gives O(n²) if a ∈ Θ(n²) and O(a log n) if a ∈ Θ(n).
It therefore gives the best of both worlds.
Problem 6.17. A Fibonacci heap, mentioned in Section 5.8, has the following prop-
erties. A heap containing n items can be built in a time in O(n); finding the largest
item, inserting a new item, increasing the value of an item and restoring the heap
property, and merging two heaps all take an amortized time in O(1); and deleting
any item, including in particular the largest, from a heap containing n items takes
an amortized time in O(log n). An inverted Fibonacci heap is similar, except that
the corresponding operations involve decreasing the value of an item, and finding
or deleting the smallest item. Show how an inverted Fibonacci heap can be used
to implement Dijkstra's algorithm in a time in O(a + n log n).
Problem 6.18. In Section 6.5 we assumed that we had available n objects num-
bered 1 to n. Suppose instead that we have n types of object available, with an
adequate supply of each type. Formally, this simply replaces the old constraint
0 ≤ xi ≤ 1 by the looser constraint xi ≥ 0. Does the greedy algorithm of Section 6.5
still work? Is it still necessary?
Problem 6.19. Prove or give a counter-example: for the scheduling problem of
Section 6.6.1, serving the customers in order of decreasing
service time leads to the worst possible schedule.
Problem 6.20. As in Section 6.6.1 we have n customers. Customer i, 1 ≤ i ≤ n,
requires a known service time ti. Without loss of generality, suppose the customers
are numbered so that t1 ≤ t2 ≤ ... ≤ tn. If there are s identical servers, prove that
to minimize the total (and hence the average) time spent in the system by the
customers, server j, 1 ≤ j ≤ s, should serve customers j, j + s, j + 2s, ... in that
order.
Problem 6.21. Let P1, P2, ..., Pn be n programs to be stored on a disk. Program Pi
requires si kilobytes of storage, and the capacity of the disk is D kilobytes, where
D < Σ_{i=1}^{n} si.
(a) We want to maximize the number of programs held on the disk. Prove or
give a counter-example: we can use a greedy algorithm that selects programs in
order of nondecreasing si.
(b) We want to use as much of the capacity of the disk as possible. Prove or
give a counter-example: we can use a greedy algorithm that selects programs in
order of nonincreasing si.
Problem 6.22. Let P1, P2, ..., Pn be n programs to be stored on a tape. Program Pi
requires si kilobytes of storage; the tape is long enough to hold all the programs. We
know how often each program is used: a fraction πi of requests concern program
i (and so Σ_{i=1}^{n} πi = 1). Information is recorded along the tape at constant density,
and the speed of the tape drive is also constant. After a program is loaded, the tape
is rewound to the beginning. If the programs are held in the order i1, i2, ..., in, the
average time required to load a program is therefore
T = c Σ_{j=1}^{n} π_{i_j} Σ_{k=1}^{j} s_{i_k},
where the constant c depends on the recording density and the speed of the drive.
We want to minimize T using a greedy algorithm. Prove or give a counter-example
for each of the following: we can select the programs (a) in order of nondecreasing
si; (b) in order of nonincreasing πi; (c) in order of nonincreasing πi/si.
Problem 6.23. Suppose the two schedules SI and SJ introduced in the proof of
optimality in Section 6.6.2 are given in the form of arrays SI[1..r] and SJ[1..r],
where r = max_{1≤i≤n} di. An array element holds i if job i is to be executed at the
corresponding moment, and 0 represents a gap in the schedule. Write an algorithm
that produces the rearranged schedules S'I and S'J in the arrays SI and SJ respectively.
6.8 References and further reading

The solution to Problem 6.8 involves the notion of Steiner trees. The problem of
finding a minimum Steiner tree is NP-hard (see Section 12.5.5), and thus proba-
bly much harder than finding a minimum spanning tree. For more on this problem
see Garey, Graham and Johnson (1977) and Winter (1987).
An important greedy algorithm that we have not discussed is used to derive
optimal Huffman codes; see Schwartz (1964). Other greedy algorithms for a variety
of problems are described in Horowitz and Sahni (1978).
Chapter 7
Divide-and-Conquer
7.1 Introduction: Multiplying large integers

We illustrate the process with the example used in Section 1.2: the multiplica-
tion of 981 by 1234. First we pad the shorter operand with a nonsignificant zero to
make it the same length as the longer one; thus 981 becomes 0981. Then we split
each operand into two halves: 0981 gives rise to w = 09 and x = 81, and 1234 to
y = 12 and z = 34. Notice that 981 = 10²w + x and 1234 = 10²y + z. Therefore,
the required product can be computed as
981 × 1234 = (10²w + x)(10²y + z) = 10⁴wy + 10²(wz + xy) + xz.
If you think we have merely restated the algorithm of Section 1.2 in more symbols,
you are perfectly correct. The above procedure still needs four half-size multipli-
cations: wy, wz, xy and xz.
The key observation is that there is no need to compute both wz and xy; all we
really need is the sum of these two terms. Is it possible to obtain wz + xy at the
cost of a single multiplication? This seems impossible until we remember that we
also need the values of wy and xz to apply the above formula. With this in mind,
consider the product
r = (w + x)(y + z) = wy + (wz + xy) + xz.
After only one multiplication, we obtain the sum of all three terms needed to
calculate the desired product. This suggests proceeding as follows.
p = wy = 09 × 12 = 108
q = xz = 81 × 34 = 2754
r = (w + x) × (y + z) = 90 × 46 = 4140,
and finally
981 × 1234 = 10⁴p + 10²(r − p − q) + q
           = 1080000 + 127800 + 2754 = 1210554.
Thus the product of 981 and 1234 can be reduced to three multiplications of two-fig-
ure numbers (09 x 12, 81 x 34 and 90 x 46) together with a certain number of shifts
(multiplications by powers of 10), additions and subtractions.
To be sure, the number of additions-counting subtractions as if they were
additions-is larger than with the original divide-and-conquer algorithm of Sec-
tion 1.2. Is it worth performing four more additions to save one multiplication?
The answer is no when we are multiplying small numbers like those in our exam-
ple. However, it is worthwhile when the numbers to be multiplied are large, and
it becomes increasingly so when the numbers get larger. When the operands are
large, the time required for the additions and shifts becomes negligible compared
to the time taken by a single multiplication. It thus seems reasonable to expect
that reducing four multiplications to three will enable us to cut 25% of the com-
puting time required for large multiplications. As we shall see, our saving will be
significantly better.
Let h(n) be the time needed to multiply two n-figure numbers by the classic
algorithm, and let g(n) be the time taken by the additions, subtractions and shifts.
Because h(n) ∈ Θ(n²) and g(n) ∈ O(n), the term g(n) is negligible compared to
4h(n) when n is sufficiently large, which means that we have gained about 25%
in speed compared to the classic algorithm, as we anticipated. Although this im-
provement is not to be sneezed at, we have not managed to change the order of the
time required: the new algorithm still takes quadratic time.
To do better than this, we come back to the question posed in the opening
paragraph: how should the subinstances be solved? If they are small, the classic
algorithm may still be the best way to proceed. However, when the subinstances
are sufficiently large, might it not be better to use our new algorithm recursively?
The idea is analogous to profiting from a bank account that compounds interest
payments! When we do this, we obtain an algorithm that can multiply two n-figure
numbers in a time t(n) = 3t(n/2) + g(n) when n is even and sufficiently large. This
is similar to the recurrence we studied in Section 4.7.1 and Example 4.7.10; solving
it yields t(n) ∈ Θ(n^{lg 3} | n is a power of 2). We have to be content with conditional
asymptotic notation because we have not yet addressed the question of how to
multiply numbers of odd length; see Problem 7.1.
Since lg 3 ≈ 1.585 is smaller than 2, this algorithm can multiply two large in-
tegers much faster than the classic multiplication algorithm, and the bigger n, the
more this improvement is worth having. A good implementation will probably not
use base 10, but rather the largest base for which the hardware allows two "digits"
to be multiplied directly. Recall that the performance of this algorithm and of the
classic algorithm are compared empirically at the end of Section 2.7.3.
An important factor in the practical efficiency of this approach to multiplica-
tion, and indeed of any divide-and-conquer algorithm, is knowing when to stop
dividing the instances and use the classic algorithm instead. Although the divide-
and-conquer approach becomes more worthwhile as the instance to be solved gets
larger, it may in fact be slower than the classic algorithm on instances that are too
small. Therefore, a divide-and-conquer algorithm must avoid proceeding recur-
sively when the size of the subinstances no longer justifies this. We come back to
this issue in the next section.
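
Here is a minimal Python sketch of the divide-and-conquer multiplication just described; it works in decimal, splits each operand as nearly down the middle as possible, and falls back on the built-in multiplication below a small threshold. The threshold value and the names are illustrative assumptions.

def multiply(a, b, n0=2):
    """Multiply nonnegative integers a and b by splitting each operand in two
    and using three half-size multiplications instead of four."""
    if a < 10 ** n0 or b < 10 ** n0:
        return a * b                       # small instance: use the basic algorithm
    half = max(len(str(a)), len(str(b))) // 2
    w, x = divmod(a, 10 ** half)           # a = 10**half * w + x
    y, z = divmod(b, 10 ** half)           # b = 10**half * y + z
    p = multiply(w, y, n0)
    q = multiply(x, z, n0)
    r = multiply(w + x, y + z, n0)         # r = wy + (wz + xy) + xz
    return 10 ** (2 * half) * p + 10 ** half * (r - p - q) + q

print(multiply(981, 1234))   # 1210554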
For simplicity, several important issues have been swept under the rug so far.
How do we deal with numbers of odd length? Even though both halves of the
multiplier and the multiplicand are of size n/2, it can happen that their sum over-
flows and is of size 1 bigger. Therefore, it was slightly incorrect to claim that
r = (w + x)x(y + z) involves a half-size multiplication. How does this affect the
analysis of the running time? How do we multiply two numbers of different sizes?
Are there arithmetic operations other than multiplication that we can handle more
efficiently than by using classic algorithms?
Numbers of odd length are easily multiplied by splitting them as nearly down
the middle as possible: an n-figure number is split into a ⌊n/2⌋-figure number and
a ⌈n/2⌉-figure number. The second question is trickier. Consider multiplying 5678
by 6789. Our algorithm splits the operands into w = 56, x = 78, y = 67 and z = 89.
The three half-size multiplications involved are
p = wy = 56 x 67
q = xz = 78 x 89, and
r = (w + x)x(y + z)= 134 x 156.
The third multiplication involves three-figure numbers, and thus it is not really half-
size compared with the original multiplication of four-figure numbers. However,
the size of w + x and y + z cannot exceed 1 + ⌈n/2⌉.
To simplify the analysis, let t(n) denote the time taken by this algorithm in
the worst case to multiply two numbers of size at most n (rather than exactly n).
By definition, t(n) is a nondecreasing function. When n is sufficiently large, our
algorithm reduces the multiplication of two numbers of size at most n to three
smaller multiplications p = wy, q = xz and r = (w + x) × (y + z) of sizes at most
⌊n/2⌋, ⌈n/2⌉ and 1 + ⌈n/2⌉, respectively, in addition to easy manipulations that
take a time in O(n). Therefore, there exists a positive constant c such that
t(n) ≤ 3 t(1 + ⌈n/2⌉) + cn
for all sufficiently large n. This is precisely the recurrence we studied in Exam-
ple 4.7.14, which yields the now-familiar t(n) ∈ O(n^{lg 3}). Thus it is always possible
to multiply n-figure numbers in a time in O(n^{lg 3}). A worst-case analysis of this
algorithm shows that in fact t(n) ∈ Θ(n^{lg 3}), but this is of limited interest because
even faster multiplication algorithms exist; see Problems 7.2 and 7.3.
Turning to the question of multiplying numbers of different size, let u and v
be integers of size m and n, respectively. If m and n are within a factor of two
of each other, it is best to pad the smaller operand with nonsignificant zeros to
make it the same length as the other operand, as we did when we multiplied 981
by 1234. However, this approach is to be discouraged when one operand is much
larger than the other. It could even be worse than using the classic multiplication
algorithm! Without loss of generality, assume that m ≤ n. The divide-and-conquer
algorithm used with padding and the classic algorithm take time in Θ(n^{lg 3}) and
Θ(mn), respectively, to compute the product of u and v. Considering that the
hidden constant of the former is likely to be larger than that of the latter, we see
that divide-and-conquer with padding is slower than the classic algorithm when
m < n^{lg(3/2)}, and thus in particular when m ≤ √n.
Nevertheless, it is simple to combine both algorithms to obtain a truly better
algorithm. The idea is to slice the longer operand v into blocks of size m and to
use the divide-and-conquer algorithm to multiply u by each block of v, so that the
divide-and-conquer algorithm is used to multiply pairs of operands of the same
size. The final product of u and v is then easily obtained by simple additions and
shifts. The total running time is dominated by the need to perform ⌈n/m⌉ multi-
plications of m-figure numbers. Since each of these smaller multiplications takes
a time in O(m^{lg 3}) and since ⌈n/m⌉ ∈ O(n/m), the total running time to multiply
an n-figure number by an m-figure number is in O(n m^{lg(3/2)}) when m ≤ n.
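
A Python sketch of this combination is given below, under the assumption that u has no more figures than v and that fast_multiply is some routine for multiplying same-size operands, such as the multiply function sketched above; the names are illustrative.

def multiply_unbalanced(u, v, fast_multiply):
    """Multiply u (m figures) by v (n >= m figures) by slicing v into blocks
    of m figures and multiplying u by each block with fast_multiply."""
    m = len(str(u))
    block = 10 ** m
    result, shift = 0, 0
    while v > 0:
        v, low = divmod(v, block)                      # peel off the lowest block of m figures
        result += fast_multiply(u, low) * 10 ** shift  # add its contribution, suitably shifted
        shift += m
    return result

# e.g. multiply_unbalanced(981, 123456789012, multiply) equals 981 * 123456789012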
Multiplication is not the only interesting operation involving large integers.
Modular exponentiation is crucial for modern cryptography; see Section 7.8. Inte-
ger division, modulo operations, and the calculation of the integer part of a square
root can all be carried out in a time whose order is the same as that required for
multiplication; see Section 12.4. Some other important operations, such as calculat-
ing the greatest common divisor, may well be inherently harder to compute; they
are not treated here.
7.2 The general template

function DC(x)
    if x is sufficiently small or simple then return adhoc(x)
    decompose x into smaller instances x1, x2, ..., xℓ
    for i ← 1 to ℓ do yi ← DC(xi)
    recombine the yi's to obtain a solution y for x
    return y
Some divide-and-conquer algorithms do not follow this outline exactly: for in-
stance, they could require that the first subinstance be solved before the second
subinstance is formulated; see Section 7.5.
The number of subinstances, ℓ, is usually small and independent of the particu-
lar instance to be solved. When ℓ = 1, it does not make much sense to "decompose x
into a smaller instance x1" and it is hard to justify calling the technique divide-and-
conquer. Nevertheless, it does make sense to reduce the solution of a large instance
to that of a smaller one. Divide-and-conquer goes by the name of simplification in
this case; see Sections 7.3 and 7.7. When using simplification, it is sometimes pos-
sible to replace the recursivity inherent in divide-and-conquer by an iterative loop.
Most divide-and-conquer algorithms split an instance of size n into ℓ subinstances
of size roughly n/b, for constants ℓ and b, so that the running time satisfies
t(n) = ℓ t(n/b) + g(n), where g(n) accounts for decomposing the instance and
recombining the subsolutions, provided n is large enough. If there exists an integer k
such that g(n) ∈ Θ(n^k), then Example 4.7.16 applies to conclude that

    t(n) ∈ Θ(n^k)            if ℓ < b^k
           Θ(n^k log n)      if ℓ = b^k          (7.1)
           Θ(n^(log_b ℓ))    if ℓ > b^k.
The techniques used in Section 4.7.6 and Example 4.7.14 generally apply to yield the
same conclusion even if some of the subinstances are of a size that differs from ⌊n/b⌋
by at most an additive constant, and in particular if some of the subinstances are
of size ⌈n/b⌉. As an example, our divide-and-conquer algorithm for large integer
multiplication is characterized by ℓ = 3, b = 2 and k = 1. Since ℓ > b^k, the third
case applies and we get immediately that the algorithm takes a time in Θ(n^(lg 3)),
with no need to worry about the fact that two of the subinstances are of size ⌈n/2⌉
and 1 + ⌈n/2⌉ rather than ⌊n/2⌋. In more complicated cases when g(n) is not in
the exact order of a polynomial, Problem 4.44 may apply.
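The three cases of Equation 7.1 are easy to mechanize. The small Python helper below is an illustration of mine, not part of the book; it prints the resulting order for given ℓ, b and k.

from math import log

def divide_and_conquer_order(ell, b, k):
    """Classify t(n) = ell*t(n/b) + Theta(n^k) according to Equation 7.1."""
    if ell < b ** k:
        return f"Theta(n^{k})"
    if ell == b ** k:
        return f"Theta(n^{k} log n)"
    return f"Theta(n^{log(ell, b):.3f})"       # exponent log_b(ell)

print(divide_and_conquer_order(3, 2, 1))   # large integer multiplication: Theta(n^1.585) = Theta(n^lg 3)
print(divide_and_conquer_order(1, 2, 0))   # binary search: second case with k = 0, i.e. Theta(log n)
print(divide_and_conquer_order(7, 2, 2))   # Strassen: Theta(n^2.807) = Theta(n^lg 7)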
It remains to see how to determine whether to divide the instance and make
recursive calls, or whether the instance is so simple that it is better to invoke the
basic subalgorithm directly. Although this choice does not affect the order of the
execution time of the algorithm, we are also concerned to make the multiplicative
constant hidden in the Θ notation as small as possible. With most divide-and-
conquer algorithms, this decision is based on a simple threshold, usually denoted n0.
The basic subalgorithm is used to solve any instance whose size does not exceed n0.
We return to the problem of multiplying large integers to see why the choice
of threshold is important, and how to choose it. To avoid clouding the essential
issues, we use a simplified recurrence for the running time of the divide-and-conquer
algorithm for multiplying large integers: the classic algorithm takes h(n) = n²
microseconds to multiply two n-figure numbers, whereas one level of recursion costs
g(n) = 16n microseconds of overhead in addition to three multiplications of numbers
of size ⌈n/2⌉. One approach is to choose the threshold empirically: vary the
threshold and measure the running time for an instance whose size remains fixed.
This empirical approach may
require considerable amounts of computer (and human!) time. We once asked stu-
dents in an algorithmics course to implement the divide-and-conquer algorithm
for multiplying large integers and to compare it with the classic algorithm. Several
groups tried to estimate the optimal threshold empirically, each group using in
the attempt more than 5000 dollars worth of machine time! On the other hand, a
purely theoretical calculation of the optimal threshold is rarely possible, given that
it varies from one implementation to another.
The hybrid approach, which we recommend, consists of determining theoretically
the form of the recurrence equations, and then finding empirically the values
of the constants used in these equations for the implementation at hand. The op-
timal threshold can then be estimated by finding the size n of the instance for
which it makes no difference whether we apply the classic algorithm directly or
whether we go on for one more level of recursion; see Problem 7.8. This is why
we chose n0 = 64: the classic multiplication algorithm requires h(64) = 64² = 4096
microseconds to multiply two 64-figure numbers, whereas if we use one level of
recursion in the divide-and-conquer approach, the same multiplication requires
g(64) = 16 × 64 = 1024 microseconds in addition to three multiplications of 32-fig-
ure numbers by the classic algorithm, at a cost of h(32) = 32² = 1024 microseconds
each, for the same total of 3h(32) + g(64) = 4096 microseconds.
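The break-even computation can be reproduced in a few lines of Python. The cost functions below use the illustrative constants of the text, h(n) = n² and g(n) = 16n microseconds; they are not actual measurements.

def classic_cost(n):
    return n * n                          # h(n), in microseconds

def overhead(n):
    return 16 * n                         # g(n), in microseconds

def one_level_cost(n):
    # One level of recursion (three half-size classic multiplications) plus overhead.
    return 3 * classic_cost((n + 1) // 2) + overhead(n)

for n in (32, 64, 128):
    print(n, classic_cost(n), one_level_cost(n))
# n = 32: 1024 vs 1280  -> classic is better
# n = 64: 4096 vs 4096  -> break-even point, hence the threshold n0 = 64
# n = 128: 16384 vs 14336 -> one more level of recursion is better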
One practical difficulty arises with this hybrid technique. Even though the clas-
sic multiplication algorithm requires quadratic time, it was an oversimplification to
state that h(n) = cn² for some constant c that depends on the implementation. It is
more likely that there exist three constants a, b and c such that h(n) = cn² + bn + a.
Although bn + a becomes negligible compared to cn² when n is large, the clas-
sic algorithm is in fact used precisely on instances of moderate size. It is therefore
usually insufficient merely to estimate the higher-order constant c. Instead, it is
necessary to measure h(n) a number of times for several different values of n to
estimate all the necessary constants. The same remark applies to g(n).
7.3 Binary search
Let T[1..n] be an array sorted into nondecreasing order, and let x be an item to be
located in T; the problem is to find the index where x is, or should be inserted. The
obvious approach is a sequential search that scans T from left to right until it reaches
either x or an element larger than x, returning that index r (or n + 1 if x is larger
than every element of T). This algorithm clearly takes a time in Θ(r), where r is the
index returned. This is Θ(n) in the worst case and Θ(1) in the best case. If we assume
that the elements of T are distinct, that x is indeed somewhere in the array, and that
it is to be found with equal probability at each possible position, then the average
number of trips round the loop is (n + 1)/2; see Problem 7.9. On the average,
therefore, as well as in the worst case, sequential search takes a time in Θ(n).
To speed up the search, we should look for x either in the first half of the
array or in the second half. To find out which of these searches is appropriate, we
compare x to an element in the middle of the array. Let k = ⌈n/2⌉. If x ≤ T[k],
then the search for x can be confined to T[1..k]; otherwise it is sufficient to search
T[k+1..n]. To avoid repeated tests in each recursive call, it is better to verify at
the outset whether the answer is n + 1, that is, whether x lies to the right of T.
We obtain the following algorithm, illustrated in Figure 7.1.
Figure 7.1. Binary search for x = 12 in T[1..11] = (−5, −2, 0, 3, 8, 8, 9, 12, 12, 26, 31):
at each step x is compared with T[k], and the section of the array delimited by i and j
shrinks by half until i = j.
function binrec(T[i..j], x)
  {Binary search for x in subarray T[i..j]
   with the promise that T[i−1] < x ≤ T[j]}
  if i = j then return i
  k ← (i + j) ÷ 2
  if x ≤ T[k] then return binrec(T[i..k], x)
  else return binrec(T[k+1..j], x)
Let t(m) be the time required for a call on binrec(T[i..j], x), where m = j − i + 1
is the number of elements still under consideration in the search. The time required
for a call on binsearch(T[1..n], x) is clearly t(n) up to a small additive constant.
When m > 1, the algorithm takes a constant amount of time in addition to one
recursive call on ⌈m/2⌉ or ⌊m/2⌋ elements, depending on whether or not x ≤ T[k].
Therefore, t(m) = t(m/2) + g(m) when m is even, where g(m) ∈ Θ(1) = Θ(m⁰).
By our general analysis of divide-and-conquer algorithms, using Equation 7.1 with
ℓ = 1, b = 2 and k = 0, we conclude that t(m) ∈ Θ(log m). Therefore, binary search
can be accomplished in logarithmic time in the worst case. It is easy to see that this
version of binary search also takes logarithmic time even in the best case.
Because the recursive call is dynamically at the end of the algorithm, it is easy
to produce an iterative version.
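Both the recursive search and an iterative equivalent can be rendered in Python as follows (0-based indices). This is a sketch rather than the book's listings binsearch and biniter, but the probes made are the same.

def binrec(T, i, j, x):
    """Return the index k in [i, j] such that T[k-1] < x <= T[k]."""
    if i == j:
        return i
    k = (i + j) // 2
    if x <= T[k]:
        return binrec(T, i, k, x)
    return binrec(T, k + 1, j, x)

def binsearch(T, x):
    n = len(T)
    if n == 0 or x > T[n - 1]:
        return n                           # x lies to the right of T
    return binrec(T, 0, n - 1, x)

def biniter(T, x):
    """Iterative version: same probes, same result as binsearch."""
    n = len(T)
    if n == 0 or x > T[n - 1]:
        return n
    i, j = 0, n - 1
    while i < j:
        k = (i + j) // 2
        if x <= T[k]:
            j = k
        else:
            i = k + 1
    return i

T = [-5, -2, 0, 3, 8, 8, 9, 12, 12, 26, 31]
print(binsearch(T, 12), biniter(T, 12))    # both print 7, i.e. position 8 of T[1..11]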
The analysis of this algorithm is identical to that of its recursive counterpart binsearch.
Exactly the same array locations are probed (except when n = 0; see Problem 7.10),
and the same sequences of values are assumed by i, j and k. Therefore, iterative
binary search also takes logarithmic time in the worst case as well as in the best
case. This algorithm can be modified to make it faster in the best case (constant
time), at the cost of making it slightly slower (albeit still logarithmic) in the worst
case, but this is to the detriment of average-case performance on large instances;
see Problem 7.11.
7.4 Sorting
Let T[1..n] be an array of n elements. Our problem is to sort these elements into
ascending order. We have already seen that the problem can be solved by selection
sorting and insertion sorting (Section 2.4), or by heapsort (Section 5.7). Recall that
an analysis both in the worst case and on the average shows that the latter method
takes a time in Θ(n log n), whereas both the former methods take quadratic time.
There are several classic algorithms for sorting that follow the divide-and-conquer
template. It is interesting to note how different they are: significant freedom for
creativity remains even after deciding to attempt solving a given problem by divide-
and-conquer. We study two of them now, mergesort and quicksort, leaving yet
another for Chapter 11.
7.4.1 Sorting by merging
The obvious divide-and-conquer approach to this problem consists of separating
the array T into two parts whose sizes are as nearly equal as possible, sorting these
parts by recursive calls, and then merging the two sorted halves, being careful
to preserve the order. To do this, we need an efficient algorithm for merging two
sorted arrays U and V into a single array T whose length is the sum of the lengths
of U and V. This can be achieved more efficiently, and more easily, if additional
storage is available at the end of both the arrays U and V to be used as a sentinel.
(This technique works only if we can set the sentinel to a value guaranteed to
be bigger than every element in U and V, which we denote below by ∞; see
Problem 7.13.)
The merge sorting algorithm is as follows, where we use insertion sort (insert) from
Section 2.4 as the basic subalgorithm. For the sake of efficiency, it may be better if
the intermediate arrays U and V are global variables.
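A Python sketch in the spirit of this approach, with a sentinel value INF and a simple insertion sort standing in for the book's insert, might look like this; it is an illustration, not the original listing.

INF = float('inf')

def insertion_sort(T):
    for i in range(1, len(T)):
        x, j = T[i], i - 1
        while j >= 0 and T[j] > x:
            T[j + 1] = T[j]
            j -= 1
        T[j + 1] = x

def merge(U, V, T):
    # U and V are sorted and each ends with an INF sentinel; result goes into T.
    i = j = 0
    for k in range(len(T)):
        if U[i] < V[j]:
            T[k] = U[i]; i += 1
        else:
            T[k] = V[j]; j += 1

def mergesort(T, threshold=16):
    n = len(T)
    if n <= threshold:
        insertion_sort(T)
        return
    U, V = T[:n // 2], T[n // 2:]
    mergesort(U, threshold)
    mergesort(V, threshold)
    merge(U + [INF], V + [INF], T)

T = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]
mergesort(T, threshold=2)
print(T)    # [1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 8, 9, 9]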
Figure: the operation of mergesort on the array 3 1 4 1 5 9 2 6 5 3 5 8 9, which is
split into two halves, each sorted recursively, and then merged to give
1 1 2 3 3 4 5 5 5 6 8 9 9.
A disadvantage of mergesort is that it requires additional storage for the intermediate
arrays U and V. Recall that heapsort can sort in place, in the sense that it needs only
a small constant number of working variables. Merge sorting can also be implemented
in place, but at the cost of such an increase in the hidden constant that this is only of
theoretical interest; see Problem 7.14.
The merge sorting algorithm illustrates the importance of creating subinstances
of roughly equal size when developing divide-and-conquer algorithms. Consider
the following variation on mergesort, which separates T into an array of n − 1
elements and an array containing only the last element. (The dummy call to
badmergesort(V[1..1]) is included only to stress the similarity with the original
mergesort algorithm.) Let t(n) be the time needed to sort n elements with this
modified algorithm. It is clear that t(n) = t(n − 1) + t(1) + g(n), where g(n) ∈ Θ(n).
This recurrence yields t(n) ∈ Θ(n²). Thus simply forgetting to balance the sizes of
the subinstances can be disastrous for the efficiency of a divide-and-conquer algorithm.
7.4.2 Quicksort
The sorting algorithm invented by Hoare, usually known as quicksort, is also based
on the principle of divide-and-conquer. Unlike mergesort, most of the nonrecursive
part of the work to be done is spent constructing the subinstances rather than
combining their solutions. As a first step, this algorithm chooses as pivot one of
the items in the array to be sorted. The array is then partitioned on either side
of the pivot: elements are moved so that those greater than the pivot are to its
right, whereas the others are to its left. If now the sections of the array on either
side of the pivot are sorted independently by recursive calls of the algorithm, the
final result is a completely sorted array, no subsequent merge step being necessary.
To balance the sizes of the two subinstances to be sorted, we would like to use
the median element as the pivot. (For a definition of the median, see Section 7.5.)
Unfortunately, finding the median takes more time than it is worth. For this reason
we simply use an arbitrary element of the array as the pivot, hoping for the best.
Designing a linear time pivoting algorithm is no challenge. However, it is
crucial in practice that the hidden constant be small if quicksort is to be competitive
with other sorting techniques such as heapsort. Suppose subarray T[i. . j] is to
be pivoted around p = T[i]. One good way of pivoting consists of scanning the
subarray just once, but starting at both ends. Pointers k and l are initialized to i
and j + 1, respectively. Pointer k is then incremented until T[k] > p, and pointer l
is decremented until T[l] ≤ p. Now T[k] and T[l] are interchanged. This process
continues as long as k < l. Finally, T[i] and T[l] are interchanged to put the pivot
in its correct position.
Now here is the sorting algorithm. To sort the entire array T, simply call
quicksort(T[1..n]).
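A Python sketch of pivoting and of quicksort consistent with this description is given below (0-based indices). It is not the book's listing; in practice one would also switch to insertion sort below a threshold, as discussed in Section 7.2.

def pivot(T, i, j):
    """Partition T[i..j] (inclusive) around p = T[i]; return the final
    position l of the pivot, so that T[i..l-1] <= p and T[l+1..j] > p."""
    p = T[i]
    k, l = i, j + 1
    while k < l:
        k += 1
        while k <= j and T[k] <= p:       # advance k until an element > p is found
            k += 1
        l -= 1
        while T[l] > p:                   # retreat l until an element <= p is found
            l -= 1
        if k < l:
            T[k], T[l] = T[l], T[k]
    T[i], T[l] = T[l], T[i]               # put the pivot in its final position
    return l

def quicksort(T, i=0, j=None):
    if j is None:
        j = len(T) - 1
    if i >= j:                            # tiny subarray: nothing left to do
        return
    l = pivot(T, i, j)
    quicksort(T, i, l - 1)
    quicksort(T, l + 1, j)

T = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]
quicksort(T)
print(T)    # [1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 8, 9, 9]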
On the average, the time t(n) taken by quicksort to sort n distinct elements therefore
satisfies t(n) = (1/n) Σ_{l=1}^{n} (g(n) + t(l−1) + t(n−l)), where g(n) is the time taken by the
top-level call to pivot, and g(n) + t(l−1) + t(n−l) is the expected time to sort n
elements conditional on this value of l being returned by that call.
Figure: the operation of quicksort on the array 3 1 4 1 5 9 2 6 5 3 5 8 9. The array is
pivoted about its first element p = 3; out-of-place elements found by the two scanning
pointers are swapped, the pivot is moved to its final position, and the sections on either
side of the pivot are then sorted recursively, giving 1 1 2 3 3 4 5 5 5 6 8 9 9.
To make the formula more explicit, let n0 be the threshold above which the
recursive approach is used, meaning that insertion sort is used whenever there are
no more than n0 elements to sort. Furthermore, let d be a constant (depending on
the implementation) such that g(n) ≤ dn whenever n > n0. Taking g(n) out of
the summation, we have

    t(n) ≤ dn + (1/n) Σ_{l=1}^{n} (t(l−1) + t(n−l))    for n > n0.

Noting that the term t(k) appears twice for each 0 ≤ k ≤ n − 1, a little manipulation
yields

    t(n) ≤ dn + (2/n) Σ_{k=0}^{n−1} t(k)    for n > n0.    (7.2)
An equation of this type is more difficult to analyse than the linear recurrences
we saw in Section 4.7. In particular, Equation 7.1 does not apply this time. By anal-
ogy with mergesort, it is nevertheless reasonable to hope that t(n) is in O(n log n):
on the average the subinstances are not too badly unbalanced, and the solution
would be Θ(n log n) if they were as well balanced as with mergesort. Proving
this conjecture provides a beautiful application of the technique of constructive
induction (Section 1.6.4). To apply this technique, we postulate the existence of a
constant c, unknown as yet, such that t(n) ≤ c n log n for all n ≥ 2. We find an
appropriate value for this constant in the process of proving its existence by gener-
alized mathematical induction. We start at n = 2 because n log n is undefined or
zero for smaller values of n; alternatively, we could start at n = n0 + 1.
Proof. Let t(n) be the time required by quicksort to sort n elements on the average.
Let d and n0 be constants such that Equation 7.2 holds. We wish to prove that
t(n) ≤ c n log n for all n ≥ 2, provided c is a well-chosen constant. We proceed by
constructive induction. Assume without loss of generality that n0 ≥ 2.
• Basis: Consider any integer n such that 2 ≤ n ≤ n0. We have to show
  that t(n) ≤ c n log n. This is easy since we still have complete freedom to
  choose the constant c and the number of basis cases is finite. It suffices to
  choose c at least as large as t(n)/(n log n). Thus, our first constraint on
  c is

    c ≥ t(n)/(n log n) for all n such that 2 ≤ n ≤ n0.    (7.3)
• Induction step: Consider any integer n > n0. Assume the induction hypoth-
  esis that t(k) ≤ c k log k for all k such that 2 ≤ k < n. We wish to constrain c
  so that t(n) ≤ c n log n follows from the induction hypothesis. Let a stand
  for t(0) + t(1). Starting with Equation 7.2,

    t(n) ≤ dn + (2/n) Σ_{k=0}^{n−1} t(k)
         = dn + (2/n) (t(0) + t(1) + Σ_{k=2}^{n−1} t(k))
         ≤ dn + 2a/n + (2c/n) Σ_{k=2}^{n−1} k log k      (by the induction hypothesis)
         ≤ dn + 2a/n + (2c/n) ∫_2^n x log x dx
         ≤ dn + 2a/n + (2c/n) (n² log n / 2 − n²/4)
         = c n log n − (c/2 − d − 2a/n²) n,

  where log denotes the natural logarithm. Therefore t(n) ≤ c n log n provided
  c/2 − d − 2a/n² ≥ 0; since n > n0, it suffices that

    c ≥ 2d + 4a/(n0 + 1)².    (7.4)
Putting together the constraints given by Equations 7.3 and 7.4, it suffices to set

    c = max( 2d + 4a/(n0 + 1)² , max{ t(n)/(n log n) | 2 ≤ n ≤ n0 } )    (7.5)

to conclude the proof by constructive induction that t(n) ≤ c n log n for all n ≥ 2,
and therefore that t(n) ∈ O(n log n).
If you are puzzled or unconvinced, we urge you to work out for yourself a proof by
ordinary (as opposed to constructive) generalized mathematical induction that
t(n) ≤ c n log n for all n ≥ 2, this time using the explicit value for c given by Equa-
tion 7.5. ∎
When the array to be sorted may contain many equal elements, it is preferable to use
a pivoting procedure pivotbis(T[i..j], p, k, l) that partitions T into three sections
using p as pivot: after pivoting, the elements in T[i..k] are smaller than p, those in
T[k+1..l−1] are equal to p, and those in T[l..j] are larger than p. The values of k
and l are returned by pivotbis. After pivoting with a call on pivotbis(T[i..j], T[i], k, l),
it remains to call quicksort recursively on T[i..k] and T[l..j]. With this modification,
sorting an array of equal elements takes linear time. More interestingly, quicksort
now takes a time in O(n log n) even in the worst case if the median of T[i..j] is
chosen as pivot in linear time. However, we mention this possibility only to point
out that it should be
shunned: the hidden constant associated with this "improved" version of quicksort
is so large that it results in an algorithm worse than heapsort in every case!
7.5 Finding the median
The median of an array T[1..n] is its ⌈n/2⌉-th smallest element; more generally, the
selection problem asks for the s-th smallest element of T. The selection algorithm
referred to below keeps two indices i and j, initially 1 and n, and repeatedly pivots
the section T[i..j] that must still contain the desired element around a pivot p,
using pivotbis, until that element is found.
Figure: one iteration of the selection algorithm on the array 3 1 4 1 5 9 2 6 5 3 5 8 9;
after pivoting, only the section of the array on one side of the pivot remains relevant
for the rest of the search.
By an analysis similar to that of binary search, the above algorithm selects the
required element of T after going round the repeat loop a logarithmic number of
times in the worst case. However, trips round the loop no longer take constant
time, and indeed this algorithm cannot be used until we have an efficient way to
find the median, which was our original problem. Can we modify the algorithm
to avoid resort to the median?
First, observe that our algorithm still works regardless of which element of T is
chosen as pivot (the value of p). It is only the efficiency of the algorithm that depends
on the choice of pivot: using the median assures us that the number of elements
still under consideration is at least halved each time round the repeat loop. If we
are willing to sacrifice speed in the worst case to obtain an algorithm reasonably
fast on the average, we can borrow another idea from quicksort and simply choose
T[i] as pivot. In other words, replace the first instruction in the loop with
p ← T[i].
This causes the algorithm to spend quadratic time in the worst case, for example
if the array is in decreasing order and we wish to find the smallest element. Nev-
ertheless, this modified algorithm runs in linear time on the average, under our
usual assumption that the elements of T are distinct and that each of the n! pos-
sible initial permutations of the elements is equally likely. (The analysis parallels
that of quicksort; see Problem 7.18). This is much better than the time required
on the average if we proceed by sorting the array, but the worst-case behaviour is
unacceptable for many applications.
Happily, this quadratic worst case can be avoided without sacrificing linear
behaviour on the average. The idea is that the number of trips round the loop
remains logarithmic provided the pivot is chosen reasonably close to the median.
A good approximation to the median can be found quickly with a little cunning.
Consider the following algorithm.
At least (3n − 12)/10 elements of T are less than or equal to p, and therefore at most
the (7n + 12)/10 remaining elements of T are strictly larger than p. Similar reasoning
applies to the number of elements of T that are strictly smaller than p.
Although p is probably not the exact median of T, we conclude that its rank
is approximately between 3n/10 and 7n/10. To visualize how these factors arise,
although nothing in the execution of the algorithm pseudomed really corresponds to
this illustration, imagine as in Figure 7.6 that the elements of T are arranged in five
rows, with the possible exception of at most four elements left aside. Now suppose
each of the ⌊n/5⌋ columns as well as the middle row are sorted by magic, the
smallest elements going to the top and to the left, respectively. The middle row
corresponds to the array Z in the algorithm and the element in the circle corresponds
to the median of this array, which is the value of p returned by the algorithm.
Clearly, each of the elements in the box is less than or equal to p. The conclusion
follows since the box contains approximately three-fifths of one-half of the elements
of T.
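A Python sketch of selection with the pseudomedian, following this description, is shown below. It uses auxiliary lists rather than in-place pivoting with pivotbis, and the helper names are mine rather than the book's; small cases are simply handled by sorting.

def pseudomed(T):
    """Return a pivot whose rank in T is roughly between 3n/10 and 7n/10."""
    n = len(T)
    if n <= 5:
        return sorted(T)[(n - 1) // 2]
    # Medians of the complete groups of five (at most four elements are left aside).
    Z = [sorted(T[i:i + 5])[2] for i in range(0, n - n % 5, 5)]
    return selection(Z, (len(Z) + 1) // 2)        # median of Z, found recursively

def selection(T, s):
    """Return the s-th smallest element of T (1 <= s <= len(T))."""
    T = list(T)
    while True:
        if len(T) <= 5:
            return sorted(T)[s - 1]
        p = pseudomed(T)
        smaller = [x for x in T if x < p]
        equal   = [x for x in T if x == p]
        if s <= len(smaller):
            T = smaller                           # keep only the left section
        elif s <= len(smaller) + len(equal):
            return p
        else:
            s -= len(smaller) + len(equal)
            T = [x for x in T if x > p]           # keep only the right section

T = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9]
print(selection(T, (len(T) + 1) // 2))            # the median: 5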
We now analyse the efficiency of the selection algorithm presented at the beginning
of this section when the first instruction in its repeat loop is replaced by

    p ← pseudomed(T[i..j]).

Let t(n) be the time required in the worst case by a call on selection(T[1..n], s).
Consider any i and j such that 1 ≤ i ≤ j ≤ n. The time required to complete the
repeat loop with these values for i and j is essentially t(m), where m = j − i + 1
is the number of elements still under consideration. When n > 5, calculating
pseudomed(T) takes a time in t(⌊n/5⌋) + O(n) because the array Z can be con-
structed in linear time since each call to adhocmed takes constant time. The call
to pivotbis also takes linear time. At this point, either we are finished or we have to
go back round the loop with at most (7n + 12)/10 elements still to be considered.
Therefore, there exists a constant d such that

    t(n) ≤ dn + t(⌊n/5⌋) + max{ t(m) | m ≤ (7n + 12)/10 }    (7.6)

whenever n > 5.
(When analysing quicksort we could reasonably hope for O(n log n) because we had
analysed mergesort already, and it worked; no such luck this time.) With some
experience, the fact that 1/5 + 7/10 < 1 is a telltale that t(n) may well be linear
in n, which is clearly the best we could hope for; see Problem 7.19.
Theorem 7.5.1. The selection algorithm used with pseudomed finds the s-th small-
est among n elements in a time in O(n) in the worst case. In particular, the median
can be found in linear time in the worst case.
Proof. Let t(n) and d be as above. Clearly, t(n) ∈ Ω(n) since the algorithm must look at
each element of T at least once. Thus it remains to prove that t(n) ∈ O(n). Let us
postulate the existence of a constant c, unknown as yet, such that t(n) ≤ cn for
all n ≥ 1. We find an appropriate value for this constant in the process of proving
its existence by generalized mathematical induction. Constructive induction will
also be used to determine the constant n0 that separates the basis case from the
induction step. For now, our only constraint is n0 ≥ 5 because Equation 7.6 only
applies when n > 5. (We shall discover that the obvious choice n0 = 5 does not
work.)
• Basis: Consider any integer n such that 1 ≤ n ≤ n0. We have to show that
  t(n) ≤ cn. This is easy since we still have complete freedom to choose the
  constant c and the number of basis cases is finite. It suffices to choose c at
  least as large as t(n)/n. Thus, our first constraint on c is

    c ≥ t(n)/n for all n such that 1 ≤ n ≤ n0.    (7.7)
• Induction step: Consider any integer n > n0. Assume the induction hy-
  pothesis that t(m) ≤ cm for all m such that 1 ≤ m < n. We wish to constrain c so
  that t(n) ≤ cn follows from the induction hypothesis. Starting with Equa-
  tion 7.6, and because 1 ≤ (7n + 12)/10 < n when n > n0 ≥ 5,

    t(n) ≤ dn + t(⌊n/5⌋) + max{ t(m) | m ≤ (7n + 12)/10 }
         ≤ dn + cn/5 + (7n + 12)c/10          (by the induction hypothesis)
         = 9cn/10 + dn + 6c/5
         = cn − (c/10 − d − 6c/(5n)) n.

  It follows that t(n) ≤ cn provided c/10 − d − 6c/(5n) ≥ 0, which is equivalent
  to (1 − 12/n) c ≥ 10d. This is possible provided n ≥ 13 (so 1 − 12/n > 0),
  in which case c must be no smaller than 10d/(1 − 12/n). Keeping in mind
  that n > n0, any choice of n0 ≥ 12 is adequate, provided c is chosen
  accordingly. More precisely, all is well provided n0 ≥ 12 and

    c ≥ 10d / (1 − 12/(n0 + 1)),    (7.8)

  which is our second and final constraint on c and n0. For instance, the
  induction step is correct if we take n0 = 12 and c ≥ 130d, or n0 = 23 and
  c ≥ 20d, or n0 = 131 and c ≥ 11d.
Putting together the constraints given by Equations 7.7 and 7.8, and choosing
n0 = 23 for the sake of definiteness, it suffices to set

    c = max( 20d , max{ t(n)/n | 1 ≤ n ≤ 23 } )

to conclude the proof by constructive induction that t(n) ≤ cn for all n ≥ 1. ∎
7.6 Matrix multiplication
Let A and B be two n × n matrices to be multiplied, and let C be their product.
The classic algorithm comes directly from the definition: each entry of C is the
scalar product of a row of A and a column of B. Each entry in C is calculated in
a time in Θ(n), assuming that scalar addition and multiplication are elementary
operations. Since there are n² entries to compute, the product AB can be calculated
in a time in Θ(n³).
Towards the end of the 1960s, Strassen caused a considerable stir by improving
this algorithm. From an algorithmic point of view, this breakthrough is a landmark
in the history of divide-and-conquer, even though the equally surprising algorithm
for multiplying large integers (Section 7.1) was discovered almost a decade earlier.
The basic idea behind Strassen's algorithm is similar to that earlier one. First we
show that two 2 × 2 matrices can be multiplied using fewer than the eight scalar
multiplications apparently required by the definition. Let

    A = ( a11  a12 )    and    B = ( b11  b12 ).
        ( a21  a22 )               ( b21  b22 )
We leave the reader to verify that the required product AB can be obtained from
the following seven products

    m1 = (a21 + a22 − a11)(b22 − b12 + b11)
    m2 = a11 b11
    m3 = a12 b21
    m4 = (a11 − a21)(b22 − b12)                    (7.9)
    m5 = (a21 + a22)(b12 − b11)
    m6 = (a12 − a21 − a22 + a11) b22
    m7 = a22 (b22 − b12 + b11 − b21)

as the following matrix.

    C = ( m2 + m3               m1 + m2 + m5 + m6 )
        ( m1 + m2 + m4 − m7     m1 + m2 + m4 + m5 )    (7.10)
tiplications. At first glance, this algorithm does not look very interesting: it uses
a large number of additions and subtractions compared to the four additions that
are sufficient for the classic algorithm.
If we now replace each entry of A and B by an n x n matrix, we obtain an algo-
rithm that can multiply two 2n x 2n matrices by carrying out seven multiplications
of n x n matrices, as well as a number of additions and subtractions of n x n matri-
ces. This is possible because the 2 x 2 algorithm does not rely on the commutativity
of scalar multiplication. Given that large matrices can be added much faster than
they can be multiplied, saving one multiplication more than compensates for the
supplementary additions.
Let t(n) be the time needed to multiply two n × n matrices by recursive use
of Equations 7.9 and 7.10. Assume for simplicity that n is a power of 2. Since
matrices can be added and subtracted in a time in Θ(n²), t(n) = 7t(n/2) + g(n),
where g(n) ∈ Θ(n²). This recurrence is another instance of our general analysis for
divide-and-conquer algorithms. Equation 7.1 applies with ℓ = 7, b = 2 and k = 2.
Since ℓ > b^k, the third case yields t(n) ∈ Θ(n^(lg 7)). Square matrices whose size is
not a power of 2 are easily handled by padding them with rows and columns of
zeros, at most doubling their size, which does not affect the asymptotic running
time. Since lg 7 < 2.81, it is thus possible to multiply two n × n matrices in a time
in O(n^2.81), provided scalar operations are elementary.
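For concreteness, here is a Python/NumPy sketch of the recursive scheme built on the seven products of Equations 7.9 and 7.10. It requires NumPy, falls back to the classic product below a threshold or when the size is odd instead of padding, and is an illustration rather than a tuned implementation.

import numpy as np

def strassen(A, B, threshold=64):
    n = A.shape[0]
    if n <= threshold or n % 2 == 1:
        return A @ B                              # classic multiplication
    h = n // 2
    a11, a12, a21, a22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    b11, b12, b21, b22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a21 + a22 - a11, b22 - b12 + b11, threshold)
    m2 = strassen(a11, b11, threshold)
    m3 = strassen(a12, b21, threshold)
    m4 = strassen(a11 - a21, b22 - b12, threshold)
    m5 = strassen(a21 + a22, b12 - b11, threshold)
    m6 = strassen(a12 - a21 - a22 + a11, b22, threshold)
    m7 = strassen(a22, b22 - b12 + b11 - b21, threshold)
    return np.block([[m2 + m3,           m1 + m2 + m5 + m6],
                     [m1 + m2 + m4 - m7, m1 + m2 + m4 + m5]])

A = np.random.randint(0, 10, (128, 128))
B = np.random.randint(0, 10, (128, 128))
assert np.array_equal(strassen(A, B), A @ B)      # agrees with the classic product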
Following Strassen's discovery, a number of researchers attempted to improve
the constant ω such that it is possible to multiply two n × n matrices in a time
in O(n^ω). The obvious thing to try first was to multiply two 2 × 2 matrices with
six scalar multiplications. But in 1971 Hopcroft and Kerr proved this is impossible
when the commutativity of multiplication cannot be used. The next thing to try was
to find a way to multiply two 3 × 3 matrices with at most 21 scalar multiplications.
This would yield a recursive algorithm to multiply n × n matrices in a time in
O(n^(log_3 21)), asymptotically faster than Strassen's algorithm since log_3 21 < lg 7.
Unfortunately, this too is impossible.
Almost a decade passed before Pan discovered a way to multiply two 70 × 70
matrices with 143 640 scalar multiplications (compare this with the 343 000 re-
quired by the classic algorithm), and indeed log_70 143640 is a tiny bit smaller
than lg 7. This discovery launched the so-called decimal war. Numerous algo-
rithms, asymptotically more and more efficient, were discovered subsequently.
For instance, it was known at the end of 1979 that matrices could be multiplied in
a time in O(n^2.521813); imagine the excitement in January 1980 when this was im-
proved to O(n^2.521801). The asymptotically fastest matrix multiplication algorithm
known at the time of writing goes back to 1986 when Coppersmith and Winograd
discovered that it is possible, at least in theory, to multiply two n × n matrices in a
time in O(n^2.376). Because of the hidden constants involved, however, none of the
algorithms found after Strassen's is of much practical use.
7.7 Exponentiation
Let a and n be two integers. We wish to compute the exponentiation x = a^n.
For simplicity, we shall assume throughout this section that n > 0. If n is small,
the obvious algorithm is adequate.
function exposeq(a, n)
  r ← a
  for i ← 1 to n − 1 do r ← a × r
  return r
This algorithm takes a time in Θ(n) since the instruction r ← a × r is executed
exactly n − 1 times, provided the multiplications are counted as elementary operations.
However, on most computers, even small values of n and a cause this algorithm
to produce integer overflow. For example, 15^17 does not fit in a 64-bit integer.
If we wish to handle larger operands, we must take account of the time required
for each multiplication. Let M(q, s) denote the time needed to multiply two inte-
gers of sizes q and s. For our purpose, it does not matter whether we consider the
size of integers in decimal digits, in bits, or in any other fixed basis larger than 1.
Assume for simplicity that q1 ≤ q2 and s1 ≤ s2 imply that M(q1, s1) ≤ M(q2, s2).
Let us estimate how much time our algorithm spends multiplying integers when
exposeq(a, n) is called. Let m be the size of a. First note that the product of two
integers of size i and j is of size at least i + j - 1 and at most i + j; see Problem 7.24.
Let ri and mi be the value and the size of r at the beginning of the i-th time round
the loop. Clearly, r1 = a and therefore m1 = m. Since ri+1 = a·ri, the size of ri+1
is at least m + mi − 1 and at most m + mi. The demonstration by mathematical
induction that im − i + 1 ≤ mi ≤ im for all i follows immediately. Therefore, the
multiplication performed the i-th time round the loop concerns an integer of size
m and an integer whose size is between im − i + 1 and im, which takes a time be-
tween M(m, im − i + 1) and M(m, im). The total time T(m, n) spent multiplying
when computing a^n with exposeq is therefore

    Σ_{i=1}^{n−1} M(m, im − i + 1) ≤ T(m, n) ≤ Σ_{i=1}^{n−1} M(m, im),    (7.11)

where m is the size of a. This is a good estimate of the total time taken by exposeq
since most of the work is spent performing these multiplications.
If we use the classic multiplication algorithm (Section 1.2), then M(q, s) ∈ Θ(qs).
Let c be such that M(q, s) ≤ c qs. Then

    T(m, n) ≤ Σ_{i=1}^{n−1} M(m, im) ≤ Σ_{i=1}^{n−1} c m (im) = c m² Σ_{i=1}^{n−1} i ≤ c m² n².
Thus, T(m, n) ∈ O(m²n²). It is equally easy to show from Equation 7.11 that
T(m, n) ∈ Ω(m²n²) and therefore T(m, n) ∈ Θ(m²n²); see Problem 7.25. On the
other hand, if we use the divide-and-conquer multiplication algorithm described
earlier in this chapter, M(q, s) ∈ O(s q^(lg(3/2))) when s ≥ q, and a similar argument
yields T(m, n) ∈ O(m^(lg 3) n²).
The key observation for improving exposeq is that a^n = (a^(n/2))² when n is even.
This is interesting because a^(n/2) can be computed about four times faster than a^n
with exposeq, and a single squaring is then enough to obtain the desired result.
This observation gives rise to the following recurrence.

    a^n = a                  if n = 1
          (a^(n/2))²         if n is even
          a × a^(n−1)        otherwise
For instance,

    a^29 = a a^28 = a (a^14)^2 = a ((a^7)^2)^2 = a ((a (a a^2)^2)^2)^2,
which involves only three multiplications and four squarings instead of the 28 mul-
tiplications required with exposeq. The above recurrence gives rise to the following
algorithm.
function expoDC(a, n)
  if n = 1 then return a
  if n is even then return [expoDC(a, n/2)]²
  return a × expoDC(a, n − 1)
To analyse the efficiency of this algorithm, let N(n) count the number of multiplica-
tions, including squarings, that it performs to compute a^n:

    N(n) = 0                 if n = 1
           N(n/2) + 1        if n is even        (7.12)
           N(n − 1) + 1      otherwise.
An unusual feature of this recurrence is that it does not give rise to an increasing
function of n. For example, N(15) = 6 whereas N(16) = 4.
This function is not even eventually nondecreasing; see Problem 7.26. Therefore,
Equation 7.1 cannot be used directly.
To handle such a recurrence, it is useful to bound the function from above and
below with nondecreasing functions. When n > 1 is odd, N(n) = N(n − 1) + 1 =
N(⌊n/2⌋) + 2 since n − 1 is even and (n − 1)/2 = ⌊n/2⌋.
On the other hand, when n is even, N(n) = N(⌊n/2⌋) + 1 since ⌊n/2⌋ = n/2 in that
case. Therefore,

    N(⌊n/2⌋) + 1 ≤ N(n) ≤ N(⌊n/2⌋) + 2    (7.13)

for all n > 1. Let N1 and N2 be the functions defined by

    N1(n) = 0 if n = 1, and N1(n) = N1(⌊n/2⌋) + 1 otherwise;
    N2(n) = 0 if n = 1, and N2(n) = N2(⌊n/2⌋) + 2 otherwise.    (7.14)

An easy induction using Equation 7.13 shows that N1(n) ≤ N(n) ≤ N2(n) for all n;
moreover N1(n) = ⌊lg n⌋ and N2(n) = 2⌊lg n⌋, and both are nondecreasing; see
Problem 7.27. Therefore N(n) ∈ Θ(log n): the number of multiplications needed to
compute a^n with expoDC is logarithmic in n.

The same idea applies to the time T(m, n) spent multiplying when computing a^n
with expoDC, where m is the size of a. When n > 1,

    T(m, n) ≤ T(m, n/2) + M(mn/2, mn/2)        if n is even        (7.15)
    T(m, n) ≤ T(m, n − 1) + M(m, (n − 1)m)     otherwise.

As with the recurrence for N, this implies that

    T(m, n) ≤ T(m, ⌊n/2⌋) + M(m⌊n/2⌋, m⌊n/2⌋) + M(m, (n − 1)m)

for all n > 1. Solving this recurrence shows that the total time spent multiplying is
as summarized in the following table, depending on whether the classic or the
divide-and-conquer multiplication algorithm is used; see Problem 7.29.

                  classic multiplication      divide-and-conquer multiplication
    exposeq       Θ(m² n²)                    Θ(m^(lg 3) n²)
    expoDC        Θ(m² n²)                    Θ(m^(lg 3) n^(lg 3))
The divide-and-conquer idea can also be implemented iteratively.

function expoiter(a, n)
  i ← n; r ← 1; x ← a
  while i > 0 do
    if i is odd then r ← r x
    x ← x²
    i ← i ÷ 2
  return r
and

    (x mod z)^y mod z = x^y mod z.

Thus, exposeq, expoDC and expoiter can be adapted to compute a^n mod z in modu-
lar arithmetic without ever having to manipulate integers larger than max(a, z²).
For this, it suffices to reduce modulo z after each multiplication. For example,
expoiter gives rise to the following algorithm.
function expomod(a, n, z)
  {Computes a^n mod z}
  i ← n; r ← 1; x ← a mod z
  while i > 0 do
    if i is odd then r ← r x mod z
    x ← x² mod z
    i ← i ÷ 2
  return r
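In Python, the same algorithm can be written as follows; the built-in pow(a, n, z) performs the same computation and is used here only as a check.

def expomod(a, n, z):
    """Compute a**n mod z with about 2*lg(n) modular multiplications."""
    i, r, x = n, 1, a % z
    while i > 0:
        if i % 2 == 1:
            r = (r * x) % z
        x = (x * x) % z
        i //= 2
    return r

print(expomod(15, 17, 1000))     # 375
print(pow(15, 17, 1000))         # 375, using the built-in modular exponentiation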
The analysis in the previous section applies mutatis mutandis to conclude that this
algorithm needs only a number of modular multiplications in Θ(log n) to compute
a^n mod z. A more precise analysis shows that the number of modular multiplica-
tions is equal to the number of bits in the binary expansion of n, plus the number of
these bits that are equal to 1; it is thus approximately equal to 2 lg n for a typical n.
In contrast, the algorithm corresponding to exposeq requires n − 1 such multiplica-
tions for all n. For definiteness, say we wish to compute a^n mod z where a, n and
z are 200-digit numbers and that numbers of that size can be multiplied modulo
z in one millisecond. Our algorithm expomod typically computes a^n mod z in less
than one second. The algorithm corresponding to exposeq would require roughly
10^179 times the age of the Universe for the same task!
Impressive as this is, you may well wonder who needs to compute such huge
modular exponentiations in real life. It turns out that modern cryptography, the
art and science of secret communication over insecure channels, depends crucially
on this. Consider two parties, whom we shall call Alice and Bob, and assume that
Alice wishes to send some private message m to Bob over a channel susceptible
to eavesdropping. To prevent others reading the message, Alice transforms it into
a ciphertext c, which she sends to Bob. This transformation is the result of an
enciphering algorithm whose output depends not only on the message m but also
on another parameter k known as the key. Classically, this key is secret information
that has to be established between Alice and Bob before secret communication can
take place. From c and his knowledge of k, Bob can reconstruct Alice's actual
message m. Such secrecy systems rely on the hope that an eavesdropper who
intercepts c but does not know k will be unable to determine m from the available
information.
This approach to cryptography has been used with more or less success through-
out history. Its requirement that the parties must share secret information prior to
communication may be acceptable to the military and diplomats, but not to the or-
dinary citizen. In the era of the electronic super-highway, it is desirable for any two
citizens to be able to communicate privately without prior coordination. Can Alice
and Bob communicate secretly in full view of a third party if they do not share a
secret before the communication is established? The age of public-key cryptography
was launched when the thought that this may be possible came to Diffie, Hellman
and Merkle in the mid-seventies. Here, we present the amazingly simple solution
discovered a few years later by Rivest, Shamir and Adleman, which became known
as the RSA cryptographic system after the names of its inventors.
Here is how it works. Bob chooses at random two large prime numbers p and q and
multiplies them to obtain z = pq. He also chooses an integer n relatively prime to
(p − 1)(q − 1) and uses Problem 7.31 to compute the corresponding s such that
ns mod (p − 1)(q − 1) = 1. Bob makes z and n public but keeps s secret, together
with the factors p and q of z. When Alice wishes to send Bob a message, she encodes
it as an integer a between 0 and z − 1, computes the ciphertext c = a^n mod z with
expomod, and sends c to Bob over the insecure channel. To recover the message, Bob
computes c^s mod z, which can be shown to equal a.
Now consider the eavesdropper's task. Assuming she has intercepted all com-
munications between Alice and Bob, she knows z, n and c. Her purpose is to
determine Alice's message a, which is the unique number between 0 and z − 1
such that c = a^n mod z. Thus she has to compute the n-th root of c modulo z.
No efficient algorithm is known for this calculation: modular exponentiations can
be computed efficiently with expomod but it appears that the reverse process is
infeasible. The best method known today is the obvious one: factorize z into p
and q, compute φ as (p − 1)(q − 1), use Problem 7.31 to compute s from n and φ,
and compute a = c^s mod z exactly as Bob would have done. Every step in this
attack is feasible but the first: factorizing a 200-digit number is beyond the reach of
current technology. Thus Bob's advantage in deciphering messages intended for
him stems from the fact that he alone knows the factors of z, which are necessary
to compute φ and s. This knowledge does not come from his factorizing skills but
rather from the fact that he chose z's factors in the first place, and computed z from
them.
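The attack is easy to mimic on toy numbers. The Python sketch below is mine, not the book's; it uses the parameters of Problem 7.32 (p = 19, q = 23, hence z = 437, with public exponent n = 13), and trial-division factorization is of course feasible only because z is tiny. It also relies on pow(n, -1, phi) for the modular inverse, available in Python 3.8 and later.

def eavesdropper(z, n, c):
    # Step 1: factorize z by trial division (possible only for tiny z).
    p = next(d for d in range(2, z) if z % d == 0)
    q = z // p
    # Step 2: compute phi = (p-1)(q-1) and the secret exponent s = n^(-1) mod phi.
    phi = (p - 1) * (q - 1)
    s = pow(n, -1, phi)
    # Step 3: decipher exactly as Bob would.
    return pow(c, s, z)

z, n, m = 437, 13, 123
c = pow(m, n, z)                      # Alice's ciphertext
print(c, eavesdropper(z, n, c))       # the attack recovers m = 123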
At the time of writing, the safety of this cryptographic scheme has not been
established mathematically: factorizing may turn out to be easy or not even nec-
essary to break the scheme. Moreover, an efficient factorizing algorithm is known,
but it requires the availability of a quantum computer, a device whose construction
is beyond the reach of current technology; see Section 12.6. Nevertheless, the secret
7.9 Problems
Problem 7.1. Consider an algorithm whose running time t(n) on instances of
size n is such that t(n) = 3t(n/2) + g(n) when n is even and sufficiently large,
where g(n) ∈ Θ(n). This is the recurrence we encountered early in our study of
the divide-and-conquer algorithm to multiply large integers in Section 7.1, before
we had discussed how to handle operands of odd length. Recall that solving it
yields t(n) ∈ Θ(n^(lg 3) | n is a power of 2). Because t(n) = 3t(n/2) + g(n) holds for
all sufficiently large even values of n rather than merely when n is a power of 2,
however, it may be tempting to conclude that t(n) ∈ Θ(n^(lg 3) | n is even). Show that
this conclusion could be premature without more information on the behaviour of
t(n) when n is odd. On the other hand, give a simple and natural condition on
t(n) that would allow the conclusion that it is in Θ(n^(lg 3)) unconditionally.
Problem 7.3. Generalize the algorithm suggested in Problem 7.2 by showing that
the multiplication of two n-figure integers can be reduced to 2k - 1 multiplications
of integers about k times shorter, for any integer constant k. Conclude that there
exists an algorithm Aα that can multiply two n-figure integers in a time in O(n^α)
for every real number α > 1.
Problem 7.4. Use a simple argument to prove that Problem 7.3 would be impos-
sible if it required algorithm Aα to take a time in Θ(n^α).
Problem 7.5. Continuing Problem 7.3, consider the following algorithm for mul-
tiplying large integers.
function supermul(u, v)
  {We assume for simplicity that u and v are the same size}
  n ← size of u and v
  α ← 1 + (lg lg n)/lg n
  return Aα(u, v)

At first glance this algorithm seems to multiply two n-figure integers in a time in
O(n log n) since n^α = n lg n when α = 1 + (lg lg n)/lg n. Find at least two funda-
mental errors in this analysis of supermul.
Problem 7.6. If you have not yet worked out Problems 4.6, 4.7 and 4.8, now is the
time!
Problem 7.7. What happens to the efficiency of divide-and-conquer algorithms if,
instead of using a threshold to decide when to revert to the basic subalgorithm, we
recur at most r times, for some constant r, and then use the basic subalgorithm?
Problem 7.8. Let a and b be positive real constants. For each positive real num-
ber s, consider the function fs from the nonnegative reals to the nonnegative reals
defined by

    fs(x) = a x²                 if x ≤ s
            3 fs(x/2) + b x      otherwise.
Problem 7.11. Quick inspection of the iterative binary search algorithm biniter in
Section 7.3 shows what is apparently an inefficiency. Suppose T contains 17 distinct
elements and x = T[13]. On the first trip round the loop, i = 1, j = 17, and k = 9.
The comparison between x and T[9] causes the assignment i ← 10 to be executed.
On the second trip round the loop i = 10, j = 17, and k = 13. A comparison is then
made between x and T[13]. This comparison could allow us to end the search
immediately, but no test is made for equality, and so the assignment j ← 13 is
carried out. Two more trips round the loop are necessary before we leave with
i = j = 13. In contrast, algorithm Binary Search from Section 4.2.4 leaves the loop
immediately after it finds the element it is looking for.
Thus biniter systematically makes a number of trips round the loop in Θ(log n),
regardless of the position of x in T, whereas Binary Search may make only one or
two trips round the loop if x is favourably situated. On the other hand, a trip round
the loop in Binary Search takes a little longer to execute on the average than a trip
round the loop in biniter. To determine which algorithm is asymptotically better,
analyse precisely the average number of trips round the loop that each makes.
For simplicity, assume that T contains n distinct elements and that x appears in T,
occupying each possible position with equal probability. Prove the existence of a
constant c such that on the average Binary Search saves at most c trips round the
loop compared with biniter. In conclusion, which is the better algorithm when the
instance is arbitrarily large?
Problem 7.12. Let T[1.. n] be a sorted array of distinct integers, some of which
may be negative. Give an algorithm that can find an index i such that 1 ≤ i ≤ n
and T[i] = i, provided such an index exists. Your algorithm should take a time in
O(log n) in the worst case.
Problem 7.13. The use of sentinels in algorithm merge requires the availability
of an additional cell in the arrays to be merged; see Section 7.4.1. Although this
is not an issue when merge is used within mergesort, it can be a nuisance in other
applications. More importantly, our merging algorithm can fail if it is not possible
to guarantee that the sentinels are strictly greater than any possible value in the
arrays to be merged.
(a) Give an example of arrays U and V that are sorted but where the result of
merge(U, V, T) is not what it should be. What are the contents of T after this
pathological call? (You are allowed the value ∞ in arrays U and V and you
may wish to specify the values of U and V outside the bounds of the arrays.)
(b) Give a procedure for merging that does not use sentinels. Your algorithm must
work correctly in linear time provided the arrays U and V are sorted prior to
the call.
Problem 7.14. In Section 7.4.1 we saw an algorithm merge capable of merging two
sorted arrays U and V in linear time, that is, in a time in the exact order of the sum
of the lengths of U and V. Find another merging algorithm that achieves the same
goal, also in linear time, but without using an auxiliary array: the sections T[1..k]
and T[k+1..n] of an array are sorted independently, and you have to sort the
whole array T[1..n] using only a fixed amount of additional storage.
Problem 7.15. Rather than separate T[1..n] into two half-size arrays for the pur-
pose of merge sorting, we might choose to separate it into three arrays of size n ÷ 3,
(n + 1) ÷ 3 and (n + 2) ÷ 3, to sort each of these recursively, and then to merge the
three sorted arrays. Give a more formal description of this algorithm and analyse
its execution time.
Problem 7.16. Consider an array T[1..n]. As in the average-case analysis of
quicksort, assume that the elements of T are distinct and that each of the n! pos-
sible initial permutations of the elements is equally likely. Consider a call on
pivot(T[1..n], l). Prove that each of the (l − 1)! possible permutations of the el-
ements in T[1..l−1] is equally likely after the call. Prove the similar statement
concerning T[l+1..n].
Problem 7.17. Give a linear-time algorithm for implementing pivotbis from Sec-
tions 7.4.2 and 7.5. Your algorithm should scan the array only once, and no auxiliary
arrays should be used.
Problem 7.18. Prove that the selection algorithm of Section 7.5 takes linear time on
the average if we replace the first instruction in the repeat loop with "p ← T[i]".
Assume that the elements of the array are distinct and that each of the possible
initial permutations of the elements is equally likely.
Problem 7.19. Let a1, a2, ..., ak be positive real numbers whose sum is strictly
less than 1. Consider a function f from the natural numbers to the nonnegative
reals such that

    f(n) ≤ cn + f(⌊a1 n⌋) + f(⌊a2 n⌋) + ... + f(⌊ak n⌋)

for some positive c and all sufficiently large n. Prove by constructive induction
that f(n) ∈ O(n).
Would the above work in general if the ai's sum to exactly 1? Justify your answer
with an easy argument.
Problem 7.20. An array T contains n elements. You want to find the m smallest,
where m is much smaller than n. Would you
(a) sort T and pick the first m,
(b) call select (T, i) for i = 1, 2, . . ., m, or
(c) use some other method?
Justify your answer.
Problem 7.21. The array T is as in the previous problem, but now you want the
elements of rank ⌈n/2⌉, ⌈n/2⌉ + 1, ..., ⌈n/2⌉ + m − 1. Would you
(a) sort T and pick the appropriate elements,
(b) use select m times, or
(c) use some other method?
Justify your answer.
Problem 7.22. The number of additions and subtractions needed to calculate the
product of two 2 x 2 matrices using Equations 7.9 and 7.10 seems at first to be 24.
Show that this can be reduced to 15 by using auxiliary variables to avoid recalcu-
lating terms such as m1 + m2 + m4.
Problem 7.23. Assuming n is a power of 2, find the exact number of scalar addi-
tions and multiplications needed by Strassen's algorithm to multiply two n x n ma-
trices. (Use the result of Problem 7.22.) Your answer will depend on the threshold
used to stop making recursive calls. Bearing in mind what you learnt in Section 7.2,
propose a threshold that minimizes the number of scalar operations.
Problem 7.24. We say that an integer x is of (decimal) size n if 10^(n−1) ≤ x ≤ 10^n − 1.
Prove that the product of two integers of size i and j is of size at least i + j − 1
and at most i + j. Prove that this rule applies equally well in any fixed basis b ≥ 2,
when we say that an integer x is of size n if b^(n−1) ≤ x ≤ b^n − 1.
Problem 7.25. Let T(m, n) be the time spent multiplying when computing a^n
with a call on exposeq(a, n), where m is the size of a; see Section 7.7. Use Equa-
tion 7.11 with M(q, s) ∈ Θ(qs) to conclude that T(m, n) ∈ Ω(m²n²).
Problem 7.26. Consider the function N(n) given by Equation 7.12, which counts
the number of multiplications needed to compute a^n with algorithm expoDC from
Section 7.7. We saw that N(15) > N(16). Prove the existence of an infinity of in-
tegers n such that N(n) > N(n + 1). Conclude that this function is not eventually
nondecreasing.
Problem 7.27. Use Equations 7.12, 7.13 and 7.14 to prove that N1(n) ≤ N(n) ≤ N2(n)
for all n and that both N1(n) and N2(n) are nondecreasing functions.
Problem 7.28. Algorithm expoDC from Section 7.7 does not always minimize the
number of multiplications, including squarings, needed to calculate a^n. For instance, it
calculates a^15 as a(a(a a²)²)², which requires six multiplications. Show that in fact
a^15 can be calculated with as few as five multiplications. Resist the temptation to
use the formula a^15 = ((((a²)²)²)²)/a and claim that a division is just another form
of multiplication!
Problem 7.29. Let T(m, n) be given by Equation 7.15. This is the time spent
multiplying when calling expoDC (a, n), where m is the size of a; see Section 7.7.
If M(q, s) ∈ O(s q^(α−1)) for some constant α when s ≥ q, prove that
T(m, n) ∈ O(m^α n^α).
Problem 7.30. Consider any integers x, y and z such that z is positive. Prove
that
    x y mod z = [(x mod z) × (y mod z)] mod z
and
    (x mod z)^y mod z = x^y mod z.
Problem 7.31. Let u and v be two positive integers and let d be their greatest
common divisor.
(a) Prove that there exist integers a and b such that au + bv = d.
Problem 7.32. In this problem, you are invited to work out a toy example of
encipherment and decipherment using the RSA public-key cryptographic system;
see Section 7.8. Assume Bob chooses his two "large" prime numbers to be p = 19
and q = 23. He multiplies them to obtain z = 437. Next, he chooses randomly
n = 13. Compute φ = (p − 1)(q − 1) and use Problem 7.31 to find the unique s
between 1 and z − 1 such that ns mod φ = 1. Bob makes z and n public, but he
keeps s secret.
Next, suppose Alice wishes to send cleartext message m = 123 to Bob. She looks
up Bob's z = 437 and n = 13 in the public directory. Use expomod to compute the
ciphertext c = m^n mod z. Alice sends c to Bob. Use Bob's secret s to decipher
Alice's message: compute c^s mod z with expomod. Is your answer m = 123 as it
should be?
Of course, much bigger numbers would be used in real life.
Problem 7.33. Consider the matrix

    F = ( 0  1 )
        ( 1  1 ).

Let i and j be any two integers. What is the product of the vector (i, j) and the
matrix F? What happens if i and j are two consecutive numbers from the Fibonacci
sequence? Use this idea to invent a divide-and-conquer algorithm to calculate this
sequence, and analyse its efficiency (1) counting all arithmetic operations at unit
cost, and (2) counting a time in O(s q^(α−1)) to multiply integers of size q and s
when s ≥ q. Recall that the size of the n-th Fibonacci number is in Θ(n).
Problem 7.34. Represent the polynomial p(n) = a0 + a1 n + a2 n² + ... + ad n^d of
degree d by an array P[0..d] containing its coefficients. Suppose you already have
an algorithm capable of multiplying a polynomial of degree k by a polynomial of
degree 1 in a time in O(k), as well as another algorithm capable of multiplying two
polynomials of degree k in a time in O(k log k). Let n1, n2, ..., nd be integers. Give
an efficient algorithm based on divide-and-conquer to find the unique polynomial
p(n) of degree d whose coefficient of highest degree is 1, such that p(n1) = p(n2) =
... = p(nd) = 0. Analyse the efficiency of your algorithm.
Problem 7.36. Rework Problem 7.35 with the supplementary constraint that the
only comparisons allowed between the elements of T are tests of equality. You
may therefore not assume that an order relation exists between the elements.
Problem 7.37. If you could not manage Problem 7.36, try again but allow your
algorithm to take a time in 0 (n log n) in the worst case.
                      Player
              1   2   3   4   5
    Day  1    2   1   -   5   4
         2    3   5   1   -   2
         3    4   3   2   1   -
         4    5   -   4   3   1
         5    -   4   5   2   3
    (n = 5)

                      Player
              1   2   3   4   5   6
    Day  1    2   1   6   5   4   3
         2    3   5   1   6   2   4
         3    4   3   2   1   6   5
         4    5   6   4   3   1   2
         5    6   4   5   2   3   1
    (n = 6)

Figure 7.7. Timetables for five and six players
Problem 7.39. You are given the Cartesian coordinates of n points in the plane.
Give an algorithm capable of finding the closest pair of points in a time in 0 (n log n)
in the worst case.
Problem 7.41. An n-tally is a circuit that takes n bits as input and produces
1 + ⌊lg n⌋ bits as output. It counts (in binary) the number of bits equal to 1 among
the inputs. For example, if n = 9 and the inputs are 011001011, the output is
0101. An (i, j)-adder is a circuit that has one i-bit input, one j-bit input, and one
[1 + max(i, j)]-bit output. It adds its two inputs in binary. For example, if i = 3,
j = 5, and the inputs are 101 and 10111 respectively, the output is 011100. It is al-
ways possible to construct an (i, j)-adder using exactly max(i, j) 3-tallies. For this
reason the 3-tally is often called a full adder.
(a) Using full adders and (i, j)-adders as primitive elements, show how to build
an efficient n-tally.
(b) Give the recurrence, including the initial conditions, for the number of 3-tallies
needed to build your n-tally. Do not forget to count the 3-tallies that are part
of any (i, j)-adders you might have used.
(c) Using the Θ notation, give the simplest possible expression for the number of
3-tallies needed in the construction of your n-tally. Justify your answer.
Problem 7.42. A switch is a circuit with two inputs, a control, and two outputs.
It connects input A with output A and input B with output B, or input A with output
B and input B with output A, depending on the position of the control; see Fig-
ure 7.8. Use these switches to construct a network with n inputs and n outputs
able to implement any of the n! possible permutations of the inputs. The number
of switches used must be in O(n log n).
Figure 7.8. A switch connects its two inputs to its two outputs either straight through
or crossed, depending on the position of the control.
7.10 References and further reading
More on the divide-and-conquer multiplication of large integers is given in the
1981 second edition of Knuth (1969), which
includes the answer to Problems 7.2 and 7.3. See also Borodin and Munro (1975)
and Turk (1982).
The technique to determine the optimal threshold at which to use the basic sub-
algorithm rather than continuing to divide the subproblems is original to Brassard
and Bratley (1988). The solution to Problem 7.11 is also given in Brassard and Brat-
ley (1988); it provides yet another nice application of the technique of constructive
induction.
Quicksort is from Hoare (1962). Mergesort and quicksort are discussed in detail
in Knuth (1973), which is a compendium of sorting techniques. Problem 7.14 was
solved by Kronrod; see the solution to Exercise 18 of Section 5.2.4 of Knuth (1973).
The algorithm linear in the worst case for selection and for finding the median is
due to Blum, Floyd, Pratt, Rivest and Tarjan (1972).
The algorithm that multiplies two n x n matrices in a time in O(n^2.81) comes
from Strassen (1969). Subsequent efforts to do better than Strassen's algorithm
began with the proof by Hopcroft and Kerr (1971) that seven multiplications are
necessary to multiply two 2 x 2 matrices in a noncommutative structure; the first
positive success was obtained by Pan (1980), and the algorithm that is asymptoti-
cally the most efficient known at present is by Coppersmith and Winograd (1990).
The thought that secure communication over insecure channels can be achieved
without prior agreement on a secret came independently to Merkle (1978) and
Diffie and Hellman (1976). The RSA public-key cryptographic system described
in Section 7.8, invented by Rivest, Shamir and Adleman (1978), was first pub-
lished by Gardner (1977); be warned however that the challenge issued there was
successfully taken up by Atkins, Graff, Lenstra and Leyland in April 1994 after
eight months of calculation on more than 600 computers throughout the world.
The efficient algorithm capable of breaking this system on a quantum computer
is due to Shor (1994). For more information about cryptology, consult the intro-
ductory papers by Kahn (1966) and Hellman (1980) and the books by Kahn (1967),
Denning (1983), Kranakis (1986), Koblitz (1987), Brassard (1988), Simmons (1992),
Schneier (1994) and Stinson (1995). For an approach to cryptography that remains
secure regardless of the eavesdropper's computing power, consult Bennett, Bras-
sard and Ekert (1992). The natural generalization of Problem 7.28 is examined in
Knuth (1969).
The solution to Problem 7.33 can be found in Gries and Levin (1980) and Ur-
banek (1980). Problem 7.39 is solved in Bentley (1980), but consult Section 10.9 for
more on this problem. Problem 7.40 is solved in Gries (1981); see also Brassard and
Bratley (1988).
Chapter 8
Dynamic Programming
In the previous chapter we saw that it is often possible to divide an instance into
subinstances, to solve the subinstances (perhaps by dividing them further), and
then to combine the solutions of the subinstances so as to solve the original instance.
It sometimes happens that the natural way of dividing an instance suggested by
the structure of the problem leads us to consider several overlapping subinstances.
If we solve each of these independently, they will in turn create a host of identical
subinstances. If we pay no attention to this duplication, we are likely to end up
with an inefficient algorithm; if, on the other hand, we take advantage of the dupli-
cation and arrange to solve each subinstance only once, saving the solution for later
use, then a more efficient algorithm will result. The underlying idea of dynamic
programming is thus quite simple: avoid calculating the same thing twice, usually
by keeping a table of known results that fills up as subinstances are solved.
Divide-and-conquer is a top-down method. When a problem is solved by
divide-and-conquer, we immediately attack the complete instance, which we then
divide into smaller and smaller subinstances as the algorithm progresses. Dynamic
programming on the other hand is a bottom-up technique. We usually start with
the smallest, and hence the simplest, subinstances. By combining their solutions,
we obtain the answers to subinstances of increasing size, until finally we arrive at
the solution of the original instance.
We begin the chapter with two simple examples of dynamic programming that
illustrate the general technique in an uncomplicated setting. The following sections
pick up the problems of making change, which we met in Section 6.1, and of filling
a knapsack, encountered in Section 6.5.
Consider first the problem of calculating the binomial coefficient C(n, k), which
satisfies the recurrence

    C(n, k) = 1                            if k = 0 or k = n
              C(n−1, k−1) + C(n−1, k)      if 0 < k < n
              0                            otherwise.
function C(n, k)
  if k = 0 or k = n then return 1
  else return C(n − 1, k − 1) + C(n − 1, k)
If we use this function directly to calculate C(n, k), many of the values C(i, j),
i < n, j < k, are calculated over and over. For example, the algorithm calculates
C(5, 3) as the sum of C(4, 2) and C(4, 3). Both these intermediate results require
us to calculate C(3, 2). Similarly the value of C(2, 2) is used several times. Since
the final result is ultimately obtained by adding up a number of 1s, the execution
time of this algorithm is sure to be in Ω(C(n, k)). We met a similar phenomenon
before in the algorithm Fibrec for calculating the Fibonacci sequence;
see Section 2.7.5.
If, on the other hand, we use a table of intermediate results (this is of course
Pascal's triangle) we obtain a more efficient algorithm; see Figure 8.1. The table
should be filled line by line. In fact, it is not even necessary to store the entire table:
it suffices to keep a vector of length k, representing the current line, and to update
this vector from left to right. Thus to calculate C(n, k) the algorithm takes a time in
Θ(nk) and space in Θ(k), if we assume that addition is an elementary operation.
         0   1   2   3  ...  k-1   k
    0    1
    1    1   1
    2    1   2   1
    ...

Figure 8.1. Pascal's triangle
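Purely for illustration, here is one way this idea might be coded in Python, keeping only a single row of Pascal's triangle and updating it from left to right with the help of one temporary value; the function name binomial is our own choice, not the book's.

import math  # not needed here, kept only if you wish to compare with math.comb

def binomial(n, k):
    """Compute C(n, k) in time Theta(nk) and space Theta(k)."""
    if k < 0 or k > n:
        return 0
    row = [0] * (k + 1)          # row[j] will hold C(i, j) for the current line i
    row[0] = 1
    for i in range(1, n + 1):
        prev = row[0]            # C(i-1, 0), saved before anything is overwritten
        for j in range(1, min(i, k) + 1):
            # C(i, j) = C(i-1, j) + C(i-1, j-1); prev holds C(i-1, j-1)
            prev, row[j] = row[j], row[j] + prev
    return row[k]

print(binomial(5, 3))   # 10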
For a second example, consider a series of games, such as the World Series, between two teams A and B. Team A wins any given match with probability p and loses it with probability q = 1 - p, and the first team to achieve n victories wins the series. Let P(i, j) be the probability that team A will win the series given that it still needs i more victories, whereas team B still needs j more. If i and j are both positive, the next match is won by A with probability p and by B with probability q, so P(i, j) = pP(i - 1, j) + qP(i, j - 1); moreover P(0, j) = 1 and P(i, 0) = 0. This gives the following recursive function.

function P(i, j)
    if i = 0 then return 1
    else if j = 0 then return 0
    else return pP(i - 1, j) + qP(i, j - 1)
Let T(k) be the time needed in the worst case to calculate P(i, j), where k = i + j. With this method, we see that

    T(1) = c
    T(k) ≤ 2T(k - 1) + d,    k > 1

where c and d are constants. Rewriting T(k - 1) in terms of T(k - 2), and so on, we find

    T(k) ≤ 4T(k - 2) + 2d + d,    k > 2

etc.
Problem 1.42 asks the reader to show that \binom{2n}{n} ≥ 4^n/(2n + 1). Combining these results, we see that the time required to calculate P(n, n) is in O(4^n) and in Ω(4^n/n).
The method is therefore not practical for large values of n. (Although sporting
competitions with n > 4 are the exception, this problem does have other applica-
tions!)
To speed up the algorithm, we proceed more or less as with Pascal's triangle:
we declare an array of the appropriate size and then fill in the entries. This time,
however, instead of filling the array line by line, we work diagonal by diagonal.
Here is the algorithm to calculate P (n, n).
function series(n, p)
    array P[0..n, 0..n]
    q ← 1 - p
    {Fill from top left to main diagonal}
    for s ← 1 to n do
        P[0, s] ← 1; P[s, 0] ← 0
        for k ← 1 to s - 1 do
            P[k, s - k] ← pP[k - 1, s - k] + qP[k, s - k - 1]
    {Fill from below main diagonal to bottom right}
    for s ← 1 to n do
        for k ← 0 to n - s do
            P[s + k, n - k] ← pP[s + k - 1, n - k] + qP[s + k, n - k - 1]
    return P[n, n]
Since the algorithm has to fill an n × n array, and since a constant time is required to calculate each entry, its execution time is in Θ(n²). As with Pascal's triangle, it is easy to implement this algorithm so that storage space in O(n) is sufficient.
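Purely for illustration, the same algorithm might be written in Python as follows; the function name series and the use of a full (n + 1) × (n + 1) list of lists are our own choices (Problem 8.8 shows that space in O(n) would in fact suffice).

def series(n, p):
    """P[i][j] = probability that team A wins the series when it still needs
    i victories and team B still needs j; returns P[n][n]."""
    q = 1.0 - p
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    # Fill from the top left corner down to the main diagonal, one anti-diagonal at a time.
    for s in range(1, n + 1):
        P[0][s] = 1.0          # team A has already won
        P[s][0] = 0.0          # team B has already won
        for k in range(1, s):
            P[k][s - k] = p * P[k - 1][s - k] + q * P[k][s - k - 1]
    # Fill from just below the main diagonal to the bottom right corner.
    for s in range(1, n + 1):
        for k in range(0, n - s + 1):
            P[s + k][n - k] = p * P[s + k - 1][n - k] + q * P[s + k][n - k - 1]
    return P[n][n]

series(4, 0.45)   # the probability asked for in Problem 8.4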
8.2 Making change (2)
When i = 1 one of the elements to be compared falls outside the table. The same is true when j < di. It is convenient to think of such elements as having the value +∞. If i = 1 and j < d1, then both elements to be compared fall outside the table. In this case we set c[i, j] to +∞ to indicate that it is impossible to pay an amount j using only coins of type 1.
Figure 8.3 illustrates the instance given earlier, where we have to pay 8 units
with coins worth 1, 4 and 6 units. For example, c [3,8] is obtained in this case as
the smaller of c [2,8] = 2 and 1 + c [3,8 - d3] = 1 + c [3,2] = 3. The entries elsewhere
in the table are obtained similarly. The answer to this particular instance is that we
can pay 8 units using only two coins. In fact the table gives us the solution to our
problem for all the instances involving a payment of 8 units or less.
    Amount:   0  1  2  3  4  5  6  7  8
    d1 = 1    0  1  2  3  4  5  6  7  8
    d2 = 4    0  1  2  3  1  2  3  4  2
    d3 = 6    0  1  2  3  1  2  1  2  2
Figure 8.3. Making change using dynamic programming
function coins(N)
    {Gives the minimum number of coins needed to make
     change for N units. Array d[1..n] specifies the coinage:
     in the example there are coins for 1, 4 and 6 units.}
    array d[1..n] = [1, 4, 6]
    array c[1..n, 0..N]
    for i ← 1 to n do c[i, 0] ← 0
    for i ← 1 to n do
        for j ← 1 to N do
            c[i, j] ← if i = 1 and j < d[i] then +∞
                      else if i = 1 then 1 + c[1, j - d[1]]
                      else if j < d[i] then c[i - 1, j]
                      else min(c[i - 1, j], 1 + c[i, j - d[i]])
    return c[n, N]
The table also tells us which coins to hand over. Starting at c[n, N], if c[i, j] = c[i - 1, j] we move up one row without using a coin of denomination d[i], whereas if c[i, j] = 1 + c[i, j - d[i]] we hand over one coin of denomination d[i] and move d[i] columns to the left; if both equalities hold, we may choose either course of action. Continuing in this way, we eventually arrive back at c[1, 0], and now there remains nothing to pay. This stage of the algorithm is essentially a greedy algorithm that bases its decisions on the information in the table, and never has to backtrack.
Analysis of the algorithm is straightforward. To see how many coins are needed to make change for N units when n different denominations are available, the algorithm has to fill up an n × (N + 1) array, so the execution time is in Θ(nN). To see which coins should be used, the search back from c[n, N] to c[1, 0] makes n - 1 steps to the row above (corresponding to not using a coin of the current denomination) and c[n, N] steps to the left (corresponding to handing over a coin). Since each of these steps can be made in constant time, the total time required is in O(n + c[n, N]).
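As an illustration only, here is a Python version of the algorithm coins together with the walk back through the table just described; the interface (returning both the count and one optimal list of coins) is our own choice.

import math

def coins(N, d=(1, 4, 6)):
    """Minimum number of coins needed to pay N units, and one optimal multiset of coins."""
    n = len(d)
    c = [[math.inf] * (N + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        c[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, N + 1):
            best = c[i - 1][j]                            # do not use denomination d[i-1]
            if j >= d[i - 1]:
                best = min(best, 1 + c[i][j - d[i - 1]])  # hand over one coin of d[i-1]
            c[i][j] = best
    if math.isinf(c[n][N]):
        return math.inf, []       # the amount cannot be paid with these coins
    used, i, j = [], n, N         # walk back through the table to recover the coins
    while j > 0:
        if i > 1 and c[i][j] == c[i - 1][j]:
            i -= 1                           # no coin of this denomination is needed
        else:
            used.append(d[i - 1])            # hand over one coin and move left
            j -= d[i - 1]
    return c[n][N], used

print(coins(8))   # (2, [4, 4]): two coins, as in Figure 8.3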
8.3 The principle of optimality
The solution to the problem of making change obtained by dynamic programming
seems straightforward, and does not appear to hide any deep theoretical consid-
erations. However it is important to realize that it relies on a useful principle
called the principle of optimality, which in many settings appears so natural that it
is invoked almost without thinking. This principle states that in an optimal se-
quence of decisions or choices, each subsequence must also be optimal. In our
example, we took it for granted, when calculating c [i, j] as the lesser of c [i - 1, j]
and 1 + c[i, j - di], that if c[ i, j] is the optimal way of making change for j units
using coins of denominations 1 to i, then c [i - 1, j] and c [i, j - di] must also give
the optimal solutions to the instances they represent. In other words, although the
only value in the table that really interests us is c[n, N], we took it for granted that
all the other entries in the table must also represent optimal choices: and rightly
so, for in this problem the principle of optimality applies.
Although this principle may appear obvious, it does not apply to every prob-
lem we might encounter. When the principle of optimality does not apply, it will
probably not be possible to attack the problem in question using dynamic program-
ming. This is the case, for instance, when a problem concerns the optimal use of
limited resources. Here the optimal solution to an instance may not be obtained
by combining the optimal solutions to two or more subinstances, if the resources
used in these subsolutions add up to more than the total resources available.
For example, if the shortest route from Montreal to Toronto passes through
Kingston, then that part of the journey from Montreal to Kingston must also follow
the shortest possible route, as must the part of the journey from Kingston to Toronto.
Thus the principle of optimality applies. However if the fastest way to drive from
Montreal to Toronto takes us through Kingston, it does not necessarily follow that
it is best to drive as fast as possible from Montreal to Kingston, and then to drive
as fast as possible from Kingston to Toronto. If we use too much petrol on the first
half of the trip, we may have to fill up somewhere on the second half, losing more
time than we gained by driving hard. The sub-trips from Montreal to Kingston,
and from Kingston to Toronto, are not independent, since they share a resource, so
choosing an optimal solution for one sub-trip may prevent our using an optimal
solution for the other. In this situation, the principle of optimality does not apply.
For a second example, consider the problem of finding not the shortest, but the
longest simple route between two cities, using a given set of roads. A simple route is
one that never visits the same spot twice, so this condition rules out infinite routes
round and round a loop. If we know that the longest simple route from Montreal
to Toronto passes through Kingston, it does not follow that it can be obtained by
taking the longest simple route from Montreal to Kingston, and then the longest
simple route from Kingston to Toronto. It is too much to expect that when these
two simple routes are spliced together, the resulting route will also be simple. Once
again, the principle of optimality does not apply.
Nevertheless, the principle of optimality applies more often than not. When it
does, it can be restated as follows: the optimal solution to any nontrivial instance
of a problem is a combination of optimal solutions to some of its subinstances. The
difficulty in turning this principle into an algorithm is that it is not usually obvious
which subinstances are relevant to the instance under consideration. Coming back
to the example of finding the shortest route, how can we tell whether the subin-
stance consisting of finding the shortest route from Montreal to Ottawa is relevant
when we want the shortest route from Montreal to Toronto? This difficulty prevents
our using an approach similar to divide-and-conquer starting from the original in-
stance and recursively finding optimal solutions to the relevant subinstances, and
only to these. Instead, dynamic programming efficiently solves every subinstance
to figure out which ones are in fact relevant; only then are these combined into an
optimal solution to the original instance.
8.4 The knapsack problem (2)

We are given n objects and a knapsack. Object i has a positive weight wi and a positive value vi, and the knapsack can carry a weight not exceeding W. Our aim is to fill the knapsack so as to maximize \sum_{i=1}^{n} x_i v_i subject to \sum_{i=1}^{n} x_i w_i ≤ W, where vi > 0, wi > 0 and xi ∈ {0, 1} for 1 ≤ i ≤ n. Here the conditions on vi and wi are constraints on the instance; those on xi are constraints on the solution. Since
the problem closely resembles the one in Section 6.5, it is natural to enquire first
whether a slightly modified version of the greedy algorithm we used before will
still work. Suppose then that we adapt the algorithm in the obvious way, so that
it looks at the objects in order of decreasing value per unit weight. If the knapsack
is not full, the algorithm should select a complete object if possible before going on
to the next.
Unfortunately the greedy algorithm turns out not to work when xi is required
to be 0 or 1. For example, suppose we have three objects available, the first of which
weighs 6 units and has a value of 8, while the other two weigh 5 units each and
have a value of 5 each. If the knapsack can carry 10 units, then the optimal load
includes the two lighter objects for a total value of 10. The greedy algorithm, on
the other hand, would begin by choosing the object that weighs 6 units, since this
is the one with the greatest value per unit weight. However if objects cannot be
broken the algorithm will be unable to use the remaining capacity in the knapsack.
The load it produces therefore consists of just one object with a value of only 8.
To solve the problem by dynamic programming, we set up a table V[1..n, 0..W], with one row for each available object, and one column for each weight from 0 to W. In the table, V[i, j] will be the maximum value of the objects we can transport if the weight limit is j, 0 ≤ j ≤ W, and if we only include objects numbered from 1 to i, 1 ≤ i ≤ n. The solution of the instance can therefore be found in V[n, W].
The parallel with the problem of making change is close. As there, the prin-
ciple of optimality applies. We may fill in the table either row by row or column
by column. In the general situation, V[i, j] is the larger (since we are trying to maximize value) of V[i - 1, j] and V[i - 1, j - wi] + vi. The first of these choices corresponds to not adding object i to the load. The second corresponds to choosing object i, which has the effect of increasing the value of the load by vi and of reducing the capacity available by wi. Thus we fill in the entries in the table using the general rule

    V[i, j] = max(V[i - 1, j], V[i - 1, j - wi] + vi).
For the out-of-bounds entries we define V[0, j] to be 0 when j ≥ 0, and we define V[i, j] to be -∞ for all i when j < 0. The formal statement of the algorithm, which closely resembles the function coins of the previous section, is left as an exercise for the reader; see Problem 8.11.
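Problem 8.11 asks for the formal pseudocode; purely as an illustration of the rule above, a Python version might look as follows (the function name knapsack and the 0-based indexing of the lists w and v are our own choices).

def knapsack(w, v, W):
    """V[i][j] = best value using objects 1..i with weight limit j.
    Returns V[n][W] and one optimal set of object numbers, recovered as in the text."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                                   # leave object i out
            if j >= w[i - 1]:
                V[i][j] = max(V[i][j], V[i - 1][j - w[i - 1]] + v[i - 1])   # take object i
    load, j = [], W               # walk back through the table to recover one optimal load
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            load.append(i)        # object i must be part of this optimal load
            j -= w[i - 1]
    return V[n][W], sorted(load)

print(knapsack([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11))   # (40, [3, 4]), as in Figure 8.4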
Figure 8.4 gives an example of the operation of the algorithm. In the figure
there are five objects, whose weights are respectively 1, 2, 5, 6 and 7 units, and
whose values are 1, 6, 18, 22 and 28. Their values per unit weight are thus 1.00,
3.00, 3.60, 3.67 and 4.00. If we can carry a maximum of 11 units of weight, then the
table shows that we can compose a load whose value is 40.
    Weight limit:       0  1  2  3  4   5   6   7   8   9  10  11
    w1 = 1, v1 = 1      0  1  1  1  1   1   1   1   1   1   1   1
    w2 = 2, v2 = 6      0  1  6  7  7   7   7   7   7   7   7   7
    w3 = 5, v3 = 18     0  1  6  7  7  18  19  24  25  25  25  25
    w4 = 6, v4 = 22     0  1  6  7  7  18  22  24  28  29  29  40
    w5 = 7, v5 = 28     0  1  6  7  7  18  22  28  29  34  35  40

Figure 8.4. The knapsack using dynamic programming
Just as for the problem of making change, the table V allows us to recover not
only the value of the optimal load we can carry, but also its composition. In our
example, we begin by looking at V[5, 11]. Since V[5, 11] = V[4, 11] but V[5, 11] ≠ V[4, 11 - w5] + v5, an optimal load cannot include object 5. Next V[4, 11] ≠ V[3, 11] but V[4, 11] = V[3, 11 - w4] + v4, so an optimal load must include object 4. Now V[3, 5] ≠ V[2, 5] but V[3, 5] = V[2, 5 - w3] + v3, so we must include object 3. Continuing thus, we find that V[2, 0] = V[1, 0] and V[1, 0] = V[0, 0], so the optimal load includes neither object 2 nor object 1. In this instance, therefore, there is only one optimal load, consisting of objects 3 and 4.
In this example the greedy algorithm would first consider object 5, since this
has the greatest value per unit weight. The knapsack can carry one such object.
Next the greedy algorithm would consider object 4, whose value per unit weight
is next highest. This object cannot be included in the load without violating the
capacity constraint. Continuing in this way, the greedy algorithm would look at
objects 3, 2 and 1, in that order, finally ending up with a load consisting of objects
5, 2 and 1, for a total value of 35. Once again we see that the greedy algorithm does
not work when objects cannot be broken.
Analysis of the dynamic programming algorithm is straightforward, and closely parallels the analysis of the algorithm for making change. We find that a time in Θ(nW) is necessary to construct the table V, and that the composition of the optimal load can then be determined in a time in O(n + W).
8.5 Shortest paths
Let G = (N, A) be a directed graph; N is the set of nodes and A is the set of edges.
Each edge has an associated nonnegative length. We want to calculate the length
of the shortest path between each pair of nodes. Compare this to Section 6.4 where
we were looking for the length of the shortest paths from one particular node, the
source, to all the others.
As before, suppose the nodes of G are numbered from 1 to n, so N = {1, 2, ..., n}, and suppose a matrix L gives the length of each edge, with L[i, i] = 0 for i = 1, 2, ..., n, L[i, j] ≥ 0 for all i and j, and L[i, j] = ∞ if the edge (i, j) does not exist.
The principle of optimality applies: if k is a node on the shortest path from i
to j, then the part of the path from i to k, and the part from k to j, must also be
optimal.
We construct a matrix D that gives the length of the shortest path between
each pair of nodes. The algorithm initializes D to L, that is, to the direct distances
between nodes. It then does n iterations. After iteration k, D gives the length of
the shortest paths that only use nodes in {1, 2, . . ., k} as intermediate nodes. After
n iterations, D therefore gives the length of the shortest paths using any of the
nodes in N as an intermediate node, which is the result we want. At iteration k,
the algorithm must check for each pair of nodes (i, j) whether or not there exists
a path from i to j passing through node k that is better than the present optimal
path passing only through nodes in {1, 2, ..., k - 1}. If Dk represents the matrix D after the k-th iteration (so D0 = L), the necessary check can be implemented by

    Dk[i, j] = min(Dk-1[i, j], Dk-1[i, k] + Dk-1[k, j]),
where we use the principle of optimality to compute the length of the shortest path
from i to j passing through k. We have also tacitly used the fact that an optimal
path through k does not visit k twice.
At the k-th iteration the values in the k-th row and the k-th column of D do
not change, since D[k, k] is always zero. It is therefore not necessary to protect
these values when updating D. This allows us to get away with using only a single
n x n matrix D, whereas at first sight it might seem necessary to use two such
matrices, one containing the values of Dk-1 and the other the values of Dk, or even an n × n × n matrix.
The algorithm, known as Floyd's algorithm, follows.
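By way of illustration only, here is a minimal Python rendering of this idea (not the book's own pseudocode); the function name floyd and the use of math.inf for a missing edge are our own choices.

import math

def floyd(L):
    """All-pairs shortest path lengths.  L is an n x n matrix of direct distances,
    with L[i][i] = 0 and math.inf where there is no edge.  Returns the matrix D."""
    n = len(L)
    D = [row[:] for row in L]          # D starts as a copy of L
    for k in range(n):                 # allow node k as an intermediate node
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = math.inf
L = [[0, 5, INF, INF],
     [50, 0, 15, 5],
     [30, INF, 0, 15],
     [15, INF, 5, 0]]
floyd(L)[0][2]    # 15, as in the matrix D4 shown below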
    D0 = L =   0    5    ∞    ∞
              50    0   15    5
              30    ∞    0   15
              15    ∞    5    0

    D1 =   0    5    ∞    ∞          D2 =   0    5   20   10
          50    0   15    5                50    0   15    5
          30   35    0   15                30   35    0   15
          15   20    5    0                15   20    5    0

    D3 =   0    5   20   10          D4 =   0    5   15   10
          45    0   15    5                20    0   10    5
          30   35    0   15                30   35    0   15
          15   20    5    0                15   20    5    0
It is obvious that this algorithm takes a time in Θ(n³). We can also use Dijkstra's
algorithm to solve the same problem; see Section 6.4. In this case we have to apply
the algorithm n times, each time choosing a different node as the source. If we
use the version of Dijkstra's algorithm that works with a matrix of distances, the total computation time is in n × Θ(n²), that is, in Θ(n³). The order is the same as
for Floyd's algorithm, but the simplicity of the latter means that it will probably
have a smaller hidden constant and thus be faster in practice. Compilers are good
at optimizing for-loops, too. On the other hand, if we use the version of Dijkstra's
algorithm that works with a heap, and hence with lists of the distances to adjacent
nodes, the total time is in n × Θ((a + n) log n), that is, in Θ((an + n²) log n), where a is the number of edges in the graph. If the graph is sparse (a ≪ n²), it may be preferable to use Dijkstra's algorithm n times; if the graph is dense (a ≈ n²), it is
better to use Floyd's algorithm.
We usually want to know where the shortest path goes, not just its length. In this case we use a second matrix P, all of whose elements are initialized to 0. The innermost loop of the algorithm then records in P[i, j] the latest node k that improved D[i, j]: whenever D[i, k] + D[k, j] < D[i, j], it sets D[i, j] to D[i, k] + D[k, j] and P[i, j] to k. When the algorithm stops, P[i, j] = 0 means that the shortest path from i to j is the edge (i, j) itself; otherwise the path passes through node k = P[i, j], and it can be reconstructed by looking recursively at the paths from i to k and from k to j.

8.6 Chained matrix multiplication

Recall that the product C = AB of a p × q matrix A and a q × r matrix B can be obtained by the classic algorithm
    for i ← 1 to p do
        for j ← 1 to r do
            C[i, j] ← 0
            for k ← 1 to q do
                C[i, j] ← C[i, j] + A[i, k]B[k, j]
from which it is clear that a total of pqr scalar multiplications are required to
calculate the matrix product using this algorithm. (In this section we shall not
consider the possibility of using a better matrix multiplication algorithm, such as
Strassen's algorithm, described in Section 7.6.)
Suppose now we want to calculate the product of more than two matrices.
Matrix multiplication is associative, so we can compute the matrix product
    M = M1M2 ⋯ Mn

in a number of ways that all give the same answer but that can differ greatly in cost. Consider for instance the product ABCD of four matrices whose dimensions are 13 × 5, 5 × 89, 89 × 3 and 3 × 34 respectively. There are five essentially different ways to parenthesize this product; we do not count as different the method that first calculates AB and then CD and the one that starts with CD and then calculates AB, since they both require
the same number of multiplications. For each of these five methods, here is the
corresponding number of scalar multiplications:
    ((AB)C)D      10582
    (AB)(CD)      54201
    (A(BC))D       2856
    A((BC)D)       4055
    A(B(CD))      26418
The most efficient method is almost 19 times faster than the slowest.
To find directly the best way to calculate the product, we could simply paren-
thesize the expression in every possible fashion and count each time how many
scalar multiplications are required. Let T(n) be the number of essentially different
ways to parenthesize a product of n matrices. Suppose we decide to make the first
cut between the i-th and the (i + 1)-st matrices of the product, thus:

    M = (M1M2 ⋯ Mi)(Mi+1Mi+2 ⋯ Mn).

There are T(i) ways to parenthesize the left-hand factor and T(n - i) ways to parenthesize the right-hand factor, and any combination of the two is possible, so

    T(n) = \sum_{i=1}^{n-1} T(i) T(n - i).
Adding the obvious initial condition T (1) = 1, we can use the recurrence to calculate
any required value of T. The following table gives some values of T(n).
n 1 2 3 4 5 10 15
T(n) 1 1 2 5 14 4862 2674440
The values of T(n) are called the Catalannumbers.
For each way that parentheses can be inserted in the expression for M, it takes a time in Ω(n) to count the number of scalar multiplications required (at least, if we do not try to be subtle). Since T(n) is in Ω(4^n/n²) (combine the results of Problems 8.24 and 1.42), finding the best way to calculate M using the direct approach requires a time in Ω(4^n/n). This method is therefore impracticable for
large values of n: there are too many ways in which parentheses can be inserted
for us to look at them all.
A little experimenting shows that none of the obvious greedy algorithms will
allow us to compute matrix products in an optimal way; see Problem 8.20. Fortu-
nately, the principle of optimality applies to this problem. For instance, if the best
way of multiplying all the matrices requires us to make the first cut between the i-th
and the (i + 1)-st matrices of the product, then both the subproducts MM 2 ...Mi
and Mi+lMi+2... M. must also be calculated in an optimal way. This suggests
that we should build a table m[1..n, 1..n] in which mij gives the optimal number of scalar multiplications needed to calculate MiMi+1 ⋯ Mj; the answer to the original problem is then m1n. We fill the table diagonal by diagonal: diagonal s contains the entries mij with j - i = s. We have mii = 0, and for s > 0

    m_{i,i+s} = \min_{i ≤ k < i+s} (m_{i,k} + m_{k+1,i+s} + d_{i-1} d_k d_{i+s}),

where matrix Mi is of dimension d_{i-1} × d_i. For the example above, this gives the following table.

    j =      1      2      3      4
    i = 1    0   5785   1530   2856
        2           0   1335   1845
        3                  0   9078
        4                         0
Once again, we usually want to know not just the number of scalar multiplications
necessary to compute the product M, but also how to perform this computation
efficiently. As in Section 8.5, we do this by adding a second array to keep track of the
choices we have made. Let this new array be bestk. Now when we compute mij,
we save in bestk [i, j] the value of k that corresponds to the minimum term among
those compared. When the algorithm stops, bestk [1, n] tells us where to make the
first cut in the product. Proceeding recursively on both the terms thus produced,
we can reconstruct the optimal way of parenthesizing M. Problems 8.21 and 8.22
invite you to fill in the details.
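To make the construction concrete, here is a Python sketch (ours, not the book's pseudocode) that fills m and bestk diagonal by diagonal and then reads off an optimal parenthesization; the dimensions used in the example above are passed as the list d.

import math

def chained_matrix_order(d):
    """d[0..n]: matrix M_i is d[i-1] x d[i].  Returns (m, bestk) with 1-based indexing,
    where m[i][j] is the minimum number of scalar multiplications for M_i ... M_j."""
    n = len(d) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    bestk = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n):                    # diagonal s: products of s + 1 matrices
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = math.inf
            for k in range(i, j):            # try cutting between M_k and M_{k+1}
                cost = m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < m[i][j]:
                    m[i][j], bestk[i][j] = cost, k
    return m, bestk

def parenthesize(bestk, i, j):
    """Reconstruct one optimal parenthesization of M_i ... M_j as a string."""
    if i == j:
        return "M%d" % i
    k = bestk[i][j]
    return "(%s%s)" % (parenthesize(bestk, i, k), parenthesize(bestk, k + 1, j))

m, bestk = chained_matrix_order([13, 5, 89, 3, 34])
m[1][4]                        # 2856
parenthesize(bestk, 1, 4)      # '((M1(M2M3))M4)'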
For s > 0 there are n - s elements to be computed in the diagonal s; for each of these we must choose among s possibilities, given by the different values of k. The execution time of the algorithm is therefore in the exact order of

    \sum_{s=1}^{n-1} (n - s)s = n \sum_{s=1}^{n-1} s - \sum_{s=1}^{n-1} s² = n²(n - 1)/2 - n(n - 1)(2n - 1)/6 = (n³ - n)/6,

where we used Propositions 1.7.14 and 1.7.15 to evaluate the sums. The execution time of the algorithm is thus in Θ(n³), although better algorithms exist.
Consider a function fm such that fm(i, j) = mij for 1 ≤ i ≤ j ≤ n, but that is calculated recursively, unlike the table m, which we calculated bottom-up. Writing such a function is simple: all we have to do is to program the rules for calculating m.
function fm(i, j)
    if i = j then {only one matrix is involved}
        return 0
    m ← ∞
    for k ← i to j - 1 do
        m ← min(m, fm(i, k) + fm(k + 1, j) + d[i - 1]d[k]d[j])
    return m
Here the global array d[0..n] gives the dimensions of the matrices involved, exactly as before. For all the relevant values of k the intervals [i..k] and [k + 1..j] concerned in the recursive calls involve fewer matrices than [i..j]. However each recursive call still involves at least one matrix (provided of course i ≤ j on the original call). Eventually therefore the recursion will stop. To find how many scalar multiplications are needed to calculate M = M1M2 ⋯ Mn, we simply call fm(1, n).
To analyse this algorithm, let T(s) be the time required to execute a call of fm(i, i + s), where s is the number of matrix multiplications involved in the corresponding product. This is the same s used previously to number the diagonals of the table m. Clearly T(0) = c for some constant c. When s > 0, we have to choose the smallest among s terms, each of the form fm(i, k) + fm(k + 1, i + s) + d[i - 1]d[k]d[i + s]. Taking account of the recursive calls, the time required is

    T(s) = sd + 2 \sum_{m=0}^{s-1} T(m)

for an appropriate constant d.
Since this implies that T(s) ≥ 2T(s - 1), we see immediately that T(s) ≥ 2^s T(0), so the algorithm certainly takes a time in Ω(2^n) to find the best way to multiply n matrices. It therefore cannot be competitive with the algorithm using dynamic programming, which takes a time in Θ(n³).
To obtain an upper bound, we try to prove by constructive induction that T(s) ≤ a3^s for some constant a. Using the induction hypothesis T(m) ≤ a3^m for all m < s, the recurrence gives

    T(s) ≤ sd + 2 \sum_{m=0}^{s-1} a3^m = sd + a3^s - a,

where we used Proposition 1.7.10 to compute the sum. Unfortunately, this does not
allow us to conclude that T(s) ≤ a3^s. However, if we adopt the tactic recommended in Section 1.6.4 and strengthen the induction hypothesis, things work better. As the strengthened induction hypothesis, suppose T(m) ≤ a3^m - b for m < s, where b is a new unknown constant. Now when we substitute into the recurrence we obtain

    T(s) ≤ sd + 2 \sum_{m=0}^{s-1} (a3^m - b) = s(d - 2b) + a3^s - a.

This is sufficient to ensure that T(s) ≤ a3^s - b provided b ≥ d/2 and a ≥ b. To start the induction, we require that T(0) ≤ a - b, which is satisfied provided a ≥ T(0) + b. Summing up, we have proved that T(s) ≤ a3^s - b for all s provided b ≥ d/2 and a ≥ T(0) + b. The time taken by the recursive algorithm to find the best way of computing a product of n matrices is therefore in O(3^n).
We conclude that a call on the recursive function fm(1, n) is faster than naively trying all possible ways to parenthesize the desired product, which, as we saw, takes a time in Ω(4^n/n). However it is slower than the dynamic programming algorithm described previously. This illustrates a point made earlier in this chapter. To decide the best way to parenthesize the product ABCDEFG, say, fm recursively solves 12 subinstances, including the overlapping ABCDEF and BCDEFG, both of which recursively solve BCDEF from scratch. It is this duplication of effort that makes fm inefficient.
8.8 Memory functions

If we want to keep the advantages of the recursive formulation and still avoid calculating the same value twice, we can use a memory function: we add a table mtab[1..n, 1..n], all of whose entries are initialized to a value (here 0) that cannot be mistaken for a genuine result, and we consult this table before launching any recursive computation.

function fm-mem(i, j)
    if i = j then return 0
    if mtab[i, j] > 0 then return mtab[i, j]
    m ← ∞
    for k ← i to j - 1 do
        m ← min(m, fm-mem(i, k) + fm-mem(k + 1, j)
                    + d[i - 1]d[k]d[j])
    mtab[i, j] ← m
    return m
As pointed out in Section 8.7, this function may be speeded up by avoiding the
recursive calls if d[i - 1]d[k]d[j] is already larger than the previous value of m.
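As an illustration of the memory-function idea in Python, a dictionary can play the role of the table mtab, so that no initialization pass is needed at all; the function name fm_mem is our own.

import math

def fm_mem(i, j, d, mtab=None):
    """Memory-function version of fm: each value m_{ij} is computed at most once."""
    if mtab is None:
        mtab = {}
    if i == j:
        return 0
    if (i, j) in mtab:                       # already known: just look it up
        return mtab[(i, j)]
    m = math.inf
    for k in range(i, j):
        m = min(m, fm_mem(i, k, d, mtab) + fm_mem(k + 1, j, d, mtab)
                   + d[i - 1] * d[k] * d[j])
    mtab[(i, j)] = m
    return m

fm_mem(1, 4, [13, 5, 89, 3, 34])   # 2856, the same value as the dynamic programming table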
We sometimes have to pay a price for using this technique. We saw in Section 8.1.1, for instance, that we can calculate a binomial coefficient \binom{n}{k} using a time in Θ(nk) and space in Θ(k). Implemented using a memory function, the calculation takes the same amount of time but needs space in Ω(nk); see Problem 8.26.
If we use a little more space-the space needed is only multiplied by a constant
factor-we can avoid the initialization time needed to set all the entries of the table
to some special value. This can be done using virtual initialization, described in
Section 5.1. This is particularly desirable when only a few values of the function
are to be calculated, but we do not know in advance which ones. For an example,
see Section 9.1.
8.9 Problems
Problem 8.1. Prove that the total number of recursive calls made during the computation of C(n, k) using the algorithm of Section 8.1.1 is exactly 2\binom{n}{k} - 2.
Problem 8.2. Calculating the Fibonacci sequence affords another example of the
kind of technique introduced in Section 8.1. Which algorithm in Section 2.7.5 uses
dynamic programming?
Problem 8.3. Prove that the time needed to calculate P(n, n) using the function P of Section 8.1.2 is in Ω(4^n/n).
Problem 8.4. Using the algorithm series of Section 8.1.2, calculate the probability
that team A will win the series if p = 0.45 and if four victories are needed to win.
Problem 8.5. Repeat the previous problem with p = 0.55. What should be the
relation between the answers to the two problems?
Problem 8.6. As in Problem 8.4, calculate the probability that team A will win
the series if p = 0.45 and if four victories are needed to win. This time, however,
calculate the required probability directly as the probability that team A will win
4 or more out of a series of 7 games. (Playing extra games after team A have won
the series cannot change the result.)
Problem 8.7. Adapt algorithm series of Section 8.1.2 to the case where team A
win any given match with probability p and lose it with probability q, but there
is also a probability r that the match is tied, so it counts as a win for nobody. As-
sume that n victories are still required to win the series. Of course we must have
p+q+r=1.
Problem 8.8. Show that storage space in O(n) is sufficient to implement the algo-
rithm series of Section 8.1.2.
Problem 8.9. Adapt the algorithm coins of Section 8.2 so it will work correctly
even when the number of coins of a particular denomination is limited.
Problem 8.10. Rework the example illustrated in Figure 8.4, but renumbering
the objects in the opposite order (so w1 = 7, v1 = 28, ..., w5 = 1, v5 = 1). Which
elements of the table should remain unchanged?
Problem 8.11. Write out the algorithm for filling the table V as described in Sec-
tion 8.4.
Problem 8.12. When j < wi in the algorithm for filling the table V described in Section 8.4, we take V[i - 1, j - wi] to be -∞. Can the finished table contain entries that are -∞? If so, what do they indicate? If not, why not?
Problem 8.13. There may be more than one optimal solution to an instance of the
knapsack problem. Using the table V described in Section 8.4, can you find all
possible optimal solutions to an instance, or only one? If so, how? If not, why not?
Problem 8.14. An instance of the knapsack problem described in Section 8.4 may
have several different optimal solutions. How would you discover this? Does the
table V allow you to recover more than one solution in this case?
Problem 8.15. In Section 8.4 we assumed that we had available n objects num-
bered 1 to n. Suppose instead that we have n types of object available, with an
adequate supply of each type. Formally, this simply replaces the constraint that
xi must be 0 or 1 by the looser constraint that xi must be a nonnegative integer.
Adapt the dynamic programming algorithm of Section 8.4 so it will handle this
new problem.
Problem 8.16. Adapt your algorithm of Problem 8.15 so it will work even when
the number of objects of a given type is limited.
Problem 8.17. Does Floyd's algorithm (see Section 8.5) work on a graph that has
some edges whose lengths are negative, but that does not include a negative cycle?
Justify your answer.
Problem 8.18. (Warshall's algorithm) As for Floyd's algorithm (see Section 8.5)
we are concerned with finding paths in a graph. In this case, however, the length
of the edges is of no interest; only their existence is important. Let the matrix L be
such that L[i, j] = true if the edge (i, J) exists, and L [i, j] = false otherwise. We want
to find a matrix D such that D [i, j] = true if there exists at least one path from i to j,
and D [i, j]=false otherwise. Adapt Floyd's algorithm for this slightly different
case.
Note: We are looking for the reflexive transitive closure of the graph in question.
Problem 8.19. Find a significantly better algorithm for the preceding problem in
the case when the matrix L is symmetric, that is, when L[i, j]= L[j, i].
Problem 8.20. We (vainly) hope to find a greedy algorithm for the chained matrix
multiplication problem; see Section 8.6. Suppose we are to calculate the product

    M = M1M2 ⋯ Mn,

where matrix Mi is d_{i-1} × d_i, 1 ≤ i ≤ n. For each of the following suggested tech-
niques, provide a counterexample where the technique does not work.
(a) First multiply the matrices Mi and Mi+1 whose common dimension di is smallest, and continue in the same way.
(b) First multiply the matrices Mi and Mi+1 whose common dimension di is largest, and continue in the same way.
(c) First multiply the matrices Mi and Mi+1 that minimize the product d_{i-1}d_id_{i+1}, and continue in the same way.
(d) First multiply the matrices Mi and Mi+1 that maximize the product d_{i-1}d_id_{i+1}, and continue in the same way.
Problem 8.21. Write out in detail the algorithm for calculating the values of mij
described in Section 8.6.
Problem 8.22. Adapt your algorithm for the previous problem so that not only
does it calculate mij, but it also says how the matrix product should be calculated
to achieve the optimal value of m1n.
Problem 8.23. What is wrong with the following simple argument? "The algo-
rithm for calculating the values of m given in Section 8.6 has essentially to fill in
the entries in just over half of an n × n table. Its execution time is thus clearly in O(n²)."
Problem 8.24. Let T(n) be a Catalan number; see Section 8.6. Prove that

    T(n) = \frac{1}{n} \binom{2n-2}{n-1}.
Problem 8.25. Prove that the number of ways to cut an n-sided convex polygon
into n - 2 triangles using diagonal lines that do not cross is T (n - 1), the (n - 1)-st
Catalan number; see Section 8.6. For example, a hexagon can be cut in 14 different
ways, as shown in Figure 8.7.
Figure 8.7. Cutting a hexagon into triangles
Problem 8.26. Show how to calculate (i) a binomial coefficient, and (ii) the function
series(n, p) of Section 8.1.2 using a memory function.
Problem 8.27. Show how to solve (i) the problem of making change, and (ii) the
knapsack problem of Section 8.4 using a memory function.
Problem 8.28. Consider the alphabet Σ = {a, b, c}. The elements of Σ have the following multiplication table, where the rows show the left-hand symbol and the
columns show the right-hand symbol.
        a   b   c
    a   b   b   a
    b   c   b   a
    c   a   c   c
Thus ab = b, ba = c, and so on. Note that the multiplication defined by this table
is neither commutative nor associative.
Find an efficient algorithm that examines a string x = x1x2 ⋯ xn of characters of Σ and decides whether or not it is possible to parenthesize x in such a way that the value of the resulting expression is a. For instance, if x = bbbba, your algorithm
should return "yes" because (b(bb)) (ba)= a. This expression is not unique. For
example, (b (b (b (ba))))= a as well. In terms of n, the length of the string x, how
much time does your algorithm take?
Problem 8.29. Modify your algorithm from the previous problem so it returns the
number of different ways of parenthesizing x to obtain a.
Problem 8.31. You have n objects that you wish to put in order using the relations "<" and "=". For example, with three objects 13 different orderings are possible.

    a = b = c    a = b < c    a < b = c    a < b < c    a < c < b
    a = c < b    b < a = c    b < a < c    b < c < a    b = c < a
    c < a = b    c < a < b    c < b < a

Give a dynamic programming algorithm that can calculate, as a function of n, the number of different possible orderings. Your algorithm should take a time in O(n²) and space in O(n).
Problem 8.32. There are n trading posts along a river. At any of the posts you can
rent a canoe to be returned at any other post downstream. (It is next to impossible to
paddle against the current.) For each possible departure point i and each possible
arrival point j the cost of a rental from i to j is known. However, it can happen
that the cost of renting from i to j is higher than the total cost of a series of shorter
rentals. In this case you can return the first canoe at some post k between i and j
and continue your journey in a second canoe. There is no extra charge for changing
canoes in this way.
Give an efficient algorithm to determine the minimum cost of a trip by canoe from
each possible departure point i to each possible arrival point j. In terms of n, how
much time is needed by your algorithm?
Problem 8.33. When we discussed binary search trees in Section 5.5, we men-
tioned that it is a good idea to keep them balanced. This is true provided all the
nodes are equally likely to be accessed. If some nodes are more often accessed
than others, however, an unbalanced tree may give better average performance.
For example, the tree shown in Figure 8.8 is better than the one in Figure 5.9 if we
are interested in minimizing the average number of comparisons with the tree and
if the nodes are accessed with the following probabilities.
Node 6 12 18 20 27 34 35
Probability 0.2 0.25 0.05 0.1 0.05 0.3 0.05
More generally, suppose we have an ordered set c1 < c2 < ⋯ < cn of n distinct keys. The probability that a request refers to key ci is pi, 1 ≤ i ≤ n. Suppose for simplicity that every request refers to a key in the search tree, so \sum_{i=1}^{n} pi = 1. Recall
that the depth of the root of a tree is 0, the depth of its children is 1, and so on.
If key ci is held in a node at depth di, then di + 1 comparisons are necessary to
find it. For a given tree the average number of comparisons needed is thus
    C = \sum_{i=1}^{n} pi(di + 1).
For example, the average number of comparisons needed with the tree in Figure 8.8
is
    0.3 + (0.25 + 0.05) × 2 + (0.2 + 0.1) × 3 + (0.05 + 0.05) × 4 = 2.2.
(a) Compute the average number of comparisons needed with the tree in Figure 5.9
and verify that the tree in Figure 8.8 is better.
(b) The tree in Figure 8.8 was obtained from the given probabilities using a simple
algorithm. Can you guess what this is?
(c) Find yet another search tree for the same set of keys that is even more efficient
on the average than Figure 8.8. What conclusion about algorithm design does
this reinforce ?
Problem 8.34. Continuing Problem 8.33, design a dynamic programming algo-
rithm to find an optimal binary search tree for a set of keys with given probabilities
of access. How much time does your algorithm take as a function of the number
of keys? Apply your algorithm to the instance given in Problem 8.33.
Hint: In any search tree where the nodes holding keys ci, ci+1, ..., cj form a subtree, let Cij be the minimum average number of accesses made to these nodes. In particular, C1n is the average number of accesses caused by a query to an optimal binary search tree, and Cii = pi for each i, 1 ≤ i ≤ n. Invoke the principle of optimality to argue that

    Cij = \min_{i ≤ k ≤ j} (C_{i,k-1} + C_{k+1,j}) + \sum_{k=i}^{j} pk

when i < j. Give a dynamic programming algorithm to compute Cij for all 1 ≤ i ≤ j ≤ n, and an algorithm to find the optimal binary search tree from the Cij's.
Problem 8.35. Solve Problem 8.34 again. This time your algorithm to compute an
optimal binary search tree for a set of n keys must run in a time in 0(n 2 ).
Hint: First prove that r_{i,j-1} ≤ r_{ij} ≤ r_{i+1,j} for every 1 ≤ i < j ≤ n, where r_{ij} is the root of an optimal search subtree containing ci, ci+1, ..., cj for 1 ≤ i ≤ j ≤ n (ties are broken arbitrarily) and r_{i,i-1} = i, 1 ≤ i ≤ n.
Problem 8.36. As a function of n, how many binary search trees are there for n
distinct keys?
Problem 8.37. Recall that Ackermann's function A(m, n), defined in Problem 5.38,
grows extremely rapidly. Give a dynamic programming algorithm to calculate it.
Your algorithm must consist simply of two nested loops and recursion is not al-
lowed. Moreover, you are restricted to using a space in O(mn) to calculate A(m, n).
However you may suppose that a word of storage can hold an arbitrarily large
integer.
Hint: Use two arrays value[0..m] and index[0..m] and make sure that value[i] = A(i, index[i]) at the end of each trip round the inner loop.
8.10 References and further reading

The algorithm in Section 8.5 for calculating all shortest paths is due to
Floyd (1962). A theoretically more efficient algorithm is known: Fredman (1976)
shows how to solve the problem in a time in 0 (n3 log log n/log n ). The solution
to Problem 8.18 is supplied by the algorithm in Warshall (1962). Both Floyd's and
Warshall's algorithms are essentially the same as the earlier one in Kleene (1956)
to determine the regular expression corresponding to a given finite automaton;
see Hopcroft and Ullman (1979). All these algorithms with the exception of Fred-
man's are unified in Tarjan (1981).
The algorithm in Section 8.6 for chained matrix multiplication is described in
Godbole (1973); a more efficient algorithm, able to solve the problem in a time in O(n log n), can be found in Hu and Shing (1982, 1984). Catalan numbers are
discussed in many places, including Sloane (1973) and Purdom and Brown (1985).
Memory functions are introduced in Michie (1968); for further details see
Marsh (1970).
Problem 8.25 is discussed in Sloane (1973). A solution to Problem 8.30 is given
in Wagner and Fischer (1974). Problem 8.31 suggested itself to the authors while
grading an exam including a question resembling Problem 3.21: we were curious
to know what proportion of all the possible answers was represented by the 69
different answers suggested by the students!
Problem 8.34 on the construction of optimal binary search trees comes from
Gilbert and Moore (1959), where it is extended to the possibility that the requested
key may not be in the tree. The improvement considered in Problem 8.35 comes
from Knuth (1971, 1973) but a simpler and more general solution is given by
Yao (1980), who also gives a sufficient condition for certain dynamic program-
ming algorithms that run in cubic time to be transformable automatically into
quadratic-time algorithms. The optimal search tree for the 31 most common words
in English is compared in Knuth (1973) with the tree obtained using the obvious
greedy algorithm suggested in Problem 8.33(b).
Important dynamic programming algorithms we have not mentioned include
the one in Kasami (1965) and Younger (1967) that takes cubic time to carry out the
systematic analysis of any context-free language (see Hopcroft and Ullman 1979)
and the one in Held and Karp (1962) that solves the travelling salesperson problem
(see Sections 12.5.2 and 13.1.2) in a time in O(n²2^n), much better than the time in Ω(n!) required by the naive algorithm.
Chapter 9
Exploring graphs
A great many problems can be formulated in terms of graphs. We have seen, for
instance, the shortest route problem and the problem of the minimum spanning
tree. To solve such problems, we often need to look at all the nodes, or all the edges,
of a graph. Sometimes the structure of the problem is such that we need only visit
some of the nodes, or some of the edges. Up to now the algorithms we have seen
have implicitly imposed an order on these visits: it was a case of visiting the nearest
node, or the shortest edge, and so on. In this chapter we introduce some general
techniques that can be used when no particular order of visits is required.
can take them and win; if I take two, the same thing happens; but if I take just one,
then he will have four matches in front of him of which he can only take one or two.
In this case he doesn't win at once, so it is certainly my best move." By looking just
one move ahead in this simple situation the player can determine what to do next.
In a more complicated example, he may have several possible moves. To choose
the best it may be necessary to consider not just the situation after his own move,
but to look further ahead to see how his opponent can counter each possible move.
And then he might have to think about his own move after each possible counter,
and so on.
To formalize this process of looking ahead, we represent the game by a directed
graph. Each node in the graph corresponds to a position in the game, and each
edge corresponds to a move from one position to another. (In some contexts, for
example in the reports of chess games, a move consists of an action by one player
together with the reply from his opponent. In such contexts the term half-move is
used to denote the action by just one player. In this book we stick to the simpler
terminology and call each player's action a move.) A position in the game is not
specified merely by the number of matches that remain on the table. It is also
necessary to know the upper limit on the number of matches that may be taken on
the next move. However it is not necessary to know whose turn it is to play, since
the rules are the same for both players (unlike games such as 'fox and geese', where
the players have different aims and forces). The nodes of the graph corresponding
to this game are therefore pairs (i, j). In general, (i, j), 1 ≤ j ≤ i, indicates that i matches remain on the table, and that any number of them between 1 and j may be taken on the next move. The edges leaving this position, that is, the moves that can be made, go to the j nodes (i - k, min(2k, i - k)), 1 ≤ k ≤ j. The node corresponding to the initial position in a game with n matches is (n, n - 1), n ≥ 2.
The position (0, 0) loses the game: if a player is in this position when it is his turn
to move, his opponent has just taken the last match and won.
Figure 9.1 shows part of the graph corresponding to this game. In fact, it is the
part of the graph needed by the player in the example above who faces a heap of
five matches of which he may take four: this is the position (5,4). No positions of
the form (i, 0) appear except for the losing position (0, 0). Such positions cannot
be reached in the course of a game, so they are of no interest. Similarly nodes
(i, j) with j odd and j < i - 1 cannot be reached from any initial position, so they
too are omitted. As we explain in a moment, the square nodes represent losing
positions and the round nodes are winning positions. The heavy edges correspond
to winning moves: in a winning position, choose one of the heavy edges to win.
There are no heavy edges leaving a losing position, corresponding to the fact that
such positions offer no winning move. We observe that the player who must move
first in a game with two, three or five matches has no winning strategy, whereas he
does have such a strategy in the game with four matches.
To decide which are the winning positions and which the losing positions, we
start at the losing position (0,0 ) and work back. This node has no successor, and
a player who finds himself in this position loses the game. In any of the nodes
(1,1), (2,2) or (3,3), a player can make a move that puts his opponent in the
losing position. These three nodes are therefore winning nodes. From (2,1) the
only possible move is to (1,1). In position (2,1) a player is therefore bound to put
his opponent in a winning position, so (2, 1) itself is a losing position. A similar
argument applies to the losing position (3,2). Two moves are possible, but they
both leave the opponent in a winning position, so (3, 2) itself is a losing position.
From either (4,2) or (4,3) there is a move available that puts the opponent in
a losing position, namely (3,2); hence both these nodes are winning positions.
Finally the four possible moves from (5,4) all leave the opponent in a winning
position, so (5,4) is a losing position.
On a larger graph this process of labelling winning and losing positions can
be continued backwards as required. The rules we have been applying can be
summed up as follows: a position is a winning position if at least one of its suc-
cessors is a losing position, for then the player can move to put his opponent in
a losing position; a position is a losing position if all its successors are winning
positions, for then the player cannot avoid leaving his opponent in a winning posi-
tion. The following algorithm therefore determines whether a position is winning
or losing.
function recwin(i, j)
    {Returns true if and only if node (i, j) is winning;
     we assume 0 ≤ j ≤ i}
    for k ← 1 to j do
        if not recwin(i - k, min(2k, i - k))
            then return true
    return false
This algorithm suffers from the same defect as algorithm Fibrec in Section 2.7.5: it
calculates the same value over and over. For instance, recwin(5,4) returns false,
having called successively recwin(4, 2), recwin(3, 3), recwin(2, 2) and recwin(1, 1),
but recwin(3, 3) too calls recwin(2, 2) and recwin(1, 1).
There are two obvious approaches to removing this inefficiency. The first,
using dynamic programming, requires us to create a Boolean array G such that
G [i, j ] = true if and only if (i, j) is a winning position. As usual with dynamic pro-
gramming, we proceed in a bottom-up fashion, calculating G[r, s] for 1 ≤ s ≤ r < i, as well as the values of G[i, s] for 1 ≤ s < j, before calculating G[i, j].
procedure dynwin(n)
    {For each 1 ≤ j ≤ i ≤ n, sets G[i, j] to true
     if and only if position (i, j) is winning}
    G[0, 0] ← false
    for i ← 1 to n do
        for j ← 1 to i do
            k ← 1
            while k < j and G[i - k, min(2k, i - k)] do
                k ← k + 1
            G[i, j] ← not G[i - k, min(2k, i - k)]
The second approach uses a memory function: besides the array G, we keep a Boolean array known, initialized to false, so that known[i, j] is true only if the value of G[i, j] has already been computed.

function nim(i, j)
    {For each 1 ≤ j ≤ i ≤ n, returns true
     if and only if position (i, j) is winning}
    if known[i, j] then return G[i, j]
    known[i, j] ← true
    for k ← 1 to j do
        if not nim(i - k, min(2k, i - k)) then
            G[i, j] ← true
            return true
    G[i, j] ← false
    return false
At first sight there is no particular reason to favour this approach over dynamic
programming, because in any case we have to take the time to initialize the whole array known[0..n, 0..n]. However, virtual initialization (described in Section 5.1)
allows us to avoid this, and to obtain a worthwhile gain in efficiency.
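A sketch of the same memory-function idea in Python, where a single dictionary stands in for both G and known (so, much in the spirit of virtual initialization, nothing needs to be initialized in advance); the function name winning is our own.

def winning(i, j, memo=None):
    """True if and only if position (i, j) of the match game is a winning position:
    i matches remain and at most j of them may be taken on the next move."""
    if memo is None:
        memo = {}
    if (i, j) in memo:
        return memo[(i, j)]
    result = False
    for k in range(1, j + 1):
        if not winning(i - k, min(2 * k, i - k), memo):
            result = True              # taking k matches leaves the opponent losing
            break
    memo[(i, j)] = result
    return result

winning(5, 4)   # False: the player facing five matches (at most four may be taken) loses
winning(4, 3)   # True: with four matches the first player has a winning strategy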
The game we have considered up to now is so simple that it can be solved
without using the associated graph; see Problem 9.5. However the same principles
apply to many other games of strategy. As before, a node of a directed graph
corresponds to a particular position in the game, and an edge corresponds to a
legal move between two positions. The graph is infinite if there is no a priori limit
on the number of positions possible in the game. For simplicity, we shall suppose
that the game is played by two players who move in turn, that the rules are the
same for both players (we say the game is symmetric), and that chance plays no
part in the outcome (the game is deterministic). The ideas we present can easily
be adapted to more general contexts. We further suppose that no instance of the
game can last forever and that no position in the game offers an infinite number
of legal moves to the player whose turn it is. In particular, some positions in the
game, called the terminal positions, offer no legal moves, and hence some nodes in
the graph have no successors.
To determine a winning strategy in a game of this kind, we attach to each node
of the graph a label chosen from the set win, lose and draw. The label refers to the
situation of a player about to move in the corresponding position, assuming neither
player will make an error. The labels are assigned systematically as follows. (In the
simple example given earlier, no draws were possible, so the label draw was never
used, and the rules stated there are incomplete.)
1. Label the terminal positions. The labels assigned depend on the game in ques-
tion. For most games, if you find yourself in a terminal position, then there is
no legal move you can make, and you have lost. However this is not always
the case. If you cannot move because of a stalemate in chess, for example, the
game is a draw. Also many games of the Nim family come in pairs, one where
the player who takes the last match wins, and one (called the misère version of the game) where the player who takes the last match loses.
2. A nonterminal position is a winning position if at least one of its successors is a losing position.
3. A nonterminal position is a losing position if all of its successors are winning positions.
4. Any other nonterminal position leads to a draw. In this case the successors
must include at least one draw, possibly with some winning positions as well.
The player whose turn it is can avoid leaving his opponent in a winning posi-
tion, but cannot force him into a losing position.
Once these labels are assigned, a winning strategy can be read off from the graph.
In principle, this technique applies even to a game as complex as chess. At first
sight, the graph associated with chess appears to contain cycles, since if two posi-
tions u and v of the pieces differ only by the legal move of a rook, say, the king
not being in check, then we can move equally well from u to v and from v to u.
However this problem disappears on closer examination. In the variant of Nim
used as an example above, a position is defined not just by the number of matches
on the table, but also by an invisible item of information giving the number of
matches that can be picked up on the next move. Similarly, a position in chess is
not defined simply by the position of the pieces. We also need to know whose turn
it is to move, which rooks and kings have moved since the beginning of the game
(to know if it is legal to castle), and whether some pawn has just moved two squares
forward (to know if a capture en passant is possible). There are also rules explicitly
designed to prevent a game dragging on forever. For example, a game is declared
to be a draw after a certain number of moves in which no irreversible action (the
movement of a pawn, or a capture) has taken place. Thanks to these and similar
rules, there are no cycles in the graph corresponding to chess. However, they force
us to include such items as the number of moves since the last irreversible action
in the information defining a position.
Adapting the general rules given above, we can label each node as being a win-
ning position for White, a winning position for Black, or a draw. Once constructed,
this graph allows us in principle to play a perfect game of chess, that is, to win
whenever it is possible, and to lose only when it is inevitable. Unfortunately-
or perhaps fortunately for the game of chess-the graph contains so many nodes
that it is out of the question to explore it completely, even with the fastest existing
computers. The best we can do is to explore the graph near the current position, to
see how the situation might develop, just like the novice who reasons, "If I do this,
he will reply like that, and then I can do this", and so on. Even this technique is
not without its subtleties, however. Should we look at all the possibilities offered
by the current position, and then, for each of these, all the possibilities of reply?
Or should we rather pick a promising line of attack and follow it up for several
moves to see where it leads? Different search strategies may lead to quite different
results, as we describe shortly.
If we cannot hope to explore the whole graph for the game of chess, then we
cannot hope to construct it and store it either. The best we can expect is to construct
parts of the graph as we go along, saving them if they are interesting and throwing
them away otherwise. Thus throughout this chapter we use the word "graph" in
two different ways.
On the one hand, a graph may be a data structure in the storage of a computer.
In this case, the nodes are represented by a certain number of bytes, and the edges
are represented by pointers. The operations to be carried out are quite concrete:
to "mark a node" means to change a bit in storage, to "find a neighbouring node"
means to follow a pointer, and so on.
At other times, the graph exists only implicitly, as when we explore the abstract
graph corresponding to the game of chess. This graph never really exists in the
storage of the machine. Most of the time, all we have is a representation of the
current position (that is, of the node we are in the process of visiting, for, as we
saw, nodes correspond to positions of the pieces plus some extra information), and
possibly representations of a small number of other positions. Of course we also
know the rules of the game in question. In this case to "mark a node" means to take
any appropriate measures that enable us to recognize a position we have already
seen, or to avoid arriving at the same position twice. To "find a neighbouring
node" means to change the current position by making a single legal move, for if
it is possible to get from one position to another by making a single move, then an
edge exists in the implicit graph between the two corresponding nodes.
Exactly similar considerations apply when we explore any large graph, as we
shall see particularly in Section 9.6. However, whether the graph is a data structure
or merely an abstraction that we can never manipulate as a whole, the techniques
used to traverse it are essentially the same. In this chapter we therefore do not
distinguish the two cases.
Lemma 9.2.1 For each of the six techniques mentioned, the time T(n) needed to explore a binary tree containing n nodes is in Θ(n).
Proof Suppose that visiting a node takes a time in O(1), that is, the time required is bounded above by some constant c. Without loss of generality we may suppose that c ≥ T(0). Suppose further that we are to explore a tree containing n nodes, n > 0; one of these nodes is the root, so if g of them lie in the left subtree, then there are n - g - 1 in the right subtree. Then

    T(n) ≤ \max_{0 ≤ g ≤ n-1} (T(g) + T(n - g - 1) + c).

This is true whatever the order in which the left and right subtrees and the root are explored. We prove by constructive induction that T(n) ≤ an + b, where a and b are appropriate constants, as yet unknown. If we choose b ≥ c the hypothesis is true for n = 0, because c ≥ T(0). For the induction step, let n > 0, and suppose the hypothesis is true for all m, 0 ≤ m ≤ n - 1.
from some large set of possible instances. When a solution is needed it must be
provided very rapidly, for example to ensure sufficiently fast response for a real-
time application. In this case it may well be impractical to calculate ahead of time
and to store the solutions to all the relevant instances. On the other hand, it may
be possible to calculate and store sufficient auxiliary information to speed up the
solution of whatever instance comes along. Such an application of preconditioning
may be of practical importance even if only one crucial instance is solved in the
whole lifetime of the system: this may be just the instance that enables us, for
example, to stop a runaway reactor.
As a second example of this technique we use the problem of determining
ancestry in a rooted tree. Let T be a rooted tree, not necessarily binary. We say
that a node v of T is an ancestor of node w if v lies on the path from w to the root
of T. In particular, every node is its own ancestor, and the root is an ancestor of
every node. (Those with a taste for recursive definitions may prefer the following:
every node is its own ancestor, and, recursively, it is an ancestor of all the nodes of
which its children are ancestors.) The problem is thus, given a pair of nodes (v, w)
from T, to determine whether or not v is an ancestor of w. If T contains n nodes,
any direct solution of this instance takes a time in Ω(n) in the worst case. However
it is possible to precondition T in a time in Θ(n) so we can subsequently solve any
particular instance of the ancestry problem in constant time.
We illustrate this using the tree in Figure 9.2. It contains 13 nodes. To precon-
dition the tree, we traverse it first in preorder and then in postorder, numbering the
nodes sequentially as we visit them. For a node v, let prenum[v] be the number
assigned to v when we traverse the tree in preorder, and let postnum[v] be the
number assigned during the traversal in postorder. In Figure 9.2 these numbers
appear to the left and the right of the node, respectively.
[Figure 9.2: the example tree, with prenum shown to the left and postnum to the right of each node.]
Let v and w be two nodes in the tree. In preorder we first number a node and then
number its subtrees from left to right. Thus
    prenum[v] ≤ prenum[w]  ⇒  v is an ancestor of w, or
                               v is to the left of w in the tree.
In postorder we first number the subtrees of a node from left to right, and then we
number the node itself. Thus
    postnum[v] ≥ postnum[w]  ⇒  v is an ancestor of w, or
                                 v is to the right of w in the tree.
It follows that

    prenum[v] ≤ prenum[w] and postnum[v] ≥ postnum[w]  ⇔  v is an ancestor of w.
Once the values of prenum and postnum have been calculated in a time in Θ(n), the
required condition can be checked in a time in O(1).
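To make the idea concrete, here is a minimal sketch in Python rather than in the pseudocode used elsewhere; it assumes the tree is supplied as a dictionary mapping each node to the list of its children, and the function names are ours.

    # Preconditioning for ancestry queries: one traversal assigns prenum in
    # preorder and postnum in postorder; each query is then answered in
    # constant time.  The tree representation is an assumption of this sketch.

    def precondition(children, root):
        prenum, postnum = {}, {}
        pre = post = 0

        def traverse(v):
            nonlocal pre, post
            pre += 1
            prenum[v] = pre                    # number v before its subtrees
            for c in children.get(v, []):
                traverse(c)
            post += 1
            postnum[v] = post                  # number v after its subtrees

        traverse(root)
        return prenum, postnum

    def is_ancestor(v, w, prenum, postnum):
        # v is an ancestor of w  <=>  prenum[v] <= prenum[w] and postnum[v] >= postnum[w]
        return prenum[v] <= prenum[w] and postnum[v] >= postnum[w]

    # Small example: 'a' is the root, with children 'b' and 'c'.
    tree = {'a': ['b', 'c'], 'b': ['d', 'e']}
    pre, post = precondition(tree, 'a')
    print(is_ancestor('a', 'e', pre, post))    # True
    print(is_ancestor('b', 'c', pre, post))    # False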
procedure dfsearch(G)
    for each v ∈ N do mark[v] ← not-visited
    for each v ∈ N do
        if mark[v] ≠ visited then dfs(v)

procedure dfs(v)
    {Node v has not previously been visited}
    mark[v] ← visited
    for each node w adjacent to v do
        if mark[w] ≠ visited then dfs(w)
How much time is needed to explore a graph with n nodes and a edges? Since
each node is visited exactly once, there are n calls of the procedure dfs. When
we visit a node, we look at the mark on each of its neighbouring nodes. If the
graph is represented so as to make the lists of adjacent nodes directly accessible
(type lisgraph of Section 5.4), this work is proportional to a in total. The algorithm
therefore takes a time in Θ(n) for the procedure calls and a time in Θ(a) to inspect
the marks. The execution time is thus in Θ(max(a, n)).
Figure 9.4. A depth-first search tree; prenum on the left, highest on the right
nodes of the graph being visited. The first node visited-the root of the tree-
is numbered 1, the second is numbered 2, and so on. In other words, the nodes of
the associated tree are numbered in preorder. To implement this, add the following
two statements at the beginning of the procedure dfs:
    pnum ← pnum + 1
    prenum[v] ← pnum
where pnum is a global variable initialized to zero. For example, the depth-first
search of the graph in Figure 9.3 described above numbers the nodes of the graph
as follows.
    node      1  2  3  4  5  6  7  8
    prenum    1  2  3  6  5  4  7  8
These are the numbers to the left of each node in Figure 9.4. Of course the tree
and the numbering generated by a depth-first search in a graph are not unique,
but depend on the chosen starting point and on the order in which neighbours are
visited.
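A runnable Python rendering of dfsearch, dfs and the preorder numbering may help; it assumes the graph is given as a dictionary of adjacency lists. The adjacency lists in the example are a reconstruction of Figure 9.3 inferred from the traversal described above, so they are an assumption rather than the book's figure.

    # Depth-first search with preorder numbering, as in the procedures above.
    # The graph is assumed to be a dict mapping each node to its list of
    # neighbours, listed here in numerical order.

    def depth_first_search(graph):
        mark = {v: False for v in graph}
        prenum = {}
        pnum = 0

        def dfs(v):
            nonlocal pnum
            mark[v] = True
            pnum += 1
            prenum[v] = pnum
            for w in graph[v]:
                if not mark[w]:
                    dfs(w)

        for v in graph:
            if not mark[v]:
                dfs(v)
        return prenum

    # Adjacency lists reconstructed from the text (each edge listed twice).
    g = {1: [2, 3, 4], 2: [1, 3, 5, 6], 3: [1, 2, 6], 4: [1, 7, 8],
         5: [2, 6], 6: [2, 3, 5], 7: [4, 8], 8: [4, 7]}
    print(depth_first_search(g))   # reproduces the prenum table above if the
                                   # reconstruction is faithful to Figure 9.3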
9.3.1 Articulation points
A node v of a connected graph is an articulation point if the subgraph obtained by
deleting v and all the edges incident on v is no longer connected. For example,
node 1 is an articulation point of the graph in Figure 9.3; if we delete it, there
remain two connected components {2, 3, 5, 6} and {4, 7, 8}. A graph G is biconnected
(or unarticulated) if it is connected and has no articulation points. It is bicoherent
(or isthmus-free, or 2-edge-connected) if each articulation point is joined by at least two
edges to each component of the remaining subgraph. These ideas are important
in practice. If the graph G represents, say, a telecommunications network, then
the fact that it is biconnected assures us that the rest of the network can continue
to function even if the equipment in one of the nodes fails. If G is bicoherent, we
can be sure the nodes will be able to communicate with one another even if one
transmission line stops working.
To see how to find the articulation points of a connected graph G, look again
at Figure 9.4. Remember that this figure shows all the edges of the graph G of
Figure 9.3: those shown as solid lines form a spanning tree T, while the others are
shown as broken lines. As we saw, these other edges only go from some node v to
its ancestor in the tree, not from one branch to another. To the left of each node v
is prenum[v], the number assigned by a preorder traversal of T.
To the right of each node v is a new number that we shall call highest[v].
Let w be the highest node in the tree that can be reached from v by following
down zero or more solid lines, and then going up at most one broken line. Then
define highest[v] to be prenum[w]. For instance, from node 7 we can go down one
solid line to node 8, then up one broken line to node 4, and this is the highest node
we can reach. Since prenum[4]= 6, we also have highest[7] = 6.
Because the broken lines do not cross from one branch to another, the highest
node w reachable in this way must be an ancestor of v. (It cannot lie below v in
the tree, because we can always get to v itself by not following any lines at all.)
Among the ancestors of v, the highest up the tree is the one with the lowest value
of prenum. If we have these values, it is therefore not necessary to know the exact
level of each node: among the nodes we can reach, we simply choose the one that
minimizes prenum.
Now consider any node v in T except the root. If v has no children, it cannot
be an articulation point of G, for if we delete it the remaining nodes are still con-
nected by the edges left in T. Otherwise, let x be a child of v. Suppose first that
highest[x] < prenum[v]. This means that from x there is a chain of edges of G, not
including the edge {v, x} (for we were not allowed to go up a solid line), that leads
to some node higher up the tree than v. If we delete v, therefore, the nodes in the
subtree rooted at x will not be disconnected from the rest of the tree. This is the
case with node 3 in the figure, for example. Here prenum[3] = 3, and the only child
of node 3, namely node 6, has highest[6] = 2 < prenum[3]. Therefore if we delete
node 3, node 6 and its descendants will still be attached to one of the ancestors of
node 3.
If on the other hand highest[x] ≥ prenum[v], then no chain of edges from x
(again excluding the edge {v, x }) rejoins the tree higher than v. In this case, should
v be deleted, the nodes in the subtree rooted at x will be disconnected from the
rest of the tree. Node 4 in the figure illustrates this case. Here prenum[4] = 6, and
the only child of node 4, namely node 7, has highest[7] = 6 = prenum[4]. Therefore
if we delete node 4, no path from node 7 or from one of its descendants leads back
above the deleted node, so the subtree rooted at node 7 will be detached from the
rest of T.
Thus a node v that is not the root of T is an articulation point of G if and only if
it has at least one child x with highest[x] ≥ prenum[v]. As for the root, it is evident
that it is an articulation point of G if and only if it has more than one child; for in this
case, since no edges cross from one branch to another, deleting the root disconnects
the remaining subtrees of T.
It remains to be seen how to calculate the values of highest. Clearly this must
be done from the leaves upwards. For example, from node 5 we can stay where
we are, or go up to node 2; these are the only possibilities. From node 6 we can
stay where we are, go up to node 2, or else go first down to node 5 and then to
wherever is reachable from there; and so on. The values of highest are therefore
calculated in postorder. At a general node v, highest[v] is the minimum (corre-
sponding to the highest node) of three kinds of values: prenum[v] (we stay where
we are), prenum [w] for each node w such that there is an edge {v, w} in G with no
corresponding edge in T (we go up a broken line), and highest[x] for every child
x of v (we go down a solid line and see where we can get from there).
The complete algorithm for finding the articulation points of an undirected
graph G is summarized as follows.
1. Carry out a depth-first search in G, starting from any node. Let T be the tree
generated by this search, and for each node v of G, let prenum[v] be the number
assigned by the search.
2. Traverse T in postorder. For each node v visited, calculate highest[v] as the
minimum of
(a) prenum[v];
(b) prenum[w] for each node w such that there is an edge {v, w} in G with no
corresponding edge in T; and
(c) highest[x] for every child x of v.
3. Determine the articulation points of G as follows.
(a) The root of T is an articulation point if and only if it has more than one
child.
(b) Any other node v is an articulation point if and only if it has a child x such
that highest[x] ≥ prenum[v].
It is not difficult to combine steps 1 and 2 of the above algorithm, calculating the
values of both prenum and highest during the depth-first search of G.
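The Python sketch below combines the two steps in exactly that way: prenum and highest are computed during a single depth-first search, and the articulation points are collected on the way back up. The graph is assumed connected and given as a dictionary of adjacency lists; the function name is ours.

    # Articulation points of a connected undirected graph, computing prenum
    # and highest during one depth-first search (steps 1 to 3 combined).

    def articulation_points(graph, root):
        prenum, highest = {}, {}
        points = set()
        pnum = 0

        def dfs(v, parent):
            nonlocal pnum
            pnum += 1
            prenum[v] = pnum
            highest[v] = prenum[v]                        # (a) stay where we are
            children = 0
            for w in graph[v]:
                if w not in prenum:                       # {v, w} is a tree edge
                    children += 1
                    dfs(w, v)
                    highest[v] = min(highest[v], highest[w])    # (c)
                    if v != root and highest[w] >= prenum[v]:
                        points.add(v)                     # rule 3(b)
                elif w != parent:                         # broken (non-tree) edge
                    highest[v] = min(highest[v], prenum[w])     # (b)
            if v == root and children > 1:
                points.add(v)                             # rule 3(a)

        dfs(root, None)
        return points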
An argument identical with the one in Section 9.3 shows that the time taken by this
algorithm is also in Θ(max(a, n)). In this case, however, the edges used to visit all
the nodes of a directed graph G = (N, A) may form a forest of several trees even if
G is connected. This happens in our example: the edges used, namely (1,2), (2,3),
(1,4), (4,8), (8,7) and (5,6), form the forest shown by the solid lines in Figure 9.6.
Let F be the set of edges in the forest, so that A \F is the set of edges of G that have
no corresponding edge in the forest. In the case of an undirected graph, we saw
that the edges of the graph with no corresponding edge in the forest necessarily
join some node to one of its ancestors. In the case of a directed graph, however,
three kinds of edge can appear in A\F. These are shown by the broken lines in
Figure 9.6.
1. Those like (3, 1) or (7,4) lead from a node to one of its ancestors.
2. Those like (1, 8) lead from a node to one of its descendants.
3. Those like (5,2) or (6,3) join one node to another that is neither its ancestor
nor its descendant. Edges of this type are necessarily directed from right to
left.
These graphs also offer a natural representation for partial orderings, such as the
relation of set-inclusion. Figure 9.8 illustrates part of another partial ordering de-
fined on the positive integers: here there is an edge from node i to node j if and
only if i is a proper divisor of j.
Finally, directed acyclic graphs are often used to specify how a complex project
develops over time. The nodes represent different stages of the project, from the
initial state to final completion, and the edges correspond to activities that must be
completed to pass from one stage to another. Figure 9.9 gives an example of this
type of diagram.
Depth-first search can be used to detect whether a given directed graph is acyclic;
see Problem 9.26. It can also be used to determine a topological ordering of the nodes
of a directed acyclic graph. In this kind of ordering the nodes of the graph are listed
in such a way that if there exists an edge (i,j), then node i precedes node j in the
list. For example, for the graph of Figure 9.8, the natural order 1, 2, 3, 4, 6, 8, 12, 24 is
adequate; but the order 1, 3, 2, 6, 4, 12, 8, 24 is also acceptable, as are several others.
On the other hand, the order 1, 3, 6, 2, 4, 12, 8, 24 will not do, because the graph
includes an edge (2, 6), and so 2 must precede 6 in the list.
Adapting the procedure dfs to make it into a topological sort is simple. Add
an extra line
write v
at the end of the procedure dfs, run the procedure on the graph concerned, and
then reverse the order of the resulting list of nodes.
To see this, consider what happens when we arrive at node v of a directed
acyclic graph G using the modified procedure. Some nodes that must follow v in
topological order may already have been visited while following a different path.
In this case they are already on the list, as they should be since we reverse the list
when the search is finished. Any node that must precede v in topological order
either lies along the path we are currently exploring, in which case it is marked as
visited but is not yet on the list, or else it has not yet been visited. In either case it
will be added to the list after node v (again, correctly, for the list is to be reversed).
Now the depth-first search explores the unvisited nodes that can be reached from
v by following edges of G. In the topological ordering, these must come after v.
Since we intend to reverse the list when the search is finished, adding them to the
list during the exploration starting at v, and adding v only when this exploration
is finished, gives us exactly what we want.
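A short Python sketch of this adaptation follows, assuming the directed acyclic graph is a dictionary mapping each node to the list of its successors.

    # Topological sort by depth-first search: write each node when its
    # exploration finishes, then reverse the list.

    def topological_sort(graph):
        visited = set()
        order = []

        def dfs(v):
            visited.add(v)
            for w in graph[v]:
                if w not in visited:
                    dfs(w)
            order.append(v)          # the extra "write v" at the end of dfs

        for v in graph:
            if v not in visited:
                dfs(v)
        order.reverse()              # reverse the list when the search is over
        return order

    # A divisibility graph on the node set of Figure 9.8; the edges chosen
    # here link each value to its immediate multiples only, an assumption
    # made for the sake of the example.
    g = {1: [2, 3], 2: [4, 6], 3: [6], 4: [8, 12], 6: [12],
         8: [24], 12: [24], 24: []}
    print(topological_sort(g))       # e.g. [1, 3, 2, 6, 4, 12, 8, 24]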
procedure dfs2(v)
    P ← empty-stack
    mark[v] ← visited
    push v onto P
    while P is not empty do
        while there exists a node w adjacent to top(P)
                  such that mark[w] ≠ visited do
            mark[w] ← visited
            push w onto P    {w is the new top(P)}
        pop P
Modifying the algorithm has not changed its behaviour. All we have done is to
make explicit the stacking and unstacking of nodes that in the previous version
was handled behind the scenes by the stack mechanism implicit in any recursive
language.
For the breadth-first search algorithm, by contrast, we need a type queue that
allows two operations enqueue and dequeue. This type represents a list of ele-
ments to be handled in the order "first come, first served" (or FIFO, for "first in, first
out"). The functionfirst denotes the element at the front of the queue. Here now
is the breadth-first search algorithm.
procedure bfs(v)
    Q ← empty-queue
    mark[v] ← visited
    enqueue v into Q
    while Q is not empty do
        u ← first(Q)
        dequeue u from Q
        for each node w adjacent to u do
            if mark[w] ≠ visited then
                mark[w] ← visited
                enqueue w into Q
In both cases we need a main program to start the search.
procedure search(G)
    for each v ∈ N do mark[v] ← not-visited
    for each v ∈ N do
        if mark[v] ≠ visited then {dfs2 or bfs}(v)
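For concreteness, here are Python counterparts of dfs2, bfs and search, assuming the graph is a dictionary of adjacency lists; collections.deque plays the role of both the stack P and the queue Q.

    from collections import deque

    def dfs2(graph, v, mark):
        P = deque()                          # used as a stack
        mark.add(v)
        P.append(v)
        while P:
            top = P[-1]
            for w in graph[top]:
                if w not in mark:            # an unvisited neighbour of top(P)
                    mark.add(w)
                    P.append(w)              # w is the new top of the stack
                    break
            else:
                P.pop()                      # no unvisited neighbour: pop P

    def bfs(graph, v, mark):
        Q = deque()                          # used as a queue
        mark.add(v)
        Q.append(v)
        while Q:
            u = Q.popleft()                  # first(Q), then dequeue
            for w in graph[u]:
                if w not in mark:
                    mark.add(w)
                    Q.append(w)

    def search(graph, visit):
        mark = set()
        for v in graph:
            if v not in mark:
                visit(graph, v, mark)

    # search(g, bfs) or search(g, dfs2) visits every node of g exactly once.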
For example, on the graph of Figure 9.3, if the search starts at node 1, and the
neighbours of a node are visited in numerical order, breadth-first search proceeds
as follows.
    Node visited        Q
    1.     1            2, 3, 4
    2.     2            3, 4, 5, 6
    3.     3            4, 5, 6
    4.     4            5, 6, 7, 8
    5.     5            6, 7, 8
    6.     6            7, 8
    7.     7            8
    8.     8
As for depth-first search, we can associate a tree with a breadth-first search. Fig-
ure 9.10 shows the tree generated by the search above. The edges of the graph with
no corresponding edge in the tree are represented by broken lines; see Problem 9.30.
In general, if the graph G being searched is not connected, the search generates a
forest of trees, one for each connected component of G.
It is easy to show that the time required by a breadth-first search is in the
same order as that required by a depth-first search, namely Θ(max(a, n)). If the
appropriate interpretation of the word "adjacent" is used, the breadth-first search
algorithm-again, exactly like the depth-first search algorithm-can be applied
without modification to either directed or undirected graphs; see Problems 9.31
and 9.32.
Breadth-first search is most often used to carry out a partial exploration of an
infinite (or unmanageably large) graph, or to find the shortest path from one point
to another in a graph. Consider for example the following problem. The value 1 is
given. To construct other values, two operations are available: multiplication by 2
and division by 3. For the second operation, the operand must be greater than 2
(so we cannot reach 0), and any resulting fraction is dropped. If operations are
executed from left to right, we may for instance obtain the value 10 as
    10 = 1 × 2 × 2 × 2 × 2 ÷ 3 × 2.
We want to obtain some specified value n. How should we set about it?
The problem can be represented as a search in the infinite directed graph of
Figure 9.11. Here the given value 1 is in the node at top left. Thereafter each node
is linked to the values that can be obtained using the two available operations.
For example, from the value 16, we can obtain the two new values 32 (by multiply-
ing 16 by 2) and 5 (by dividing 16 by 3, dropping the resulting fraction). For clarity,
we have omitted links backwards to values already available, for instance from 8
to 2. These backwards links are nevertheless present in the real graph. The graph
is infinite, for a sequence such as 1, 2, ..., 256, 512, ... can be continued indefinitely.
It is not a tree, because node 42, for instance, can be reached from both 128 and 21.
When the backwards links are included, it is not even acyclic.
To solve a given instance of the problem, that is, to find how to construct a
particular value n, we search the graph starting at 1 until we find the value we
are looking for. On this infinite graph, however, a depth-first search may not
work. Suppose for example that n = 13. If we explore the neighbours of a node
in the order "first multiplication by 2, then division by 3", a depth-first search
visits successively nodes 1, 2, 4, ..., and so on, heading off along the top branch and
(since there is always a new neighbour to look at) never encountering a situation
that forces it to back up to a previous node. In this case the search certainly fails.
If on the other hand we explore the neighbours of a node in the order "first division
by 3, then multiplication by 2", the search first runs down to node 12; from there it
moves successively to nodes 24, 48, 96, 32, and 64, and from 64 it wanders off into
the upper right-hand part of the graph. We may be lucky enough to get back to
node 13 and thus find a solution to our problem, but nothing guarantees this. Even
if we do, the solution found will be more complex than necessary. If you program
this depth-first search on a computer, you will find in fact that you reach the given
value 13 after 74 multiplication and division operations.
A breadth-first search, on the other hand, is sure to find a solution to the instance
if there is one. If we examine neighbours in the order "first multiplication by 2,
then division by 3", we obtain

    13 = 1 × 2 × 2 × 2 × 2 ÷ 3 × 2 × 2 × 2 ÷ 3.
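A breadth-first search of this implicit graph is easy to program. The Python sketch below builds only the part of the graph it needs; the cutoff value limit is our own safety bound, not part of the original problem, and merely keeps the search finite should the target turn out to be unreachable.

    from collections import deque

    def find_sequence(n, limit=10**6):
        # Breadth-first search from 1 using "multiply by 2" and, when the
        # operand exceeds 2, "divide by 3" (dropping any fraction).
        parent = {1: None}          # value -> value from which it was reached
        op = {}                     # value -> operation used to reach it
        Q = deque([1])
        while Q:
            u = Q.popleft()
            if u == n:
                break
            successors = [(u * 2, 'x2')]
            if u > 2:
                successors.append((u // 3, '/3'))
            for value, o in successors:
                if value not in parent and value <= limit:
                    parent[value] = u
                    op[value] = o
                    Q.append(value)
        if n not in parent:
            return None             # not reached within the safety bound
        path = []
        v = n
        while parent[v] is not None:
            path.append(op[v])
            v = parent[v]
        return path[::-1]

    print(find_sequence(13))
    # ['x2', 'x2', 'x2', 'x2', 'x2', '/3', 'x2', 'x2', '/3'] : nine operations,
    # the same length as the sequence shown above.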
Of course even a breadth-first search may fail. In our example, it may be that
some values n are not present in the graph at all. (Having no idea whether this is
true or not, we leave the question as an exercise for the reader.) In this case any
search technique is certain to fail for the missing values. If a graph includes one or
more nodes with an infinite number of neighbours, but no paths of infinite length,
depth-first search may succeed where breadth-first search fails. Nevertheless this
situation seems to be less common than its opposite.
9.6 Backtracking
As we saw earlier, various problems can be thought of in terms of abstract graphs.
For example, we saw in Section 9.1 that we can use the nodes of a graph to represent
positions in a game of chess, and edges to represent legal moves. Often the original
problem translates to searching for a specific node, path or pattern in the associated
graph. If the graph contains a large number of nodes, and particularly if it is infinite,
it may be wasteful or infeasible to build it explicitly in computer storage before
applying one of the search techniques we have encountered so far.
In such a situation we use an implicit graph. This is one for which we have
available a description of its nodes and edges, so relevant parts of the graph can
be built as the search progresses. In this way computing time is saved whenever
the search succeeds before the entire graph has been constructed. The economy
in storage space can also be dramatic, especially when nodes that have already
been searched can be discarded, making room for subsequent nodes to be explored.
If the graph involved is infinite, such a technique offers our only hope of exploring it
at all. This section and the ones that follow detail some standard ways of organizing
searches in an implicit graph.
In its basic form, backtracking resembles a depth-first search in a directed
graph. The graph concerned is usually a tree, or at least it contains no cycles.
Whatever its structure, the graph exists only implicitly. The aim of the search is
to find solutions to some problem. We do this by building partial solutions as the
search proceeds; such partial solutions limit the regions in which a complete solu-
tion may be found. Generally speaking, when the search begins, nothing is known
about the solutions to the problem. Each move along an edge of the implicit graph
corresponds to adding a new element to a partial solution, that is, to narrowing
down the remaining possibilities for a complete solution. The search is successful
if, proceeding in this way, a solution can be completely defined. In this case the
algorithm may either stop (if only one solution to the problem is needed), or con-
tinue looking for alternative solutions (if we want to look at them all). On the other
hand, the search is unsuccessful if at some stage the partial solution constructed so
far cannot be completed. In this case the search backs up, exactly like a depth-first
search, removing as it goes the elements that were added at each stage. When
it gets back to a node with one or more unexplored neighbours, the search for a
solution resumes.
units of weight. This can be done using backtracking by exploring the implicit tree
shown in Figure 9.12.
remain to be visited. After exploring nodes (2,2,3; 11) and (2,2,4; 12), neither of
which improves on the solution previously memorized, the search backs up one
stage further, and so on. Exploring the tree in this way, (2, 3, 3; 13) is found to be a
better solution than the one we have, and later (3,5; 15) is found to be better still.
Since no other improvement is made before the search ends, this is the optimal
solution to the instance.
Programming the algorithm is straightforward, and illustrates the close relation
between recursion and depth-first search. Suppose the values of n and W, and of
the arrays w[1..n] and v[1..n] for the instance to be solved are available as global
variables. The ordering of the types of item is unimportant. Define a function
backpack as follows.
function backpack(i, r)
    {Calculates the value of the best load that can
     be constructed using items of types i to n
     and whose total weight does not exceed r}
    b ← 0
    {Try each allowed kind of item in turn}
    for k ← i to n do
        if w[k] ≤ r then
            b ← max(b, v[k] + backpack(k, r − w[k]))
    return b
Now to find the value of the best load, call backpack(1, W). Here each recursive
call of backpack corresponds to extending the depth-first search one level down the
tree, while the for loop takes care of examining all the possibilities at a given level.
In this version of the program, the composition of the load being examined is given
implicitly by the values of k saved on the recursive stack. It is not hard to adapt
the program so that it gives the composition of the best load explicitly along with
its value; see Problem 9.42.
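Here is the same function transcribed into Python, with the global data passed as parameters instead. The weights, values and capacity used in the call are reconstructed from the partial loads quoted above (weights 2, 3, 4 and 5, values 3, 5, 6 and 10, capacity 8); if this reconstruction differs from Figure 9.12, only the printed number changes.

    # Backtracking for the knapsack problem with unlimited copies of each item.
    # Indices are 0-based here, so the initial call is backpack(0, W, w, v).

    def backpack(i, r, w, v):
        # Best value of a load built from items of types i, i+1, ... whose
        # total weight does not exceed the remaining capacity r.
        best = 0
        for k in range(i, len(w)):             # try each allowed kind of item
            if w[k] <= r:
                best = max(best, v[k] + backpack(k, r - w[k], w, v))
        return best

    w = [2, 3, 4, 5]               # weights (reconstructed instance)
    v = [3, 5, 6, 10]              # values
    print(backpack(0, 8, w, v))    # 15, the optimal load value found above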
rows 3 and 6 are in the same column, and also two pairs of queens lie on the same
diagonal. Using this representation, we can write an algorithm using eight nested
loops. (For the function solution, see Problem 9.43.)
program queens
    for i1 ← 1 to 8 do
        for i2 ← 1 to 8 do
            . . .
                for i8 ← 1 to 8 do
                    sol ← [i1, i2, ..., i8]
                    if solution(sol) then write sol
                                             stop
    write "there is no solution"
The number of positions to be considered is reduced to 8^8 = 16 777 216, although
in fact the algorithm finds a solution and stops after considering only 1 299 852
positions.
Representing the chessboard by a vector prevents us ever trying to put two
queens in the same row. Once we have realized this, it is natural to be equally
systematic in our use of the columns. Hence we now represent the board by a
vector of eight different numbers between 1 and 8, that is, by a permutation of the
first eight integers. This yields the following algorithm.
program queens2
    sol ← initial-permutation
    while sol ≠ final-permutation and not solution(sol) do
        sol ← next-permutation
    if solution(sol) then write sol
    else write "there is no solution"
There are several natural ways to generate systematically all the permutations of
the first n integers. For instance, we can put each value in turn in the leading
position and generate recursively, for each leading value, all the permutations of
the remaining n -1 elements. The following procedure shows how to do this.
Here T[1..n] is a global array initialized to [1, 2, ..., n], and the initial call of
the procedure is perm (1). This way of generating permutations is itself a kind of
backtracking.
procedure perm(i)
    if i = n then use(T)    {T is a new permutation}
    else for j ← i to n do
             exchange T[i] and T[j]
             perm(i + 1)
             exchange T[i] and T[j]
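The same generator in Python, assuming use(T) simply prints the permutation:

    # Generate all permutations of T[0..n-1] by placing each value in turn
    # in the leading position and recurring on the rest.

    def perm(T, i=0, use=print):
        n = len(T)
        if i == n - 1:                       # T is now a complete permutation
            use(T)
        else:
            for j in range(i, n):
                T[i], T[j] = T[j], T[i]      # exchange T[i] and T[j]
                perm(T, i + 1, use)
                T[i], T[j] = T[j], T[i]      # restore the original order

    perm([1, 2, 3, 4])                       # prints the 4! = 24 permutations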
This approach reduces the number of possible positions to 8! = 40320. If the pre-
ceding algorithm is used to generate the permutations, only 2830 positions are in
fact considered before the algorithm finds a solution. It is more complicated to gen-
erate permutations rather than all the possible vectors of eight integers between 1
and 8. On the other hand, it is easier to verify in this case whether a given position
is a solution. Since we already know that two queens can neither be in the same row
nor in the same column, it suffices to verify that they are not on the same diagonal.
Starting from a crude method that put the queens absolutely anywhere on the
board, we progressed first to a method that never puts two queens in the same
row, and then to a still better method that only considers positions where two
queens can neither be in the same row nor in the same column. However, all these
algorithms share an important defect: they never test a position to see if it is a
solution until all the queens have been placed on the board. For instance, even the
best of them makes 720 useless attempts to put the last six queens on the board
when it has started by putting the first two on the main diagonal, where of course
they threaten one another.
Backtracking allows us to do better than this. As a first step, we reformulate
the eight queens problem as a tree searching problem. We say that a vector V[1..k]
of integers between 1 and 8 is k-promising, for 0 ≤ k ≤ 8, if none of the k queens
placed in positions (1, V[1]), (2, V[2]), ..., (k, V[k]) threatens any of the others.
Mathematically, a vector V is k-promising if, for every pair of integers i and j
between 1 and k with i ≠ j, we have V[i] − V[j] ∉ {i − j, 0, j − i}. For k ≤ 1, any
vector V is k-promising. Solutions to the eight queens problem correspond to
vectors that are 8-promising.
Let N be the set of k-promising vectors, 0 ≤ k ≤ 8. Let G = (N, A) be the di-
rected graph such that (U, V) ∈ A if and only if there exists an integer k, 0 ≤ k < 8,
such that
• U is k-promising,
• V is (k + 1)-promising, and
• U[i] = V[i] for every i ∈ [1..k].
This graph is a tree. Its root is the empty vector corresponding to k = 0. Its leaves are
either solutions (k = 8) or they are dead ends (k < 8) such as [1, 4, 2, 5, 8]: in such a
position it is impossible to place a queen in the next row without threatening at least
one of the queens already on the board. The solutions to the eight queens problem
can be obtained by exploring this tree. We do not generate the tree explicitly so
as to explore it thereafter, however. Rather, nodes are generated and abandoned
during the course of the exploration. Depth-first search is the obvious method to
use, particularly if we require only one solution.
This technique has two advantages over the algorithm that systematically tries
each permutation. First, the number of nodes in the tree is less than 8! = 40 320.
Although it is not easy to calculate this number theoretically, using a computer
it is straightforward to count the nodes: there are 2057. In fact, it suffices to ex-
plore 114 nodes to obtain a first solution. Second, to decide whether a vector is
k-promising, knowing that it is an extension of a (k - 1) -promising vector, we only
need check the last queen to be added. This can be speeded up if we associate with
each promising node the set of columns, of positive diagonals (at 45 degrees), and
of negative diagonals (at 135 degrees) controlled by the queens already placed.
In the following procedure sol[1..8] is a global array. To print all the solutions
to the eight queens problem, call queens(0, ∅, ∅, ∅).
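One way to realize such a procedure in Python is sketched below (the names and the exact parameter list are ours, not the book's). The sets col, diag45 and diag135 record the columns and the two kinds of diagonals already controlled, so testing whether an extension is (k + 1)-promising takes constant time per candidate column.

    # Backtracking for the n queens problem using the sets of controlled
    # columns and diagonals described above.

    def queens_solutions(n=8):
        solutions = []
        sol = [0] * n                             # sol[0..k-1] is k-promising

        def place(k, col, diag45, diag135):
            if k == n:
                solutions.append(sol[:])          # an n-promising vector
                return
            for j in range(1, n + 1):
                if j not in col and j - k not in diag45 and j + k not in diag135:
                    sol[k] = j                    # sol[0..k] is (k+1)-promising
                    place(k + 1, col | {j}, diag45 | {j - k}, diag135 | {j + k})

        place(0, set(), set(), set())
        return solutions

    sols = queens_solutions(8)
    print(len(sols))    # the number of solutions to the eight queens problem
    print(sols[0])      # one solution, as a vector of column numbers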
It is clear that the problem generalizes to an arbitrary number of queens: how can we
place n queens on an n x n "chessboard" in such a way that none of them threatens
any of the others? As we might expect, the advantage to be gained by using
backtracking instead of an exhaustive approach becomes more pronounced as n
increases. For example, for n = 12 there are 479 001 600 possible permutations to
be considered. Using the permutation generator given previously, the first solution
to be found corresponds to the 4 546 044th position examined. On the other hand,
the tree explored by the backtracking algorithm contains only 856 189 nodes, and
a solution is obtained when the 262nd node is visited. The problem can be further
generalized to placing "queens" in three dimensions on an n x n x n board; see
Problem 9.49.
9.6.3 The general template
Backtracking algorithms can be used even when the solutions sought do not nec-
essarily all have the same length. Here is the general scheme.
procedure backtrack(v[1..k])
    {v is a k-promising vector}
    if v is a solution then write v
    {else} for each (k + 1)-promising vector w
               such that w[1..k] = v[1..k]
           do backtrack(w[1..k + 1])
The else should be present if and only if it is impossible to have two different
solutions such that one is a prefix of the other.
Both the knapsack problem and the n queens problem were solved using depth-
first search in the corresponding tree. Some problems that can be formulated in
terms of exploring an implicit graph have the property that they correspond to an
infinite graph. In this case it may be necessary to use breadth-first search to avoid
heading off along an infinite path and never returning.
9.7 Branch-and-bound
Like backtracking, branch-and-bound is a technique for exploring an implicit di-
rected graph. Again, this graph is usually acyclic or even a tree. This time, we
are looking for the optimal solution to some problem. At each node we calculate a
bound on the possible value of any solutions that might lie farther on in the graph.
If the bound shows that any such solution must necessarily be worse than the best
solution found so far, then we need not go on exploring this part of the graph.
In the simplest version, calculation of the bounds is combined with a breadth-
first or a depth-first search, and serves only, as we have just explained, to prune
certain branches of a tree or to close paths in a graph. More often, however, the
calculated bound is also used to choose which open path looks the most promising,
so it can be explored first.
In general terms we may say that a depth-first search finishes exploring nodes in
inverse order of their creation, using a stack to hold nodes that have been generated
but not yet explored fully. A breadth-first search finishes exploring nodes in the
order of their creation, using a queue to hold those that have been generated but
not yet explored. Branch-and-bound uses auxiliary computations to decide at each
instant which node should be explored next, and a priority list to hold those nodes
that have been generated but not yet explored. Remember that heaps are often
ideal for holding priority lists; see Section 5.7. We illustrate the technique with two
examples.
buildings and sites, where cij is the cost of erecting building i on site j, and we
want to minimize the total cost of the buildings. Other examples are easy to invent.
In general, with n agents and n tasks, there are n! possible assignments to consider,
too many for an exhaustive search even for moderate values of n. We therefore
resort to branch-and-bound.
Suppose we have to solve the instance whose cost matrix is shown in Fig-
ure 9.13. To obtain an upper bound on the answer, note that a → 1, b → 2, c → 3,
d → 4 is one possible solution whose cost is 11 + 15 + 19 + 28 = 73. The optimal
solution to the problem cannot cost more than this. Another possible solution is
a → 4, b → 3, c → 2, d → 1, whose cost is obtained by adding the elements in the
other diagonal of the cost matrix, giving 40 + 13 + 17 + 17 = 87. In this case the
second solution is no improvement over the first. To obtain a lower bound on the
solution, we can argue that whoever executes task 1, the cost will be at least 11;
whoever executes task 2, the cost will be at least 12, and so on. Thus adding the
smallest elements in each column gives us a lower bound on the answer. In the ex-
ample, this is 11 + 12 + 13 + 22 = 58. A second lower bound is obtained by adding
the smallest elements in each row, on the grounds that each agent must do some-
thing. In this case we find 11 + 13 + 11 + 14 = 49, not as useful as the previous
lower bound. Pulling these facts together, we know that the answer to our instance
lies somewhere in [58.. 73].
         1   2   3   4
    a   11  12  18  40
    b   14  15  13  22
    c   11  17  19  23
    d   17  14  20  28
Figure 9.13. The cost matrix for an assignment problem
Again, the figure next to each node gives a lower bound on the cost of solutions that
can be obtained by completing the corresponding partial assignment. For example,
at node a → 2, b → 1, task 1 will cost 14 and task 2 will cost 12. The remaining
tasks 3 and 4 must be executed by c or d. The smallest possible cost for task 3 is
thus 19, while that for task 4 is 23. Hence a lower bound on the possible solutions
is 14 + 12 + 19 + 23 = 68. The other two new bounds are calculated similarly.
The most promising node in the tree is now a → 2, b → 3 with a lower bound
of 59. To continue exploring the tree starting at this node, we fix one more element
in the partial assignment, say c. When the assignments of a, b and c are fixed,
however, we no longer have any choice about how we assign d, so the solution is
complete. The right-hand nodes in Figure 9.16, which shows the next stage of our
exploration, therefore correspond to complete solutions.
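A Python sketch of the whole branch-and-bound computation follows. It branches on the agents in order, uses the lower bound just described (the cost of the fixed assignments plus, for each remaining task, the cheapest remaining agent for that task), and keeps the open nodes in a priority queue via heapq. The function and variable names are ours.

    import heapq

    def assign(cost):
        # cost[i][t] is the cost of giving task t to agent i; returns the
        # minimum total cost and the corresponding assignment.
        n = len(cost)

        def lower_bound(partial):
            # partial[i] is the task given to agent i, for the first agents only
            k = len(partial)
            bound = sum(cost[i][partial[i]] for i in range(k))
            for t in set(range(n)) - set(partial):
                bound += min(cost[i][t] for i in range(k, n))
            return bound

        best_cost, best_sol = float('inf'), None
        heap = [(lower_bound(()), ())]        # (lower bound, partial assignment)
        while heap:
            bound, partial = heapq.heappop(heap)
            if bound >= best_cost:
                continue                      # this branch cannot improve
            if len(partial) == n:
                best_cost, best_sol = bound, partial
                continue
            i = len(partial)                  # next agent to assign
            for t in range(n):
                if t not in partial:
                    child = partial + (t,)
                    b = lower_bound(child)
                    if b < best_cost:
                        heapq.heappush(heap, (b, child))
        return best_cost, best_sol

    # The cost matrix of Figure 9.13 (tasks numbered from 0 here).
    cost = [[11, 12, 18, 40],
            [14, 15, 13, 22],
            [11, 17, 19, 23],
            [17, 14, 20, 28]]
    print(assign(cost))    # (61, (0, 2, 3, 1)): a->1, b->3, c->4, d->2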
wi are all strictly positive, and the xi are nonnegative integers. This problem too
can be solved by branch-and-bound.
Suppose without loss of generality that the variables are numbered so that
vi/wi ≥ vi+1/wi+1. Then if the values of x1, ..., xk, 0 ≤ k < n, are fixed, with
x1w1 + · · · + xkwk ≤ W, it is easy to see that the value obtainable by adding further items
of types k + 1 to n cannot exceed

    x1v1 + · · · + xkvk + (W − x1w1 − · · · − xkwk) vk+1/wk+1.
Here the first term gives the value of the items already in the knapsack, while the
second is a bound on the value that can be added.
To solve the problem by branch-and-bound, we explore a tree where at the root
none of the values xi is fixed, and then at each successive level the value of one
more variable is determined, in numerical order of the variables. At each node
we explore, we only generate those successors that satisfy the weight constraint,
so each node has a finite number of successors. Whenever a node is generated
we calculate an upper bound on the value of the solution that can be obtained by
completing the partially specified load, and use these upper bounds to cut useless
branches and to guide the exploration of the tree. We leave the details to the reader.
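Since the details are left to the reader, the sketch below is only one possible interpretation, in Python. It fixes the quantity of one item type per level, computes the upper bound just described, and always expands the open node with the largest bound (heapq is min-oriented, so bounds are negated).

    import heapq

    def knapsack_bb(w, v, W):
        # Sort item types by decreasing value per unit weight.
        order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
        w = [w[i] for i in order]
        v = [v[i] for i in order]
        n = len(w)

        def upper_bound(k, weight, value):
            # Value already packed plus the best the remaining types could add.
            if k >= n:
                return value
            return value + (W - weight) * v[k] / w[k]

        best = 0
        heap = [(-upper_bound(0, 0, 0), 0, 0, 0)]   # (-bound, next type, weight, value)
        while heap:
            neg_bound, k, weight, value = heapq.heappop(heap)
            if -neg_bound <= best:
                continue                            # cannot beat the best load found
            best = max(best, value)                 # the partial load itself is feasible
            if k == n:
                continue
            q = 0                                   # try x_k = 0, 1, 2, ... copies
            while weight + q * w[k] <= W:
                cw, cv = weight + q * w[k], value + q * v[k]
                b = upper_bound(k + 1, cw, cv)
                if b > best:
                    heapq.heappush(heap, (-b, k + 1, cw, cv))
                q += 1
        return best

    print(knapsack_bb([2, 3, 4, 5], [3, 5, 6, 10], 8))   # 15 for the earlier instance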
be made concerning the quality of the bound to be calculated. With a better bound
we look at fewer nodes, and if we are lucky we may be guided to an optimum solution
more quickly. On the other hand, we most likely spend more time at each node
calculating the corresponding bound. In the worst case it may turn out that even an
excellent bound does not let us cut any branches off the tree, so all the extra work
at each node is wasted. In practice, however, for problems of the size encountered
in applications, it almost always pays to invest the necessary time in calculating
the best possible bound (within reason). For instance, one finds applications such
as integer linear programming handled by branch-and-bound, the bound at each
node being obtained by solving a related problem in linear programming with
continuous variables.
val ← −∞
for each position w that is a successor of u do
    if eval(w) > val then val ← eval(w)
                           v ← w
This simplistic approach would not be very successful using the evaluation function
suggested earlier, since it would not hesitate to sacrifice a queen to take a pawn!
If the evaluation function is not perfect, a better strategy for White is to assume
that Black will reply with the move that minimizes the function eval, since the
smaller the value of this function, the better the position is supposed to be for
him. Ideally, Black would like a large negative value. We are now looking one
move ahead. (Remember we agreed to call each action a move, avoiding the term
"half-move".)
val ← −∞
for each position w that is a successor of u do
    if w has no successor
    then valw ← eval(w)
    else valw ← min{eval(x) | x is a successor of w}
    if valw > val then val ← valw
                        v ← w
There is now no question of giving away a queen to take a pawn: which of course
may be exactly the wrong rule to apply if it prevents White from finding the winning
move. Maybe if he looked further ahead the gambit would turn out to be profitable.
On the other hand, we are sure to avoid moves that would allow Black to mate
immediately (provided we can avoid this).
To add more dynamic aspects to the static evaluation provided by eval, it is
preferable to look several moves ahead. To look n moves ahead from position u,
White should move to the position v given by
val ← −∞
for each position w that is a successor of u do
    B ← Black(w, n)
    if B > val then val ← B
                    v ← w
function Black(w, n)
    if n = 0 or w has no successor
    then return eval(w)
    else return min{White(x, n − 1) | x is a successor of w}

function White(w, n)
    if n = 0 or w has no successor
    then return eval(w)
    else return max{Black(x, n − 1) | x is a successor of w}
We see why the technique is called minimax. Black tries to minimize the advantage
he allows to White, and White, on the other hand, tries to maximize the advantage
he obtains from each move.
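In Python the minimax rule might be rendered as follows; successors and eval_position are hypothetical stand-ins for the move generator and the static evaluation function of the game at hand.

    def white_value(pos, n, successors, eval_position):
        succ = successors(pos)
        if n == 0 or not succ:
            return eval_position(pos)
        return max(black_value(x, n - 1, successors, eval_position) for x in succ)

    def black_value(pos, n, successors, eval_position):
        succ = successors(pos)
        if n == 0 or not succ:
            return eval_position(pos)
        return min(white_value(x, n - 1, successors, eval_position) for x in succ)

    def best_move_for_white(pos, n, successors, eval_position):
        # White moves to the successor for which Black's reply is least damaging.
        return max(successors(pos),
                   key=lambda w: black_value(w, n, successors, eval_position))

    # Toy usage on a tiny explicit game tree.
    tree = {'u': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
    values = {'a1': -7, 'a2': 5, 'b1': -3, 'b2': 10}
    succ = lambda p: tree.get(p, [])
    ev = lambda p: values.get(p, 0)
    print(best_move_for_white('u', 1, succ, ev))   # 'b': min(-3, 10) beats min(-7, 5)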
More generally, suppose Figure 9.18 shows part of the graph corresponding to
some game. If the values attached to the nodes on the lowest level are obtained by
applying the function eval to the corresponding positions, the values for the other
nodes can be calculated using the minimax rule. In the example we suppose that
player A is trying to maximize the evaluation function and that player B is trying to
minimize it. If A plays to maximize his advantage, he will choose the second of the
three possible moves. This assures him a value of at least 10; but see Problem 9.55.
[Figure 9.18: part of a game graph. Successive levels belong to player A (rule: max), player B (rule: min) and player A again (rule: max); the lowest-level positions carry the static evaluations −7, 5, −3, 10, −20, 0, −5, 10, −15, 20, 1, 6, −8, 14, −30, 0, −8, −9.]
The basic minimax technique can be improved in a number of ways. For exam-
ple, it may be worthwhile to explore the most promising moves in greater depth.
Similarly, the exploration of certain branches can be abandoned early if the infor-
mation we have about them is already sufficient to show that they cannot possibly
influence the value of nodes further up the tree. This second type of improvement,
which we shall not describe in this book, is generally known as alpha-beta pruning.
9.9 Problems
Problem 9.1. Add nodes (8,7), (7,6), (6,5) and their descendants to the graph of
Figure 9.1.
Problem 9.2. Can a winning position in the game described in Section 9.1 have
more than one losing position among its successors? In other words, are there
positions in which several different winning moves are available? Can this happen
in the case of a winning initial position (n, n - 1) ?
Problem 9.3. Suppose we change the rules of the game of Section 9.1 so that the
player who is forced to take the last match loses. This is the misère version of the
game. Suppose also that the first player must take at least one match and that he
must leave at least two. Among the initial positions with three to eight matches,
which are now winning positions for the first player?
Problem 9.4. Modify the algorithm recwin of Section 9.1 so it returns an integer k,
where k = 0 if the position is a losing position, and 1 ≤ k ≤ j if it is a winning move
to take k matches.
Problem 9.5. Prove that in the game described in Section 9.1 the first player has
a winning strategy if and only if the initial number of matches does not appear in
the Fibonacci sequence.
Problem 9.6. Consider a game that cannot continue for an infinite number of
moves, and where no position offers an infinite number of legal moves to the player
whose turn it is. Let G be the directed graph corresponding to this game. Show
that the method described in Section 9.1 allows all the nodes of G to be labelled as
win, lose or draw.
Problem 9.7. Consider the following game. Initially a heap of n matches is placed
on the table between two players. Each player in turn may either (a) split any heap
on the table into two unequal heaps, or (b) remove one or two matches from any
heap on the table. He may not do both. He may only split one heap, and if
he chooses to remove two matches, they must both come from the same heap.
The player who removes the last match wins.
For example, suppose that during play we arrive at the position {5,4}; that is, there
are two heaps on the table, one of 5 matches, the other of 4. The player whose turn it
is may move to {4,3,2} or {4,4,1} by splitting the heap of 5, to {5,3,1} by splitting
the heap of 4 (but not to {5,2,2}, since the new heaps must be unequal), or to {4,4},
{4,3}, {5,3} or {5,2} by taking one or two matches from either of the heaps.
Sketch the graph of the game for n = 5. If both play correctly, does the first or the
second player win?
Problem 9.8. Repeat the previous problem for the misère version of the game,
where the player who takes the last match loses.
Problem 9.9. Consider a game of the type described in Section 9.1. When we use
a graph of winning and losing positions to describe such a game, we implicitly
assume that both players will move intelligently so as to maximize their chances of
winning. Can a player in a winning position lose if his opponent moves stupidly
and makes an unexpected "error"?
Problem 9.10. For any of the tree traversal techniques mentioned in Section 9.2,
prove that a recursive implementation takes storage space in Ω(n) in the worst
case.
Problem 9.11. Show how any of the tree traversal techniques mentioned in Sec-
tion 9.2 can be implemented so as to take only a time in 6((n) and storage space
in E6(1), even when the nodes do not contain a pointer to their parents (in which
case the problem is trivial).
Problem 9.14. Show how a depth-first search progresses through the graph of
Figure 9.3 if the starting point is node 6 and the neighbours of a given node are
examined (a) in numerical order, and (b) in decreasing numerical order.
Exhibit the spanning tree and the numbering of the nodes of the graph generated
by each of these searches.
Problem 9.15. Analyse the running time of algorithm dfs if the graph to be ex-
plored is represented by an adjacency matrix (type adjgraph of Section 5.4) rather
than by lists of adjacent nodes.
Problem 9.16. Show how depth-first search can be used to find the connected
components of an undirected graph.
Problem 9.17. Let G be an undirected graph, and let T be the spanning tree gener-
ated by a depth-first search of G. Prove that an edge of G that has no corresponding
edge in T cannot join nodes in different branches of the tree, but must necessarily
join some node v to one of its ancestors in T.
Problem 9.20. Prove that for every pair of distinct nodes v and w in a biconnected
graph, there exist at least two paths joining v and w that have no nodes in common
except the starting and ending nodes.
Problem 9.21. In the algorithm for finding the articulation points of an undirected
graph given in Section 9.3.1, show how to calculate the values of both prenum and
highest during the depth-first search of the graph, and implement the corresponding
algorithm.
Problem 9.22. The example in Section 9.3.1 finds the articulation points for the
graph of Figure 9.3 using a depth-first search starting at node 1. Verify that the
same articulation points are found if the search starts at node 6.
Problem 9.23. Illustrate how the algorithm for finding the articulation points of an
undirected graph given in Section 9.3.1 works on the graph of Figure 9.19, starting
the search (a) at node 1, and (b) at node 3.
Problem 9.25. Illustrate the progress of the depth-first search algorithm on the
graph of Figure 9.5 if the starting point is node 1 and the neighbours of a given
node are examined in decreasing numerical order.
Problem 9.27. For the graph of Figure 9.8, what is the topological order obtained
if we use the procedure suggested in Section 9.4.1, starting the depth-first search
at node 1, and visiting the neighbours of a node in numerical order?
Problem 9.28. What is wrong with the following argument? When we visit node
v of a graph G using depth-first search, we immediately explore all the other nodes
that can be reached from v by following edges of G. In the topological ordering,
these other nodes must come later than v. Thus to obtain a topological ordering
of the nodes, it suffices to add an extra line
write v
at the beginning of procedure dfs.
Problem 9.29. A directed graph is strongly connected if there exist paths from u to
v and from v to u for every pair of nodes u and v. If a directed graph is not strongly
connected, the largest sets of nodes such that the induced subgraphs are strongly
connected are called the strongly connected components of the graph. For example,
the strongly connected components of the graph in Figure 9.5 are {1, 2, 3}, {5, 6}
and {4, 7,81. Design an efficient algorithm based on depth-first search to find the
strongly connected components of a graph.
Problem 9.32. Sketch the forest generated by the search of Problem 9.31, showing
the remaining edges of the graph as broken lines. How many kinds of "broken"
edges are possible?
Problem 9.33. Justify the claim that a depth-first search of the graph of Figure 9.11,
starting at node 1 and visiting neighbours in the order "first division by 3, then
multiplication by 2", works down to node 12 and then visits successively nodes 24,
48, 96, 32 and 64.
Problem 9.34. List the first 15 nodes visited by a breadth-first search of the graph
of Figure 9.11 starting at node 1 and visiting neighbours in the order "first division
by 3, then multiplication by 2".
Problem 9.35. Section 9.5 gives one way to produce the value 13, starting from 1
and using the operations multiplication by 2 and division by 3. Find another way
to produce the value 13 using the same starting point and the same operations.
Problem 9.37. An Euler path in an undirected graph is a path where every edge
appears exactly once. Design an algorithm that determines whether or not a given
graph has an Euler path, and prints the path if so. How much time does your
algorithm take?
Problem 9.40. Sketch the search tree explored by a backtracking algorithm solving
the same instance of the knapsack problem as that in Section 9.6.1, but assuming
this time that items are loaded in order of decreasing weight.
Problem 9.41. Solve the same instance of the knapsack problem as that in Section
9.6.1 by dynamic programming. You will need to work Problem 8.15 first.
Problem 9.42. Adapt the function backpack of Section 9.6.1 to give the composition
of the best load as well as its value.
Problem 9.43. Let the vector q[1..8] represent the positions of eight queens on a
chessboard, with one queen in each row: if q[i] = j, 1 ≤ i ≤ 8, 1 ≤ j ≤ 8, the queen
in row i is in column j. Write a function solution(q) that returns false if at least one
pair of queens threaten each other, and returns true otherwise.
Problem 9.44. Given that the algorithm queens finds a solution and stops after
trying 1 299 852 positions, solve the eight queens problem without using a com-
puter.
Problem 9.45. Suppose the procedure use(T) called from the procedure perm(i)
of Section 9.6 consists simply of printing the array T on a new line. Show the result
of calling perm(1) when n = 4.
Problem 9.46. Suppose the procedure use(T) called from the procedure perm(i)
of Section 9.6 takes constant time. How much time is needed, as a function of n,
to execute the call perm(1)?
Rework the problem assuming now that use(T) takes a time in Θ(n).
Problem 9.47. For which values of n does the n queens problem have no solu-
tions? Prove your answer.
Problem 9.48. How many solutions are there to the eight queens problem? How
many distinct solutions are there if we do not distinguish solutions that can be
transformed into one another by rotations and reflections?
Problem 9.51. The backtracking method suggested in Problem 9.50 illustrates the
principle, but is very inefficient in practice. Find a much better way of solving the
same problem.
Problem 9.52. In Section 9.7 we calculated lower bounds for the nodes in the
search tree by assuming that each unassigned task would be executed by the unas-
signed agent who could do it at least cost. This is like crossing out the rows and
columns of the cost matrix corresponding to agents and tasks already assigned,
and then adding the minimum elements from each remaining column. An alter-
native method of calculating bounds is to assume that each unassigned agent will
perform the task he can do at least cost. This is like crossing out the rows and
columns of the cost matrix corresponding to agents and tasks already assigned,
and then adding the minimum elements from each remaining row. Show how
a branch-and-bound algorithm for the instance in Section 9.7 proceeds using this
second method of calculating lower bounds.
Problem 9.53. Use branch-and-bound to solve the assignment problems with the
following cost matrices:
             1   2   3   4
        a   94   1  54  68
    (a) b   74  10  88  82
        c   62  88   8  76
        d   11  74  81  21

             1   2   3   4   5
        a   11  17   8  16  20
    (b) b    9   7  12   6  15
        c   13  16  15  12  16
        d   21  24  17  28  26
        e   14  10  12  11  15
Problem 9.54. Four types of object are available, whose weights are respectively
7, 4, 3 and 2 units, and whose values are 9, 5, 3 and 1. We can carry a maximum
of 10 units of weight. Objects may not be broken into smaller pieces. Determine
the most valuable load that can be carried, using (a) dynamic programming, and
(b) branch-and-bound. For part (a), you will need to work Problem 8.15 first.
Problem 9.55. Looking at the tree of Figure 9.18, we said that the player about to
move in this position is assured a value of at least 10. Is this strictly true?
Problem 9.56. Let u correspond to the initial position of the pieces in the game of
chess. What can you say about White(u, 12345), besides the fact that it would take
far too long to calculate in practice, even with a special-purpose computer? Justify
your answer.
Berlekamp, Conway and Guy (1982) give more versions of the game of Nim
than most people will care to read. The game in Problem 9.7 is a variant of Grundy's
game, also discussed by Berlekamp, Conway and Guy (1982). The book by Nils-
son (1971) is a goldmine of ideas concerning graphs and games, the minimax
technique, and alpha-beta pruning. Some algorithms for playing chess appear
in Good (1968). A lively account of the first time a computer program beat the
world backgammon champion is given in Deyong (1977). For a more technical
description of this feat, consult Berliner (1980). In 1994, Garri Kasparov, the world
chess champion, was beaten by a Pentium microcomputer. At the time of writing,
humans are still unbeaten at the game of go.
A solution to Problem 9.11 is given in Robson (1973). To learn more about
preconditioning, read Brassard and Bratley (1988). Many algorithms based on
depth-first search can be found in Tarjan (1972), Hopcroft and Tarjan (1973) and
Aho, Hopcroft and Ullman (1974, 1983). Problem 9.24 is solved in Hopcroft and
Tarjan (1974). See also Rosenthal and Goldner (1977) for an efficient algorithm that,
given an undirected graph that is connected but not biconnected, finds a smallest
set of edges that can be added to make the graph biconnected.
The problem involving multiplying by 2 and dividing by 3 is reminiscent of
Collatz's problem: see Problem E16 in Guy (1981). Backtracking is described in
Golomb and Baumert (1965) and techniques for analysing its efficiency are given
in Knuth (1975). The eight queens problem was invented by Bezzel in 1848; see the
account of Ball (1967). Irving (1984) gives a particularly efficient backtracking
algorithm to find all the solutions for the n queens problem. The problem can
be solved in linear time with a divide-and-conquer algorithm due to Abramson
and Yung (1989) provided we are happy with a single solution. We shall come
back to this problem in Chapter 10. The three-dimensional n² queens problem
mentioned in Problem 9.49 was posed by McCarty (1978) and solved by Allison,
Yee and McGaughey (1989): there are no solutions for n < 11 but 264 solutions
exist for n = 11.
Branch-and-bound is explained in Lawler and Wood (1966). The assignment
problem is well known in operations research: see Hillier and Lieberman (1967).
For an example of solving the knapsack problem by backtracking, see Hu (1981).
Branch-and-bound is used to solve the travelling salesperson problem in Bellmore
and Nemhauser (1968).
Chapter 10
Probabilistic Algorithms
10.1 Introduction
Imagine you are the heroine of a fairy tale. A treasure is hidden at a place described
by a map that you cannot quite decipher. You have managed to reduce the search
to two possible hiding-places, which are, however, a considerable distance apart.
If you were at one or the other of these two places, you would immediately know
whether it was the right one. It takes five days to get to either of the possible hiding-
places, or to travel from one of them to the other. The problem is complicated by
the fact that a dragon visits the treasure every night and carries part of it away to
an inaccessible den in the mountains. You estimate it will take four more days'
computation to solve the mystery of the map and thus to know with certainty
where the treasure is hidden, but if you set out on a journey you will no longer
have access to your computer. An elf offers to show you how to decipher the map
if you pay him the equivalent of the treasure that the dragon can carry away in
three nights. Should you accept the elf's offer?
Obviously it is preferable to give three nights' worth of treasure to the elf
rather than allow the dragon four extra nights of plunder. If you are willing to take
a calculated risk, however, you can do better. Suppose that x is the value of the
treasure remaining today, and that y is the value of the treasure carried off every
night by the dragon. Suppose further that x > 9y. Remembering it will take you
five days to reach the hiding-place, you can expect to come home with x - 9y if
you wait four days to finish deciphering the map. If you accept the elf's offer, you
can set out immediately and bring back x - 5y, of which 3y will go to pay the elf;
you will thus have x - 8y left. A better strategy is to toss a coin to decide which
possible hiding-place to visit first, journeying on to the other if you find you have
decided wrong. This gives you one chance out of two of coming home with x - 5y,
and one chance out of two of coming home with x − 10y. Your expected profit is
therefore (x − 5y)/2 + (x − 10y)/2 = x − 7.5y.
This fable can be translated into the context of algorithmics as follows: when an
algorithm is confronted by a choice, it is sometimes preferable to choose a course of
action at random, rather than to spend time working out which alternative is best.
Such a situation arises when the time required to determine the optimal choice is
prohibitive, compared to the time saved on the average by making this optimal
choice.
The main characteristic of probabilistic algorithms is that the same algorithm
may behave differently when it is applied twice to the same instance. Its execution
time, and even the result obtained, may vary considerably from one use to the
next. This can be exploited in a variety of ways. For example, a deterministic algo-
rithm is never allowed to go astray (infinite loop, division by zero, etc.) because if
it does so on a given instance, we can never solve this instance with that algorithm.
By contrast, such behaviour is acceptable for a probabilistic algorithm provided it
occurs with reasonably small probability on any given instance: if the algorithm
gets stuck, simply restart it on the same instance for a fresh chance of success.
Another bonus of this approach is that if there is more than one correct answer,
several different ones may be obtained by running the probabilistic algorithm more
than once; a deterministic algorithm always comes up with the same answer, al-
though of course it can be programmed to seek several.
Another consequence of the fact that probabilistic algorithms may behave dif-
ferently when run twice on the same input is that we shall sometimes allow them
to yield erroneous results. Provided this happens with reasonably small proba-
bility on any given instance, it suffices to invoke the algorithm several times on
the desired instance to build arbitrarily high confidence that the correct answer is
obtained. By contrast, a deterministic algorithm that gives wrong answers on some
inputs is worthless for most applications because it will always err on those inputs.
The analysis of probabilistic algorithms is often complex, requiring an acquain-
tance with results in probability and statistics beyond those introduced in Sec-
tion 1.7.4. For this reason, a number of results are cited without proof in this chapter.
function uniform(a, b)
  return a + (b − a) × uniform(0, 1)

function uniform(i..j)
  return ⌊uniform(i, j + 1)⌋

function coinflip
  if uniform(0..1) = 0 then return heads
  else return tails
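In a concrete implementation these primitives map directly onto the pseudorandom facilities of most languages. The following Python sketch is one possible rendering; the names uniform_real, uniform_int and coinflip are ours and simply mirror the pseudocode above.

import math
import random

def uniform_real(a, b):
    # real value uniformly distributed between a and b
    return a + (b - a) * random.random()

def uniform_int(i, j):
    # integer uniformly distributed between i and j inclusive
    return math.floor(uniform_real(i, j + 1))

def coinflip():
    # "heads" or "tails", each with probability 1/2
    return "heads" if uniform_int(0, 1) == 0 else "tails"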
Truly random generators are not usually available in practice. Moreover, it is not
realistic to expect uniform(0, 1) to take an arbitrary real value between 0 and 1.
Most of the time pseudorandom generators are used instead: these are deterministic
procedures able to generate long sequences of values that appear to have the prop-
erties of a random sequence. To start a sequence, we supply an initial value called
a seed. The same seed always gives rise to the same sequence. To obtain different
sequences, we may choose, for example, a seed that depends on the date or time.
Most programming languages include such a generator, although some implemen-
tations should be used with caution. Using a good pseudorandom generator, the
theoretical results obtained in this chapter concerning the efficiency and reliability
of various probabilistic algorithms can generally be expected to hold. However,
the impractical hypothesis that a genuinely random generator is available is crucial
when we carry out the analysis.
The theory of pseudorandom generators is beyond the scope of this book, but
the general principle is simple. Most generators are based on a pair of functions
f: X → X and g: X → Y, where X is a sufficiently large set and Y is the domain of
pseudorandom values to be generated. We require both X and Y to be finite, which
means that we can only hope to approximate the effect of uniform(0, 1) by using a
suitably large Y. On the other hand, Y = {0, 1} is adequate for many applications:
it corresponds to tossing a fair coin. To generate our pseudorandom sequence, let
s ∈ X be a seed. This seed defines a sequence
    x₀ = s
    xᵢ = f(xᵢ₋₁) for all i > 0.
Finally, the pseudorandom sequence y₀, y₁, y₂, ... is defined by yᵢ = g(xᵢ) for
all i ≥ 0. This sequence is necessarily periodic, with a period that cannot exceed
the number of elements in X. Nevertheless, if X is sufficiently large and if f and g
(and sometimes s) are chosen adequately, the period can be made very long, and
the sequence may be for most practical purposes indistinguishable from a truly
random sequence of elements of Y.
We give one simple example to illustrate this principle. Let p and q be two
distinct large prime numbers, both congruent to 3 modulo 4, and let n be their prod-
uct. Assume p and q are large enough that it is infeasible to factorize n. (At the
time of writing, 100 digits each is considered sufficient; see Section 10.7.4.) Let X
be the set of integers from 0 to n − 1 and let Y = {0, 1}. Define f(x) = x² mod n
and g(x) = x mod 2. Provided the seed is chosen uniformly at random among the
elements of X that are relatively prime to n, it has been demonstrated that it is
almost always infeasible to distinguish the sequence thus generated from a truly
random binary sequence. For most practical algorithmic purposes, faster but less
secure pseudorandom generators may be preferable. For instance, linear congru-
ential pseudorandom generators of the form f(x) = (ax + b) mod n for appropriate
values of a, b and n are widely used.
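As a concrete illustration, here is a minimal Python sketch of both kinds of generator just described. The tiny modulus is for demonstration only: a serious x² mod n generator would use secret primes of a hundred digits or so, and the seed would be chosen relatively prime to n.

import random

def square_mod_bits(seed, n):
    # f(x) = x*x mod n, g(x) = x mod 2: a stream of pseudorandom bits
    # that is hard to distinguish from random when n is hard to factorize
    x = seed
    while True:
        x = (x * x) % n
        yield x % 2

def linear_congruential(seed, a, b, n):
    # f(x) = (a*x + b) mod n, with g the identity: faster but less secure
    x = seed
    while True:
        x = (a * x + b) % n
        yield x

# Toy example: p = 7 and q = 11 are both congruent to 3 modulo 4.
bits = square_mod_bits(seed=10, n=7 * 11)
print([next(bits) for _ in range(20)])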
(unlikely!) scenario, we can estimate the width of the planks with arbitrary precision
by dropping a sufficiently large number n of needles and counting the number k
that fall across a crack. Our estimate is simply w ≈ 2ℓn/(kπ), where ℓ is the length
of the needles. As always, the more
needles we drop, the more precise our estimate is likely to be.
With high probability, the algorithms to estimate the value of π and the width
of the planks return a value that converges to the correct answer as the number
of needles dropped grows to infinity (provided the needles are half as long as the
planks are wide in the first case, and both π and the length of the needles are known
precisely in the second case). The natural question to ask is: How quickly do these
algorithms converge? Alas, they are painfully slow: you need to drop 100 times
more needles to obtain one more decimal digit in precision.
Convergence analysis for numerical probabilistic algorithms requires more
knowledge of probability and statistics than we normally assume from our read-
ers. Nevertheless, we proceed with a sketch of the basic approach for those who
have the required background. We concentrate on the analysis of the algorithm for
estimating π. For technical reasons, we first analyse how good k/n would be as an
estimator for 1/π. Consider an arbitrarily small positive ε. We associate a random
variable Xᵢ with each needle: Xᵢ = 1 if the i-th needle falls across a crack and Xᵢ = 0
otherwise. By Buffon's theorem, Pr[Xᵢ = 1] = 1/π for each i. The expectation and
variance of Xᵢ are E(Xᵢ) = 1/π and Var(Xᵢ) = (1/π)(1 − 1/π), respectively. Now, let X
denote the random variable corresponding to our estimate of 1/π after dropping
n needles: X = (X₁ + X₂ + ··· + Xₙ)/n. For any integer k between 0 and n,
    Pr[X = k/n] = (n choose k) (1/π)ᵏ (1 − 1/π)ⁿ⁻ᵏ.
For instance, when n = 22, Pr[X = 7/22] ≈ 18% and Pr[6/22 ≤ X ≤ 8/22] is just
slightly above 50%. This random variable has expectation E(X) = 1/π and variance
Var(X) = (1/π)(1 − 1/π)/n. By the Central Limit Theorem, the distribution of X is
almost normal provided n is sufficiently large; see Section 1.7.4. Tables for the
normal distribution tell us that
    Pr[|X − E(X)| ≤ 1.645 √Var(X)] > 90%.
Using our values for E(X) and Var(X), we infer that Pr[|X − 1/π| ≤ ε] > 90% when
ε ≥ 1.645 √((1/π)(1 − 1/π)/n), and thus when n ≥ 0.588/ε². Therefore, it suffices to drop
at least 0.588/ε² needles to obtain an absolute error in our estimate of 1/π that
does not exceed ε nine times out of ten. This dependence on 1/ε² explains why
one additional digit of precision, which means that ε must be 10 times smaller,
requires 100 times more needles. If we are not happy with a confidence interval
that is reliable only nine times out of ten, another entry in the normal distribution
table can be used. For example, the fact that
    Pr[|X − E(X)| ≤ 2.576 √Var(X)] > 99%
tells us that Pr[|X − 1/π| ≤ ε] > 99% provided ε ≥ 2.576 √((1/π)(1 − 1/π)/n), which is
satisfied when n ≥ 1.440/ε². Thus a tenfold reduction in the probability of error is
inexpensive compared to a one-figure increase in precision. In general, after we
have decided how many needles to drop, there is a tradeoff between the number
of digits that we output and the probability that those digits are correct.
Of course, we are interested in finding a confidence interval for our estimate
of π, not of 1/π. A straightforward calculation shows that if |X − 1/π| ≤ ε then
|1/X − π| ≤ ε′, where ε′ = π²ε/(1 − πε); in particular 9.8ε ≤ ε′ ≤ 10ε when ε ≤ 0.00415.
This means intuitively that the same number of needles will provide one less decimal
place concerning π than 1/π, although the relative error is essentially the same because π
is about 10 times bigger than 1/π. Putting all this together, assuming we want an
absolute error in our estimate for π that does not exceed ε′ ≤ 0.0415 at least 99%
of the time, it is sufficient to drop 144/ε′² needles. Slightly fewer needles suffice
in the limit of small ε′. This is nearly one and a half million needles even if we are
satisfied that our estimate be within 0.01 of the exact value of π ninety-nine times
out of one hundred! Did we ever claim this scheme was practical?
To summarize, the user of the algorithm should supply two parameters: the de-
sired reliability p and precision ε. From those parameters, with the help of a table
of the normal distribution, the algorithm determines the number n of needles that
need to be thrown so the resulting estimate will be between π − ε and π + ε with
probability at least p. For instance, the algorithm would choose n to be about one
and a half million if the user supplied p = 99% and ε = 0.01. Finally, the algorithm
drops n needles on the floor, counts the number k of them that fall across a crack,
and announces: "The value of π is probably between n/k − ε and n/k + ε". This an-
swer will be correct a proportion p of the time if the experiment is repeated several
times with the same values of p and ε.
There remains one subtlety if you want to implement a similar algorithm to
estimate a value that you really do not know already: we needed the value of π
for our convergence analysis! How can we determine the required number n of
needles as a function of p and ε alone? One solution is to be overly conservative in
our estimate of the variance of the random variables involved. The variance of Xi is
p (1 - p) when Xi = 1 with probability p and Xi = 0 otherwise, which is at most 1/4
(the worst case is when p = 1/2). If we determine n as if the variance of Xi were 1/4,
we will throw more needles than strictly necessary but the desired precision will
be obtained with a reliability better than required. The algorithm can then estimate
the actual reliability obtained using a table of Student's t distribution. The details
can be found in any standard text on statistics. An alternative solution is to use a
small sample of arbitrary size to get a first estimate of the required value, use this to
make a better estimate of the sample size needed, take another sample, and so on.
Again, the details are in any standard text.
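To make the conservative approach concrete, the following Python sketch computes a sample size from the desired reliability p and precision ε using the worst-case variance 1/4, then estimates an unknown probability by repeated sampling. The function names are ours, the normal quantile comes from Python's statistics module, and the dart-throwing example at the end is merely an analogue of the needle experiment, not the experiment itself.

from statistics import NormalDist
import random

def samples_needed(p, eps):
    # Worst-case variance 1/4 guarantees precision eps with reliability
    # at least p, whatever the true probability being estimated may be.
    z = NormalDist().inv_cdf((1 + p) / 2)   # two-sided normal quantile
    return int(z * z * 0.25 / (eps * eps)) + 1

def estimate_probability(experiment, p, eps):
    # 'experiment' is a zero-argument function returning True or False.
    n = samples_needed(p, eps)
    k = sum(1 for _ in range(n) if experiment())
    return k / n

# Estimate pi/4 as the chance that a random point of the unit square
# falls inside the quarter disc, then scale the estimate by 4.
in_disc = lambda: random.random() ** 2 + random.random() ** 2 <= 1.0
print(4 * estimate_probability(in_disc, 0.99, 0.0025))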
    I = ∫ₐᵇ f(x) dx

[Figure: the value I of the integral is the area under the curve of f between x = a and x = b.]
After reading the previous section, a probabilistic algorithm to estimate the integral
should immediately spring to mind: estimate the average height of the curve by
random sampling and multiply the result by b - a.
function MCint(f, n, a, b)
  sum ← 0
  for i ← 1 to n do
    x ← uniform(a, b)
    sum ← sum + f(x)
  return (b − a) × (sum/n)
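In Python the same algorithm reads as follows; mc_int is our name for it, and random.uniform plays the role of the uniform primitive.

import random

def mc_int(f, n, a, b):
    # Estimate the integral of f over [a, b]: average height of the curve
    # at n random points, multiplied by the width of the interval.
    total = 0.0
    for _ in range(n):
        total += f(random.uniform(a, b))
    return (b - a) * (total / n)

# Example: the integral of x*x over [0, 1] is 1/3.
print(mc_int(lambda x: x * x, 100_000, 0.0, 1.0))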
An analysis similar to that for Buffon's needle shows that the variance of the esti-
mate calculated by this algorithm is inversely proportional to the number of sample
points, and the distribution of the estimate is approximately normal when n is large.
Therefore, the expected error in the estimate is again inversely proportional to √n,
so that 100 times more work is needed to obtain one additional digit of precision.
Algorithm MCint is hardly more practical than Buffon's method for estimat-
ing π; see Problem 10.3. A better estimate of the integral can generally be obtained
by deterministic methods. Perhaps the simplest is similar in spirit to MCint except
that it estimates the average height by deterministic sampling at regular intervals.
function DETint(f, n, a, b)
  sum ← 0
  delta ← (b − a)/n
  x ← a + delta/2
  for i ← 1 to n do
    sum ← sum + f(x)
    x ← x + delta
  return sum × delta
In general, this deterministic algorithm needs many fewer iterations than Monte
Carlo integration to obtain a comparable degree of precision. This is typical of
most of the functions we may wish to integrate. However, to every deterministic
integration algorithm, even the most sophisticated, there correspond continuous
functions that can be constructed expressly to fool the algorithm. Consider for
example the function f(x) = sin²((100!)πx). Any call on DETint(f, n, 0, 1) with
1 ≤ n ≤ 100 returns the value zero, even though the true value of this integral is 1/2.
No function can play this kind of trick on the Monte Carlo integration algorithm,
although there is an extremely small probability that the algorithm might make a
similar error even when f is a thoroughly ordinary function.
A better reason to use Monte Carlo integration in practice arises when we have
to evaluate a multiple integral. If a deterministic integration algorithm using some
systematic method to sample the function is generalized to several dimensions,
the number of sample points needed to achieve a given precision grows exponen-
tially with the dimension of the integral to be evaluated. If 100 points are needed
to evaluate a simple integral, then it will probably be necessary to use all 10000
points of a 100 x 100 grid to achieve the same precision when a double integral is
evaluated; one million points will be needed for a triple integral, and so on. With
Monte Carlo integration, on the other hand, the dimension of the integral generally
has little effect on the precision obtained, although the amount of work for each
iteration is likely to increase slightly with the dimension. In practice, Monte Carlo
integration is used to evaluate integrals of dimension four or higher because no
other simple technique can compete. Nevertheless, there are better algorithms that
are more complicated. For instance, the precision of the answer can be improved
using hybrid techniques that are partly systematic and partly probabilistic. If the
dimension is fixed, it may be preferable to choose points systematically so that they
are well-spaced yet reasonable in number, a technique known as quasi Monte Carlo
integration.
implement two procedures init(c) and tick(c), and one function count(c) such that
a call on count(c) returns the number of calls to tick(c) since the last call on init(c).
In other words, init resets the counter to zero, tick adds 1 to it, and count asks for
its current value. The algorithms should be able to maintain an arbitrarily large
number of counters cl, c2, ... , and side-effects are not allowed: no information
can be passed between calls, except through the register transmitted as an explicit
parameter.
We suggested above that we could skip some values to count farther. However,
this is nonsense if we insist on a deterministic counting strategy. Because side-
effects are not allowed, there is no way tick can add 1 to the counter every other
call: the behaviour of tick(c) must be determined completely from the current
value of c. If there exists a value of c such that tick(c) leaves c unchanged, the
counter will stick at that point until init(c) is called. (This is reasonable when the
counter has reached its upper limit.) Since c can assume only 2ⁿ different values,
the counter is forced to reassume a previously held value after at most 2ⁿ calls
on tick(c). Therefore, deterministically counting more than 2ⁿ events in an n-bit
register is impossible.
Provided we relax the requirement that count should return the exact number of
ticks since the last init, there is an obvious probabilistic strategy to count twice as far.
The register is an ordinary binary counter, initialized to zero by a call on init. Each
time tick is called, flip a fair coin. If it comes up heads, add 1 to the register. If it
comes up tails, however, do nothing. When count is called, return twice the value
stored in the register. Following a call on init, it is easy to prove that the expected
value returned by count after t calls to tick is precisely t, even when t is odd.
The variance can be shown to be reasonably small, so the estimate returned by
count is fairly accurate most of the time; see Problem 10.5.
We do not expect you to be impressed by the previous paragraph. Counting
twice as far simply means up to 2ⁿ⁺¹ − 2, which could have been achieved deter-
ministically if a single additional bit had been available in the register. We now
show that probabilistic counting strategies can count exponentially farther: from 0
to 2^(2ⁿ − 1) − 1. Thus, 8 bits are sufficient to count more than 5 × 10⁷⁶ events! The idea
is to keep in the register an estimate of the logarithm of the actual number of ticks.
More precisely, count(c) returns 2ᶜ − 1. We subtract 1 so that a count of 0 can be
represented; moreover count(0) = 0 as it should since init(c) sets c to 0. It remains
to see how to implement tick.
Assume that 2ᶜ − 1 is a good estimate of the number of ticks since the last ini-
tialization. After an additional tick, a good estimate of the total number of ticks
would be 2ᶜ, but this is not of a form that can be represented with our strategy.
To circumvent this, we add 1 to c with some probability p to be determined and we
leave it unchanged otherwise. Thus, our estimate of the number of ticks becomes
2ᶜ⁺¹ − 1 with probability p, whereas it remains 2ᶜ − 1 with complementary prob-
ability 1 − p. The expected value returned by count(c) after this tick is therefore
    p(2ᶜ⁺¹ − 1) + (1 − p)(2ᶜ − 1) = 2ᶜ − 1 + p2ᶜ,
and we see that it suffices to set p = 2⁻ᶜ to obtain 2ᶜ, the desired expected value
for count(c). (This reasoning is not rigorous; work Problem 10.6 for a proof that
the expected value returned by count is correct.) To summarize, we obtain the
following algorithms.
following algorithms.
procedure init(c)
  c ← 0

procedure tick(c)
  for i ← 1 to c do
    if coinflip = heads then return
  c ← c + 1
  {The probability of overflow is too small to be worth checking}

function count(c)
  return 2ᶜ − 1
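A Python sketch of this probabilistic counter follows; the register is modelled by a one-element list so that tick can update it in place, which stands in for the register parameter c of the pseudocode.

import random

def init(c):
    c[0] = 0

def tick(c):
    # Add 1 to the register only with probability 2**(-c): flip c fair
    # coins and give up as soon as one of them comes up heads.
    for _ in range(c[0]):
        if random.random() < 0.5:    # heads
            return
    c[0] += 1

def count(c):
    return 2 ** c[0] - 1

c = [0]
init(c)
for _ in range(1_000_000):
    tick(c)
print(count(c))    # an estimate of one million, held in about 5 bits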
for small ε. The division by ε is to keep the desirable property that count always
gives the exact answer after 0 or 1 ticks. Using ε = 1/30, this allows counting up to
more than 125 000 events in an 8-bit register with less than 24% relative error 95%
of the time. Of course, the probability that tick will increase c when called should
no longer be 2⁻ᶜ: it must be set appropriately; see Problem 10.8.
For an entirely different type of probabilistic counting, which saves time rather
than storage, be sure to work Problem 10.12. Under some conditions, this algorithm
can count approximately the number of elements in a set in a time that is in the
order of the square root of that number.
one-half.
On the other hand, Es(D) is always 0 if C = AB since in this case D = 0. This
suggests testing whether or not C = AB by computing Es (D) for a randomly chosen
subset S and comparing the result with 0. But how can this be done efficiently
without first computing D (and hence AB)?
function Freivalds(A, B, C, n)
  for j ← 1 to n do X[j] ← uniform(0..1)
  if (XA)B = XC then return true
  else return false
From the discussion above, we know that this algorithm returns a correct an-
swer with probability at least one-half on every instance: it is therefore 1/2-correct
by definition. It is not p-correct for any p smaller than 1/2, however, because
its error probability is exactly 1/2 when C differs from AB in precisely one row:
an incorrect answer is returned if and only if X[i] = 0 where it is row i that differs
between C and AB. Is this really interesting? Surely an error probability of 50%
is intolerable in practice? Worse, an easier "algorithm" to achieve this error rate
would be to flip a single fair coin and return true or false depending on the outcome
without even looking at the three matrices in question! The key observation is that
whenever Freivalds(A, B, C, n) returns false, you can be sure this answer is correct:
the existence of a vector X such that (XA)B differs from XC allows you to conclude
with certainty that AB ≠ C. It is only when the algorithm returns true that you are
not sure whether to believe it.
Consider for example the following 3 × 3 matrices.

        ( 1  2  3 )        ( 3  1  4 )        ( 11  29  37 )
    A = ( 4  5  6 )    B = ( 1  5  9 )    C = ( 29  65  91 )
        ( 7  8  9 )        ( 2  6  5 )        ( 47  99  45 )
A call on Freivalds(A, B, C, 3) could choose X = (1, 1, 0), in which case XA is obtained
by adding the first and second rows of A, thus XA = (5, 7, 9). Continuing, we
calculate (XA)B = (40, 94, 128). This is compared to XC = (40, 94, 128), which is
obtained by adding the first and second rows of C. With this choice of X, Freivalds
returns true since (XA)B = XC. Another call on Freivalds might choose X = (0, 1, 1)
instead. This time, the calculations are XA = (11, 13, 15), (XA)B = (76, 166, 236)
and XC = (76, 164, 136), and Freivalds returns false. We are luckier: the fact that
(XA)B ≠ XC is conclusive proof that AB ≠ C.
function RepeatFreivalds(A, B, C, n, k)
  for i ← 1 to k do
    if Freivalds(A, B, C, n) = false then return false
  return true
We are interested in the error probability of this new algorithm. Consider two cases.
If in fact C = AB, each call on Freivalds necessarily returns true since there is no risk
of randomly choosing an X such that (XA)B ≠ XC, and thus RepeatFreivalds returns
true after going k times round the loop. In this case, the error probability is zero.
On the other hand, if in fact C ≠ AB, the probability for each call on Freivalds that
it will (incorrectly) return true is at most 1/2. These probabilities multiply because
the coin flips are independent from one call to the next. Therefore, the probability
that k successive calls each return the wrong answer is at most 2⁻ᵏ. But this is the
only way for RepeatFreivalds to return an erroneous answer. We conclude that our
new algorithm is (1 − 2⁻ᵏ)-correct. When k = 10, this is better than 99.9%-correct.
Repeating 20 times brings the error probability below one in a million. Such a spec-
tacularly rapid decrease in error probability is typical for Monte Carlo algorithms
that solve decision problems, that is, problems for which the answer is either true
or false, provided one of the answers, if obtained, is guaranteed to be correct.
Alternatively, Monte Carlo algorithms can be given an explicit upper bound
on the tolerable error probability.
function Freivaldsepsilon(A, B, C, n, ε)
  k ← ⌈lg(1/ε)⌉
  return RepeatFreivalds(A, B, C, n, k)
An advantage of this version of the algorithm is that we can analyse its running
time as a function simultaneously of the instance size and the error probability.
In this case, the algorithm clearly takes a time in O(n² log(1/ε)).
The algorithms in this section are of limited practical interest because it takes
3n² scalar multiplications to compute (XA)B and XC, compared to n³ to compute
AB by the direct method. If we insist on an error probability no larger than one
in a million, and if in fact C = AB, the 20 required runs of Freivalds perform 60n²
scalar multiplications, which does not improve on n³ unless n is larger than 60.
Nevertheless, this approach is potentially useful if very large matrix products have
to be verified.
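The whole scheme fits comfortably in a few lines of Python. The function names below are ours; the 3 × 3 matrices are the ones used in the example above, for which AB and C differ only in the last row, so a single round errs with probability exactly 1/2.

import random

def freivalds(A, B, C, n):
    # One round: pick a random 0-1 vector X and compare (XA)B with XC,
    # using only O(n^2) scalar multiplications.
    X = [random.randint(0, 1) for _ in range(n)]
    XA = [sum(X[i] * A[i][j] for i in range(n)) for j in range(n)]
    XAB = [sum(XA[i] * B[i][j] for i in range(n)) for j in range(n)]
    XC = [sum(X[i] * C[i][j] for i in range(n)) for j in range(n)]
    return XAB == XC

def repeat_freivalds(A, B, C, n, k):
    # (1 - 2**(-k))-correct: a single False is conclusive proof that AB != C.
    return all(freivalds(A, B, C, n) for _ in range(k))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
C = [[11, 29, 37], [29, 65, 91], [47, 99, 45]]
print(repeat_freivalds(A, B, C, 3, 20))   # almost certainly False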
hundred decimal digits. Testing the primality of large numbers is more than a
mathematical recreation: we saw in Section 7.8 that it is crucial for modern cryp-
tology.
The story of probabilistic primality testing has its roots with Pierre de Fermat,
the father of modern number theory. He stated the following theorem, sometimes
known as Fermat's little theorem, in 1640.
Theorem (Fermat's little theorem). If n is prime, then aⁿ⁻¹ mod n = 1 for every integer a such that 1 ≤ a ≤ n − 1.
function Fermat(n)
  a ← uniform(1..n − 1)
  if expomod(a, n − 1, n) = 1 then return true
  else return false
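In Python, Fermat's test is a few lines thanks to the built-in three-argument pow, which performs modular exponentiation just like expomod.

import random

def fermat(n):
    # False means n is certainly composite; True means "prime, unless we
    # were unlucky enough to pick a false witness".
    a = random.randint(1, n - 1)
    return pow(a, n - 1, n) == 1

print(fermat(561))   # 561 = 3 * 11 * 17, yet True comes out more often than not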
that whenever this algorithm tells you n is composite, it provides no clue concern-
ing its divisors. To the best of our current algorithmic knowledge, factorization is
much harder than primality testing. Recall that the presumed difference in diffi-
culty between these problems is crucial to the RSA cryptographic system described
in Section 7.8: the easiness of primality testing is necessary for its implementation
whereas the hardness of factorization is prerequisite to its safety.
What can we say, however, if Fermat(n) returns true? To conclude that n
is prime, we would need the converse rather than the contrapositive to Fermat's
theorem. This would say that aⁿ⁻¹ mod n is never equal to 1 when n is composite
and 1 ≤ a ≤ n − 1. Unfortunately, this is not the case since 1ⁿ⁻¹ mod n = 1 for all
n ≥ 2. Moreover, (n − 1)ⁿ⁻¹ mod n = 1 for all odd n ≥ 3 because (n − 1)² mod n = 1.
The smallest nontrivial counterexample to the converse is that 4¹⁴ mod 15 = 1
despite the fact that 15 is composite. An integer a such that 2 ≤ a ≤ n − 2 and
aⁿ⁻¹ mod n = 1 is called a false witness of primality for n if in fact n is composite.
Provided we modify Fermat's test to choose a randomly between 2 and n − 2, it can
only fail on composite numbers when a false witness is chosen.
The good news is that false witnesses are rather few. Although only 5 among
the 332 odd composite numbers smaller than 1000 boast no false witnesses, more
than half of them have only two false witnesses and less than 16% of them have
more than 15. Moreover, there are only 4490 false witnesses for all these composite
numbers taken together. Compare this to 172 878, the total number of candidate
witnesses that exist for the same set of numbers. The average error probability of
Fermat's test on odd composite numbers smaller than 1000 is less than 3.3%. It is
even smaller if we consider larger numbers.
The bad news is that there are composite numbers that admit a significant pro-
portion of false witnesses. The worst case among composites smaller than 1000 is
561, which admits 318 false witnesses: this is more than half the potential witnesses.
A more convincing case is made with a 15-digit number: Fermat(651693055693681)
returns true with probability greater than 99.9965% despite the fact that this num-
ber is composite! More generally, for any δ > 0 there are infinitely many composite
numbers for which Fermat's test discovers compositeness with probability smaller
than δ. In other words, Fermat's test is not p-correct for any fixed p > 0. Conse-
quently, the error probability cannot be brought down below an arbitrarily small ε
by repeating Fermat's test some fixed number of times according to the technique
we saw in the previous section.
Fortunately, a slight modification in Fermat's test solves this difficulty. Let n
be an odd integer greater than 4, and let s and t be integers such that n − 1 = 2ˢt,
where t is odd. Note that s > 0 since n − 1 is even. Let B(n) be the set of integers
defined as follows: a ∈ B(n) if and only if 2 ≤ a ≤ n − 2 and
  • aᵗ mod n = 1, or
  • there exists an integer i, 0 ≤ i < s, such that a^(2ⁱt) mod n = n − 1.
function Btest(a, n)
  s ← 0; t ← n − 1
  repeat
    s ← s + 1; t ← t ÷ 2
  until t mod 2 = 1
  x ← expomod(a, t, n)
  if x = 1 or x = n − 1 then return true
  for i ← 1 to s − 1 do
    x ← x² mod n
    if x = n − 1 then return true
  return false
For example, let us see if 158 belongs to B(289). We set s = 5 and t = 9 because
n − 1 = 288 = 2⁵ × 9. Then we compute x = expomod(158, 9, 289) = 131.
The test is not finished since 131 is neither 1 nor n − 1 = 288. Next, we successively
square x (modulo n) up to 4 times (s − 1 = 4) to see if we obtain 288:
    131² mod 289 = 110,  110² mod 289 = 251,  251² mod 289 = 288.
At this point we stop because we have found that a^(2ⁱt) mod n = n − 1 for i = 3 < s,
and we conclude that 158 ∈ B(289).
An extension of Fermat's theorem shows that a ∈ B(n) for all 2 ≤ a ≤ n − 2
when n is prime. On the other hand, we say that n is a strong pseudoprime to
the base a and that a is a strong false witness of primality for n whenever n > 4
is an odd composite number and a ∈ B(n). For instance, we just saw that 158
is a strong false witness of primality for 289 since 289 = 17² is composite. This
yields a better test than Fermat's because strong false witnesses are automatically
false witnesses with respect to Fermat's test, but not conversely. In fact, strong
false witnesses are much rarer than Fermat false witnesses. For instance, 4 is a false
witness of primality for 15, but it is not a strong false witness because 4⁷ mod 15 = 4.
We also saw that 561 admits 318 false witnesses, but only 8 of these are strong
false witnesses. Considering all odd composites smaller than 1000, the average
probability of randomly selecting a strong false witness is less than 1%; more than
72% of these composites do not admit even one strong false witness. The superiority
of the strong test is best illustrated by the fact that every odd composite integer
between 5 and 10¹³ fails to be a strong pseudoprime to at least one of the bases 2, 3,
5, 7 or 61. In other words, five calls on Btest are sufficient to decide deterministically
on the primality of any integer up to 10¹³. Most importantly, unlike Fermat's test,
there is a guarantee that the proportion of strong false witnesses is small for every
odd composite. More precisely, we have the following theorem.
Consequently, Btest(a, n) always returns true when n is a prime larger than 4 and
2 < a < n - 2, whereas it returns false with probability better than 3/4 when n is a
composite odd integer larger than 4 and a is chosen randomly between 2 and n - 2.
In other words, the following Monte Carlo algorithm is 3/4-correct for primality
testing; it is known as the Miller-Rabin test.
function MillRab(n)
  {This algorithm should only be called if n > 4 is odd}
  a ← uniform(2..n − 2)
  return Btest(a, n)

function RepeatMillRab(n, k)
  {This algorithm should only be called if n > 4 is odd}
  for i ← 1 to k do
    if MillRab(n) = false then return false
  return true
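Here is a Python rendering of Btest and RepeatMillRab; the function names are ours, and pow again supplies modular exponentiation.

import random

def btest(a, n):
    # Decompose n - 1 = 2**s * t with t odd, then apply the strong test.
    s, t = 0, n - 1
    while t % 2 == 0:
        s, t = s + 1, t // 2
    x = pow(a, t, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

def repeat_millrab(n, k):
    # (1 - 4**(-k))-correct; to be called only with odd n > 4.
    return all(btest(random.randint(2, n - 2), n) for _ in range(k))

print(btest(158, 289))                       # True: 158 is in B(289)
print(repeat_millrab(651693055693681, 10))   # almost certainly False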
This algorithm always returns the correct answer when n > 4 is prime. When
n > 4 is an odd composite, each call on MillRab has probability at most 1/4 of
hitting a strong false witness and erroneously returning true. Since the only way
for RepeatMillRab to return true in this case is to randomly hit k strong false wit-
nesses in a row, this happens with probability at most 4⁻ᵏ. For instance, it suffices
to take k = 10 to reduce the error probability below one in a million. In conclu-
sion, RepeatMillRab(·, k) is a (1 − 4⁻ᵏ)-correct Monte Carlo algorithm for primality
testing.
How much time does it take to decide on the primality of n with an error
probability bounded by ε? We must repeat the Miller-Rabin test k times where
4⁻ᵏ ≤ ε, which is the same as 2²ᵏ ≥ 1/ε. This is achieved with k = ⌈½ lg(1/ε)⌉. Each
call on MillRab involves one modular exponentiation, with t as exponent, and
s − 1 modular squarings. We know from Section 7.8 that the exponentiation re-
quires a number of modular multiplications and squarings in O(log t). Counting
squarings as multiplications, and since lg n > lg(n − 1) = lg(2ˢt) = s + lg t, the time
required by a call on MillRab is dominated by a number of modular multiplications
in O(log n). If these are performed according to the classic algorithm, each of them
takes a time in O(log² n) because we reduce modulo n after each multiplication.
Putting it all together, the total time to decide the primality of n with error probabil-
ity bounded by ε is thus in O(log³ n · lg(1/ε)). This is entirely reasonable in practice
for thousand-digit numbers and error probability less than 10⁻¹⁰⁰.
your implementation of the Miller-Rabin test is error-free than you can about the
more complicated algorithm. Thus we reach the paradoxical conclusion that a
probable prime, whatever meaning you give to this phrase, may be more reliable
than a number whose primality was "proved" deterministically! Note however
that there are probabilistic methods that can produce certified random primes in
less time than it takes to find probabilistic primes with repeated use of the Miller-
Rabin test. Because these methods are guaranteed to return a prime, they belong
to the class of Las Vegas algorithms.
The importance of interpreting the outcome of a Monte Carlo algorithm cor-
rectly is best illustrated if you wish to generate an ℓ-digit number that is probably
prime, perhaps for cryptographic purposes. The obvious algorithm is to keep
choosing random ℓ-digit odd integers until one is found that passes a sufficient
number of rounds of the Miller-Rabin test. More precisely, consider the following
algorithm, which should only be used with ℓ > 1.
function randomprime(ℓ, k)
  repeat
    {Choose odd n randomly between 10^(ℓ−1) and 10^ℓ − 1}
    n ← 1 + 2 × uniform(10^(ℓ−1)/2 .. 10^ℓ/2 − 1)
  until RepeatMillRab(n, k)
  return n
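Assuming the repeat_millrab sketch given earlier, randomprime translates into Python as follows; again the names are ours.

import random

def random_prime(digits, k):
    # Draw random odd 'digits'-digit integers until one passes k rounds
    # of the Miller-Rabin test; to be used only with digits > 1.
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    while True:
        n = random.randrange(lo + 1, hi + 1, 2)   # odd, since lo is even
        if repeat_millrab(n, k):
            return n

print(random_prime(50, 10))   # a 50-digit number that is very probably prime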
What can we say about the outcome of this algorithm? We already know that we
cannot claim that the number obtained is prime with probability at least 1 − 4⁻ᵏ.
Nevertheless, it does make sense to investigate the average number of "false primes"
(a euphemism for "composite") produced if we run the algorithm m times. It is
tempting to conclude from the discussion above that the answer is "at most 4⁻ᵏm"
because our error probability on each of the m numbers produced is at most 4⁻ᵏ.
For example, if we call randomprime(1000, 5) one million times, we would expect
less than 1000 composites among the million 1000-digit numbers thus produced
(because 4⁻⁵ × 10⁶ ≈ 977). It would suffice to call randomprime(1000, 10) instead to
bring this expected number below 1 at the cost of less than doubling the running
time. This conclusion is correct, but only because the expected error probability of
the Miller-Rabin test on a randomly selected odd composite is much less than 1/4.
However, the simple reasoning is wrong!
The problem is that a large random odd number is much more likely to be com-
posite than prime. Therefore, a call on randomprime is likely to use RepeatMillRab to
test many composite numbers before a prime number is tested, if indeed this ever
happens. The probability of error inherent in each call of RepeatMillRab on com-
posite numbers will accumulate. Even if each call is likely to find that its number
is composite, the probability that one of the calls will err-causing randomprime to
return this composite by mistake-is not negligible if many composites are tested
before a prime is hit by chance. For the sake of argument, consider what would hap-
pen if the error probability of the Miller-Rabin test were exactly 1/4 on every odd
composite and if we called randomprime(1000, 5). Each time round the repeat loop,
one of three things happens.
• The number n drawn is prime, in which case RepeatMillRab is certain to return true and the loop ends with a prime.
• The number n drawn is composite, but all five rounds of the Miller-Rabin test err; the loop then ends and randomprime mistakenly returns a composite. Under our assumption this happens with probability 4⁻⁵ ≈ 1/1024 for each composite tested.
• The number n drawn is composite and at least one round detects it; we simply go round the loop again.
The probability that a random 1000-digit odd number is prime is roughly one in a
thousand; see Problem 10.19. Hence the first two cases above are nearly equiprob-
able while the third case is overwhelmingly the most probable. As a result, the
loop in randomprime is just as likely to end for either of the possible reasons, so it
function stupid(x)
  if coinflip = heads then return true
  else return false
whose stochastic "advantage" cannot be amplified. Provided p > 1/2, define the
advantage of a p-correct Monte Carlo algorithm to be p- 1/2. Any Monte Carlo algo-
rithm whose advantage is positive can be turned into one whose error probability
is as small as we wish. We begin with an example.
Let MC be a 3/4-correct unbiased Monte Carlo algorithm to solve some decision
problem. Consider the following algorithm, which calls MC(x) three times and
returns the most frequent answer.
function MC3(x)
  t ← MC(x); u ← MC(x); v ← MC(x)
  if t = u or t = v then return t
  else return u
What is the error probability of MC3? Let R and W denote the right and wrong
answers, respectively. We know that t, u and v each have probability at least 3/4 to
be R, independently of one another. Assume for simplicity that this probability is
exactly 3/4 since algorithm MC3 would clearly be even better if the error probability
of MC were smaller than 1/4. There are eight possible outcomes for the three calls
on MC, whose probabilities are summarized in the following table.
t u v prob MC3
R R R 27/64 R
R R W 9/64 R
R W R 9/64 R
R W W 3/64 W
W R R 9/64 R
W R W 3/64 W
W W R 3/64 W
W W W 1/64 W
Adding the probabilities associated with rows 1, 2, 3 and 5, we conclude that MC3
is correct with probability 27/32, which is better than 84%.
More generally, let MC be a Monte Carlo algorithm to solve some decision
problem whose advantage is ε > 0. Consider the following algorithm, which calls
MC k times and returns the most frequent answer.
function RepeatMC(x, k)
  T, F ← 0
  for i ← 1 to k do
    if MC(x) then T ← T + 1
    else F ← F + 1
  if T > F then return true else return false
What is the error probability of RepeatMC? To find out, we associate a random vari-
able Xᵢ with each call to MC: Xᵢ = 1 if the correct answer is obtained and Xᵢ = 0
otherwise. By assumption, Pr[Xᵢ = 1] ≥ 1/2 + ε for each i. Assume for simplicity
that this probability is exactly 1/2 + ε since otherwise RepeatMC would be even bet-
ter. Assume also that k is odd to avoid the risk of a tie (T = F) in the majority
vote; see Problem 10.23. The expectation and variance of Xᵢ are E(Xᵢ) = 1/2 + ε and
Var(Xᵢ) = (1/2 + ε)(1/2 − ε) = 1/4 − ε², respectively. Now, let X = X₁ + X₂ + ··· + Xₖ
be the random variable that corresponds to the number of correct answers in k trials.
For any integer i between 0 and k,
    Pr[X = i] = (k choose i) (1/2 + ε)ⁱ (1/2 − ε)ᵏ⁻ⁱ.
Problem 10.25. However, it is more convenient to use the Central Limit Theorem,
which says that the distribution of X is almost normal when k is large; in practice,
k = 30 is large enough. Assume for instance that you want an error probability
smaller than 5%. Tables for the normal distribution tell us that
    Pr[X > E(X) − 1.645 √Var(X)] > 95%,
and the majority vote errs only when X ≤ k/2. Using our values for E(X) and Var(X),
this condition translates to
    k ≥ (1.645)² (1/4 − ε²)/ε².    (10.3)
For instance, if ε = 5%, Equation 10.3 tells us that it "suffices" to run a 55%-correct
unbiased Monte Carlo algorithm 269 times and take the most frequent answer
to obtain a 95%-correct algorithm. (Equation 10.3 gives k > 267.894 but we took
k = 269 rather than 268 because we wanted k to be odd; see Problem 10.23.) In other
words, this repetition translates a 5% advantage into a 5% error probability. An ex-
act calculation shows that the resulting error probability is just above 4.99%, il-
lustrating the precision of the normal approximation for such large values of k.
This demonstrates the handicap of unbiased Monte Carlo algorithms: running a
55%-correct biased algorithm just 4 times reduces the error probability to about 4.1%
since 0.45⁴ ≈ 0.041.
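Both the majority vote and the sample-size calculation are easy to express in Python; the sketch below uses the normal quantile from the statistics module and our own function names.

from statistics import NormalDist

def repeat_mc(mc, x, k):
    # Call the unbiased Monte Carlo algorithm mc on x k times (k odd)
    # and return the most frequent answer.
    trues = sum(1 for _ in range(k) if mc(x))
    return trues > k - trues

def repetitions_needed(advantage, error):
    # Normal approximation: how many repetitions turn an algorithm with
    # the given advantage into one with the given error probability.
    z = NormalDist().inv_cdf(1 - error)
    k = int(z * z * (0.25 - advantage ** 2) / advantage ** 2) + 1
    return k if k % 2 == 1 else k + 1    # keep k odd to avoid ties

print(repetitions_needed(0.05, 0.05))    # 269, as computed in the text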
Equation 10.3 shows that the number of repetitions necessary to achieve a
given confidence level (95% in this case) depends strongly on the advantage ε of
the original algorithm. An advantage 10 times smaller would necessitate about 100
times more repetitions for the same reliability. On the other hand, it is not much
more expensive to obtain considerably more confidence in the final answer. For
instance, had we wanted an error probability 10 times smaller, we would have used
Las Vegas algorithms make probabilistic choices to help guide them more quickly
to a correct solution. Unlike Monte Carlo algorithms, they never return a wrong
answer. There are two main categories of Las Vegas algorithms. They may use
randomness to guide their search in such a way that a correct solution is guaranteed
even if unfortunate choices are made: it will only take longer if this happens.
Alternatively, they may allow themselves to take wrong turns that bring them to a
dead end, rendering it impossible to find a solution in this run of the algorithm.
Las Vegas algorithms of the first kind are often used when a known determinis-
tic algorithm to solve the problem of interest runs much faster on the average than
in the worst case. Quicksort from Section 7.4.2 provides the most famous example;
see Section 10.7.2. Incorporating an element of randomness allows a Las Vegas
algorithm to reduce, and sometimes even to eliminate, this difference between
good and bad instances. It is not a case of preventing the occasional occurrence of
the algorithm's worst-case behaviour, but rather of breaking the link between the
occurrence of such behaviour and the particular instance to be solved.
Recall from Section 2.4 that analysing the average efficiency of an algorithm
may sometimes give misleading results. The reason is that any analysis of the av-
erage case must be based on a hypothesis about the probability distribution of the
instances to be handled. A hypothesis that is correct for a given application of the
algorithm may prove disastrously wrong for a different application. Suppose, for
example, that quicksort is used as a subalgorithm inside a more complex algorithm.
We saw in Section 7.4.2 that it takes an average time in O(n log n) to sort n items
provided the instances to be sorted are chosen randomly. This analysis no longer
bears any relation to reality if in fact we give the algorithm only instances that are
already almost sorted. In general, such deterministic algorithms are vulnerable to
an unexpected probability distribution of the instances that some particular appli-
cation might give them to solve: even if the catastrophic worst-case instances are
few in number, they could be the most relevant for that application, causing a spec-
tacular degradation in performance. Las Vegas algorithms free us from worrying
about such situations by evening out the time required on different instances of a
given size.
The performance of these Las Vegas algorithms is not better than that of the
corresponding deterministic algorithm when we consider the average over all in-
stances of a given size. With high probability, instances that took a long time
deterministically are now solved much faster, but instances on which the deter-
ministic algorithm was particularly good are slowed down to average by the Las
Vegas algorithm. Thus, these Las Vegas algorithms "steal" time from the "rich"
instances-those that were solved quickly by the deterministic algorithm-to give
it to the "poor" instances. We call this the Robin Hood effect, and illustrate it in
Sections 10.7.2 and 10.7.3. This is interesting when we consider deterministic al-
gorithms that are not significantly faster than average in the best case, so nothing
much is lost by slowing them down to average, but that suffer from a few painfully
bad cases.
Las Vegas algorithms of the second kind now and again make choices that
bring them to a dead end. We ask that these algorithms be capable of recognizing
this predicament, in which case they simply admit failure. Such behaviour would
be intolerable from a deterministic algorithm as it would mean that it is unable to
solve the instance considered. However, the probabilistic nature of Las Vegas algo-
rithms makes this admission of failure acceptable provided it does not occur with
overwhelming probability: simply rerun the same algorithm on the same instance
for a fresh chance of success when failure occurs. There are practical problems for
which willingness to risk failure allows an efficient Las Vegas algorithm when no
deterministic algorithms are known to be efficient; see Section 10.7.4. Although no
guaranteed upper bound can be set on the time it will take to obtain an answer if
the algorithm is restarted whenever it fails, this time may be reasonable with high
probability. These algorithms should not be confused with those, such as the sim-
plex algorithm for linear programming, that are extremely efficient for the great
majority of instances to be handled, but catastrophic for a few instances: a Las
Vegas algorithm should have good expected performance whatever the instance
to be solved.
When a Las Vegas algorithm is allowed to fail, it is more convenient to represent
it in the form of a procedure rather than a function. This allows for a return
parameter success, set to true if a solution is obtained and false otherwise. The
typical call to solve instance x is LV(x, y, success), where the return parameter y
receives the solution whenever success is set to true. For convenience, we write
    return success ← true
as a shortcut for
    success ← true
    return
and similarly with return success ← false.
Let p(x) be the probability of success of the algorithm each time it is asked to
solve instance x. For an algorithm to deserve the name "Las Vegas", we require that
p(x) > 0 for every instance x. This ensures that a solution will be found eventually
if we keep repeating the algorithm. It is even better if there exists a constant δ > 0
such that p(x) ≥ δ for every instance x since otherwise the expected number of
repetitions before success could grow arbitrarily with the instance size.
Consider the following algorithm.
function RepeatLV(x)
  repeat
    LV(x, y, success)
  until success
  return y
Since each call on LV(x) has probability p(x) of being successful, the expected
number of trips round the loop is 1 / p(x). A more interesting parameter is the
expected time t(x) before RepeatLV(x) is successful. One may think at first that
this is simply 1/ p (x) multiplied by the expected time taken by each call on LV (x).
However, a correct analysis must consider separately the expected time taken by
LV(x) in case of success and in case of failure. Let these expected times be denoted
by s(x) and f(x), respectively. Neglecting the time taken by the control of the
repeat loop, t(x) is given by a case analysis.
• With probability p(x), the first call on LV(x) succeeds after expected time s(x).
• With probability 1 − p(x), the first call on LV(x) fails after expected time f(x).
  After this we are back to the starting point, still an expected time t(x) away
  from success. The total expected time in this case is thus f(x) + t(x).
Therefore, t(x) is given by a simple recurrence:
    t(x) = p(x) s(x) + (1 − p(x)) (f(x) + t(x)),
whose solution is
    t(x) = s(x) + ((1 − p(x))/p(x)) f(x).    (10.4)
could be farther from the truth. The algorithm uses the same sets col, diag45 and
diag135 as in Section 9.6.2 to help determine which positions are still available in
the current row.
procedure queensLV(var sol[1..8], success)
  array ok[1..8] {will hold available positions}
  col, diag45, diag135 ← ∅
  for k ← 0 to 7 do
    {sol[1..k] is k-promising; let's place the (k + 1)-st queen}
    nb ← 0 {to count the number of possibilities}
    for j ← 1 to 8 do
      if j ∉ col and j − k ∉ diag45 and j + k ∉ diag135
        then {column j is available for the (k + 1)-st queen}
             nb ← nb + 1
             ok[nb] ← j
    if nb = 0 then return success ← false
    j ← ok[uniform(1..nb)]
    col ← col ∪ {j}
    diag45 ← diag45 ∪ {j − k}
    diag135 ← diag135 ∪ {j + k}
    sol[k + 1] ← j
  {end of for loop in k}
  return success ← true
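A Python sketch of this purely random placement follows; queens_lv returns None on failure, and the caller simply restarts it, exactly as the analysis based on Equation 10.4 assumes.

import random

def queens_lv():
    # Place the 8 queens one row at a time, choosing uniformly among the
    # columns still compatible with the queens already on the board.
    col, diag45, diag135 = set(), set(), set()
    sol = []
    for k in range(8):
        ok = [j for j in range(1, 9)
              if j not in col and j - k not in diag45 and j + k not in diag135]
        if not ok:
            return None                 # dead end: admit failure
        j = random.choice(ok)
        col.add(j); diag45.add(j - k); diag135.add(j + k)
        sol.append(j)
    return sol

def repeat_queens_lv():
    while True:
        sol = queens_lv()
        if sol is not None:
            return sol

print(repeat_queens_lv())   # e.g. [3, 5, 7, 1, 4, 2, 8, 6]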
To analyse this algorithm, we need to determine its probability p of success, the
average number s of nodes that it explores in the case of success, and the average
number f of nodes that it explores in the case of failure. Clearly s = 9, counting
the 0-promising empty vector and the 8-promising solution. Using a computer we
can calculate p ≈ 0.1293 and f ≈ 6.971. A solution is therefore obtained more than
one time out of eight by proceeding in a completely random fashion! The expected
number of nodes explored if we repeat the algorithm until a success is finally
obtained is given by Equation 10.4: s + ((1 − p)/p) f ≈ 55.93, less than half the number of
nodes explored by systematic backtracking.
We can do better still. The Las Vegas algorithm is too defeatist: as soon as it
detects a failure it starts all over from the beginning. The backtracking algorithm, on
the other hand, makes a systematic search for a solution that we know has nothing
systematic about it. A judicious combination of these two algorithms first places
a number of queens on the board in a random way, and then uses backtracking to
try and add the remaining queens, without, however, reconsidering the positions
of the queens that were placed randomly.
An unfortunate random choice of the positions of the first few queens can make
it impossible to add all the others. This happens, for instance, if the first two queens
are placed in positions 1 and 3, respectively. The more queens we place randomly,
the smaller the average time needed by the subsequent backtracking stage, whether
it fails or succeeds, but the greater the probability of failure. This is the "fine-
tuning knob" mentioned previously. Let stopLV denote the number of queens
we place randomly before moving on to the backtracking phase, 0 < stopLV < 8.
The modified algorithm is similar to queensLV, except that we must include the
declaration of an inner procedure backtrack (see below), the loop in k goes from 0
to stopLV − 1, and we replace the last line (return success ← true) by
This calls the backtracking phase provided the loop did not terminate prematurely
in failure. The procedure backtrack looks like algorithm queens of Section 9.6.2
except that it has an additional parameter success and that it returns immediately
after either finding the first solution or finding that there are none, whichever is
the case.
To set the fine-tuning knob in its optimal position, we need to determine the
probability p of success, the expected number s of nodes explored in the case of
success and the expected number f of nodes explored in the case of failure for
each possible value for stopLV. Equation 10.4 can then be used to determine the
expected number t of nodes explored if the algorithm is repeated until it eventually
finds a solution. These numbers, obtained by exploring the entire backtracking tree
with the help of a computer, are summarized in Figure 10.3. The case stopLV = 0
corresponds to using the deterministic algorithm directly.
stopLV p s f t
0 1.0000 114.00 - 114.00
1 1.0000 39.63 - 39.63
2 0.8750 22.53 39.67 28.20
3 0.4931 13.48 15.10 29.01
4 0.2618 10.31 8.79 35.10
5 0.1624 9.33 7.29 46.92
6 0.1357 9.05 6.98 53.50
7 0.1293 9.00 6.97 55.93
8 0.1293 9.00 6.97 55.93
Figure 10.3. Fine-tuning a Las Vegas algorithm
more efficient the algorithm will be. Despite this, there is no question of choosing
the exact median as the pivot: this would cause an infinite recursion, as finding
the median is a special case of the selection problem under consideration. Thus we
choose a suboptimal pivot known as the pseudomedian. This avoids the infinite
recursion, but choosing the pseudomedian is still relatively costly.
On the other hand, we also saw a simpler approach that uses as pivot the first
element that remains under consideration. This assures us of a linear execution
time on the average, but with the risk that the algorithm will take quadratic time in
the worst case. Despite this prohibitive worst case, the simpler algorithm has the
advantage of a much smaller hidden constant on account of the time saved by not
calculating the pseudomedian. Any simple deterministic strategy for choosing the
pivot is likely to result in quadratic worst-case time for finding the median, and
conversely linear worst-case algorithms seem to require a large hidden constant.
The decision whether it is more important to have efficient execution in the worst
case or on the average must be taken in the light of the particular application.
If we decide to aim for speed on the average thanks to the simpler deterministic
algorithm, we must make sure that the instances to be solved are indeed chosen
randomly according to the uniform distribution. A bad probability distribution of
the instances could spell disaster.
For the execution time to depend only on the number of elements but not on
the actual instance, it suffices to choose the pivot randomly among the elements
still under consideration. The resulting algorithm is very similar to selection from
Section 7.5.
function selectionLV(T[1..n], s)
  {Finds the s-th smallest element in T; we assume 1 ≤ s ≤ n}
  i ← 1; j ← n
  repeat
    {Answer lies in T[i..j]}
    p ← T[uniform(i..j)]
    pivotbis(T[i..j], p, k, l)
    if s ≤ k then j ← k
    else if s > l then i ← l
    else return p
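The following Python sketch implements the same idea; unlike pivotbis it works on copies of the array rather than in place, which keeps the code short at the cost of extra storage.

import random

def selection_lv(T, s):
    # Return the s-th smallest element of T (1 <= s <= len(T)),
    # pivoting around a randomly chosen element at each step.
    low = list(T)
    while True:
        p = random.choice(low)
        smaller = [x for x in low if x < p]
        equal = [x for x in low if x == p]
        if s <= len(smaller):
            low = smaller
        elif s <= len(smaller) + len(equal):
            return p
        else:
            s -= len(smaller) + len(equal)
            low = [x for x in low if x > p]

print(selection_lv([31, 41, 59, 26, 53, 58, 97], 4))   # 53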
The analysis requested in Problem 7.18 applies mutatis mutandis to conclude that
the expected time taken by this probabilistic selection algorithm is linear, inde-
pendently of the instance to be solved. Thus its efficiency is not affected by the
peculiarities of the application in which the algorithm is used. It is always possible
that some particular execution of the algorithm will take quadratic time, but the
probability that this will happen becomes increasingly negligible as n gets larger,
and, to repeat, this unlikely occurrence is no longer linked to specific instances.
To sum up, we started with an algorithm that is excellent when we consider
its average execution time on all the instances of some particular size but that
is inefficient on certain specific instances. Using the probabilistic approach, we
transformed this algorithm into a Las Vegas algorithm that is efficient with high
probability, whatever the instance considered. Thus we reap the benefits of both
deterministic algorithms seen in Section 7.5: expected linear time on all instances,
with a small hidden constant.
We once asked the students in an algorithmics course to implement the selec-
tion algorithm of their choice. The only algorithms they had seen in class were
those in Section 7.5. Since the students did not know which instances would be
used to test their programs-and suspecting the worst of their professors-none
of them took the risk of using a deterministic algorithm with quadratic worst case.
Three students, however, thought of using the probabilistic approach. This idea
allowed them to beat their colleagues hands down: their programs took an average
of 300 milliseconds to solve the trial instance, whereas the majority of the deter-
ministic algorithms took between 1500 and 2600 milliseconds. Moreover, their
programs were much simpler-and thus less likely to contain subtle errors-than
their colleagues'.
The same approach can be used to turn quicksort into an algorithm that sorts
n elements in worst-case expected time in O(n log n), whereas the algorithm we
saw in Section 7.4.2 requires a time in Ω(n²) when the array to be sorted is already
sorted. The randomized version of quicksort is as follows. To sort the entire array T,
simply call quicksortLV(T[1..n]).
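Since the code of quicksortLV itself is not reproduced here, the following Python sketch merely conveys the idea: choose the pivot at random so that no fixed instance can systematically trigger the quadratic behaviour. Unlike the in-place pivoting of Section 7.4.2, it builds new lists at each level of recursion.

import random

def quicksort_lv(T):
    # Expected time in O(n log n) on every instance, because the pivot
    # is random rather than determined by the instance.
    if len(T) <= 1:
        return list(T)
    p = random.choice(T)
    return (quicksort_lv([x for x in T if x < p])
            + [x for x in T if x == p]
            + quicksort_lv([x for x in T if x > p]))

print(quicksort_lv([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]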
is compiled many times, it is always the same few programs that will require sub-
stantially more time than expected. In a real sense, these programs are paying the
price for all other programs to compile quickly. Las Vegas hashing allows us to
retain the efficiency of hashing on the average, without arbitrarily favouring some
programs at the expense of others. This is the Robin Hood effect at its best: each
program is given its fair share of the benefits to be reaped by hashing. Also, each
program will once in a while pay the price of overall efficiency by taking more time
than expected. Moreover, the good expected performance of Las Vegas hashing can
be proved mathematically without assumptions on the probability distribution of
the access sequences to the table.
The basic idea of Las Vegas hashing is for the compiler to choose the hash
function randomly at the beginning of each compilation and again whenever re-
hashing becomes necessary. This ensures that collision lists remain reasonably
well-balanced with high probability, whatever the set of identifiers in the program
to be compiled. As a result, a program that causes a large number of collisions dur-
ing one compilation will probably be luckier the next time it is compiled. But what
do we mean by "choose the hash function randomly"?
The answer lies in a technique known as universal hashing. Let U be the universe
of potential indexes for the associative table, such as the set of all possible identifiers
if we are implementing a compiler, and let B = {0, 1, 2, ..., N − 1} be the set of
indexes in the hash table. Consider any two distinct x and y in U. Suppose first
that h: U → B is a function chosen randomly among all the functions from U to
B according to the uniform distribution. Then the probability that h(x) = h(y)
is 1/N. This is because h(y) could take any of the N values from B with equal
probability; in particular the value attributed to h(x) would also be chosen for
h(y) with probability 1/N. However, U is usually large and there are far too many
functions from U into B for it to be reasonable to choose one at random according
to the uniform distribution.
Consider now a set H of functions from U to B, and consider again any two
distinct x and y in U. Suppose that h: U → B is a function chosen randomly
from the members of H according to the uniform distribution. We say that H is a
universal class of hash functions if the probability that h(x) = h(y) is at most 1/N.
In other words, we require that the probability of h(x)= h(y) be small no matter
which distinct values of x and y are considered, provided the choice of h is made
independently of those values. We saw that the set of all functions from U to B is
universal, but too large to be useful. Universal classes are interesting because they
can be reasonably small, so that a random function can be chosen in practice from
such a class according to the uniform distribution. Moreover, the functions can be
evaluated efficiently. We give below one explicit example of such a universal class
of hash functions, but first let us see how good they are at solving the compilation
problem.
Let H be a universal class of hash functions from U to B. Let x and y be any
two distinct identifiers. By definition of universality, if h is chosen randomly in H
according to the uniform distribution, the probability of collision between x and
y is at most 1/N. Now consider a program with m distinct identifiers and let x
be any one of those. For each of the m - 1 identifiers other than x, the probability
Then, H = {hij | 1 ≤ i < p and 0 ≤ j < p} is a universal class of hash functions
from U to B. Randomly choosing a function in H is as simple as choosing two
integers smaller than p. Moreover, the value of hij(x) can be calculated efficiently,
especially if we choose N to be a power of 2, which simplifies the second modulo
operation.
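The definition of the functions hij appears on a page not reproduced here; the sketch below assumes the standard construction hij(x) = ((ix + j) mod p) mod N, where p is a prime larger than every encoded identifier, which is consistent with the remarks about choosing two random integers and about the second modulo operation. Treat it as an illustration rather than as the book's exact class.

import random

def make_universal_hash(p, N):
    # Choosing the hash function "randomly" amounts to choosing i and j once.
    i = random.randrange(1, p)          # 1 <= i < p
    j = random.randrange(0, p)          # 0 <= j < p
    def h(x):
        # x is an identifier already encoded as an integer in {0, ..., p-1}.
        return ((i * x + j) % p) % N
    return h

# A fresh function for a table with N = 2**10 slots; p = 2**31 - 1 is prime
# and large enough for 31-bit encoded identifiers.
h = make_universal_hash(2**31 - 1, 2**10)
print(h(123456), h(654321))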
function trialdiv(n)
    for m ← 2 to ⌊√n⌋ do
        if m divides n then return m
    {if the loop fails to find a divisor, n is prime}
    return n {a prime number is its own smallest prime divisor}
This algorithm takes a time in Ω(√n) in the worst case, which is of no practical use
even on medium-size integers: counting just one nanosecond for each trip round
the loop, it would take thousands of years to split a hard composite number with
forty or so decimal digits, where "hard" means that the number is the product of
two primes of roughly equal size.
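A direct Python transcription of trialdiv, convenient for small experiments, might read as follows (a sketch only; it makes no attempt at even the obvious optimization of skipping even candidates).

from math import isqrt

def trialdiv(n):
    # Returns the smallest prime divisor of n, or n itself if n is prime.
    for m in range(2, isqrt(n) + 1):
        if n % m == 0:
            return m
    return n    # a prime number is its own smallest prime divisor

print(trialdiv(2537))   # 43, since 2537 = 43 * 59
print(trialdiv(97))     # 97, a prime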
The largest hard composite number that has been factorized at the time of
writing spans 129 decimal digits. This factorization was the key to meeting the
RSA cryptographic challenge mentioned in Section 7.10. Recall that it required
eight months of calculation on more than 600 computers throughout the world.
It is estimated that this would have taken 5000 years of uninterrupted calculation
if a single workstation that can run one million instructions per second had been
used. Although this effort is staggering, success would not have been possible
without a sophisticated algorithm. Indeed, when the challenge was issued in 1977,
it was estimated that the fastest computer then available running the best algorithm
known at the time would have completed the calculation after two million times
the age of the Universe! In this section we give but a glimpse of the basic idea
behind the successful algorithm.
Efficient splitting algorithms rest on the following theorem, whose easy proof
is left as an exercise.
The theorem states that if a and b are integers between 1 and n − 1 such that
a² mod n = b² mod n, a ≠ b and a + b ≠ n, then gcd(a + b, n) is a nontrivial
divisor of n. Consider n = 2537 for example. Let a = 2012 and b = 1127. Note that
a² = 1595n + 1629 and b² = 500n + 1629, which shows that both a² and b² are
equal to 1629 modulo n. Since a ≠ b and a + b ≠ n, the theorem says that
gcd(a + b, n)= gcd(3139, 2537)= 43 is a nontrivial divisor of n, which indeed it is.
This suggests an approach to splitting n: find two distinct numbers between 1 and
n - 1 that have the same square modulo n but whose sum is not n, and use Euclid's
algorithm to compute the greatest common divisor of their sum with n. This is
fine provided such numbers always exist when n is composite and provided we
can find them efficiently.
The first question is quickly disposed of. Provided n has at least two distinct
prime divisors, a² mod n admits at least four distinct "square roots" in arithmetic
modulo n for any a relatively prime to n. Continuing our example, 1629 admits
exactly four square roots modulo 2537, namely 525, 1127, 1410 and 2012. These
roots come in pairs: 525 + 2012 = 1127 + 1410 = 2537. Any two of them will do
provided they are not from the same pair.
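The numbers in this example are easy to check by machine. The following Python lines, a sketch of the splitting step only, list the four square roots of 1629 modulo 2537 and show that a pair taken from different couples splits n while a pair from the same couple does not.

from math import gcd

n = 2537
print([x for x in range(1, n) if x * x % n == 1629])   # [525, 1127, 1410, 2012]

a, b = 2012, 1127          # roots taken from different pairs
print(gcd(a + b, n))       # 43, a nontrivial divisor of 2537

a, b = 525, 2012           # roots from the same pair: a + b = n
print(gcd(a + b, n))       # 2537, which tells us nothing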
So how can we find a and b with the desired property? This is where ran-
domness enters the game. Let k be an integer to be specified later. An integer is
k-smooth if all its prime divisors are among the k smallest prime numbers. For in-
stance, 120 = 2³ × 3 × 5 is 3-smooth but 35 = 5 × 7 is not. When k is small, k-smooth
integers can be factorized efficiently by trial division. In its first phase, the Las Ve-
gas splitting algorithm chooses an integer x randomly between 1 and n -1, and
computes y = x² mod n. If y is k-smooth, both x and the factorization of y are
kept in a table. Otherwise, another x is chosen randomly. This process is repeated
until we have found k + 1 different integers for which we know the factorization
of their squares modulo n.
Still continuing our example with n = 2537, let us take k = 7. We are thus
concerned only with the primes 2, 3, 5, 7, 11, 13 and 17. A first integer x = 1769 is
chosen randomly. We calculate its square modulo n: x² = 1233n + 1240 and thus
y = 1240. An attempt to factorize 1240 = 2³ × 5 × 31 fails since 31 is not divisible
by any of the admissible primes. A second attempt with x = 2455 is luckier:
its square modulo n is 1650 = 2 × 3 × 5² × 11. Each attempt succeeds with probability
roughly 20% in this small example. Continuing thus until 8 successes have
been recorded, we obtain the following table.
x₁ = 2455    y₁ = 1650 = 2 × 3 × 5² × 11
x₂ =  970    y₂ = 2210 = 2 × 5 × 13 × 17
x₃ = 1105    y₃ =  728 = 2³ × 7 × 13
x₄ = 1458    y₄ = 2295 = 3³ × 5 × 17
x₅ =  216    y₅ =  990 = 2 × 3² × 5 × 11
x₆ =   80    y₆ = 1326 = 2 × 3 × 13 × 17
x₇ = 1844    y₇ =  756 = 2² × 3³ × 7
x₈ =  433    y₈ = 2288 = 2⁴ × 11 × 13
This is used to form a (k + 1) × k matrix M over {0, 1}. Each row corresponds to
one success; each column corresponds to one of the admissible primes. The entry
Mij is set to 0 if the j-th prime appears to an even power (including zero) in the
factorization of yi; otherwise Mij = 1. For example M3,1 = 1 because the first prime,
2, occurs to the odd power 3 in y₃, and M3,2 = 0 because the second prime, 3, occurs
to the even power 0. Continuing our example, we obtain the following matrix.
        1 1 0 0 1 0 0
        1 0 1 0 0 1 1
        1 0 0 1 0 1 0
        0 1 1 0 0 0 1
M =     1 0 1 0 1 0 0
        1 1 0 0 0 1 1
        0 1 0 1 0 0 0
        0 0 0 0 1 1 0
Since this matrix contains more rows than columns, the rows cannot be linearly
independent: there must exist a nonempty set of rows that add up to the all-zero
vector in arithmetic modulo 2. Such a set can be found by Gauss-Jordan elimina-
tion, although more efficient methods are available when k is large, especially for
very sparse matrices such as those obtained by this factorization algorithm when
n is large. In our example, there are seven different solutions, such as rows 1,
2, 4 and 8, or rows 1, 3, 4, 5, 6 and 7. Consider now what happens if the yi's
corresponding to the selected rows are multiplied. Our two examples yield

y₁y₂y₄y₈ = 2⁶ × 3⁴ × 5⁴ × 11² × 13² × 17², and
y₁y₃y₄y₅y₆y₇ = 2⁸ × 3¹⁰ × 5⁴ × 7² × 11² × 13² × 17².

Because the selected rows add up to zero modulo 2, every exponent in such a
product is even, so both products are perfect squares whose square roots can be
read off directly from these factorizations.
In arithmetic modulo n, a square root of the same product can also be obtained by
multiplying the corresponding xi's since each yi = xi² mod n. In our example the
two approaches to calculating a square root modulo n of y₁y₂y₄y₈ yield

a = 2³ × 3² × 5² × 11 × 13 × 17 mod 2537 = 2012
b = 2455 × 970 × 1458 × 433 mod 2537 = 1127.
As we saw earlier, it suffices to calculate the greatest common divisor of a + b
and n to obtain a nontrivial divisor of n. In general, this technique yields two
integers a and b between 1 and n − 1 such that a² mod n = b² mod n. There is
no guarantee, however, that a ≠ b and a + b ≠ n. Indeed, use of y₁y₃y₄y₅y₆y₇
instead of y₁y₂y₄y₈ results in

a' = 2⁴ × 3⁵ × 5² × 7 × 11 × 13 × 17 mod 2537 = 1973
b' = 2455 × 1105 × 1458 × 216 × 80 × 1844 mod 2537 = 564,
which is worthless because a' + b' = n. Nevertheless, it can be proved that this
entire process succeeds with probability at least 50% unless gcd(a, n) is a nontriv-
ial divisor of n, which is just as good for splitting purposes; see Problem 10.41.
Contrary to the n queens problem, however, we should not restart from scratch in
case of failure. Why throw out so much good work? Instead, we look for other sets
of rows of M that add up to zero in arithmetic modulo 2. If this still fails, we find
a few more pairs (xi, yi) and try again with the resulting enlarged matrix.
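The whole Las Vegas splitting strategy fits in a short program. The sketch below is hypothetical code written for this discussion, not the book's: it collects k + 1 random smooth squares, finds a set of rows adding to zero modulo 2 by Gaussian elimination over GF(2), forms the pair (a, b), and on failure keeps its relations, adds one more and tries again, as suggested above.

from math import gcd
import random

def first_primes(k):
    # The k smallest prime numbers.
    primes, m = [], 2
    while len(primes) < k:
        if all(m % p != 0 for p in primes):
            primes.append(m)
        m += 1
    return primes

def smooth_factor(y, primes):
    # Exponent vector of y over `primes`, or None if y is not k-smooth.
    exps = []
    for p in primes:
        e = 0
        while y % p == 0:
            y //= p
            e += 1
        exps.append(e)
    return exps if y == 1 else None

def find_dependency(rows):
    # Indices of a nonempty set of rows of a 0-1 matrix adding to zero modulo 2.
    basis = []                                   # pairs (reduced vector, row set)
    for idx, row in enumerate(rows):
        vec, comb = list(row), {idx}
        for bvec, bcomb in basis:
            pivot = next(i for i, bit in enumerate(bvec) if bit)
            if vec[pivot]:
                vec = [a ^ b for a, b in zip(vec, bvec)]
                comb ^= bcomb
        if any(vec):
            basis.append((vec, comb))
        else:
            return comb                          # these rows sum to zero
    return None

def split(n, k=7):
    primes = first_primes(k)
    relations = []                               # pairs (x, exponents of x*x mod n)
    def add_relation():
        while True:
            x = random.randrange(1, n)
            e = smooth_factor(x * x % n, primes)
            if e is not None:
                relations.append((x, e))
                return
    while True:
        while len(relations) <= k:               # phase 1: k + 1 smooth squares
            add_relation()
        random.shuffle(relations)                # retries then explore other subsets
        comb = find_dependency([[e % 2 for e in exps] for _, exps in relations])
        a, b = 1, 1
        for j, p in enumerate(primes):
            half = sum(relations[i][1][j] for i in comb) // 2
            a = a * pow(p, half, n) % n
        for i in comb:
            b = b * relations[i][0] % n
        for d in (gcd(a, n), gcd(a + b, n)):
            if 1 < d < n:
                return d
        add_relation()                           # failure: enlarge the matrix

print(split(2537))    # 43 or 59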
It remains to determine what value of k should be used to optimize the per-
formance of this approach. The larger this parameter, the higher the probability
that x² mod n will be k-smooth when x is chosen randomly. On the other hand,
the smaller this parameter, the faster we can carry out a test of k-smoothness and
factorize the k-smooth values that are found, and the fewer such values we require.
Finding the optimal compromise calls for deep number theory.
10.8 Problems
Problem 10.1. You have a coin biased so that each toss produces heads with proba-
bility p and tails with complementary probability q = 1 − p. Assume that each toss
of the coin is independent from previous tosses: the probability of getting heads at
any given toss is exactly p, regardless of previous outcomes. Unfortunately you
do not know the value of p. Design a simple process by which you can use this
coin to generate a perfectly unbiased sequence of random bits.
Problem 10.2. In Section 10.5.1, we saw that it "suffices" to drop about one and a
half million needles on the floor to estimate π within 0.01 ninety-five times out of
one hundred. This was achieved by dropping needles that are half as long as the
planks in the floor are wide. Our estimate of π was n/k, where n is the number
of needles dropped and k is the number that fall across a crack. Show that we can
improve this "algorithm" by dropping needles twice as long and producing 2n/k
as estimate of π. How many of these needles need we drop to have probability at
least 95% of obtaining the correct value of π within 0.01?
Problem 10.3. Yet another probabilistic approach for estimating the value of π is
to use Monte Carlo integration to estimate the area of a quarter circle of radius 2.
In other words, we can use the relation

π = ∫₀² √(4 − x²) dx.

How many random values of x must we use to have probability at least 95% of
obtaining the correct value of π within 0.01?
Problem 10.4. Write a computer program to simulate Buffon's experiment to es-
timate the value of π. The challenge is that you are not allowed to use the value
of π in your program. If you do not see why this is a difficulty, try it!
Problem 10.5. Consider the simplest probabilistic counting strategy, in which the
register is incremented with probability 1/2 at each tick, and count returns twice the
value held in the register; see Section 10.5.3. Prove that the expected value returned
by this strategy is exactly the number of ticks. What is the variance of the value
returned? Interpret this variance in terms of a confidence interval.
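For readers who want to experiment before attempting the proof, here is a small Python simulation of this simple strategy; the function name is ours.

import random

def simulate(ticks):
    # The register is incremented with probability 1/2 at each tick;
    # count returns twice the value held in the register.
    c = 0
    for _ in range(ticks):
        if random.random() < 0.5:
            c += 1
    return 2 * c

# The average over many runs should be close to the true number of ticks,
# while individual runs fluctuate; the variance question is about that spread.
runs = [simulate(1000) for _ in range(10000)]
print(sum(runs) / len(runs))    # close to 1000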
You have to prove by mathematical induction that E(m)= m for all m ≥ 0.
Problem 10.7. Continuing Problem 10.6, prove that the variance of the value re-
turned by a call on count after a call on init followed by m calls on tick is m(m − 1)/2.
Interpret this in terms of a confidence interval.
Problem 10.8. Consider the modified probabilistic counting algorithm specified
by Equation 10.1. Determine the probability with which tick(c) should increment c.
Do you get 2⁻ᶜ as you should when ε = 1? Rework this problem if the
division by ε is removed from Equation 10.1. What would be terribly wrong in this
case?
Note: In practice the relevant probabilities, one for each possible value of the
register, would be precomputed and kept in an array, which makes the approach
interesting only if a large number of registers is needed or if the context of
Problem 10.10 applies.
Problem 10.9. Continuing Problem 10.8, what is the variance of the value returned
by count after m ticks when Equation 10.1 is followed? Give your answer as a
function of m and ε. Interpret it in terms of a confidence interval.
Problem 10.10. Smart cards provide an interesting application for the probabilis-
tic counting technique of Section 10.5.3. Write-only memories are technologically
easier to implement than ordinary read-write memories. Write-only bits are initialized
to 0 at the factory. They can be read at will, and they can be flipped to 1, but
they cannot be reset to 0. Prove that it is impossible to count more than n events
in an n-bit write-only register by any deterministic technique. Show however
that it is possible to count up to 2ⁿ − 1 events by probabilistic methods. In other
words, probabilistic counting and a write-only register cover the same ground as
deterministic counting and an ordinary register of the same length.
Problem 10.11. A room contains 25 strangers; would you be willing to bet at even
odds that at least two of them share the same birthday?
Problem 10.12. Let X be a finite set whose cardinality n we would like to know.
Unfortunately, n is too large for it to be practical simply to count the elements
one by one. Suppose, on the other hand, that we are able to choose elements
from X randomly according to the uniform distribution with a call on uniform(X).
Consider the following algorithm.
function card(X)
    k ← 0
    S ← ∅
    a ← uniform(X)
    repeat
        k ← k + 1
        S ← S ∪ {a}
        a ← uniform(X)
    until a ∈ S
    return k(k + 1)/2
Prove that this algorithm returns an unbiased estimate of the number n of elements
in X and that it runs in an expected time in O(√n) if calls on uniform(X) and
operations involving set S can be carried out at unit cost. If you cannot prove this
rigorously (it's hard!), give a convincing argument that it is reasonable to believe
that the number of elements in the set is roughly the square of the number of
independent draws in X before the first repetition occurs. It might help you to
work Problems 5.14 and 10.11 first.
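A Python transcription of card is handy for experimenting with these claims. The return value is written as k(k + 1)/2, the expression for which the estimate comes out exactly unbiased under this stopping rule.

import random

def card(X):
    # Draw uniformly from X until the first repetition; k counts the
    # distinct elements seen before that repetition.
    k = 0
    S = set()
    a = random.choice(X)                 # plays the role of uniform(X)
    while a not in S:
        k += 1
        S.add(a)
        a = random.choice(X)
    return k * (k + 1) // 2

X = list(range(10000))
estimates = [card(X) for _ in range(2000)]
print(sum(estimates) / len(estimates))   # close to 10000 on average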
Problem 10.13. The probabilistic counting algorithm in Problem 10.12 is efficient
in terms of time, but it may be impractical in terms of storage because of the need
to keep track of set S. Make the best of pseudorandom generation to modify the
algorithm so that it takes constant storage without increasing its running time by
more than a small constant factor. This is one of the rare instances where using
a truly random generator would be a hindrance rather than a help, although we
pay a price: the correctness of the modified algorithm can no longer be proved
mathematically.
Problem 10.14. Find an efficient Monte Carlo algorithm to decide, given two n x n
matrices A and B, whether or not B is the inverse of A. In terms of n and the
acceptable error probability ε, how much time does your algorithm require?
Problem 10.15. Show that strong false witnesses of primality are automatically
false witnesses with respect to Fermat's test; see Section 10.6.2.
Hint: Use the fact that (n − 1)² mod n = 1.
Problem 10.16. Prove Theorem 10.6.2.
Problem 10.17. The algorithm randomprime of Section 10.6.3 generates probable
random primes by repeatedly choosing random odd integers until one is found
that passes enough rounds of the Miller-Rabin test. Explain how the result differs
if instead we choose a random odd starting point and successively increase it by
2 until a number is obtained that passes the same number of rounds of the same
test.
Problem 10.18. We saw that 561 is the worst case for Fermat's primality test among
all odd composites smaller than 1000. This is true provided we consider the error
probability of the test. However, there is one odd composite smaller than 1000 that
admits even more false witnesses than 561. Which is it? How many of these false
witnesses are also strong false witnesses?
Problem 10.19. The prime number theorem asserts that the number of prime num-
bers smaller than n is approximately n/log n. (Recall that "log" denotes the natural
logarithm.) This approximation is fairly accurate. For instance, there are 50 847 478
primes smaller than 10⁹ whereas n/log n ≈ 48 254 942 when n = 10⁹. Estimate the
probability that an odd 1000-digit integer chosen randomly according to the uni-
form distribution is prime.
Hint: The number of 1000-digit primes is equal to the number of primes less than
10¹⁰⁰⁰ minus the number of primes less than 10⁹⁹⁹.
program printprimes
    print 2, 3
    n ← 5
    repeat
        if RepeatMillRab(n, ⌊lg n⌋) then print n
        n ← n + 2
    ad nauseam
Clearly, every prime number will eventually be printed by this program. One might
also expect composite numbers to be produced erroneously once in a while. Prove
that this is unlikely to happen. More precisely, prove that the probability is better
than 99% that no composite number larger than 100 will ever be produced, regard-
less of how long the program is allowed to run.
Note: This figure of 99% is very conservative as it would still hold even if MillRab(n)
had a flat 25% chance of failure on each composite integer.
Problem 10.21. In Section 10.6.2 we saw a Monte Carlo algorithm to decide pri-
mality that is always correct when given a prime number and that is correct with
probability at least 3/4 when given a composite number. The running time of the
algorithm on input n is in O(log³ n). Find a Monte Carlo algorithm that is always
correct when given a composite number and that is correct with probability at least
1/2 when given a prime number. Your algorithm must run in a time in O(logᵏ n)
for some constant k.
Problem 10.22. In Section 10.6.4, we studied amplification of the stochastic ad-
vantage of unbiased Monte Carlo algorithms for decision problems. Here, we
investigate the situation for problems that have more than two potential answers.
For general problems, instances may have more than one correct answer. Think for
example of the eight queens problem or the problem of finding an arbitrary nontriv-
ial divisor of a composite integer. When such problems are solved by probabilistic
algorithms, it may happen that different correct answers are obtained when the
same algorithm is run several times on the same input. We saw in Section 10.7 that
this is a virtue for Las Vegas algorithms, but it can be catastrophic when unbiased
Monte Carlo algorithms are concerned.
Recall that a Monte Carlo algorithm is p-correct if it returns a correct answer with
probability at least p, whatever the instance considered. The potential difficulty is
that even though a p-correct algorithm returns a correct answer with high prob-
ability when p is large, it could happen that one systematic wrong answer is re-
turned more often than any given correct answer. In this case, amplification of the
stochastic advantage by majority voting would decrease the probability of being
correct! Show that if algorithm MC is 75%-correct, it may happen that MC3 is
not even 71%-correct, where MC3 returns the most frequent answer of three calls
on MC, as in Section 10.6.4. (Ties are broken arbitrarily.) For what value of k could
RepeatMC(., k) be less than 50%-correct even though MC is 75%-correct?
Problem 10.23. Let MC be a p-correct unbiased Monte Carlo algorithm and con-
sider algorithm RepeatMC(., k) from Section 10.6.4, which runs MC k times and
produces the most frequent answer. A problem occurs if k is even in the case of a tie.
The code for RepeatMC in Section 10.6.4 returns false in this case (since T = F). This
degrades the probability of correctness on instances for which the correct answer
is true. A better solution would be to flip a fair coin in case of a tie to decide which
answer to return. Prove that if RepeatMC is modified along this line, the probability
that RepeatMC(., k) returns the correct answer when k is even is exactly equal to
the probability that RepeatMC(., k − 1) returns the correct answer. Conclude that it
is never a good idea to repeat an unbiased Monte Carlo algorithm an even number
of times for the purpose of amplifying the stochastic advantage.
Problem 10.24. Let ε and δ be two positive real numbers such that ε + δ < 1/2.
Let MC be a (1/2 + ε)-correct unbiased Monte Carlo algorithm for a decision prob-
lem. Using only elementary combinatorial arguments, prove that RepeatMC(., k)
is (1 − δ)-correct provided k ≥ (log 1/δ)/(2ε²). In other words, it suffices to repeat a
Monte Carlo algorithm whose advantage is ε this number k of times to obtain a
Monte Carlo algorithm whose error probability is at most δ. (Recall that "log"
denotes the natural logarithm.) This formula is overly conservative. It suggests re-
peating a 55%-correct unbiased Monte Carlo algorithm about 600 times to achieve
95%-correctness whereas we saw in Section 10.6.4 that 269 repetitions are sufficient.
Hint: Use Equation 10.2 and the fact that −2/lg(1 − 4ε²) < (log 2)/(2ε²).
Problem 10.25. Continuing Problem 10.24, prove that if an unbiased Monte Carlo
algorithm whose advantage is ε is repeated k = 2m − 1 times and if the most fre-
quent answer is kept, the resulting algorithm is (1 − δ)-correct, where
The first part of this formula is useful to calculate the exact error probability re-
sulting from amplification of stochastic advantage. A good upper bound on the
Problem 10.26. Following Problem 7.36, give a 1/2-correct biased Monte Carlo
algorithm to decide if an array T contains a majority element. Your algorithm
should run in linear time and the only comparisons allowed between the elements
of T are tests of equality. Note that the only merit of this algorithm is simplicity
since the deterministic algorithm requested in Problem 7.36 solves the problem in
linear time with a very small hidden constant.
Problem 10.27. Show that the problem of primality can be solved by a Las Vegas
algorithm whose expected running time is in O(logᵏ n) for some constant k. You
may take for granted the Monte Carlo algorithm required by Problem 10.21.
Problem 10.28. In the spirit of Problem 10.27, let A and B be two biased Monte
Carlo algorithms for solving the same decision problem. Algorithm A is p-correct
but its answer is guaranteed when it is true; algorithm B is q-correct but its answer
is guaranteed when it is false. Show how to combine A and B into a Las Vegas
algorithm LV(x, y, success) to solve the same problem. One call on LV should not
take significantly more time than a call on A followed by a call on B. If your Las
Vegas algorithm succeeds with probability at least r whatever the instance, what
is the best value of r you can obtain?
Problem 10.29. Let X be a finite set whose elements are easy to enumerate and
let Y be a nonempty subset of X of unknown cardinality. Assume you can decide,
given x E X, whether or not x E Y. How would you choose a random element
of Y according to the uniform distribution? The obvious solution is to make a
first pass through X to count the number n of elements in Y, then choose a ran-
dom integer k ← uniform(1..n), and finally locate the k-th element of Y by going
through X again, unless you kept the elements of Y in an array during the first pass
through X. Surprisingly, this problem can be solved with a single pass through X,
without additional storage, and without first counting the elements in Y. Consider
the following algorithm.
function draw(X, Y)
    n ← 0
    for each x ∈ X do
        if x ∈ Y then
            n ← n + 1
            if uniform(1..n)= n then z ← x
    if n > 0 then return z
    else return "Error! Y is empty!"
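This single-pass technique is nowadays often described as reservoir sampling with a reservoir of size one. It is easy to try out in Python; the sketch below follows the pseudocode directly, with membership in Y given by a predicate.

import random

def draw(X, Y_test):
    # One pass over X, constant extra storage, no preliminary count of |Y|.
    n = 0
    z = None
    for x in X:
        if Y_test(x):
            n += 1
            # The current element of Y replaces z with probability 1/n, which
            # leaves every element of Y seen so far equally likely to be kept.
            if random.randint(1, n) == n:
                z = x
    return z if n > 0 else "Error! Y is empty!"

# Example: a uniformly random multiple of 7 between 1 and 100.
print(draw(range(1, 101), lambda x: x % 7 == 0))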
Hint: Look at √n randomly chosen points in the list and start your search at the
largest of those points that is not larger than the target x. What do you do if all
your points are larger than the target?
Problem 10.41. Let n be a composite number that has at least two distinct prime
divisors and let (a, b) be the first pair obtained by the Las Vegas splitting algorithm
of Section 10.7.4. Prove that either gcd(a, n) or gcd(a + b, n) is a nontrivial divisor
of n with probability at least 50%.
Hint: If gcd(a, n)= 1 then gcd(xi, n)= 1 for each xi that entered into building b.
Take one arbitrary such xi. We know from Problem 10.40 that yi = xi² mod n has
at least four distinct square roots modulo n, including xi. Show that if any root
other than xi and n − xi had been randomly chosen instead of xi, the splitting
would have been successful. Conclude as required.
Problem 10.42. At the end of Section 10.7.4, we claimed that we expect the prob-
ability that x² mod n be k-smooth to improve if x is chosen slightly larger than
√n, rather than being chosen randomly between 1 and n − 1. Give a convincing
intuitive reason to support this assertion.
Hint: Show that the binary length of ⌈√n⌉² mod n is at most about half that of a
random square modulo n. What about the length of (⌈√n⌉ + i)² mod n for small i?
to Problem 10.1 is from von Neumann (1951). We used the highly recommended
pseudorandom generator given by L'Ecuyer (1988, 1990) in our experiments with
the n queens problem. A more interesting generator from a cryptographic point of
view is given by Blum and Micali (1984); this article and the one by Yao (1982) intro-
duce the notion of an unpredictable generator, which can pass any statistical test that
can be carried out in polynomial time. The generator described at the end of Sec-
tion 10.4 is from Blum, Blum and Shub (1986). More references on this subject can
be found in Brassard (1988). General techniques are given in Vazirani (1986, 1987)
to cope with generators that are partly under the control of an adversary.
The experiment devised by Georges Louis Leclerc (1777), comte de Buffon, was
carried out several times in the nineteenth century; see for instance Hall (1873).
The process by which it can be used to estimate π is analysed in detail in Solomon
(1978). A standard text on mathematical statistics and data analysis is Rice (1988).
For an early text on numeric probabilistic algorithms, consult Sobol' (1974). The
point is made in Fox (1986) that pure Monte Carlo methods are not specially good
for numerical integration with a fixed dimension: it is preferable to choose your
points systematically so they are well spaced, a technique known as quasi Monte
Carlo. Probabilistic counting is from Morris (1978); see Flajolet (1985) for a detailed
analysis. A solution to Problem 10.12 is given in Brassard and Bratley (1988) but be-
ware that it is incorrect in the first two printings: the correct analysis was provided
to the authors by Philippe Flajolet. For a cryptographic application, see Kaliski,
Rivest and Sherman (1988). Yet a different flavour of probabilistic counting is dis-
cussed in Flajolet and Martin (1985). Numeric probabilistic algorithms designed to
solve problems from linear algebra are discussed in Curtiss (1956), Vickery (1956),
Hammersley and Handscomb (1965) and Carasso (1971). A guide to simulation is
provided by Bratley, Fox and Schrage (1983).
The Monte Carlo algorithm to verify matrix multiplication is from Freivalds
(1979); see also Freivalds (1977). The Monte Carlo primality test presented here is
equivalent to the one in Rabin (1976, 1980b); it draws on previous work of Miller
(1976). Another Monte Carlo test for primality was discovered independently by
Solovay and Strassen (1977). The expected number of false witnesses of primality
for a random composite integer is investigated in Erdős and Pomerance (1986);
see also Monier (1980). The fact that it suffices to test strong pseudoprimality to
bases 2, 3, 5, 7 and 61 to decide deterministically if an integer up to 10¹³ is prime
was discovered by Claude Goutier. The proof that Fermat's test can be arbitrarily
bad follows from Alford, Granville and Pomerance (1994). The discussion on the
generation of random primes is from Beauchemin, Brassard, Crépeau, Goutier and
Pomerance (1988); see also Kim and Pomerance (1989) and Damgård, Landrock
and Pomerance (1993). Efficient methods to generate certified random primes are
given by Couvreur and Quisquater (1982) and Maurer (1995). A theoretical solu-
tion to Problem 10.21 is given in Goldwasser and Kilian (1986) and Adleman and
Huang (1992). For more information on tests of primality and their implementa-
tion, consult Williams (1978), Lenstra (1982), Adleman, Pomerance and Rumely
(1983), Kranakis (1986), Cohen and Lenstra (1987), Koblitz (1987) and Bressoud
(1989). More information on general number theory can be found in the classic
Hardy and Wright (1938).
The Las Vegas approach to the eight queens problem was suggested to the au-
thors by Manuel Blum. Further investigations were carried out by Pageau (1993).
For more background on the problem, consult the references given in Section 9.10.
The term "Robin Hood" appeared in Celis, Larson and Munro (1985) in a deter-
ministic context. An early (1970) linear expected time probabilistic median finding
algorithm is attributed to Floyd: see Exercise 5.3.3.13 in Knuth (1973). It predates
the classic worst-case linear time deterministic algorithm described in Section 7.5.
A probabilistic algorithm capable of finding the i-th smallest among n elements
in an expected number of comparisons in n + i + O(√n) is given in Rivest and
Floyd (1973). Universal hashing was invented by Carter and Wegman (1979); see
also Wegman and Carter (1981). An early integer factorization algorithm of Pollard
(1975) has a probabilistic flavour. The probabilistic integer factorization algorithm
discussed here is from Dixon (1981), but it is based on ideas put forward by Kraitchik
(1926); see also Pomerance (1982). The history of the quadratic sieve factorization
algorithm is given by Pomerance (1984) and the double prime variation used to
take up the RSA challenge is from Lenstra and Manasse (1991). The factorization
algorithm based on elliptic curves is discussed in Lenstra (1987). The number field
sieve is described in Lenstra, Lenstra, Manasse and Pollard (1993). See also Koblitz
(1987) and Bressoud (1989). The technique for searching in an ordered list comes
from Janko (1976); see Problem 10.36. A detailed analysis of this technique is given
in Bentley, Stanat and Steele (1981), where it is also proven that an expected time
in Ω(√n) is required in the worst case to solve this problem by any probabilistic
algorithm.
Several interesting probabilistic algorithms have not been discussed in this
chapter. We close by mentioning a few of them. Given the Cartesian coordinates of
points in the plane, Rabin (1976) gives an algorithm capable of finding the closest
pair in expected linear time; contrast this with Problem 7.39. A Monte Carlo algo-
rithm is given in Schwartz (1978) to decide whether a multivariate polynomial over
an infinite domain is identically zero and to test whether two such polynomials are
identical. Consult Zippel (1979) for sparse polynomial interpolation probabilistic
algorithms. Rabin (1980a) gives an efficient probabilistic algorithm for computing
roots of arbitrary polynomials over any finite field as well as an efficient proba-
bilistic algorithm for factorizing polynomials over arbitrary finite fields and for
finding irreducible polynomials. A very elegant Las Vegas algorithm for finding
square roots modulo a prime number is due to Peralta (1986); see also Brassard
and Bratley (1988). A rare example of an unbiased Monte Carlo algorithm for a
decision problem, which can decide efficiently whether a given integer is a perfect
number and whether a pair of integers is amicable, is described in Bach, Miller and
Shallit (1986).
Chapter 11
Parallel Algorithms
Elsewhere in this book, we implicitly assume that our algorithms will be executed
on a machine that can do only one calculation at once. Of course, any modern
machine overlaps computation with input/output operations such as waiting for a
key to be struck, or printing a file. Many of them also overlap different arithmetic
operations when computing an expression, so that additions, for example, may
be carried out in parallel with multiplications. However we have not so far con-
sidered the possibility that the machine might be able to compute several dozen,
or even several hundred, different expressions at the same time. If we allow this
possibility, then we may hope, if we are both clever and lucky, to speed up some
of our algorithms by a similar factor.
Computers that can perform such parallel computations are not yet on every
desk. However their numbers are increasing, and interest in parallel algorithms, which
take advantage of this ability, is widespread. Research in this area is so active that
it would be unrealistic to try to mention all the areas where parallel techniques are
being studied. In this chapter we therefore present only an introductory selection
of parallel algorithms that illustrate some fundamental techniques.
We first describe more precisely the machine we have in mind when designing
such algorithms. Next we illustrate one or two basic techniques, and discuss what
we mean by an efficient parallel algorithm. Finally we give a small number of
examples from the fields of graph theory, expression evaluation and sorting.
the same storage location in the same step, no two of them may write simultane-
ously into the same location, nor may a processor write into a location that is being
read. We thus avoid having to decide what happens if two or more processors try
to write different values into the same location, or if a value changes as it is being
read. A model defined in this way is called a concurrent-read, exclusive-write, or
CREW model. Other possible models that we shall not consider for the moment are
the EREW (exclusive-read, exclusive-write) and CRCW (concurrent-read, concurrent-
write) models, defined in the obvious way. Nobody seems to have found a use for
an exclusive-read, concurrent-write model.
When analysing parallel algorithms in the following sections, we make the cru-
cial assumption that an access to memory in our hypothetical CREW p-ram, whether
for reading or for writing, can be made in constant time, regardless of the number of
processors in use. This assumption is not true in practice. Since it is not feasible to
provide direct links in hardware from all the processors to all the storage locations,
the average time required to perform a memory access on a real system increases as
the number of processors goes up; furthermore, some patterns of memory access
are faster than others. In fact it is not true that even a single processor can ac-
cess every address in an arbitrarily large memory in constant time. For simplicity,
however, we ignore this complication in this book.
For simplicity, too, we ignore most of the problems raised by the overall control
of the parallel machine. To describe our parallel algorithms, we use statements of
the general form
for ⟨range of parallel indexes⟩ in parallel do
    ⟨statement⟩
More formally, let n = 2ᵏ, and let T be an array indexed from 1 to 2n − 1. This
array can be used to store a complete binary tree with 2n − 1 nodes, with the root
in T[1] and the children of node T[i] in nodes T[2i] and T[2i + 1], just like a heap.
Let the n elements to be summed be placed initially in the leaves of the tree, that
is, in nodes T[n] to T[2n − 1]. Now the algorithm for calculating the sum of these
n elements is as follows.
function parsum(T, n)
    {Calculates the sum T[n] + ··· + T[2n − 1]}
    for i ← lg n − 1 downto 0 do
        for 2ⁱ ≤ j ≤ 2ⁱ⁺¹ − 1 in parallel do
            T[j] ← T[2j] + T[2j + 1]
    return T[1]
Here the only synchronization required is that all the parallel computations for
a particular value of i should be completed before those for the next value of i
begin. During one trip round the loop on i, each processor involved performs two
memory accesses to read its operands, one addition, and a final memory access to
store the result. Since we assume that a memory access takes constant time, the
total work required from each processor also takes constant time. Finally, because
the processors work in parallel, all the work required in one trip round the outer
loop can be executed in constant time. The algorithm makes lg n trips round the
loop, so the total time required is in O(log n). It is evident that the maximum
number of processors ever required to operate simultaneously is n/2.
The same technique can clearly be applied to such problems as finding the
product, the maximum or the minimum of n elements, or deciding if they are all
zero.
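Although a single processor cannot run the additions simultaneously, the balanced-tree computation is easy to simulate level by level. The following Python sketch mirrors parsum, making the lg n trips round the outer loop explicit; the array layout follows the text, the rest is ours.

def parsum(values):
    # values holds the n = 2**k leaves; T[1..2n-1] stores the implicit tree.
    n = len(values)
    assert n & (n - 1) == 0, "n must be a power of 2"
    T = [0] * n + list(values)          # index 0 unused; leaves in T[n..2n-1]
    level = n // 2
    while level >= 1:
        # On a CREW p-ram these additions would be performed in parallel.
        for j in range(level, 2 * level):
            T[j] = T[2 * j] + T[2 * j + 1]
        level //= 2
    return T[1]

print(parsum([3, 1, 4, 1, 5, 9, 2, 6]))   # 31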
11.2.2 Pointer doubling
In its simplest form, the pointer-doubling algorithm applies to lists of items. Sup-
pose we have a list L of n items, each containing a pointer to its successor. Let the
successor of item i be s[i]. If item k is the last element of the list, then s[k] is the
special pointer nil. For our first example, we wish to calculate for each item i the
distance d[i] from that item to the end of the list. We suppose as many processors
are available as there are elements in L, so we can associate a separate processor to
each list item. Now the algorithm is as follows.
procedure pardist(L)
    {Initialization}
    for each item i ∈ L in parallel do
        if s[i]= nil then d[i] ← 0
        else d[i] ← 1
    {Main loop}
    repeat ⌈lg n⌉ times
        for each item i ∈ L in parallel do
            if s[i] ≠ nil then
                d[i] ← d[i] + d[s[i]]
                s[i] ← s[s[i]]
Figure 11.2 illustrates the progress of this algorithm on a list of 7 elements. As writ-
ten, the pointer fields in the original list are changed, and the list structure is de-
stroyed. If this is undesirable, copy the pointers in the initialization phase of the
algorithm, and then work with the copies.
Here the synchronization required is more subtle than in the previous example.
There is no problem with simultaneous attempts to write to the same location,
since each processor only assigns values to d and s in its own item. In the model
we adopted we do not have to worry about simultaneous attempts to read the
same location. However we do have to worry about values changing before we
have time to read them. When several processors are executing the instruction
d[i] ← d[i] + d[s[i]] in parallel, for instance, the synchronization must be tight
enough to ensure that the processor assigned to item i reads the necessary value of
d[s[i]] before the processor assigned to item s[i] changes its value of d. The safest
way to ensure this for every i is to insist that all the reads necessary to evaluate
the right-hand sides are executed before any new values of the left-hand sides are
written. Similarly, in the instruction s[i] ← s[s[i]], the reads necessary to evaluate
the right-hand sides must all occur before any new values of the left-hand sides are
written. The requirement stated above, that all the processors should be working
on the same instruction at the same time, now has to be interpreted more strictly,
with synchronization at the machine instruction level.
When the algorithm stops, s[i]= nil for every item i in L. To see this, observe
that the pointers are "doubled" at each execution of the statement s[i] ← s[s[i]].
More precisely, s[i] originally points to the element following i in L; after one
execution of this statement, s[i] points to the element originally two places along
from i; after two executions of the statement, it points to the element originally
four places along; and so on. When a pointer goes off the end of the list we give
it the special value nil. Since there are n elements in L, it suffices to "double" the
pointers ⌈lg n⌉ times to be sure they all go off the end. (For the case when n is
unknown, see Problem 11.21.)
To see that the computed values of d [i] are correct, observe that at the beginning
of each iteration, if we add the values of d for all the items in the sublist headed by
item i (of course, using the current values of s), we obtain the distance from i to
the end of the original list L. Now at each iteration the pointer s[i] is modified so
as to omit i's immediate successor from this sublist. However the value of d for
this immediate successor is added to d[i], so the same condition is still true at the
beginning of the next iteration.
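The pointer-doubling computation is just as easy to simulate. The sketch below copies the d and s arrays at each round, which imitates the requirement that all reads happen before any writes; indices play the role of items, with None standing for nil.

from math import ceil, log2

def pardist(s):
    # s[i] is the index of the successor of item i, or None for the last item.
    n = len(s)
    d = [0 if s[i] is None else 1 for i in range(n)]
    s = list(s)
    for _ in range(ceil(log2(n))):
        old_d, old_s = list(d), list(s)      # snapshot: read old values only
        for i in range(n):
            if old_s[i] is not None:
                d[i] = old_d[i] + old_d[old_s[i]]
                s[i] = old_s[old_s[i]]
    return d

# A list of 7 items stored in order: 0 -> 1 -> 2 -> ... -> 6.
print(pardist([1, 2, 3, 4, 5, 6, None]))     # [6, 5, 4, 3, 2, 1, 0]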
Still with the assumption that all memory accesses take constant time, we
see immediately that the work required from each processor on one iteration of
the repeat loop is constant, so the running time for the complete algorithm is in
O (log n). There is one processor per list element, so the total number of processors
required is n.
If the original data structure is not a list, but a disjoint set structure (see Sec-
tion 5.9), a similar algorithm can be applied. Suppose each set is represented by a
tree, as in Figure 5.21, with pointers from each item to its parent, except that the
roots point to themselves. Now applying an algorithm similar to pardist, except
that it omits all mention of the distances d, will "flatten" the trees so every item
points directly to the appropriate root. This may be seen as an extreme example of
path compression. Here is the algorithm.
procedure paroper(L)
    {Initialization}
    for each item i ∈ L in parallel do
        d[i] ← v[i]
    {Main loop}
    repeat ⌈lg n⌉ times
        for each item i ∈ L in parallel do
            if s[i] ≠ nil then
                d[i] ← d[i] ∘ d[s[i]]
                s[i] ← s[s[i]]
Let xi be the value of v for the i-th item in the list, 1 ≤ i ≤ n. (This is not the same
as v[i]. In the case of xi, the suffix indicates that we want the value of the i-th
item in the original list; in the case of v[i] the index is a pointer to an item, not its
position in the list.) Define Xi,j by

Xi,j = xi ∘ xi+1 ∘ ··· ∘ xj,

that is, as the generalized product of the i-th to the j-th elements of L. Then it is
not difficult to prove (see Problem 11.3) that when the algorithm terminates, the
value of d for the first element of L is X1,n, the value of d for the second element of
L is X2,n, and so on, along to the last element of L, whose value of d is Xn,n = xn.
Figure 11.3 illustrates the operation of the algorithm on a list of seven elements
where the operator ∘ is ordinary addition.
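Replacing the addition of distances by an arbitrary associative operation gives a simulation of paroper. With ordinary addition, each item ends up with the generalized product (here, the sum) of the values from its own position to the end of the list. The sketch below is ours, written to mirror the pseudocode.

from math import ceil, log2
import operator

def paroper(v, s, op=operator.add):
    # v[i] is the value of item i, s[i] its successor (None at the end).
    n = len(v)
    d, s = list(v), list(s)
    for _ in range(ceil(log2(n))):
        old_d, old_s = list(d), list(s)
        for i in range(n):
            if old_s[i] is not None:
                d[i] = op(old_d[i], old_d[old_s[i]])
                s[i] = old_s[old_s[i]]
    return d

# Seven items in list order with values 1..7: each item ends up with the sum
# of its own value and the values of all later items (its suffix sum).
print(paroper([1, 2, 3, 4, 5, 6, 7], [1, 2, 3, 4, 5, 6, None]))
# [28, 27, 25, 22, 18, 13, 7]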
we need at present.) It is easily seen that for a parallel algorithm this work is
the time needed to simulate the parallel algorithm using a single processor which,
at each step of the computation, imitates each parallel processor in turn. If we
have two algorithms A and B for the same problem that require work Wa and Wb
respectively to obtain a solution, we say that A is work-efficient with respect to B if
Wa ∈ O(Wb).
In Section 12.5.1 we shall see that an ordinary, sequential algorithm is generally
regarded as efficient if its running time for a problem of size n is in O(nᵏ) for some
constant k. For a parallel algorithm to be regarded as efficient, on the other hand,
we usually expect it to satisfy two constraints, one on the number of processors
required, and one on the running time. These are
• the number of processors required to solve an instance of size n should be in
O(nᵃ) for some constant a, and
• the time required to solve an instance of size n should be in O(logᵇ n) for some
constant b.
We say that an efficient parallel algorithm takes a polynomial number of processors
and polylogarithmic time.
A parallel algorithm is optimal if it is work-efficient with respect to the best
possible sequential algorithm. It may sometimes be called optimal if it is work-
efficient with respect to the best known sequential algorithm. In this case, however,
it is preferable to say that the corresponding problem has optimal speed-up. We shall
see in Chapter 12 that there are many problems for which no known efficient (that is,
polynomial-time) sequential algorithm exists. For such problems we cannot expect
to find an efficient parallel solution (that is, one that uses a polynomial number of
processors and polylogarithmic time): see Problem 11.6. On the other hand, there
are many problems for which an efficient sequential algorithm is known, but for
which no efficient parallel algorithm has yet been discovered. It is believed, but not
proved, that some problems that can be solved by an efficient sequential algorithm
have no efficient parallel solution.
Take the technique described in Section 11.2.1 as an example. We saw there
that we can compute the sum of n elements stored in an array using n/2 proces-
sors and a time in O(log n). The work required by that algorithm is therefore in
(n/2) × Θ(log n) = Θ(n log n). Since the sum of n elements can clearly be obtained
in Θ(n) operations by straightforward addition on a sequential processor, the par-
allel algorithm, although it is efficient, is not optimal. Similarly the techniques
described in Section 11.2.2 allow us to carry out a variety of operations (calculating
the distance to the end of a list of n elements, finding the sum or the maximum
of the list elements, etc.) in a time in Θ(log n) using n processors. Here again the
work required is in Θ(n log n). Since these operations can be carried out by a single
sequential processor in a time in O(n), these algorithms, too, are not optimal.
If we look more closely at the algorithm to compute the sum of n elements using
a binary tree, one possible reason for its being less than optimal is immediately
apparent. In the first trip round the main loop, n/2 processors are required, and
this determines the resources needed by the algorithm; for in the second trip round
the loop only n/4 processors do useful work, in the third only n/8 are needed, and
so on, so that most of the time, most of the processors are idle. This suggests that
we may be able to use less processors without this having a catastrophic effect on
the computing time.
Suppose then we have only p < n/2 processors available. One way to proceed
is to divide the n numbers whose sum we require to calculate into p groups,
p − 1 of which contain ⌈n/p⌉ numbers, while the last contains the remaining
n − (p − 1)⌈n/p⌉ numbers. The last group may thus contain less than ⌈n/p⌉
members, but it cannot contain more. Now assign one of the available processors
to each group, and set each processor to calculating the sum of its group. Although
the processors work in parallel, the individual calculations can be straightforward,
sequential computations taking O(⌈n/p⌉) operations. Because the processors work
in parallel, the total time required for this stage is also in O(⌈n/p⌉). The problem is
now reduced to finding the sum of the p group sums, and this can be solved by the
unmodified balanced tree technique using p/2 processors in a time in O(log p).
Overall, the modified algorithm using p < n/2 processors thus takes a time in
O(⌈n/p⌉ + log p).
In particular, if we take p = n/log n we obtain an algorithm that can find
the sum of n numbers in a time in O(log n). We have thus reduced the number
of processors required by a factor of (log n)/2 without changing the order of the
running time. The work done by the modified algorithm is in O(p × log n)= O(n).
Clearly no sequential algorithm can do better than this, so the modified algorithm
is an optimal parallel algorithm.
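To make the processor-reduction idea concrete, here is a small simulation of the two phases; the function names are ours. Each of the p "processors" first sums its own group sequentially, then the balanced tree combines the p partial sums, padded with zeros up to a power of 2.

from math import ceil, log2

def tree_sum(values):
    # Balanced-tree summation, as in parsum, for a power-of-2 number of values.
    n = len(values)
    T = [0] * n + list(values)              # leaves in T[n..2n-1], root in T[1]
    for j in range(n - 1, 0, -1):
        T[j] = T[2 * j] + T[2 * j + 1]
    return T[1]

def grouped_sum(values, p):
    # Phase 1: p sequential group sums; phase 2: combine them with the tree.
    n = len(values)
    size = ceil(n / p)
    partial = [sum(values[g * size:(g + 1) * size]) for g in range(p)]
    m = 1
    while m < p:                            # pad up to a power of 2
        m *= 2
    return tree_sum(partial + [0] * (m - p))

values = list(range(1, 1025))               # 1 + 2 + ... + 1024
p = ceil(len(values) / log2(len(values)))   # p = n / lg n "processors"
print(grouped_sum(values, p))               # 524800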
In general, we may not be so fortunate. For example, dividing the items of a list
into groups is harder than dividing the items of an array. For the former, we may
have to begin by scanning the whole list; for the latter, a simple calculation using the
array indexes is usually sufficient. Nevertheless, using a similar technique we can
always reduce the number of processors required by a parallel algorithm. Suppose
we have an algorithm that runs in time t using p processors on a problem of size
n, but that we only have q < p processors available. (Here t, p and q are functions
of n.) How should we proceed?
As before, we divide the p processors into q groups, and use one of the q avail-
able processors to simulate each group. There will be q − 1 groups containing ⌈p/q⌉
processors, and a last group containing no more processors than the others, and
maybe less. Next, suppose the original algorithm carries out steps 1, 2,..., where
the p processors execute each step independently, but have to be synchronized
between steps. In the modified algorithm using only q processors, at step 1 one of
these simulates in turn each processor in the first group; a second simulates in turn
each processor in the second group; and so on. Since there are ⌈p/q⌉ processors
or less to simulate in each group, the simulation of step 1 using q processors takes
⌈p/q⌉ times longer than the original step 1, and so on for the other steps. Thus
the complete computation using q processors takes ⌈p/q⌉ times longer than the
complete computation using p processors. In symbols, the modified algorithm
takes a time in O(⌈p/q⌉t). (Remember that p, q and t may be functions of the size
n of the instance.) Since p/q ≤ ⌈p/q⌉ < 2p/q when p ≥ q, we have proved the
following theorem.
Theorem 11.3.1 (Brent)   If there exists a parallel algorithm that takes time t(n)
to solve a problem of size n using p(n) processors, then for any q(n) ≤ p(n) there
is a modified algorithm that can solve the same problem using only q(n) processors
in a time in O(p(n)t(n)/q(n)).
Here the original algorithm does work in O(p(n)t(n)) and the modified algorithm
does work in O((p(n)t(n)/q(n)) × q(n)) = O(p(n)t(n)), so in terms of work we
have neither gained nor lost by the modification: the modified algorithm is work-
efficient with respect to the original algorithm. We can thus reduce the number
of processors used by an algorithm without altering its efficiency. In particular,
if the original algorithm is optimal, so is the modified algorithm. Of course the
algorithm using less processors will usually take longer to finish, even though the
work performed is the same.
Here the array T stores path lengths to avoid conflicts between reading and writing
in the last for statement. There is no conflict between reading the old value of D[i, j]
and writing its new value in this statement, since this is done by the same processor.
The variables i, j and m range from 1 to n.
Analysis of the algorithm is straightforward. The first for statement can be ex-
ecuted in constant time using n² processors. Within the repeat statement, the first
for statement can be executed in constant time using n³ processors. The minimum
of n + 1 elements can be calculated in a time in O(log n) using Θ(n/log n) proces-
sors, as described in Section 11.2.1. There are n² such minima to be computed in
parallel, so one iteration of the second for statement can be executed in a time in
Θ(log n) using Θ(n³/log n) processors. Finally the repeat statement is executed
⌈lg n⌉ times, so the complete algorithm can be executed in a time in O(log² n) using
Θ(n³) processors.
It is easy to show that the number of processors can be reduced to Θ(n³/log n)
while keeping the same order for the time: see Problem 11.8. However this is still
not optimal.
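The pseudocode analysed here is on a page not reproduced in this excerpt, but the analysis describes the familiar "min-plus squaring" of the distance matrix. The sketch below is our reconstruction of that idea, not the book's code: after ⌈lg n⌉ iterations every shortest path length has been found, and the auxiliary matrix T receives the new values so that old and new entries never clash.

from math import ceil, log2

INF = float('inf')

def parallel_shortest_paths(D):
    # Repeated min-plus squaring of the matrix of direct distances.
    n = len(D)
    for _ in range(ceil(log2(n))):
        T = [[min(D[i][j], min(D[i][m] + D[m][j] for m in range(n)))
              for j in range(n)] for i in range(n)]
        D = T
    return D

D = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
for row in parallel_shortest_paths(D):
    print(row)        # first row: [0, 3, 5, 6]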
We begin by describing just one iteration of the parallel algorithm that merges
disjoint sets. To illustrate the operation of the algorithm, suppose for example that
we have a graph with 19 nodes, and that by some means or other we have reached
the situation shown in Figure 11.4. Here nodes 1, 2 and 3 are in the set labelled 1,
nodes 4 and 5 are in the set labelled 4, and so on. In terms of our representation, this
means that set[1]= set[2]= set[3]= 1, set[4]= set[5]= 4, and so on. The nodes in any
given set are already known to be connected, but there are also some connections
not yet taken into account. These are indicated by dotted edges in Figure 11.4. Thus
for example node 3 is connected to node 7 and node 4 is connected to node 8; in
other words, L [3, 7] = true, L [4, 8]= true, and so on, while for instance L [3,4] =false
because nodes 3 and 4 are not connected. Connections between nodes in the same
set are omitted, as they are no longer of interest.
Figure 11.4. Merging sets: initial situation
In the description of the algorithm, we need three arrays S[1..n, 1..n], T[1..n]
and oldT[1..n]. The first step of the parallel merging algorithm can now be spec-
ified as follows.
{Step 1}
for all i, j in parallel do
    if L[i, j] and set[i] ≠ set[j] then S[i, j] ← set[j]
    else S[i, j] ← ∞
for all i in parallel do T[i] ← min(S[i, 1], S[i, 2], ..., S[i, n])
for all i in parallel do
    if T[i]= ∞ then T[i] ← set[i]
Here and throughout this section the variables i and j range from 1 to n.
The effect of this step is that for each node i, T[i] now points to the root of a
set. If node i is connected to nodes in other sets besides its own, then T[i] points
to the root of one of these other sets: in fact, to the one with the lowest number.
If node i has no connections outside its own set, then T[i]= set[i].
Applying step 1 to the situation in Figure 11.4 yields the situation in Figure 11.5,
where the connections between nodes are now omitted, and the arrows show the
values of T obtained. All the arrows point to root nodes. Thus, for instance, node 1,
which has no connections outside its own set, has T[1]= set[1]= 1. Node 3, which
is connected to node 8, a node not in its own set, points to the root of this other
set, namely node 6. A slightly more complicated case is provided by node 8. This
is connected to both nodes 4 and 16, neither of which is in the same set as node 8.
Thus T[8] must point either to the root of the set containing node 4, or to the root of
the set containing node 16, that is, to either node 4 or node 15. Since the algorithm
chooses the lowest-numbered root, we obtain T[8]= 4.
{Step 2}
for all i, j in parallel do
    if set[j]= i and T[j] ≠ i then S[i, j] ← T[j]
    else S[i, j] ← ∞
for all i in parallel do T[i] ← min(S[i, 1], S[i, 2], ..., S[i, n])
for all i in parallel do
    if T[i]= ∞ then T[i] ← set[i]
If node i is not the label of a set, there is no node j with set[j]= i, so this step
simply sets T[i] ← set[i]. If on the other hand i is a label, the algorithm examines
all the values of T[j] for which j is a node in set i, and for which T[j] ≠ i, that is,
T[j] points to a different set. It then chooses the smallest among these. If there
are no such values, T[i] ← set[i], that is, the label node i points to itself. This only
happens if none of the nodes in set i is connected to a node in a different set.
After step 2 is applied to the situation in Figure 11.5, we obtain the situation
illustrated in Figure 11.6. The arrows now show the new values of T. Every node
that is not a label points to the root of its own set, and arrows between sets only
join root nodes.
Consider this directed graph, which we shall call H: its nodes are the nodes
of G, but its edges are specified by the pointers T. It is redrawn in Figure 11.7
to make its structure clearer. Tracing through the algorithm, we see that if one of
the initial sets has no connection to any other (that is, if it includes every node in
some connected component of the graph G), then after steps 1 and 2 the pointers T
simply reproduce the initial set structure. This is the case of set 17 in the example.
The other nodes of H form one or more connected components, each of which
resembles a pair of trees whose roots are joined in a cycle. In the example one pair
of trees has nodes 1 and 6 as its roots, and the other has nodes 9 and 11.
To see why this is so, consider a component of H formed by the fusion of two or more
of the original sets. Suppose a is the label of the lowest-numbered set involved.
Since the label of a set is the lowest-numbered node in the set, this means that a
is in fact the lowest-numbered node in the given component of H. Now T[a]= b,
where b is the label of a set different from a, since at step 2 of the algorithm we
choose pointers across different sets whenever possible. Furthermore T[b]= a,
since if set a is connected to set b, then set b is connected to set a (because the
graph G is undirected), and T[b] is chosen in step 2 to be as small as possible.
Hence T[a]= b and T[b]= a and these two nodes form a cycle. All the remaining
nodes in this component of H must be joined to either node a or node b by a chain
of one or more pointers T, so they form two trees, one with a as root, and the other
with b as root.
The third and final step of the algorithm uses the pointer-doubling technique to
flatten these double trees, rather as in Section 11.2.2. The only subtlety is that if we
"double" the pointer of a node sufficiently often, we are sure that it will point to one
of the pair of roots, but we cannot be sure which. In the example, if we "double"
the pointer from node 5, it will point to node 6; all subsequent doublings leave this
unchanged. If we double the pointer from node 4, it will point to node 1; again,
subsequent doublings do not alter this. However if the two roots are nodes a and
b, we have seen that before any pointer doubling T[a]= b and T[b] = a. Suppose
we save the original values of T in the array oldT. After a sufficient number of
doublings the pointer from node i points to one of the pair of roots, and now we
can use the values in oldT to find the other. Comparing, we can choose the root
with the lower number. As pointed out above, this is the lowest-numbered node
in this component of H.
In a component of H that has only a single root, say a, at the outset T[a] = a,
so the above technique, while unnecessary, does no harm. Since the whole graph
G contains n nodes, no component of H can contain more than this. Whether for
a single or a double tree, ⌈lg n⌉ pointer doublings are therefore enough to ensure
that every node points to a root.
Here is the third step of the algorithm.
{Step 3}
for all i in parallel do oldT[i] ← T[i]
repeat ⌈lg n⌉ times
    for all i in parallel do T[i] ← T[T[i]]
for all i in parallel do set[i] ← min(T[i], oldT[T[i]])
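As an illustration, here is a small Python sketch that simulates Steps 2 and 3 sequentially; it is not a parallel program, and the array names, the 1-based indexing with an unused cell 0, and the assumption that set and T arrive exactly as Step 1 leaves them are conventions chosen for the example.

import math

INF = float('inf')

def step2(setlabel, T):
    # setlabel[i] is the label of the set containing node i; index 0 is unused
    n = len(T) - 1
    S = [[INF] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                 # "for all i, j in parallel do"
        for j in range(1, n + 1):
            if setlabel[j] == i and T[j] != i:
                S[i][j] = T[j]
    newT = [0] + [min(S[i][1:]) for i in range(1, n + 1)]
    for i in range(1, n + 1):                 # no pointer to another set:
        if newT[i] == INF:                    # point back to the set's label
            newT[i] = setlabel[i]
    return newT

def step3(T):
    n = len(T) - 1
    oldT = T[:]                               # save the original pointers
    for _ in range(math.ceil(math.log2(n))):  # ceil(lg n) pointer doublings
        T = [0] + [T[T[i]] for i in range(1, n + 1)]
    # each node now points to one root of its double tree; oldT gives the other
    return [0] + [min(T[i], oldT[T[i]]) for i in range(1, n + 1)]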
operations are elementary, that is, they can be computed in constant time. For sim-
plicity, we suppose that the expression to be evaluated is given in the form of an
expression tree: this is a binary tree where each internal node represents one of the
four available operators, and each leaf represents an operand. Figure 11.8 shows
such a tree, corresponding to the expression
We further suppose that the leaves of the tree are numbered from left to right
around the bottom of the tree, again as illustrated in Figure 11.8. The x to the right
of each internal node in this figure will be explained later. If the tree has n leaves
(operands), then of course it has n - 1 internal nodes (operators). In the example,
n = 8.
The obvious way to evaluate such an expression tree in parallel is to assign one
processor to each internal node, and to use an algorithm with the following form.
repeat
for each internal node i in parallel do
if the values of the children of i are known then
compute the value of i
remove i's children from the tree
until only one node is left
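The following Python sketch mirrors this naive scheme sequentially, one synchronous round per iteration of the outer loop; the dictionary representation of the tree and the node names are assumptions made for the example.

import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def naive_eval(tree, root):
    # tree maps a node either to a number (leaf) or to a triple (op, left, right)
    value = {v: node for v, node in tree.items() if not isinstance(node, tuple)}
    rounds = 0
    while root not in value:
        rounds += 1
        known = dict(value)                     # snapshot: one parallel round
        for v, node in tree.items():            # "for each internal node ... do"
            if isinstance(node, tuple) and v not in known:
                op, left, right = node
                if left in known and right in known:
                    value[v] = OPS[op](value[left], value[right])
    return value[root], rounds

# A completely right-leaning tree forces as many rounds as there are operators.
tree = {'a': ('+', 'x1', 'b'), 'b': ('+', 'x2', 'c'), 'c': ('+', 'x3', 'x4'),
        'x1': 1, 'x2': 2, 'x3': 3, 'x4': 4}
print(naive_eval(tree, 'a'))                    # (10, 3)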
The number of iterations needed is equal to the height of the expression tree. In the
worst case, however, a binary tree with n leaves can have height n - 1, and this
simple algorithm does not produce any parallelism at all! Only one processor does
useful work at each iteration, so the computation could just as well be carried out
on a sequential machine; see Problem 11.10.
To speed things up, the processors must do something useful even before both
their children have been evaluated. Hence we look for something a processor can
do when the value of at least one of its children is known. To this end, we associate
a function f (x) with each internal node of the tree. Initially, every internal node is
associated with the function x, as illustrated in Figure 11.8. The meaning of these
functions is that, when a processor at some node has calculated a value x, the value
it transmits up the tree is not x but f(x). Consider for example the fragment of
tree shown in Figure 11.9a. Here the processor assigned to internal node A receives
one value from its left child and one value from its right child, multiplies them to
obtain an answer x, then transmits the value f (x) up the tree to node B. In its turn
the processor attached to node B receives this value from its left child A and the
value 9 from its right child C (which is a leaf, corresponding to a constant operand),
adds them to obtain an answer x, then transmits the value g (x) up the tree to its
parent. And so on.
[Figure 11.9: (a) a fragment of an expression tree; (b) the same fragment after the cut]
Even before receiving a value from node A, however, the processor at node B can
do useful work reconstructing the expression tree. Because the value of node B's
right-hand child is known, it can modify the function stored at node A and remove
node B and its child (the right-hand one, in this case) from the tree. The result is
shown in Figure 11.9b. If, when it calculates a value x, the processor at node A
transmits g(f(x) +9) directly to B's parent, bypassing B and its right-hand child,
the result obtained at the root of the tree does not change.
There is nothing special about the operator + in the node B, nor about the value
9 that we gave to its right-hand child C. In general, if node B holds the operator ∘,
where ∘ is any of +, −, × or ÷, and if the value of its right child is any constant k,
then B can replace A's function by g(f(x) ∘ k) and cut itself and its right child out
of the expression tree. If the constant k is B's left child and the node A is B's right
child, B should replace A's function by g(k ∘ f(x)). This is important because the
operators − and ÷ are not commutative. Minor adjustments take care of the cases
when A is a leaf (take f (x) to be the constant function that returns the value of the
operand at A), or when B is the root of the tree (the final value of the tree is the
value that would normally be transmitted up to the next level).
The operation described above is called a cut. In the example, the leaf C and
its parent B were cut from the tree. It is important for what follows to see that
a cut can be performed in constant time. The manipulation of pointers in the
tree can certainly be done in constant time: only three pointers are involved in the
operation. It is less evident that the function associated with a node can be updated
quickly. For instance, if in the example nodes A and B had both been created by a
preceding series of cut operations, the functions f(x) and g(x) might already, or
so it appears, be quite complex, so that substituting f into g to obtain g(f(x) ∘ k)
would be no trivial matter.
Fortunately, if the only operators permitted are +, −, × and ÷, the functions
that can be obtained in this way all have the form (ax + b)/(cx + d). To see
this, note first that the initial function at each internal node, namely x, can be
represented with a = d = 1 and b = c = 0. If f(x) = (ax + b)/(cx + d), then
f(x) + k = ((a + kc)x + (b + kd))/(cx + d), and so on for the seven other operations
involving a function f(x) and a constant k. Finally if f(x) = (a1x + b1)/(c1x + d1)
and g(x) = (a2x + b2)/(c2x + d2), then g(f(x)) = (a3x + b3)/(c3x + d3), where
a3 = a1a2 + b2c1, and there are equally simple expressions for b3, c3 and d3; see
Problem 11.11. Hence to represent any function associated with an internal node we
keep just the four corresponding constants a, b, c and d. Provided these constants
are not too large, the representation of any functional composition of the form
g(f(x) ∘ k) or g(k ∘ f(x)) can then be computed in constant time when required.
Furthermore, when the value of x becomes available, f (x) too can be computed
in constant time. (The proviso is necessary because in a complicated expression
tree the constants a, b, c and d can grow exponentially. However we shall not
consider this possibility here.)
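To make the bookkeeping concrete, here is a Python sketch in which a node's function (ax + b)/(cx + d) is stored as the tuple (a, b, c, d); the helper names and the use of '*' and '/' for × and ÷ are conventions assumed for the example, not the book's notation.

IDENTITY = (1, 0, 0, 1)                      # the initial function f(x) = x

def apply_op(f, op, k, k_on_left=False):
    # coefficients of f(x) op k, or of k op f(x) when k_on_left is True
    a, b, c, d = f
    if op == '+': return (a + k * c, b + k * d, c, d)
    if op == '-': return (k * c - a, k * d - b, c, d) if k_on_left else (a - k * c, b - k * d, c, d)
    if op == '*': return (k * a, k * b, c, d)
    if op == '/': return (k * c, k * d, a, b) if k_on_left else (a, b, k * c, k * d)
    raise ValueError(op)

def compose(g, f):
    # coefficients of g(f(x)); still of the form (ax + b)/(cx + d)
    a1, b1, c1, d1 = f
    a2, b2, c2, d2 = g
    return (a1 * a2 + b2 * c1, a2 * b1 + b2 * d1,
            c2 * a1 + d2 * c1, c2 * b1 + d2 * d1)

def evaluate(f, x):
    a, b, c, d = f
    return (a * x + b) / (c * x + d)

# g(f(x) + 9) from the example, with f and g still the identity function
f, g = IDENTITY, IDENTITY
print(evaluate(compose(g, apply_op(f, '+', 9)), 3))   # (3 + 9)/1 = 12.0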
The last consideration before we state the algorithm for evaluating expressions
is that only three nodes are involved in a cut operation. In Figure 11.9, these are
the nodes A, B and C. Pointers, values and associated functions are read and
changed for these three nodes and no others. It is therefore possible to execute a
cut operation elsewhere in the tree in parallel with the operation on A, B and C,
provided none of these is also involved in the second operation. Moreover, let A, B
and C be the nodes involved in one cut operation, and A', B' and C' be the nodes
involved in another, with C and C' being the two leaves involved. Remember that
the leaves of the expression tree are numbered in order round the tree. Then it is
a sufficient condition for the operations not to interfere with one another-that is,
for the sets {A, B, C} and {A', B', C'} to be disjoint-if C and C' are nonconsecutive
leaves that are either both left children or both right children. Problem 11.12 asks
the reader to prove this.
The complete parallel algorithm for evaluating simple expressions can now be
stated as follows. We assume that one processor is allocated to each internal node
of the expression tree.
We first cut all the odd-numbered leaves that are left children. By the remarks
above, this can be done in parallel without the cut operations interfering with one
another. Next we cut all the odd-numbered leaves that are right children. Again,
this can be done in parallel. Since every odd-numbered leaf is either a left or a right
child, all of them have now been removed, and only the ⌊n/2⌋ even-numbered
leaves remain. These are renumbered by dividing their numbers by 2, ready for
the next iteration. Since each iteration removes at least half the leaves from the
expression tree, after ⌈lg n⌉ iterations only one leaf remains. The value of this leaf
is the value of the expression.
Figure 11.10 illustrates one iteration of this process when applied to the expres-
sion tree of Figure 11.8. The first half of the figure shows the state of the tree after
the odd-numbered left leaves have been cut, and the second half the state after the
odd-numbered right leaves have been cut. It is readily seen that if the leaf numbers
are now halved, the tree will be ready for the next iteration. The reader may verify
that the second iteration reduces the tree to three nodes (one internal node and two
leaves), while the third reduces it to a single node holding the value of the original
expression, namely 24.
Because a cut can be performed in constant time, the algorithm described above
is easily seen to take a time in O(log n) using O(n) processors. The work performed
is in O(n log n), so the algorithm is not optimal. As in previous examples, after one
iteration only half the processors are still useful, after two iterations only a quarter,
and so on. At the cost of additional complexity, we may take advantage of this to
reduce the number of processors required to O(n/log n) without increasing the
time required beyond O(log n). The improved algorithm does work in O(n), and
is therefore optimal.
The form of input required for the above algorithm (an expression tree with
the leaves numbered from left to right) may be thought a little unusual. Although
we omit all details here, we note that if the expression to be calculated is not in
the required form, but is stored instead as a string (that is, an array of characters),
Figure 11.10. (a) Left children cut (b) Right children cut
then the numbered expression tree can be obtained in a time in O(log n) using
O(n/log n) processors: exactly the same orders as for the evaluation algorithm.
Transforming the input into the required form is therefore not a bottleneck.
[Figure: a comparator, with inputs x1 and x2 and outputs y1 = min(x1, x2) and y2 = max(x1, x2)]
comparators at all, while S2, as we have just seen, is a single comparator. One
way to build progressively bigger networks is to design Sn+1 in terms of Sn, so
that starting with S1 or S2 we can build all the networks we want. There are at
least two obvious ways to do this, illustrated in Figure 11.12. Both networks have
n + 1 inputs and n + 1 outputs. The network on the left corresponds to sorting by
selection: the largest element falls to the bottom, and then we use Sn to sort the n
remaining values. The network on the right corresponds to sorting by insertion:
we use Sn to sort the first n inputs, and then the (n + 1)-st input is inserted in
its correct place. Interestingly, when we compare the networks obtained in this
way, they turn out to be the same. Figure 11.13 illustrates the network S5 obtained
whether we use selection or insertion.
There are two useful measures of the quality of our networks. First, we can simply
count the number of comparators needed to build Sn. This is called the size of
the network. In the example S5 contains 10 comparators, and it is evident that
in general Sn will contain 1 + 2 + · · · + (n − 1) = n(n − 1)/2 comparators. The second measure
of interest is the time the network takes to sort its inputs. We assumed that a
comparator takes a constant time to operate, but of course it cannot fire before its
inputs are ready. We define the depth of a network to be the maximum number
of comparators through which an input must pass before it arrives at an output.
The depth of the network in Figure 11.13 is 7. Comparators in the same vertical
line can all operate at once, while successive lines must be executed from left to
right. In this case input 2 passes through 7 comparators. In general it is easy to
see that a network Sn designed in this way has depth 2n − 3 when n ≥ 2. If
each phase takes constant time, the time required to sort n elements using this
type of parallel sorting network is proportional to the depth of the network, and is
therefore in Θ(n).
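Both measures are easy to compute mechanically. The Python sketch below builds Sn by insertion as a list of comparator pairs (i, j); this list representation, and the way the depth is obtained as the number of parallel phases when every comparator fires as soon as its two wires are free, are conventions assumed for the example.

def insertion_network(n):
    # comparators of Sn: sort the first n - 1 wires, then insert wire n
    if n <= 1:
        return []
    comps = insertion_network(n - 1)
    comps += [(i, i + 1) for i in range(n - 1, 0, -1)]   # bubble the new input up
    return comps

def apply_network(comps, values):
    v = list(values)
    for i, j in comps:                     # wires are numbered from 1
        if v[i - 1] > v[j - 1]:
            v[i - 1], v[j - 1] = v[j - 1], v[i - 1]
    return v

def depth(comps, n):
    # number of parallel phases when each comparator fires as early as possible
    ready = [0] * (n + 1)
    for i, j in comps:
        t = max(ready[i], ready[j]) + 1
        ready[i] = ready[j] = t
    return max(ready)

n = 5
comps = insertion_network(n)
print(len(comps), depth(comps, n))             # 10 comparators, depth 7
print(apply_network(comps, [5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]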
In the following sections we shall see how to improve this design.
Proposition 11.6.1 A sorting network with n inputs correctly sorts any set of
values on its inputs if and only if it correctly sorts all the 2ⁿ input vectors consisting
only of zeros and ones.
Proof The "only if" part of the proposition is obvious. To prove the "if" part, let f: ℝ → ℝ
be any nondecreasing function, so that f(x) ≤ f(y) whenever x ≤ y. Suppose
the sorting network under consideration correctly sorts all the 2ⁿ input vectors
consisting only of zeros and ones, but that there is some vector (x1, x2, ..., xn)
of inputs that it sorts incorrectly. Notice that even an incorrect sorting network
produces a permutation of its inputs. Let the (incorrect) output vector from this
set of inputs be (y1, y2, ..., yn), and let yi be any element of this vector such that
yi > yi+1. Such an element necessarily exists since the vector is incorrectly sorted.
Now consider what would happen if instead of (x1, x2, ..., xn) we applied the
input vector (f(x1), f(x2), ..., f(xn)) to the network. Because f is nondecreasing,
f(xi) ≤ f(xj) whenever xi ≤ xj. Thus the values f(xi) propagate through the
sorting network in exactly the same way as the values xi: whenever two values of
x were interchanged, now the two corresponding values of f(x) are exchanged.
(This is why we required the comparators to exchange two equal values.) The
output vector of the sorting network would therefore be (f(y1), f(y2), ..., f(yn)),
since the network would perform exactly the same permutation of its inputs as
before.
Finally let f be the function defined as follows: f(x) = 0 when x < yi, and
f(x) = 1 otherwise. Now the input vector (f(x1), f(x2), ..., f(xn)) is a vector
consisting solely of zeros and ones. In the output vector f(yi) = 1 and f(yi+1) = 0,
because yi+1 < yi. The output vector would therefore be incorrectly sorted, contra-
dicting the assumption that the network correctly sorts all input vectors containing
only zeros and ones. It follows that no such input vector as (x1, x2, ..., xn) can
exist: the network correctly sorts any input vector, and our proof of the zero-one
principle is complete.
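As an illustration of how the zero-one principle is used in practice, the following Python sketch checks a small network by trying only the 2ⁿ zero-one vectors; the comparator-list representation and the two example networks are assumptions made for the example.

from itertools import product

def sorts_all_zero_one(comps, n):
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for i, j in comps:                 # wires are numbered from 1
            if v[i - 1] > v[j - 1]:
                v[i - 1], v[j - 1] = v[j - 1], v[i - 1]
        if v != sorted(bits):
            return False
    return True

good = [(1, 2), (2, 3), (1, 2)]            # the network S3
bad = [(1, 2), (2, 3)]                     # fails, for instance on input (1, 1, 0)
print(sorts_all_zero_one(good, 3))         # True
print(sorts_all_zero_one(bad, 3))          # False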
[Figure 11.14: a merging network; the first and second sorted groups of inputs enter on the left and the sorted outputs leave on the right]
For simplicity, suppose from now on that n is a power of 2. A single comparator can
serve as F1. From this base we can create merging networks F2, F4, and so on, always
designing F2n in terms of Fn. This is another example of the divide-and-conquer
technique discussed in Chapter 7. Figure 11.15 shows how it is done. Suppose
the two groups of inputs are (w1, w2, ..., wn) and (x1, x2, ..., xn) and the out-
puts are (y1, y2, ..., y2n). We merge the odd-numbered inputs (w1, w3, ..., wn−1)
from the first group with the odd-numbered inputs (x1, x3, ..., xn−1) from the sec-
ond group using one merge network, and we merge the even-numbered inputs
(w2, w4, ..., wn) from the first group with the even-numbered inputs (x2, x4, ..., xn)
from the second group using another. Call the outputs of these two merges
(v1, v2, ..., v2n), numbering the outputs from the odd-numbered merge before
those from the even-numbered merge. Now permute the outputs vi so that, from
top to bottom, they are in the order (v1, vn+1, v2, vn+2, ..., vn, v2n). This permu-
tation is the so-called perfect shuffle: if you cut a pack of 2n cards exactly in half
and then riffle them together so that cards fall alternately from each half, this is
the order you obtain. Finally install comparators between what are now outputs 2
and 3, 4 and 5, ..., 2n − 2 and 2n − 1. The output on the right is the desired sorted
vector (y1, y2, ..., y2n).
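The construction can be followed in the Python sketch below, which performs the comparisons directly on two sorted lists instead of building the network; the function name and list manipulation are conventions assumed for the example, and n must be a power of 2.

def odd_even_merge(w, x):
    # merge two sorted lists of equal length n, n a power of 2
    n = len(w)
    if n == 1:
        return [min(w[0], x[0]), max(w[0], x[0])]     # a single comparator
    v_odd = odd_even_merge(w[0::2], x[0::2])          # w1, w3, ... with x1, x3, ...
    v_even = odd_even_merge(w[1::2], x[1::2])         # w2, w4, ... with x2, x4, ...
    y = [item for pair in zip(v_odd, v_even) for item in pair]   # perfect shuffle
    for k in range(1, 2 * n - 1, 2):   # comparators between outputs 2 and 3, 4 and 5, ...
        if y[k] > y[k + 1]:
            y[k], y[k + 1] = y[k + 1], y[k]
    return y

print(odd_even_merge([1, 4, 5, 8], [2, 3, 6, 7]))     # [1, 2, 3, 4, 5, 6, 7, 8]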
An argument exactly analogous to the one given above shows that the zero-one
principle also holds for merging networks. Hence to prove the proposed network
works, we first show that it works when the inputs w and x consist solely of zeros
and ones, and then invoke the zero-one principle to conclude that it works for any
inputs. The argument is by mathematical induction. For the basis of the induction,
it is obvious that F1 , a single comparator, works correctly. For the induction step,
suppose all the networks F1, F2, ..., Fn have been shown to work correctly. Consider
what our proposed F2n does when the input vector w consists of r zeros followed
by n − r ones, and x consists of s zeros followed by n − s ones. (Remember that
both groups of inputs must already be sorted.) Since by the induction hypothesis
the networks Fn work correctly, the output (v1, v2, ..., vn) of the upper merging
network consists of ⌈r/2⌉ + ⌈s/2⌉ zeros followed by the appropriate number of
ones, while the output (vn+1, vn+2, ..., v2n) of the lower merging network consists
of ⌊r/2⌋ + ⌊s/2⌋ zeros followed by ones. If r and s are both even, then after shuffling
the outputs from the two merging networks, the 2n lines from top to bottom hold
r + s zeros followed by ones; the final column of comparators is unnecessary, but
does no harm. If just one of r and s is odd, then after shuffling the outputs from
the two merging networks the 2n lines from top to bottom again hold r + s zeros
followed by ones; as before the final column of comparators is unnecessary. If both
r and s are odd, however, then after shuffling the outputs from the two merging
networks the values on the lines from top to bottom are r + s -1 zeros, 1 one, 1
zero, and then ones. This time the comparator between lines r + s and r + s + 1 is
necessary to finish the sort. The proposed F2n therefore correctly sorts any inputs
consisting solely of zeros and ones; by the zero-one principle, it correctly sorts any
inputs.
Figure 11.14 was obtained in this way, except that the network has been cleaned
up to remove some redundant crossings of the lines.
To compute the size s(n) of the network Fn obtained using this construction
we use the recurrence
    s(2n) = 2s(n) + 2n − 1
that is immediate from Figure 11.15. The initial condition is s(1) = 1. Using the
methods of Section 4.7 it follows easily that s(n) = 1 + n lg n. It is equally easy to
show that the depth of the network Fn is 1 + lg n.
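These formulas are easy to verify mechanically. The tiny Python functions below take the recurrences at face value and check them against the closed forms; they are only a verification aid and assume n is a power of 2.

def s(n):                      # size: s(1) = 1, s(2n) = 2 s(n) + 2n - 1
    return 1 if n == 1 else 2 * s(n // 2) + n - 1

def d(n):                      # depth: d(1) = 1, d(2n) = d(n) + 1
    return 1 if n == 1 else d(n // 2) + 1

for n in (1, 2, 4, 8, 16, 32):
    lg = n.bit_length() - 1    # lg n when n is a power of 2
    assert s(n) == 1 + n * lg and d(n) == 1 + lg
print("size 1 + n lg n and depth 1 + lg n confirmed up to n = 32")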
It is easy to show that the sorting networks obtained in this way use a number of
comparators in O(n log² n) and that the time they require to sort their inputs is in
Θ(log² n). For obvious reasons the networks described above are called odd-even
merging and sorting networks. They were discovered by Batcher in 1964. These
sorting networks are not optimal. Anticipating a little, we shall see in the next
chapter that any algorithm that sorts n elements by making comparisons between
them must make at least ⌈lg n!⌉ comparisons in the worst case. Thus any sorting network for
n elements must include at least ⌈lg n!⌉ comparators. For example, any sorting
network for 16 elements must contain at least 45 comparators. The odd-even sorting
network S16 contains 63 comparators. A different network is known that uses only
60 comparators. It seems there is still room for progress! Concerning depth, too,
the odd-even sorting network is not optimal: for 16 inputs, the odd-even sorting
network has depth 10, but a different network is known that has depth 9. A class of
sorting networks with depth in O(log n) and size in O(n log n) is known to exist,
but has not yet yielded networks that are useful in practice.
from the result of Problem 11.17.) In this section we sketch a sorting algorithm due
to Cole that can be executed on a CREW p-ram, and that sorts n items in a time in
Θ(log n) using a number of processors in O(n). The work performed is therefore
in O(n log n). As we shall see in Section 12.2.1, this parallel algorithm is therefore
optimal, at least as far as sorting by comparison is concerned.
Throughout this section we assume for simplicity that we have n distinct items
to be sorted into ascending order, where n is a power of 2; otherwise we add the
necessary number of dummy elements. In essence, Cole's parallel algorithm is a
tree-based merge sort. It may be helpful to compare this with the ordinary sequen-
tial merge sort described in Section 7.4.1. The algorithm uses a complete binary
tree with n leaves. Initially one of the items to be sorted is placed at each leaf.
Then the computation proceeds up the tree, level by level, from the leaves to the
root. At each internal node the sorted subsets of items produced by its children are
merged. Suppose such a merge can be carried out in a time in O(M(n)), and that all
the merges at the same level of the tree can be performed in parallel. As there are
lg n levels of internal nodes in the tree, the whole sorting procedure can therefore
be carried out in a time in O(M(n) log n).
If we have no additional information about two sorted sequences each contain-
ing m items, it can be proved that with m processors the time required to merge
the sequences is in Q(loglogm). Using such an approach, we would expect a
tree-based merge sort to take a time at least in Q (log n log log n). Cole's idea was
to provide just the additional information needed for it to be possible to merge
two sorted sequences in a time in 0 (1), so the overall algorithm runs in a time in
o (log n).
The following section groups some necessary definitions, and then we outline
the algorithm.
11.7.1 Preliminaries
There are n items to be sorted. As the algorithm progresses, various subsets of
these n items are stored in ascending order in sorted arrays. In what follows, since
all the arrays of interest are sorted, we simply call them arrays, taking for granted
that the items they contain are ordered. We use lower-case letters to name items,
and capital letters to name arrays.
Let a, b and c be three items, with a < c. We say that b is between a and c if
a < b < c. We also say that a and c straddle b.
Let L be an array of items. If a is an item in L, then the rank of a in L is defined
straightforwardly: the smallest item in L has rank 1, the next smallest has rank
2, and so on. We tacitly suppose that every array L is extended by two invisible
items, namely −∞ with rank 0, and +∞ with rank |L| + 1. This allows us to define
the rank of an item from one array with respect to another (sometimes called the
cross-rank) as follows. Let L and J be two arrays, and let b be an item in J. If a and
c are the two consecutive items of L that straddle b (these necessarily exist because
of the presence of the two invisible items in L), then the rank of b in L is defined to
be the same as the rank of a in L. We denote by J|L the set of ranks in L of all the
items in J.
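For example, the ranks of the items of J in L can be computed as follows; the Python sketch, the bisect-based implementation and the convention that an item of J equal to an item of L takes that item's rank are assumptions made for the example.

import bisect

def cross_ranks(J, L):
    # the rank of b in L is the rank of the largest item of L not exceeding b;
    # the invisible items -infinity (rank 0) and +infinity are implicit
    return [bisect.bisect_right(L, b) for b in J]

L = [10, 20, 30]
J = [1, 7, 22, 23, 26, 35]
print(cross_ranks(J, L))       # [0, 0, 2, 2, 2, 3]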
Again, let J and L be two arrays of items, and now let a and b be two consec-
utive items of L (including the invisible items). We define the interval induced by
these items to be [a, b). An item x belongs to this interval if a < x < b, that is, if
a and b straddle x. We say that L covers J if each interval induced by consecutive
items of L contains at most three items from J. (This time the invisible items in J are
not included.) For example, if L contains the items 10, 20 and 30, while J contains 1,
7, 22, 23, 26, and 35, then L covers J: the interval induced by the items −∞ and 10
from L contains two items from J, the interval induced by 10 and 20 contains none,
and so on. However if K contains the items 11, 12, 14, 17, 22, and 41, L does not
cover K because the interval induced by the items 10 and 20 from L contains more
than three items from K.
We use the symbol & to denote the operation of merging two sorted arrays.
Suppose L covers J and M covers K. Contrary to what one might hope, it is not
necessarily true that L&M covers J&K; see Problem 11.19. Finally, for any array L,
r(L) denotes the array obtained by taking every fourth item in L. If L contains less
than four items, r(L) is empty.
The items from J that lie in the first interval induced by L can be merged with those
from K that also lie in the first interval in a time in 0 (1) because there are at most
six of them altogether. The same is true of the items from J and K that lie in the
second interval induced by L, and so on. If we have enough processors, all these
merges of at most six items can be performed in parallel, so they can all be finished
in a time in 0(1). To obtain J&K, all that remains is to concatenate the results of
merging the intervals separately. With enough processors, this too can be done in
constant time. Thus the whole merge can be completed in constant time.
We call this operation merging with help. In the example above, arrays J and K
are said to be merged with help from L.
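A sequential Python sketch of merging with help follows; on a p-ram each interval would be handled by its own processor, and the interval membership test a ≤ x < b as well as the example arrays are assumptions made for the illustration.

def merge_with_help(J, K, L):
    boundaries = [float('-inf')] + list(L) + [float('inf')]
    result = []
    for a, b in zip(boundaries, boundaries[1:]):     # interval [a, b) induced by L
        piece = [x for x in J if a <= x < b] + [x for x in K if a <= x < b]
        result.extend(sorted(piece))                 # at most six items: constant time
    return result

L = [10, 20, 30]                                     # L covers both J and K below
J = [1, 7, 22, 23, 26, 35]
K = [2, 8, 9, 24, 31, 32]
print(merge_with_help(J, K, L))                      # the ordinary merge of J and K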
1. Compute the array Bv(t) ← r(Av(t)), and send it up the tree to v's parent.
2. Read the two arrays Bx(t) and By(t) that v's children have just sent up the
tree.
3. Compute Av(t + 1) ← Bx(t)&By(t). This merge operation is performed in
constant time as outlined above with help from the array Av(t), which, as we shall
see, covers both Bx(t) and By(t).
There are differences in the three phases according to whether node v is active or
complete. However for our present purposes it is unnecessary to give the details;
it is sufficient to note that
• at stage 0, nodes at level 1 read the values sent by their children, which are
leaves, merge these two values, and become complete;
• three stages after a node becomes complete, its parent in turn becomes com-
plete.
Since there are lg n levels of internal nodes in the tree, we conclude that the
algorithm has 3 lg n − 2 stages.
It remains to prove that the merges required by phase 3 of each stage can be
performed in a time in O(1). Combined with the above result, this will show that
the complete algorithm runs in a time in O(log n).
storage location simultaneously, but that does not permit simultaneous writing to
the same location. It seems intuitively likely that such a machine should be more
powerful than an EREW p-ram, and less powerful than a CRCW p-ram. We now give
two simple examples to confirm this intuition.
To show that a CREW p-ram can outperform an EREW p-ram, consider the fol-
lowing problem. A binary tree contains n nodes, each node i except the root being
linked to its parent by a pointer p[i]. At the root, the pointer p has the special
value nil. Each node i also has a second pointer r[i]. We want to set the value
of r at every node so that it points to the root of the tree. Consider the following
parallel algorithm for doing this.
function find-max(A[1..n])
    array M[1..n] of Boolean
    for 1 ≤ i ≤ n in parallel do M[i] ← true
    for all ordered pairs (i, j) in parallel do
        if A[i] < A[j] then M[i] ← false
    for 1 ≤ i ≤ n in parallel do
        if M[i] then max ← A[i]
    return max
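A sequential simulation of find-max, for illustration only: on a CRCW p-ram the doubly nested loop is a single parallel step, and the many processors that may write false into the same cell of M all write the same value, so the concurrent writes do no harm. The 0-based Python indexing is an adaptation made for the example.

def find_max(A):
    n = len(A)
    M = [True] * n
    for i in range(n):                # "for all ordered pairs (i, j) in parallel do"
        for j in range(n):
            if A[i] < A[j]:
                M[i] = False
    for i in range(n):
        if M[i]:                      # every surviving index holds the maximum value
            return A[i]

print(find_max([3, 9, 4, 1, 7]))      # 9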
suitable integer is found, the processor concerned can send a message to the keeper
of the collection with the new information, until eventually enough suitable inte-
gers are found. If finding a suitable integer is a relatively rare event, electronic mail
is quite fast enough to provide the message path.
Using this kind of technique, Lenstra and Manasse designed an experiment that
involved recruiting volunteers with access to the Internet, supplying them with the
necessary programs, and collecting results as they were acquired. The experiment
began in the summer of 1987 using the elliptic curve method of factorization, not
described in this book. In 1988 they changed to a variant of the quadratic sieve
algorithm outlined in Section 10.7.4. Their programs were designed to run on a
workstation during periods when it would otherwise have been idle-overnight or
at weekends, for example-so the computing power used was essentially free. They
estimate that they had access to the equivalent of some 1000 million instructions
per second of sustained computing power, allowing them to factorize 100-digit
integers in about a week. They expected the required computing time to roughly
double for each extra three digits added to the size of the number to be factorized.
See Section 10.7.4 for a recent example of an even more impressive success on a
number with 129 digits.
In the case of the travelling salesperson problem, all the existing efficient com-
puter programs are based on a scheme that involves finding suitable cutting planes.
(It does not matter if you don't know what these are.) Again, several processors can
be used to search for cutting planes independently. In 1993 a team of four people
solved a 4461-city problem after computing for 28 nights in parallel on a network
of 75 machines. They estimated that the computation involved would have taken
nearly two years had it been executed on a single workstation.
11.10 Problems
Problem 11.1. Prove that algorithm flatten is correct.
Problem 11.2. Show that the minimum of n elements stored in an array can be
found in a time in O(log n) using O(n/log n) processors.
Problem 11.3. With the definitions of Section 11.2.2, prove that when algorithm
paroper terminates, the value of d for the first element of L is X1,n, the value of d
for the second element of L is X2,n, and so on, the value of d for the last element
being Xn,n.
Problem 11.4. Write an algorithm similar to paroper of Section 11.2.2 except that
when it terminates the value of d for the first element of L is X1,1, the value of d
for the second element of L is X1,2, and so on, the value of d for the last element
being X1,n.
Problem 11.5. Write an algorithm similar to paroper of Section 11.2.2 except that
it takes two parameters: a list and a value. Assume that the operator ∘ takes three
parameters, not two. Two of these are pointers to list items and the third is a value k.
When both list items have values less than or equal to k, ∘ returns a pointer to the
item with the larger value; when only one item has a value less than or equal to k,
∘ returns a pointer to this item; when neither item has a value less than or equal
to k, ∘ returns nil. Your algorithm should return a pointer to an item in the list
whose value is k if there is one; otherwise it should return a pointer to the item
with the largest value not exceeding k. This is a kind of "binary search" on a linked
list, which cannot be done sequentially.
Problem 11.6. Show that if we have an efficient parallel algorithm (using a poly-
nomial number of processors and taking polylogarithmic time) for some problem,
then we can find an efficient sequential algorithm (taking polynomial time) for the
same problem.
Problem 11.7. Show that if p > q then p/q ≤ ⌈p/q⌉ < 2p/q.
Problem 11.8. Show that algorithm parpaths can be executed using O(n³/log n)
processors taking a time in O(log² n).
Problem 11.9. Draw figures to illustrate the progress of algorithm concomps on
the graph of Figure 11.19.
Problem 11.10. Give an example of an expression with five operands that requires
four iterations of the naive algorithm of Section 11.5 to evaluate.
Problem 11.12. Show that two cut operations (see Section 11.5) involving leaves
C and C' do not interfere with one another if C and C' are nonconsecutive leaves of
the expression tree, and either they are both left children or both right children.
as an expression tree with the leaves numbered from left to right, and illustrate the
operation of the parallel evaluation algorithm of Section 11.5 on this tree.
Problem 11.14. State and prove a zero-one principle for merging networks.
Problem 11.15. Show that the size and depth of the merging networks Fn de-
scribed in Section 11.6.2 are 1 + n lg n and 1 + lg n respectively.
Problem 11.18. Consider the networks defined as follows for n ≥ 2. The network
contains n(n − 1)/2 comparators arranged like bricks in a wall as illustrated in Fig-
ure 11.20: there are ⌈n/2⌉ comparators between inputs 1 and 2, ⌊n/2⌋ comparators
between or behind these between inputs 2 and 3, ⌈n/2⌉ more directly under the
first group between inputs 3 and 4, and so on. Prove that these networks are valid
sorting networks. In terms of n, what is their depth?
[Figure 11.20: the networks for n = 5 and n = 6]
Problem 11.19. Let J, K, L and M be four sorted arrays such that L covers J and
M covers K, where covering is defined in Section 11.7.1. Give an example showing
that L&M does not necessarily cover J&K.
Problem 11.20. Point out where simultaneous read operations may occur in Cole's
parallel merge sort.
Problem 11.21. Suppose we want to write an algorithm like pardist of Section 11.2.2,
using pointer doubling, but on a list of unknown length. Instead of repeating the
doubling step log n times, we might try to use some construct such as the following.
repeat
the doubling step
until s[i] = s[s[i]] for every i in L
This poses a problem for the overall control of our parallel machine: how can every
processor be made aware that the loop has ended? Show how this can be done for
(a) an EREW p-ram, (b) a CREW p-ram, and (c) a CRCW p-ram. How much time is
required to test loop termination in each case?
Computational Complexity
contains an output, which we call the verdict. A trip through the tree consists of
starting from the root and asking the question that is found there. If the answer
is "yes", the trip continues recursively in the left subtree; otherwise it continues
recursively in the right subtree. The trip ends when it reaches a leaf; the verdict
found there is the outcome of the trip.
Consider again the game of 20 questions, but assume for simplicity that the
mystery number is known to be between 1 and 6. You will obviously not need
all 20 questions. How many do you really need? Figure 12.1 gives a decision tree
for this game. If the mystery number is n = 5, for example, your first question is
"Is n < 3?" and you continue in the right subtree because the answer is "no" (thus
you know that n E {4,5,6}). There, you find the question "Is n < 5?" and you
continue to the left since the answer is "yes" (so you know that n e {4, 5 }). Finally,
you ask the question "Is n • 4?" and reach the correct right-hand verdict "n = 5"
from the answer "no".
This is relevant because to any deterministic algorithm to play the game there
corresponds a decision tree, provided there is a limit to the number of questions
the algorithm may ask. Conversely, any such decision tree can be thought of as
an algorithm. Assume for simplicity that decision trees are pruned in the sense
that all the leaves are accessible from the root by making some consistent sequence
of decisions. Though wasteful, it is allowable to ask a question whose answer is
determined by the sequence of questions and answers leading to that question.
For example, one may ask whether A < C in a node reached after learning that
A < B and B < C: we shall see in Figure 12.5 that this behaviour may occur in natural
algorithms. Recall that the height of the tree is the distance from the root to the most
distant leaf. Since one question is asked for each internal node on that path, and
since this leaf can be reached when processing at least one input, the height of the
tree corresponds to the number of questions asked in the worst case. Moreover, the
tree must have at least one leaf for each possible verdict. (Problem 12.5 illustrates
the fact that there may be more leaves than distinct verdicts in general.)
Come back now to the question of playing the original game of twenty ques-
tions, but with only 19 questions. Any solution would give rise to a decision tree
of height at most 19 that must have at least one million leaves. This is impossible
because the decision tree is binary, any binary tree of height k has at most 2ᵏ leaves
(see Problem 12.1), and 2¹⁹ is less than one million. We conclude immediately that
the game of twenty questions cannot be solved with 19 questions in the worst case.
Decision trees can also be used to analyse the complexity of a problem on
the average rather than in the worst case. Let T be a binary tree. Define the
average height of T as the sum of the depths of all the leaves divided by the num-
ber of leaves. For example, the decision tree of Figure 12.1 has average height
(3 + 3 + 2 + 3 + 3 + 2)/6 = 8/3. Just as the height of a pruned decision tree gives
the worst-case performance of the corresponding algorithm, its average height
gives the average-case performance, provided each verdict is equally likely and
each possible verdict appears exactly once as a leaf of the tree. Continuing our
simplified example, if each integer between 1 and 6 is equally likely to be the mys-
tery number, the algorithm corresponding to Figure 12.1 asks 8/3 questions on
the average. Can one do better? Can the average-case performance be improved,
perhaps if one is willing to ask more questions than required on a few instances?
The following theorem tells us that if the mystery number is randomly chosen
between 1 and 6 according to the uniform distribution, any algorithm to play the
game must ask at least lg 6 ≈ 2.585 questions on the average, no matter how many
questions it may ask in some cases. But clearly the average number of questions
asked by any deterministic algorithm must be an integer divided by 6 since the
number of questions is an integer for each of the 6 equiprobable verdicts. The so-
lution given in Figure 12.1 asks 16/6 questions on the average. Any improvement
would ask no more than 15/6 = 2.5 questions. This is ruled out since lg 6 > 15/6.
We conclude that our decision tree provides an optimal algorithm when the mys-
tery number is between 1 and 6, both in the worst case and on the average. Similarly,
20 questions are necessary in the worst case for the original game with one million
verdicts, and no algorithm can ask less than lg 10⁶ ≈ 19.93 questions on the average
if all verdicts are equally likely.
Theorem 12.2.1 Any binary tree with k leaves has an average height of at least lg k.
Proof Let T be a binary tree with k leaves. Define H(T) as the sum of the depths of the
leaves. For example, H(T) = 16 for the tree in Figure 12.1. By definition, the average
height of T is H(T)/k, and thus our goal is to prove that H(T) ≥ k lg k. The root of
T can have 0, 1 or 2 children; see Figure 12.2. In the first case, the root is the only
leaf in the tree: k = 1 and H(T) = 0. In the second case, the single child is the root of
a subtree A, which also has k leaves. The distance from any leaf to the root of T is
one more than the distance from the same leaf to the root of A, so H(T) = H(A) +k.
In the third case, T is composed of a root and of two subtrees A and B with i and
k − i leaves, respectively, for some 1 ≤ i < k. By a similar argument we obtain this
time H(T)= H(A)+H(B)+k.
Figure 12.2. Minimizing the average height of a binary tree with k leaves
For k ≥ 1, define h(k) as the smallest value possible for H(X) when X is a binary
tree with k leaves. In particular, H(T) ≥ h(k). Clearly, h(1) = 0. If we define
h(0) = 0, the preceding discussion and the principle of optimality used in dynamic
programming lead to

    h(k) = k + min{h(i) + h(k − i) : 0 ≤ i ≤ k}

for every k > 1. At first sight this recurrence is not well founded since it defines
h(k) in terms of itself when we take i = 0 or i = k in the minimum. However it is
impossible that h(k) = h(k) + k, so those terms cannot yield the minimum. We can
thus reformulate the recurrence that defines h(k).

    h(k) = 0                                               if k ≤ 1
    h(k) = k + min{h(i) + h(k − i) : 1 ≤ i ≤ k − 1}        otherwise
We now prove by mathematical induction that h(k) ≥ k lg k for all k ≥ 1.
The base k = 1 is immediate. Let k > 1. Assume the induction hypothesis that
h(j) ≥ j lg j for every positive integer j smaller than k. By definition,

    h(k) = k + min{h(i) + h(k − i) : 1 ≤ i ≤ k − 1}
         ≥ k + min{i lg i + (k − i) lg(k − i) : 1 ≤ i ≤ k − 1}.
To have available the tools of real analysis for function minimization, let
g: [1, k − 1] → ℝ be defined as g(x) = x lg x + (k − x) lg(k − x). Calculating
the derivative gives g′(x) = lg x − lg(k − x), which is zero if and only if x = k − x.
Since the second derivative is positive, g(x) attains its minimum at x = k/2.
This minimum is g(k/2) = k lg k − k. But the minimum value attained by
g(i) when i is an integer between 1 and k − 1 cannot be less than the minimum
value of g(x) when x is allowed to be a real number in the same range. There-
fore,

    min{g(i) : i ∈ [1..k − 1]} ≥ min{g(x) : x ∈ [1, k − 1]} ≥ k lg k − k,

and thus h(k) ≥ k + k lg k − k = k lg k, which completes the proof by induction.
Every valid decision tree for sorting n items gives rise to an ad hoc sorting
algorithm for the same number of items. For example, to the decision tree of
Figure 12.3 there corresponds the following algorithm.
at least lg n! by Problem 12.1 and Theorem 12.2.1. This translates directly into the
complexity of sorting: any algorithm that sorts n items by comparisons must make
at least ⌈lg n!⌉ comparisons in the worst case and lg n! on the average, provided
all verdicts are equally likely. Since each comparison must take at least some
constant amount of time and since lg n! ∈ Θ(n log n) by Problem 3.24, it follows
that it takes a time in Ω(n log n) to sort n items both in the worst case and on the
average, no matter which comparison-based sorting algorithm is used. Thus we
see that quicksort is optimal on the average, even though its worst-case performance
is pitiful; see Section 7.4.2.
We have proved that any deterministic algorithm for sorting by comparison
must make at least ⌈lg n!⌉ comparisons in the worst case when sorting n items.
Beware of making complexity arguments such as these say things they do not say.
In particular, the decision tree argument does not imply that it is always possible
to sort n items with as few as ⌈lg n!⌉ comparisons in the worst case. In fact, it has
been proved that 30 comparisons are necessary and sufficient in the worst case for
sorting 12 items, yet ⌈lg 12!⌉ = 29.
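The bound itself is easy to evaluate; the short Python computation below is only an illustration of the numbers quoted in this section.

import math

def lower_bound(n):                    # ceil(lg n!) comparisons
    return math.ceil(math.log2(math.factorial(n)))

print(lower_bound(12))                 # 29, although 30 comparisons are in fact needed
print(lower_bound(16))                 # 45, the bound quoted earlier for 16 elements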
In the worst case, the insertion sorting algorithm makes 66 comparisons when
sorting 12 items, whereas heapsort makes 59, of which the first 18 are made during
construction of the heap. Hence they are both far from optimal. However, it can
be shown that heapsort never makes much more than twice the optimal number of
comparisons, whereas insertion sorting becomes arbitrarily bad when the number
of items to be sorted becomes large. Even better than heapsort from this standpoint
is mergesort, which makes a number of comparisons that is essentially optimal;
see Problem 12.7. Do not believe that optimizing the number of comparisons is
each of the 12 coins, there are also two possible verdicts corresponding to that coin
being either lighter or heavier than the others. The total number of verdicts is thus
1 + 12 × 2 = 25. This is disquieting at first since we have seen that a decision tree of
height 3 cannot accommodate more than 2³ = 8 verdicts. Fortunately, our decision
tree is ternary rather than binary for this problem because each measurement has
three possible outcomes: the scale may tilt to the left, it may stay balanced, or it may
tilt to the right. Just as binary trees of height k have at most 2ᵏ leaves, ternary trees
of height k have at most 3ᵏ leaves. All is well since a ternary decision tree of height
3 can accommodate up to 3³ = 27 verdicts and we need only 25. Nevertheless,
this does not prove that the problem has a solution: recall that 30 comparisons
are required in the worst case to sort 12 items even though ⌈lg 12!⌉ = 29. See also
Problem 12.12.
Decision trees are useful for finding the solution because they help us avoid
false starts. The key insight is that there must remain at most 3² = 9 possible verdicts
for each of the three potential outcomes of the first measurement. Otherwise, there
would be no hope of solving the problem in the worst case with only two additional
measurements. Note also that it is pointless to use the scale with a different number
of coins on each pan: no useful information can be extracted if the pan tilts to the
side containing more coins. Therefore, we first use the scale to compare two sets of
k coins, for some 1 ≤ k ≤ 6. If the scale stays balanced, the odd coin can be any of
the 12 - 2k remaining coins, which leaves 25 - 4k possible verdicts, counting the
possibility that all the coins are the same. If the scale tilts to one side, on the other
hand, it could be because one of the coins on that side is heavier or because one
of the coins on the other side is lighter, leaving 2k possible verdicts. As we have
seen, there is no hope of solving the problem if there remain more than 9 possible
verdicts after the first measurement. Thus, we need simultaneously 25 − 4k ≤ 9
and 2k ≤ 9. The only integer solution to these constraints is k = 4. This reasoning
still does not prove that a solution can be found if we start by comparing two sets
of four coins, but it does tell us that there is no point trying anything else.
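The counting can be replayed mechanically; the little Python enumeration below is only an illustration of the argument, not part of the original solution.

for k in range(1, 7):                  # weigh k coins against k coins
    balanced = 25 - 4 * k              # verdicts left if the scale stays balanced
    tilted = 2 * k                     # verdicts left if the scale tilts to one side
    verdict = "possible" if balanced <= 9 and tilted <= 9 else "ruled out"
    print(k, balanced, tilted, verdict)
# only k = 4 survives: the first weighing must compare two groups of four coins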
There are two cases to consider after the first measurement: either one set
weighs more than the other or the two sets weigh the same. In either case, the
second measurement must be such that at most three verdicts remain possible
after its outcome becomes known. This is because the final measurement cannot
distinguish between more than three possibilities. If one set is found heavier than
the other in the first measurement, reasoning similar to the above shows that the
second measurement must involve either 5 or 6 of the 8 coins used in the first
measurement. Knowing this, it is easy to fill in the details and work out exactly
the last two measurements.
The situation is more interesting if both sets weigh the same in the first mea-
surement: we are left with what looks like the original problem, except that we
have four coins rather than twelve and we are allowed only two measurements
rather than three. Using information-theoretic arguments yet again, it does not
take long to realize that this scaled-down version of the problem has no solution
(Problem 12.13), and it seems at first that this nails the coffin on the original prob-
lem as well. What saves us is that we are allowed to use some of the eight coins
that participated in the first measurement even though we know the coin we are
looking for is not among them. Since information-theoretic arguments tell us we
cannot succeed if we don't use at least one of those initial coins, we know we have
to try this if we are to succeed at all. At this point, not much challenge is left and
we invite you to work Problem 12.11 for the details.
Is it possible to do better?
If we try to use an information-theoretic argument, we find that any decision
tree for this problem must accommodate n possible verdicts. Since the tree is binary,
it must have height at least ⌈lg n⌉. Therefore, any comparison-based algorithm to
find the maximum must perform at least ⌈lg n⌉ comparisons in the worst case. This
is a far cry from n - 1, the best we know how to achieve. Can adversary arguments
provide a tighter lower bound?
Consider an arbitrary comparison-based algorithm for this problem. Let it
run on an array T[1..n], as yet unspecified. The daemon answers any question
concerning a comparison between items as if T[i] were equal to i for all i. Each time
the algorithm asks "Is T[i] < T[j]?" for i ≠ j, we say that the smaller of i and j has
"lost a comparison". Assume the algorithm performs less than n − 1 comparisons
before it outputs the answer k. Let j be an integer different from k, 1 ≤ j ≤ n, that
has not lost any comparisons. Such a j exists since by assumption at most n − 2
comparisons have been performed and each comparison makes at most one new
loser. At this point, the daemon can claim that the algorithm is wrong. For this, it
pretends that T[i] = i for each i ≠ j but that T[j] = n + 1. Indeed, T[k] = k is not
maximum in this array, yet the answers obtained by the algorithm are consistent
with it. This completes the proof that n - 1 comparisons are necessary to solve the
maximum problem with any comparison-based algorithm.
This result does not hold if arithmetic is allowed on the items in addition
to comparisons. To do better, compute a = n^T[1] + n^T[2] + · · · + n^T[ℓ]
and b = n^T[ℓ+1] + n^T[ℓ+2] + · · · + n^T[n], where
ℓ = ⌈n/2⌉. If there is an item x in T[1..ℓ] that is larger than any item in T[ℓ+1..n],
then a > b because

    n^T[1] + · · · + n^T[ℓ] ≥ n^x > n^T[ℓ+1] + · · · + n^T[n].
Similarly, a < b if the maximum of T is in the second half. If the maximum appears
in both halves, any relation between a and b is possible. Thus, a single comparison
between a and b suffices to determine whether the maximum is in the first or the
second half of T. Proceeding as with binary search allows us to find the answer
after at most ⌈lg n⌉ comparisons. Of course, it would be silly to use this approach
in practice since it trades inexpensive comparisons for a larger number of time-
consuming arithmetic operations.
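A Python sketch of this trick, assuming the items of T are distinct non-negative integers so that the powers of n behave as described; the function name and the iterative binary search are conventions chosen for the example.

def find_max_with_arithmetic(T):
    n = len(T)
    lo, hi = 0, n - 1                  # the maximum lies somewhere in T[lo..hi]
    comparisons = 0
    while lo < hi:
        mid = (lo + hi) // 2
        a = sum(n ** T[i] for i in range(lo, mid + 1))
        b = sum(n ** T[i] for i in range(mid + 1, hi + 1))
        comparisons += 1               # the only comparison between computed values
        if a > b:
            hi = mid                   # the maximum is in the first half
        else:
            lo = mid + 1               # the maximum is in the second half
    return T[lo], comparisons

print(find_max_with_arithmetic([3, 9, 4, 1, 7, 0, 5, 2]))   # (9, 3): ceil(lg 8) = 3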
set was split between V and W. In fact, it can be shown by a more sophisticated
adversary argument that in the worst case each of the n(n -1)/2 potential edges
must be queried by any correct algorithm; see Problem 12.16.
• If both T[i] and T[j] are uninitialized, the daemon sets T[i] to i and T[j]
to 3n + j. Thus, T[i] becomes low and T[j] becomes high.
• If only one of T[i] and T[j] is initialized, there are five subcases.
- If a single item of T remains uninitialized, the daemon sets its value to 2n,
which is neither low nor high. This item becomes the provisional median:
all the low items are smaller, all the high items are larger, and there are as
many low as there are high items. However, the median may prove to be
elsewhere in the end.
- Otherwise, if T[i] is low, the daemon sets T[j] to the high value 3n + j.
To restore the balance between low and high items, it selects an arbitrary
uninitialized T[k] and sets it to the low value k.
- If T[j] is low, the daemon acts as in the previous subcase with i and j
interchanged.
- If T[i] is high, the daemon sets T[j] to the low value j. To restore the
balance between low and high items, it selects an arbitrary uninitialized
T[k] and sets it to the high value 3n + k.
- If T[j] is high, the daemon acts as in the previous subcase with i and j
interchanged.
• Otherwise, both T[i] and T[j] are initialized. If they are both low or if one is
low and the other is the provisional median, we say of the smaller that "it has
lost a comparison"; if they are both high or if one is high and the other is
the provisional median, we say of the larger that "it has lost a comparison".
Neither loses a comparison if one is low and the other is high.
Finally, the daemon answers the algorithm's request for a comparison in accordance
with the current values of T[i] and T[j].
Now assume the algorithm makes less than 3(n − 1)/2 comparisons before it
outputs its guess of the median. Because items of T become initialized in pairs, ex-
cept for the provisional median, and because we assumed that each item is involved
in at least one comparison, exactly (n − 1)/2 comparisons were of the first or second
type above. Consequently, less than 3(n − 1)/2 − (n − 1)/2 = n − 1 comparisons in-
volved two items already initialized. Since items can lose a comparison only in this
case, at least one item in addition to the provisional median never lost a comparison.
Call it T[i]. Assume without loss of generality that it is low. By the definition of
losing a comparison, the daemon never had to admit to the algorithm that this item
is smaller than any other low item or than the provisional median. Therefore, no
contradiction appears if the daemon changes its mind and increases T [i], provided
it keeps it smaller than any high item. This gives the daemon a choice of keeping
T[i] = i so that the provisional median is indeed the final median, or of resetting
T[i] = 2n + 1 so that T[i] becomes the final median. Whichever answer was returned
by the algorithm, the daemon can thus exhibit an array T with a different median
even though it is consistent with all the answers seen by the algorithm. This proves
that the algorithm is incorrect unless it makes at least 3(n − 1)/2 comparisons in
the worst case.
    x × y = ((x + y)² − (x − y)²)/4
From these formulas we see that one squaring cannot be harder than one multipli-
cation, which is no big surprise, whereas one multiplication cannot be much harder
than two squarings.
Formally, we prove the computational equivalence of these problems by ex-
hibiting two algorithms. Each solves one problem by calling on an arbitrary al-
gorithm for the other. These algorithms could be used to teach one operation to
someone who only knows how to perform the other.
function square(x)
    return mult(x, x)

function mult(x, y)
    return (square(x + y) - square(x - y))/4
Let M(n) and S(n) be respectively the time needed to multiply and to square
integers of size at most n. The first algorithm makes it plain that S(n) ≤ M(n) + c
for a small constant c that takes account of the overhead in square over and above
the time it spends inside mult. Therefore, S(n) ∈ O(M(n)).
The second algorithm must be analysed with slightly more care because x + y
can be one digit longer than x and y. Assume without loss of generality that x ≥ y
to avoid worrying about the possibility that x − y might be negative. Thus

    M(n) ≤ S(n + 1) + S(n) + f(n),                                   (12.1)
where f (n) is the time needed to perform the addition, subtractions and division
by 4 required by the algorithm, in addition to the overhead in mult. We know
already that additions and subtractions can be performed in linear time. Divi-
sion by 4 is trivial if the numbers are represented in binary and it can be done in
linear time even if another base is used; see Problem 12.21. Thus f(n) ∈ O(n) is
negligible since both M(n) and S(n) are in Ω(n). However, Equation 12.1 is not
sufficient to conclude that M(n) ∈ O(S(n)). Consider what would happen if the
squaring algorithm were so inept as to take a time in Θ(n!) to square an n-figure
number. In this case, multiplication of two numbers of that size could take a time in
Θ((n + 1)!), which is about n times bigger than the time spent to square one such
number. Nevertheless, M(n) ∈ O(S(n)) does follow from a reasonable assump-
tion; see Theorem 12.4.4 below. Thus we conclude that the problems of integer
multiplication and squaring have the same complexity up to a multiplicative fac-
tor.
What can we say about other elementary arithmetic operations such as integer
division and taking a square root? Everyday experience leads us to believe that the
second of these problems, and probably also the first, is genuinely more difficult
than multiplication. This turns out not to be true. Under reasonable assumptions,
it takes the same time to multiply two n-figure numbers, to compute the quotient
when a 2n-figure number is divided by an n-figure number, and to compute the
integer part of the square root of an n-figure number. This is brought about by
exotic formulas such as
    x² = 1/(1/x − 1/(x + 1)) − x
and classic techniques such as Newton's method to find zeros of functions. Work
Problems 12.27 and 12.28 for more detail.
In addition to its usefulness for complexity purposes, the fact that one multi-
plication reduces to two squarings is interesting from an algorithmic point of view:
it provides an instructive example of preconditioning; see Section 9.2. Pretend for
the sake of argument that you are a Roman and that you know no other notation
for numbers. If you must frequently multiply numbers between 1 and 1000, you
may find it worthwhile to compile a multiplication table once and for all. However,
this table will span more than half a million entries even if you only compile the
product of x and y when x ≥ y. A better solution is to compile a table of values
for x²/4 for all x between 1 and 2000. (You will have to invent a pseudo-Roman
symbol to denote one-quarter.) Once you have this table, you can perform any
required multiplication with one addition and two subtractions, together with two
table look-ups.
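In modern notation the trick rests on the identity xy = ⌊(x + y)²/4⌋ − ⌊(x − y)²/4⌋, which is exact because x + y and x − y have the same parity. A minimal Python sketch of the table-based multiplier (the names and the table bound of 2000 follow the discussion above):

    # One table of floor(x^2/4) for 0 <= x <= 2000 replaces a table of more than
    # half a million products of numbers between 1 and 1000.
    QUARTER_SQUARES = [x * x // 4 for x in range(2001)]

    def table_mult(x, y):
        # Valid for 1 <= x, y <= 1000: one addition, two subtractions and two
        # table look-ups; abs makes the order of x and y irrelevant.
        return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

    assert all(table_mult(x, y) == x * y for x, y in [(997, 31), (1000, 1000), (1, 1)])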
Definition 12.4.1 Let A and B be two problems. We say that A is linearly reducible to B, denoted A ≤ℓ B, if the existence of an algorithm for B that works in a time in O(t(n)) for an arbitrary function t(n) implies that there exists an algorithm for A that also works in a time in O(t(n)). When A ≤ℓ B and B ≤ℓ A both hold, we say that A and B are linearly equivalent and write A ≡ℓ B.

Intuitively, A ≤ℓ B means that someone who knows how to handle problem B can easily be taught how to handle problem A as well. It follows from Problem 3.10 that linear reductions are transitive: if A ≤ℓ B and B ≤ℓ C, then A ≤ℓ C. Linear equivalences are also transitive, in addition to being reflexive and symmetric.

Formally, we prove A ≤ℓ B by exhibiting an algorithm that solves an instance of A by transforming it into one or more instances of problem B. The conclusion that A ≤ℓ B follows immediately if the required instances for problem B are the same size as the original instance of problem A, if a constant number of them are required, and if the amount of work in addition to solving those instances is not larger (up to a multiplicative constant) than the time required to solve problem B. For example, we saw that one integer squaring reduces to a single multiplication of the same size plus a constant amount of work. Thus integer squaring linearly reduces to integer multiplication.
It is easy to show that strongly quadratic functions are at least quadratic. Most of the theorems that follow are stated conditionally on a "reasonable" assumption, such as A ≤ℓ B assuming B is smooth. This can be interpreted literally as meaning that A ≤ℓ B follows under the assumption that B is smooth. From a more practical point of view it also means that, for any smooth function t(n), the existence of an algorithm for B that works in a time in O(t(n)) implies that there exists an algorithm for A that also works in a time in O(t(n)). Moreover, all these theorems are constructive: the algorithm for A follows from the algorithm for B and the proof of the corresponding theorem.
We are now in a position to state precisely and demonstrate the linear equivalence between integer squaring and multiplication.

Theorem 12.4.4 Let SQR and MLT be the problems consisting of squaring an integer of size n and of multiplying two integers of size n, respectively. Under the assumption that SQR is smooth, the two problems are linearly equivalent.

Proof We argued previously that both problems must be at least linear. Let M(n) and S(n) respectively be the times needed to multiply and to square operands of size at most n. We saw that there exist a constant c and a function f(n) ∈ O(n) such that S(n) ≤ M(n) + c and M(n) ≤ S(n + 1) + S(n) + f(n). Because M(n) is at least linear, M(n) ≥ c for all sufficiently large n. Therefore, S(n) ≤ 2M(n), also for all sufficiently large n, which implies by definition that S(n) ∈ O(M(n)) and thus SQR ≤ℓ MLT.
For the other direction, assume that S(n) ∈ O(s(n)) for some smooth function s(n). Let a, b₁ and b₂ be appropriate constants such that s(2n) ≤ a s(n), S(n) ≤ b₁ s(n) and s(n) ≤ b₂ S(n) for all sufficiently large n. Because any smooth function is eventually nondecreasing by definition,

    S(n + 1) ≤ b₁ s(n + 1) ≤ b₁ s(2n) ≤ a b₁ s(n) ≤ a b₁ b₂ S(n)

for all sufficiently large n. Because f(n) ∈ O(n) and S(n) ∈ Ω(n), there exists a constant d such that f(n) ≤ d S(n) for all sufficiently large n. Putting it all together, we conclude that

    M(n) ≤ S(n + 1) + S(n) + f(n) ≤ (a b₁ b₂ + 1 + d) S(n)

for all sufficiently large n, and thus M(n) ∈ O(S(n)) and MLT ≤ℓ SQR by definition. ∎
Proof Any algorithm that can multiply two arbitrary square matrices can be used directly for multiplying upper triangular and symmetric matrices. ∎
Proof Suppose there exists an algorithm able to multiply two n × n upper triangular matrices in a time in O(t(n)), where t(n) is a smooth function. Let A and B be two arbitrary n × n matrices to be multiplied. Consider the following product of 3n × 3n upper triangular matrices

    ( O  A  O )   ( O  O  O )   ( O  O  AB )
    ( O  O  O ) × ( O  O  B ) = ( O  O  O  )
    ( O  O  O )   ( O  O  O )   ( O  O  O  )

where "O" denotes the n × n matrix all of whose entries are zero. This product shows us how to obtain the desired result AB by a reduction to one larger multiplication of upper triangular matrices. The time required for this operation is in O(n²) for the preparation of the two upper triangular matrices and the extraction of AB from their product, plus O(t(3n)) for the multiplication of the two upper triangular matrices. By the smoothness assumption, t(3n) ∈ O(t(n)). Because t(n) is at least quadratic, t(n) ∈ Ω(n²) and thus n² ∈ O(t(n)). Consequently, the total time required to multiply arbitrary n × n matrices is in O(n² + t(3n)), which is the same as O(t(n)). ∎
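The construction is easy to check numerically. The following sketch uses NumPy purely to carry out the arithmetic; the function names are ours, and any routine for multiplying upper triangular matrices could be plugged in as tri_mult.

    import numpy as np

    def mult_via_triangular(A, B, tri_mult):
        # Multiply two arbitrary n x n matrices with one multiplication of the
        # 3n x 3n upper triangular matrices displayed above; AB appears in the
        # upper right n x n corner of the product.
        n = A.shape[0]
        Z = np.zeros((n, n), dtype=A.dtype)
        U = np.block([[Z, A, Z], [Z, Z, Z], [Z, Z, Z]])
        V = np.block([[Z, Z, Z], [Z, Z, B], [Z, Z, Z]])
        return tri_mult(U, V)[:n, 2 * n:]

    rng = np.random.default_rng(0)
    A, B = rng.integers(0, 10, (4, 4)), rng.integers(0, 10, (4, 4))
    assert np.array_equal(mult_via_triangular(A, B, lambda U, V: U @ V), A @ B)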
Proof This is similar to the proof of Theorem 12.4.6: we reduce the multiplication of two arbitrary n × n matrices A and B to a multiplication of 2n × 2n symmetric matrices.

    ( O   A )   ( O  Bᵗ )   ( AB  O    )
    ( Aᵗ  O ) × ( B  O  ) = ( O   AᵗBᵗ )

We leave the details to the reader. Note that the product of two symmetric matrices is not necessarily symmetric. ∎
Proof Suppose there exists an algorithm able to invert a nonsingular n × n upper triangular matrix in a time in O(t(n)), where t(n) is a smooth function. Let A and B be two arbitrary n × n matrices to be multiplied. Consider the following product of 3n × 3n upper triangular matrices

    ( I  A  O )   ( I  −A  AB )   ( I  O  O )
    ( O  I  B ) × ( O   I  −B ) = ( O  I  O )
    ( O  O  I )   ( O   O   I )   ( O  O  I )

where I is the n × n identity matrix. This product shows us how to obtain the desired product AB by inverting the first of the upper triangular matrices above: the result appears in the upper right corner of the inverse.

    ( I  A  O )⁻¹   ( I  −A  AB )
    ( O  I  B )   = ( O   I  −B )
    ( O  O  I )     ( O   O   I )

As in the proof of Theorem 12.4.6, this entire operation takes a time in O(n² + t(3n)), which is the same as O(t(n)). ∎
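Again a quick numerical check, with NumPy standing in for an arbitrary upper triangular inversion routine; the layout is the one displayed above and the names are ours.

    import numpy as np

    def mult_via_triangular_inversion(A, B, tri_inv):
        # One inversion of the 3n x 3n upper triangular matrix built from A and B
        # yields AB in the upper right n x n corner of the inverse.
        n = A.shape[0]
        I, Z = np.eye(n), np.zeros((n, n))
        M = np.block([[I, A, Z], [Z, I, B], [Z, Z, I]])
        return tri_inv(M)[:n, 2 * n:]

    rng = np.random.default_rng(1)
    A, B = rng.random((3, 3)), rng.random((3, 3))
    assert np.allclose(mult_via_triangular_inversion(A, B, np.linalg.inv), A @ B)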
It remains to prove that IT ≤ℓ MQ, which is the most interesting of these reductions. For this it is useful to introduce yet another problem: IT2 is the problem of inverting nonsingular upper triangular matrices whose size is a power of 2.
Proof Suppose there exists an algorithm able to invert a nonsingular m × m upper triangular matrix in a time in O(t(m) | m is a power of 2), where t(m) is a smooth function. Let A be a nonsingular n × n upper triangular matrix for arbitrary n. Let m be the smallest power of 2 not smaller than n. Let B be the m × m upper triangular matrix such that Bᵢⱼ = Aᵢⱼ for 1 ≤ i ≤ n and 1 ≤ j ≤ n, Bᵢᵢ = 1 for n < i ≤ m and Bᵢⱼ = 0 otherwise:

    B = ( A  O )
        ( O  I )

where the O's are rectangular zero matrices of the proper size so that B is m × m and I is the (m − n) × (m − n) identity matrix. It is easy to verify that the inverse of A can be read off directly as the n × n submatrix in the upper left corner of B⁻¹. Thus the calculation of A⁻¹ takes a time that is in O(t(m)) for inverting B, plus something in O(n²) to prepare matrix B and read the answer from B⁻¹. Because t(m) is a smooth function and m is even, t(m) = t(2(m/2)) ≤ c t(m/2) for an appropriate constant c, provided n is sufficiently large. Because smooth functions are eventually nondecreasing and m/2 < n, t(m/2) ≤ t(n), again provided n is sufficiently large. It follows that t(m) ≤ c t(n). Thus matrix A can be inverted in a time in O(c t(n) + n²). This is the same as O(t(n)) because t(n) is at least quadratic. ∎
Proof Let A be a nonsingular n × n upper triangular matrix, where n is a power of 2 greater than 1, and decompose it into four n/2 × n/2 submatrices:

    A = ( B  C )
        ( O  D )

Note that B and D are upper triangular whereas C is arbitrary. Similarly let F, G and H be unknown n/2 × n/2 matrices such that

    A⁻¹ = ( F  G )
          ( O  H )

The lower left submatrix is zero because the inverse of a nonsingular upper triangular matrix is upper triangular; see Problem 12.30. The product of A and A⁻¹ must be the identity matrix, which translates into BF = I, DH = I and BG + CH = O. Therefore F = B⁻¹, H = D⁻¹ and G = −B⁻¹CD⁻¹. Since both B and D are nonsingular upper triangular matrices half the size of A, this suggests a divide-and-conquer algorithm to compute A⁻¹ via two recursive calculations of inverses, two matrix multiplications, and some additional bookkeeping operations that take a negligible time g(n) ∈ O(n²).
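A sketch of this divide-and-conquer algorithm in Python, leaning on NumPy only for the bookkeeping; the parameter mat_mult stands for whatever matrix multiplication algorithm is assumed to be available, and the names are ours.

    import numpy as np

    def invert_upper_triangular(A, mat_mult):
        # Invert a nonsingular upper triangular matrix whose size is a power of 2:
        # two recursive inversions of the half-size diagonal blocks and two
        # half-size matrix multiplications, as described above.
        n = A.shape[0]
        if n == 1:
            return np.array([[1.0 / A[0, 0]]])
        h = n // 2
        B, C, D = A[:h, :h], A[:h, h:], A[h:, h:]
        F = invert_upper_triangular(B, mat_mult)   # F = B^-1
        H = invert_upper_triangular(D, mat_mult)   # H = D^-1
        G = -mat_mult(mat_mult(F, C), H)           # G = -B^-1 C D^-1
        return np.block([[F, G], [np.zeros((h, h)), H]])

    A = np.triu(np.random.default_rng(2).random((8, 8))) + np.eye(8)
    assert np.allclose(invert_upper_triangular(A, np.matmul), np.linalg.inv(A))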
Let I(n) be the time spent by this algorithm to compute the inverse of an n × n upper triangular matrix when n is a power of 2. Let M(n) be the time we need to multiply two n × n arbitrary matrices. From the above discussion, I(n) ≤ 2I(n/2) + 2M(n/2) + g(n) when n is a power of 2 larger than 1. By the assumption that MQ is strongly quadratic, M(n) ∈ Θ(t(n)) for some strongly quadratic function t(n). Let a, b and c be constants such that g(n) ≤ an², t(n) ≥ bn² and M(n) ≤ c t(n) for all sufficiently large n. Constant b exists because all strongly quadratic functions are at least quadratic. Since n² ≤ t(n)/b and M(n/2) ≤ c t(n/2) ≤ c t(n), it follows that

    I(n) ≤ 2I(n/2) + d t(n)                                          (12.2)

for all n ≥ n₀ that are powers of 2, for appropriate constants d and n₀. Without loss of generality, we may choose n₀ to be a power of 2.

It remains to prove that I(n) ∈ O(t(n) | n is a power of 2). For this we use constructive induction to determine a constant u such that I(n) ≤ u t(n) for all n ≥ n₀ that are powers of 2. The basis of the induction is established provided we choose u ≥ I(n₀)/t(n₀). For the induction step, consider any n > n₀ that is a power of 2 and assume the partially specified induction hypothesis that I(n/2) ≤ u t(n/2). By Equation 12.2, the induction hypothesis, and the fact that t(n/2) ≤ ¼ t(n),

    I(n) ≤ 2I(n/2) + d t(n) ≤ 2u t(n/2) + d t(n) ≤ (u/2 + d) t(n).

This shows that I(n) ≤ u t(n) provided u/2 + d ≤ u, which is the same as u ≥ 2d. In conclusion, I(n) ≤ u t(n) holds for all n ≥ n₀ that are powers of 2 provided we choose u ≥ max(I(n₀)/t(n₀), 2d). This completes the proof that I(n) ∈ O(t(n) | n is a power of 2), and thus that IT2 ≤ℓ MQ assuming MQ is strongly quadratic.
The reduction used in this proof is different from the reductions seen previ-
ously in the sense that a single inversion of an upper triangular matrix involves a
large number of matrix multiplications if those implied by the recursive calls are
counted. The linearity of the reduction is possible only because most of the implied
multiplications are performed on matrices much smaller than the one we seek to
invert. ∎
Proof This is almost immediate from the two preceding lemmas. The only technical problem is that we need the assumption that IT2 is smooth to apply Lemma 12.4.9, and this is not a consequence of Lemma 12.4.10 even if MQ is strongly quadratic. All is well nevertheless because the proof of Lemma 12.4.10 makes do with the multiplication of matrices of size n/2 × n/2 to invert an upper triangular matrix of size n × n, when n is a power of 2. Equation 12.2 can thus be refined as

    I(n) ≤ 2I(n/2) + d t(n/2)

and from there it follows that there exists a constant v such that I(n) ≤ v t(n/2) provided n is a sufficiently large power of 2. The proof of Lemma 12.4.9 then goes through without needing t(n) to be smooth: it is enough that t(n) be eventually nondecreasing, which it is by virtue of being strongly quadratic. We leave the details to the reader. ∎
This is the minimum cost of going from x to z passing through exactly one node in Y. Notice the analogy between this definition and ordinary matrix multiplication: addition and multiplication are replaced by the minimum operation and addition, respectively.

The preceding notation becomes particularly interesting when the sets X, Y and Z, and also the functions f and g, coincide. In this case f × f, which we shall write f², gives the minimum cost of going from one node of X to another (possibly the same) while passing through exactly one intermediate node (still possibly the same). Similarly, min(f, f²) gives the minimum cost of going from one node of X to another either directly or by passing through exactly one intermediate node. The meaning of fⁱ is similar for any i > 0. It is natural to define f⁰ as the cost of going from one node to another while staying in the same place.

    f⁰(x, y) = 0 if x = y
               ∞ otherwise

The minimum cost of going from one node to another without restrictions on the number of nodes on the path, which we write f*, is therefore

    f*(x, y) = min {fⁱ(x, y) | i ≥ 0}.
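In modern terminology f × g is the (min, +) product of two cost matrices and f* is its closure. A small Python sketch; the representation as lists of lists and the function names are ours, and costs are assumed nonnegative.

    import math

    def min_plus_product(f, g):
        # Entry (x, z) is the cheapest cost of going from x to z through exactly
        # one intermediate node y: the ordinary sum of products with (+, *)
        # replaced by (min, +).
        n = len(f)
        return [[min(f[x][y] + g[y][z] for y in range(n)) for z in range(n)]
                for x in range(n)]

    def min_plus_closure(f):
        # f*: no restriction on the number of intermediate nodes.  Start from
        # min(f^0, f) and square repeatedly until nothing changes; each squaring
        # doubles the number of edges allowed on a path.
        n = len(f)
        g = [[min(0 if x == z else math.inf, f[x][z]) for z in range(n)]
             for x in range(n)]
        while True:
            h = min_plus_product(g, g)
            if h == g:
                return g
            g = h

    INF = math.inf
    f = [[INF, 3, INF],
         [INF, INF, 1],
         [2, INF, INF]]
    assert min_plus_closure(f)[0][2] == 4   # going 0 -> 1 -> 2 costs 3 + 1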
When the range of the cost functions is restricted to {0, ∞}, calculating f* comes down to determining for each pair of nodes whether or not there is a path joining them. We saw in Problem 8.18 that Warshall's algorithm solves this problem in a time in O(n³). Let MULB and TRCB be the problems consisting of calculating f × g and f*, respectively, when the cost functions are restricted in this way. It is clear that MULB ≤ℓ MUL and TRCB ≤ℓ TRC since the general algorithms can also be used to solve instances of the restricted problems. Furthermore, the proof that MUL ≡ℓ TRC can easily be adapted to show that MULB ≡ℓ TRCB under similar assumptions. This is interesting because of the following theorem, which involves the problem MQ of ordinary square matrix multiplication, which we studied in Section 12.4.2.
problems for which even the best possible algorithm takes an exorbitant amount of time even on small instances. Might it not be reasonable to admit that such problems are inherently intractable rather than claiming that clever algorithms are efficient even though they are too slow to be used in practice?

For our present purposes we answer this question by stipulating that an algorithm is efficient if there exists a polynomial p(n) such that the algorithm can solve any instance of size n in a time in O(p(n)). We say of such algorithms that they are polynomial-time. This definition is motivated by the comparison in Section 2.6 between an algorithm that takes a time in O(2ⁿ) and one that only requires a time in O(n³), and also by some of the examples given in Section 2.7. An exponential-time algorithm becomes rapidly useless in practice, whereas generally speaking a polynomial-time algorithm allows us to solve much larger instances.
This notion of efficiency should be taken with a grain of salt. Given two algorithms requiring a time in O(n^{lg lg n}) and in O(n¹⁰), respectively, the first is inefficient according to our definition because it is not polynomial-time. However, it will beat the polynomial-time algorithm on all instances of size less than 10³⁰⁰, assuming the hidden constants are similar. In fact, it is not reasonable to assert that an algorithm requiring a time in O(n¹⁰) is efficient in practice. Nonetheless, to decree that Θ(n³) is efficient whereas Θ(n⁴) is not, for example, would be rather too arbitrary. Moreover, even a linear-time algorithm may be unusable in practice if the hidden multiplicative constant is too large, whereas an algorithm that takes exponential time in the worst case may be very quick on most instances. Nevertheless, there are significant technical advantages to considering the class of all polynomial-time algorithms. In particular, all reasonable deterministic single-processor models of computation can be simulated on each other with at most a polynomial slow-down. Therefore, the notion of polynomial-time computability is robust: it does not depend on which model you prefer, unless you use possibly more powerful models such as probabilistic or quantum computers. Furthermore, the fact that sums, products and composition of polynomials are polynomials will be useful.
In this section, all our analyses for the time taken by an algorithm will be "up to
a polynomial". This means that we do not hesitate to count at unit cost an oper-
ation that really takes a polynomial amount of time. For example, we may count
additions and multiplications at unit cost even on operands whose size grows with
the size of the instance being solved, provided this growth is bounded by some
polynomial. This is allowable because we only wish to distinguish polynomial-
time algorithms from those that are not polynomial-time, and because it takes
polynomial time to execute a polynomial number of polynomial-time operations;
see Problem 12.35. On the other hand, we would not count at unit cost arithmetic
that involves operands of size exponentially larger than the instance. If the algo-
rithm needs such large operands, it must break them into sections, keep them in
an array, and spend the required time to carry out multiprecision arithmetic; such
an algorithm cannot be polynomial-time.
Our goal is to distinguish problems that can be solved efficiently from those
that cannot. For technical reasons we concentrate on the study of decision problems.
For these, the answer is either yes or no, or equivalently either true or false. For ex-
ample, "Find a Hamiltonian cycle in G " is not a decision problem, but "Is graph
G Hamiltonian?" is. A decision problem can be thought of as defining a set X
of instances on which the correct answer is "yes". We call these the yes-instances;
any other instance is a no-instance. We say that a correct algorithm that solves a
decision problem accepts the yes-instances and rejects the no-instances.
Any q such that (x, q) ∈ F is called a proof or a certificate that x ∈ X. We did not specify explicitly in the above formal definition that

    (∀x ∉ X) (∀q ∈ Q) [(x, q) ∉ F]

because this follows automatically from the requirement that F ⊆ X × Q.

For another example, if X is the set of all composite numbers, we may take Q = ℕ as the proof space and

    F = {(n, q) | 1 < q < n and q divides n}

as the proof system. This proof system is not unique. Another possibility would be

    F' = {(n, q) | 1 < q < n and gcd(q, n) ≠ 1}.

Still more proof systems for the same problem may come from the discussion in Section 10.6.2, which shows that certificates that a number is composite may be of no help in factorizing it.
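Both proof systems are trivial to check mechanically. A minimal Python sketch of the corresponding polynomial-time verifiers (the function names are ours):

    from math import gcd

    def in_F(n, q):
        # (n, q) belongs to F when q is a nontrivial divisor of n.
        return 1 < q < n and n % q == 0

    def in_F_prime(n, q):
        # (n, q) belongs to F' when q shares a nontrivial factor with n;
        # F' accepts more certificates than F, but only for composite n.
        return 1 < q < n and gcd(q, n) != 1

    assert in_F(91, 7) and in_F_prime(91, 21)          # 91 = 7 x 13
    assert not any(in_F(13, q) for q in range(2, 13))  # 13 is prime: no certificate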
The class NP corresponds to the decision problems that have an efficient proof system, which means that each yes-instance must have at least one succinct certificate, whose validity can be verified quickly.

Definition 12.5.2 NP is the class of decision problems X that admit a proof system F ⊆ X × Q such that there exists a polynomial p(n) and a polynomial-time algorithm A such that

◦ For all x ∈ X there exists a q ∈ Q such that (x, q) ∈ F and moreover the size of q is at most p(n), where n is the size of x.

◦ For all pairs (x, q), algorithm A can verify whether or not (x, q) ∈ F. In other words, F ∈ P.
Proof Intuitively, this is because there is no need for help from an omniscient being when we can handle our decision problem ourselves. Formally, consider an arbitrary decision problem X ∈ P. Let Q = {0} be a trivial proof space. Define

    F = {(x, 0) | x ∈ X}.

Clearly, any yes-instance admits one succinct "certificate", namely 0, and no-instances have no certificates at all. Moreover, it suffices to verify that x ∈ X and q = 0 in order to establish that (x, q) ∈ F. This can be done in polynomial time precisely because we assumed that X ∈ P. ∎
The central open question is whether or not the set inclusion in Theorem 12.5.3 is proper. Is it possible that P = NP? If this were the case, any property that can be verified in polynomial time given a certificate could also be decided in polynomial time from scratch. Although this seems very unlikely, no one has yet been able to settle the question. In the remainder of this section, we shall study the consequences of the conjecture that

    P ≠ NP.

For this, we need a notion of reduction that allows us to compare the intrinsic difficulty of problems in NP and to discover that there are problems in NP that are as hard as anything else in NP. Such problems, which are called NP-complete, can be solved in polynomial time if and only if all the other problems in NP can, which is the same as saying P = NP. Thus, under the conjecture that P ≠ NP, we know that NP-complete problems cannot be solved in polynomial time.
In other words, the algorithm for solving problem A may make whatever use it chooses of an imaginary algorithm that can solve problem B at unit cost. This imaginary algorithm is sometimes called an oracle. As in the linear case, a reduction proof usually takes the form of an explicit algorithm to solve one problem by calling on an arbitrary algorithm for the other problem. Again, this could be used to teach someone who only knows how to solve one problem how to solve the other. Again too, polynomial reductions are transitive: if A ≤ᵀ B and B ≤ᵀ C, then A ≤ᵀ C. Unlike linear reductions, however, we allow the first algorithm to take a polynomial amount of time, still counting the calls to the second algorithm at unit cost, and to call the second algorithm a polynomial number of times on arbitrary instances of size polynomial in the size of the original instance.
As a first example, we prove the polynomial equivalence of two versions of the
Hamiltonian cycle problem. Let HAM and HAMD denote the problems of finding a
Hamiltonian cycle in a graph if one exists and of deciding whether or not a graph
is Hamiltonian, respectively. We allow an algorithm for HAM to return an arbitrary
answer when presented with a non-Hamiltonian graph. The following theorem
says that it is not significantly harder to find a Hamiltonian cycle than to decide if
a graph is Hamiltonian.
Proof First we prove the obvious direction: HAMD ≤ᵀ HAM. Consider the following algorithm.

    function HamD(G)
        π ← Ham(G)
        if π is a Hamiltonian cycle in G then return true
        else return false

This algorithm solves HAMD correctly provided algorithm Ham solves problem HAM correctly: by definition of HAM, algorithm Ham must return a Hamiltonian cycle in G provided one exists, in which case HamD will correctly return true. Conversely, if the graph is not Hamiltonian, the output π returned by Ham cannot be a Hamiltonian cycle, and thus HamD will correctly return false. It is clear that HamD takes polynomial time provided we count the call on Ham at unit cost.
Consider now the interesting direction: HAM ≤ᵀ HAMD. We are to find a Hamiltonian cycle assuming we know how to decide if such cycles exist. The idea is to consider each edge in turn. For each, we ask if the graph would still be Hamiltonian if this edge were removed. We keep the edge only if its removal would make the graph non-Hamiltonian; otherwise we remove it before we proceed with the next edge. The resulting graph will still be Hamiltonian since we never make a change that would destroy this property. Moreover, it contains only the edges necessary to define a Hamiltonian cycle, for any additional edge could be removed without making the graph non-Hamiltonian, and hence it would have been removed when its turn came. Therefore, it suffices to follow the edges of the final graph to obtain a Hamiltonian cycle in the original graph. Here is a sketch of this greedy algorithm.

    function Ham(G = (N, A))
        if not HamD(G) then return an arbitrary answer
        for each edge e ∈ A do
            if HamD(G with edge e removed) then remove e from G
        return the cycle formed by the edges that remain in G

Clearly Ham takes polynomial time if we count each call on HamD at unit cost. ∎
Theorem 12.5.6 Consider two problems A and B. If A ≤ᵀ B and if B can be solved in polynomial time, then A can also be solved in polynomial time.
In other words, the reduction function maps all yes-instances of problem X onto
yes-instances of problem Y, and all no-instances of problem X onto no-instances
of problem Y; see Figure 12.7. Note that a necessary condition for the reduction
function f to be computable in polynomial time is that the size of f (x) must
be bounded above by some polynomial in the size of x for all x ∈ I. Many-one
reductions are useful tools to establish Turing reductions: to decide if x E X, it
suffices to compute y = f (x) and ask whether or not y E Y. Thus we have the
following theorem.
Proof Imagine solutions to problem Y can be obtained at unit cost by a call on DecideY and let f be the reduction function between X and Y, computable in polynomial time. Consider the following algorithm.

    function DecideX(x)
        y ← f(x)
        if DecideY(y) then return true
        else return false
This theorem is so useful that we shall often prove X ≤ₘ Y in cases where we really need to establish X ≤ᵀ Y. Beware that the converse of this theorem does not hold in general: it is possible for two decision problems X and Y that X ≤ᵀ Y yet X ≰ₘ Y; see Problems 12.38, 12.39 and 12.40.
Consider for example the travelling salesperson problem. An instance of this
problem consists of a graph with costs on the edges. The optimization problem,
denoted TSP, consists of finding a tour in the graph that begins and ends at some
node, after having visited each of the other nodes exactly once, and whose cost is
the minimum possible; the answer is undefined if no such tour exists. To define
an instance of the decision problem TSPD, a bound L is provided in addition to
the graph: the question is to decide whether or not there exists a valid tour whose
total cost does not exceed L. Problem 12.47 asks you to prove that this problem
is decision-reducible: TSP ≤ᵀ TSPD. Now we prove that the Hamiltonian cycle
problem is polynomially reducible to the travelling salesperson problem. In fact,
these problems are polynomially equivalent, but the reduction in the other direction
is more difficult.
Proof Let G = (N, A) be a graph with n nodes. We would like to decide if it is Hamiltonian. Define f(G) as the instance of TSPD consisting of the complete graph H = (N, N × N), the cost function

    c(u, v) = 1 if {u, v} ∈ A
              2 otherwise

and the bound L = n. Any Hamiltonian cycle in G translates into a tour in H that has cost exactly n. On the other hand, if there are no Hamiltonian cycles in G, any tour in H must use at least one edge of cost 2, and thus be of total cost at least n + 1. Therefore, G is a yes-instance of HAMD if and only if f(G) = (H, c, L) is a yes-instance of TSPD. This proves that HAMD ≤ₘ TSPD because function f is easy to compute in polynomial time. ∎
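The reduction function f is easy to program. A sketch in Python, with a graph given by its node set and its edge set; the representation and the names are ours.

    from itertools import combinations

    def hamd_to_tspd(nodes, edges):
        # Build the TSPD instance (H, c, L): the complete graph on the same nodes,
        # cost 1 on original edges, cost 2 on the edges added to complete the
        # graph, and the bound L = n.
        cost = {frozenset((u, v)): 1 if frozenset((u, v)) in edges else 2
                for u, v in combinations(nodes, 2)}
        return set(nodes), cost, len(nodes)

    # A 4-cycle is Hamiltonian, so the resulting TSPD instance admits a tour of cost 4.
    nodes = {1, 2, 3, 4}
    edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
    H_nodes, cost, L = hamd_to_tspd(nodes, edges)
    assert L == 4 and sum(c == 1 for c in cost.values()) == 4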
Definition 12.5.10 A decision problem X is NP-complete if

◦ X ∈ NP; and
◦ Y ≤ᵀ X for every problem Y ∈ NP.

Some authors replace the second condition by Y ≤ₘ X or by other kinds of reduction. It is not known if this gives rise to a genuinely different class of NP-complete problems.
What would happen if some NP-complete problem X could be solved in polynomial time? Consider any other problem Y ∈ NP. We have Y ≤ᵀ X by definition because X is NP-complete. Therefore, Y can also be solved in polynomial time by Theorem 12.5.6. Thus any problem in NP belongs to P, implying that NP ⊆ P. But we know P ⊆ NP, and therefore P = NP. This proves that if any NP-complete problem can be solved in polynomial time, then so can all problems in NP. Conversely, no NP-complete problem can be solved in polynomial time under the assumption that P ≠ NP.

How can we prove that a problem is NP-complete? If we already have a pool of problems that have been shown to be NP-complete, the following theorem is useful.
Proof To be NP-complete, Z must satisfy two conditions by Definition 12.5.10. The first is that Z ∈ NP, which is in the statement of the theorem. For the second condition, consider an arbitrary Y ∈ NP. Since X is NP-complete and Y ∈ NP, it follows that Y ≤ᵀ X. By assumption, X ≤ᵀ Z. By transitivity of polynomial reductions, Y ≤ᵀ Z, which is what we had to prove to establish the NP-completeness of Z. ∎
large, since there are 2ⁿ possible assignments. No efficient algorithm to solve this problem is known. On the other hand, any assignment purporting to satisfy a Boolean formula is both succinct and easy to verify, which shows that SAT ∈ NP. Consider now a special case of Boolean formulas.
    (p + q + r)(¬p + ¬q + ¬r)(¬q)(r)
    (p + q¬r)(¬p + q(¬q + r))
    (p ⇒ q) ⇔ (¬p + q)

The first formula is composed of four clauses. It is in 3-CNF, and therefore in CNF, but not in 2-CNF. The second formula is not in CNF since neither (p + q¬r) nor (¬p + q(¬q + r)) is a clause. The third formula is also not in CNF since it contains operators other than conjunction, disjunction and negation.
large number of variables, among which x₁, x₂, …, xₙ correspond in a natural way to the bits of instances of size n for A. The Boolean formula is constructed so that there exists a way to satisfy it by choosing the values of its other Boolean variables if and only if algorithm A accepts the instance corresponding to the Boolean value of the x variables. For example, algorithm A accepts the instance 10010 if and only if formula x₁¬x₂¬x₃x₄¬x₅Ψ₅(A) is satisfiable.

The proof that this Boolean formula exists and that it can be constructed efficiently poses difficult technical problems beyond the scope of this book. We content ourselves with mentioning that the formula Ψₙ(A) contains among other things a distinct Boolean variable bᵢₜ for each bit i of storage that algorithm A may need to use when solving an instance of size n and for each unit t of time taken by this computation. Once the variables x₁, x₂, …, xₙ are fixed, the clauses of Ψₙ(A) force the other Boolean variables to simulate the step-by-step execution of the algorithm on the corresponding instance.
Consider now an arbitrary problem Y ∈ NP whose proof space and efficient proof system are Q and F, respectively. Assume without loss of generality that there is a polynomial p(n) such that for all y ∈ Y there exists a certificate q ∈ Q whose length is exactly p(n), where n is the length of y. Assuming that we can solve instances of SAT-CNF at unit cost, we want to decide efficiently if y ∈ Y for any given instance y. For this, consider algorithm A_y, whose specific purpose is to verify if its input is a certificate that y ∈ Y. In other words, A_y(q) returns true if and only if (y, q) ∈ F. This can be done efficiently by the assumption that proof system F is efficient. By definition, Boolean formula Ψ_{p(n)}(A_y) is satisfiable if and only if there exists a q of length p(n) such that A_y accepts input q. By definition of the proof system, this is equivalent to saying that y ∈ Y. Therefore, it suffices to decide whether or not Ψ_{p(n)}(A_y) is satisfiable to know whether or not y ∈ Y. This shows how to reduce an arbitrary instance of problem Y to the satisfiability of a Boolean formula in CNF, and therefore Y ≤ₘ SAT-CNF. We conclude that Y ≤ᵀ SAT-CNF for all problems Y in NP. Remembering that SAT-CNF is itself in NP, we obtain the following fundamental theorem.
Armed with this first NP-completeness result, we can now apply Theorem 12.5.11 to prove the NP-completeness of other problems.

We have just seen that SAT-CNF is NP-complete. Let Z be some other decision problem in NP. To show that Z too is NP-complete, Theorem 12.5.11 applies and we need only prove SAT-CNF ≤ᵀ Z. Thereafter, to show that some other problem W in NP is NP-complete, we have the choice of proving SAT-CNF ≤ᵀ W or Z ≤ᵀ W. Beware
Proof We already know that SAT is in NP. Since SAT-CNF is the only problem that we know to be NP-complete so far, we must show that SAT-CNF ≤ᵀ SAT to apply Theorem 12.5.11. This is immediate since Boolean formulas in CNF are a special case of general Boolean formulas and it is easy to tell, given a Boolean formula, whether or not it is in CNF. Therefore any algorithm capable of solving SAT efficiently can be used directly to solve SAT-CNF. ∎
Proof We already know that SAT-3-CNF is in NP. Because we now know two different NP-complete problems, we have the choice of proving either SAT-CNF ≤ᵀ SAT-3-CNF or SAT ≤ᵀ SAT-3-CNF. Let us prove the former and proceed by many-one reduction: we prove SAT-CNF ≤ₘ SAT-3-CNF. Consider an arbitrary Boolean formula Ψ in CNF. We are to construct efficiently a Boolean formula Ξ = f(Ψ) in 3-CNF that is satisfiable if and only if Ψ is satisfiable. Consider first the case when Ψ contains only one clause, which is a disjunction of k literals.

• If k ≤ 3, let Ξ = Ψ, which is already in 3-CNF.

• If k = 4, let ℓ₁, ℓ₂, ℓ₃ and ℓ₄ be literals such that Ψ is ℓ₁ + ℓ₂ + ℓ₃ + ℓ₄. Let u be a new Boolean variable. Take

    Ξ = (ℓ₁ + ℓ₂ + u)(ℓ₃ + ℓ₄ + ¬u).

Note that if at least one of the ℓᵢ's is true then Ψ is true and it is possible to select a truth value for u so that Ξ is true also. Conversely, if all the ℓᵢ's are false then Ψ is false and Ξ is false whatever truth value is chosen for u. Therefore, given any fixed truth values for the ℓᵢ's, Ψ is true if and only if Ξ is satisfiable with a suitable choice of value for u.

• More generally, if k > 4, let ℓ₁, ℓ₂, …, ℓₖ be the literals such that Ψ is ℓ₁ + ℓ₂ + ⋯ + ℓₖ. Let u₁, u₂, …, uₖ₋₃ be new Boolean variables. Take

    Ξ = (ℓ₁ + ℓ₂ + u₁)(ℓ₃ + ¬u₁ + u₂)(ℓ₄ + ¬u₂ + u₃) ⋯ (ℓₖ₋₂ + ¬uₖ₋₄ + uₖ₋₃)(ℓₖ₋₁ + ℓₖ + ¬uₖ₋₃).

Again, given any fixed truth values for the ℓᵢ's, Ψ is true if and only if Ξ is satisfiable with a suitable choice of assignments for the uᵢ's.
For example, if Ψ is

    (p + ¬q + r + s)(r + ¬s)(¬p + s + x + v + ¬w)

we obtain

    Ξ = (p + ¬q + u₁)(r + s + ¬u₁)(r + ¬s)(¬p + s + u₂)(x + ¬u₂ + u₃)(v + ¬w + ¬u₃).

Because each clause is "translated" with the help of different u variables, and because the only way to satisfy Ψ is to satisfy each of its clauses with the same truth assignment for the Boolean variables, any satisfying assignment for Ψ gives rise to one for Ξ and vice versa. In other words, Ψ is satisfiable if and only if Ξ is. But Ξ is in 3-CNF. This shows how to transform an arbitrary CNF formula efficiently into one in 3-CNF in a way that preserves satisfiability. Thus SAT-CNF ≤ₘ SAT-3-CNF, which completes the proof that SAT-3-CNF is NP-complete. ∎
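The clause-by-clause translation is mechanical. Here is a sketch in Python, written recursively but producing essentially the clauses described in the proof; literals are encoded as nonzero integers (variable i as +i, its negation as −i) and the encoding and names are ours.

    def clause_to_3cnf(clause, next_var):
        # Translate one clause (a list of literals) into an equisatisfiable list
        # of clauses of at most three literals, introducing fresh variables
        # next_var, next_var + 1, ...  Returns the new clauses and the next
        # unused variable number.
        if len(clause) <= 3:
            return [clause], next_var
        u = next_var
        first = [clause[0], clause[1], u]                      # (l1 + l2 + u)
        rest, next_var = clause_to_3cnf([-u] + clause[2:], next_var + 1)
        return [first] + rest, next_var

    # (p + q + r + s), with p, q, r, s numbered 1..4, becomes (p + q + u)(r + s + -u).
    assert clause_to_3cnf([1, 2, 3, 4], 5) == ([[1, 2, 5], [-5, 3, 4]], 6)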
Problems 12.43 and 12.44 ask you to prove that these problems are polynomially equivalent: any one of them can be solved in polynomial time if and only if they all can. As we are about to prove that 3COL is NP-complete, this is evidence that all four problems are hard.
Proof It is easy to see that 3COL is in NP since any purported 3-colouring can be verified efficiently. To show that 3COL is NP-complete we shall prove this time that SAT-3-CNF ≤ₘ 3COL. Given a Boolean formula Ψ in 3-CNF, we have to construct efficiently a graph G that can be painted with three colours if and only if Ψ is satisfiable. This reduction is considerably more complex than those we have seen so far.
Suppose for simplicity that every clause of the formula Ψ contains exactly three literals (see Problem 12.54). Let k be the number of clauses in Ψ. Suppose further without loss of generality that the Boolean variables appearing in Ψ are x₁, x₂, …, xₜ. The graph G we are about to build contains 3 + 2t + 6k nodes and 3 + 3t + 12k edges. Three special nodes of this graph are linked in a control triangle shown on top of Figure 12.8: call them T, F and C. Because each is linked to the other two, they must be a different colour in any valid colouring of the graph. When the time comes to paint G in three colours, imagine that the colours assigned to T and F represent the Boolean values true and false, respectively. We shall say that a node is coloured true if it is the same colour as T, and similarly for nodes coloured false. Any node coloured either true or false is called a truth node.

For each Boolean variable xᵢ of Ψ the graph contains two nodes yᵢ and zᵢ linked to each other and to the control node C. In any valid three-colouring of G, this forces yᵢ to be coloured either true or false and zᵢ to be the complementary colour. Think of the colour of node yᵢ as the truth assignment for Boolean variable xᵢ, so the colour of node zᵢ corresponds to the truth value of ¬xᵢ. We may think of the yᵢ's and zᵢ's as corresponding to literals in the Boolean formula. For example, Figure 12.8 shows the part of the graph that we have constructed up to now if the formula is on three variables.

We still have to add 6 nodes and 12 edges for each clause in Ψ. These are added so that the graph will be colourable with three colours if and only if the choice of colours for y₁, y₂, …, yₜ corresponds to an assignment of Boolean values to x₁, x₂, …, xₜ that satisfies every clause. This is accomplished thanks to the widget illustrated in Figure 12.9. We say that a widget is linked to nodes a, b and c if these are the edge endpoints marked 1, 2 and 3. Each widget is also connected directly to nodes T and C by two other dangling edges. It can be verified by trying all eight possibilities that if the widget is linked only to truth nodes, then it can be painted with the colours assigned to the control triangle if and only if it is linked with at least one node coloured true. Thus, the widget can be used to simulate the disjunction of the three literals represented by the nodes to which it is joined. To complete the
graph, it suffices to include one copy of the widget for each clause in Ψ. Each widget is linked to nodes chosen from the yᵢ's and zᵢ's so as to correspond to the three literals of the clause concerned. Any valid three-colouring of the graph provides a truth assignment for the Boolean formula and vice versa. Therefore, the graph can be painted with three colours if and only if Ψ is satisfiable. Because it can be constructed efficiently starting from any Boolean formula Ψ in 3-CNF, we conclude that SAT-3-CNF ≤ₘ 3COL, and therefore that 3COL is NP-complete. ∎
both COLO and COLC are NP-hard even though they are not NP-complete because they are not decision problems. We shall see many NP-hard problems that are not NP-complete for this reason in Chapter 13.

The notion of NP-hardness is interesting also for decision problems. There are decision problems that are known to be NP-hard but believed not to be in NP, and thus not NP-complete. Consider for example the problem COLE of exact colouring: given a graph G and an integer k, can G be painted with k colours but no less? Again it is obvious that 3COL ≤ᵀ COLE because a graph is 3-colourable if and only if its chromatic number is either 0, 1, 2 or 3. From the NP-completeness of 3COL we conclude that the exact graph colouring problem is NP-hard. However, this decision problem does not seem to be in NP. Although any valid colouring of G with k colours can be used as a succinct certificate that G can be painted with k colours, it is hard to imagine what a succinct certificate that G cannot be painted with fewer colours would look like, and there are strong theoretical reasons to believe that such certificates do not exist in general.

Finally, NP-hardness is often the only thing we really care to establish. Unless it should turn out that P = NP, it is not very useful in practice to know that a given problem belongs to NP. Thus even if the problem considered is a decision problem and even if it is reasonable to expect it to be in NP, why waste time and effort exhibiting a proof system for it?
whose effect is to set n to some integer value between i and j, inclusive. The actual value assigned to n is not specified by the algorithm, nor is it subject to the laws of probability. Thus nondeterministic algorithms should not be confused with probabilistic algorithms.

The effect of the algorithm is determined by the existence or the nonexistence of sequences of nondeterministic choices that lead to an accept instruction. We are not concerned with how such sequences could be determined efficiently or how their nonexistence could be established. For this reason nondeterministic algorithms are only a mathematical abstraction that cannot be used directly in practice: we never program such an algorithm in the hope of running it efficiently on a real computer.
Note that a nondeterministic algorithm accepts input x even if it has only one
accepting computation to pit against many rejecting computations. Note also that
there is no limit on how long a polynomial-time nondeterministic algorithm may
run if the "wrong" nondeterministic choices are made or if it is run on a rejected
instance; the algorithm may even loop forever in these cases. A computation may
be arbitrarily long even on an accepted instance, provided the same instance also
admits at least one polynomially bounded computation.
Consider for example the following nondeterministic algorithm to decide if a
graph is Hamiltonian. It chooses a sequence of nodes nondeterministically in the
hope of hitting a Hamiltonian cycle. Clearly, there exists at least one sequence of
choices that leads to acceptance if and only if a Hamiltonian cycle exists. On the
other hand, it would be pointless to try running the algorithm after replacing the
nondeterministic choices by probabilistic ones.
Consider now an arbitrary problem X ∈ NP and let Q and F be its proof space and efficient proof system, respectively. Assume for simplicity that Q is the set of all binary strings. The relevance of nondeterministic algorithms is that, given any x, they can nondeterministically choose a q ∈ Q such that (x, q) ∈ F provided such a q exists. This q can be chosen bit by bit, after a sequence of binary nondeterministic choices. In a sense, the algorithm uses its nondeterministic power to guess a certificate that x ∈ X if one exists. After guessing q, the algorithm verifies deterministically whether or not (x, q) ∈ F, and it accepts if this is so. This nondeterministic algorithm accepts each yes-instance because there is at least one sequence of binary nondeterministic choices that hits upon a proper certificate, yielding an accepting computation. On the other hand, no-instances cannot be accepted because (x, q) ∉ F no matter which q is chosen when x ∉ X. Moreover, this nondeterministic algorithm runs in polynomial time because the existence of a succinct certificate on yes-instances is guaranteed and because the test that (x, q) ∈ F can be performed in polynomial time by definition of NP.

Formally, here is the polynomial-time nondeterministic algorithm to solve problem X. Note that it never halts on no-instances, which is allowed in the definition of polynomial-time for nondeterministic algorithms.
procedure XND(x)
    q ← empty binary string
    while (x, q) ∉ F do
        choose b between 0 and 1
        append bit b to the right of q
    accept
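Although XND cannot be executed as such, it can be simulated deterministically, at exponential cost, by trying every possible sequence of choices. A small Python sketch, where the parameter in_F stands for the assumed polynomial-time test that (x, q) ∈ F and the toy proof system for compositeness is ours:

    from itertools import product

    def simulate_xnd(x, in_F, max_len):
        # Deterministic, exponential-time simulation of XND: enumerate every
        # binary string q of length at most max_len and accept if some (x, q)
        # belongs to F.
        for length in range(max_len + 1):
            for bits in product("01", repeat=length):
                if in_F(x, "".join(bits)):
                    return True
        return False

    def composite_in_F(n, q):
        # Toy proof system: q encodes a nontrivial divisor of n in binary.
        d = int(q, 2) if q else 0
        return 1 < d < n and n % d == 0

    assert simulate_xnd(91, composite_in_F, max_len=7)       # certificate "111" = 7
    assert not simulate_xnd(13, composite_in_F, max_len=7)   # 13 is prime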
some kind (decision problems for instance) that can be solved using a given model
of computation without exceeding some given amount of resources. For example,
P is the class of all decision problems that can be solved by deterministic algo-
rithms in polynomial time and NP is the class of all decision problems that can be
solved by nondeterministic algorithms in polynomial time. Many other complex-
ity classes have been studied. Here we merely scratch the surface of this rich topic.
Figure 12.10 summarizes the discussion below.
PSPACE is the class of all decision problems that can be solved using at most a polynomial number of bits of storage. More precisely, a decision problem belongs to PSPACE if there exists an algorithm A that solves it and a polynomial p(n) such that the amount of storage needed by A on any instance x is no more than p(n) bits, where n is the size of x. Because any algorithm can be transformed without significant slow-down into one that needs no more space than it uses time, it is clear that P ⊆ PSPACE. However, it is not known whether or not this inclusion is proper. This is even more embarrassing than our inability to prove that P ≠ NP because NP ⊆ PSPACE; see Problem 12.60. You guessed it: we do not know whether or not this latter inclusion is proper. Nevertheless, there are things that we do know about PSPACE. There exist problems in PSPACE to which all other problems in PSPACE can be polynomially Turing reduced. Those PSPACE-complete problems can be solved in polynomial time if and only if P = PSPACE. A surprising result is that nondeterminism does not buy much computing power when the limiting resource is storage: PSPACE = NPSPACE, where NPSPACE is the nondeterministic version of PSPACE; see Problem 12.61.

Just as PSPACE is believed to lie beyond P and NP, LOGSPACE is believed to be more restrictive than P. This is the class of all decision problems that can be solved with an amount of storage that is no more than some constant times the logarithm of the instance size. For this definition to make sense, we assume that the instance is given in read-only storage, and we count only the number of additional bits of read/write storage needed to perform the calculation. Problem 12.62 asks you to prove that LOGSPACE ⊆ P, but we have to admit ignorance of whether or not this inclusion is proper. However, we do know that LOGSPACE is strictly included in PSPACE. To summarize, we know that

    LOGSPACE ⊆ P ⊆ NP ⊆ PSPACE = NPSPACE

and at least one of these inclusions is strict. It is conjectured they all are.
If C is a complexity class of decision problems, we denote by co-C the class of decision problems whose complements are in C. In other words, if I is a set of instances and X ⊆ I is a decision problem that belongs to C, then I \ X belongs to co-C. For example, the set of Boolean formulas that are not satisfiable and the set of graphs that do not contain a Hamiltonian cycle belong to co-NP. It is clear that P is a subset of both NP and co-NP, and that NP and co-NP are both subsets of PSPACE. The discussion just before Theorem 12.5.3 gives credence to the conjecture that NP ≠ co-NP, but this is not something we know how to prove. Nevertheless, it is known that NP = co-NP if and only if there exists an NP-complete problem in co-NP. Not many problems are known to be in NP ∩ co-NP yet believed not to be in P. Among those, we mention the set of prime numbers and the decision problem polynomially Turing equivalent to factorization given in Problem 12.48.

Probabilistic algorithms give rise to probabilistic complexity classes. For technical reasons, the formal definition of those classes restricts probabilistic algorithms to tossing fair coins rather than having access to uniform(a, b) for arbitrary real numbers a and b.
• Monte Carlo algorithms give rise to the class BPP, which stands for bounded-error probabilistic polynomial-time. A decision problem belongs to BPP if there is a p-correct probabilistic algorithm that solves it in polynomial time for some p > 1/2. As we saw in Section 10.6.4, the error probability can be reduced below any desired threshold by repeating the algorithm some number of times and taking the most frequent answer.

• Las Vegas algorithms give rise to the class ZPP, which stands for zero-error probabilistic polynomial-time. A decision problem belongs to ZPP if there is a probabilistic algorithm that solves it with no possibility of error in expected polynomial time.

• Between BPP and ZPP is the class RP, which stands for random polynomial-time. A decision problem belongs to RP if there is a p-correct probabilistic algorithm that solves it in polynomial time for some p > 0 so that the correct answer is obtained with certainty on all no-instances. As we saw in Section 10.6.4, the error probability can be reduced below any desired threshold much more efficiently than with general BPP algorithms. It is conjectured that RP ≠ co-RP.

In addition, Problems 12.64 and 12.65 ask you to prove that RP ⊆ NP and ZPP = RP ∩ co-RP. On the other hand, it is believed that neither NP nor BPP is a subset of the other.
When we defined NP, we said we could think of it as the class of decision problems for which an omniscient being could convince you of the validity of any yes-instance by showing you a succinct certificate whose validity you could verify efficiently even though you may be unable to find the certificate yourself. It is natural to extend this notion to allow interaction with the being: it shows you something, you issue a challenge, it answers, you issue another challenge, and so on. A decision problem X belongs to the class IP, which stands for interactively provable, if the being can convince you that x ∈ X whenever this is so, but if you are almost certain to catch it lying if it tries to convince you that x ∈ X when in fact this is not so. The entire interaction is required to take a time bounded by some polynomial in the size of the instance, assuming that the being answers instantly. See Problem 12.66 for an example. It is obvious that NP ⊆ IP since the interaction could consist simply in the being showing you an NP certificate. Could it be that there are statements that can be proved interactively, but only if several rounds take place? In other words, is the inclusion NP ⊆ IP strict? Although we do not know the answer for sure, it is a safe bet that this is so because one of the most striking recent results in computational complexity is that IP = PSPACE.

Parallel algorithms give rise to parallel complexity classes. Although there is a great number of these, we mention only the most popular. NC is the class of problems that can be solved by efficient parallel algorithms. Recall from Section 11.3 that this means that the problem can be solved in polylogarithmic time
12.7 Problems
Problem 12.1. Prove by mathematical induction on the height that a binary tree of height k has at most 2ᵏ leaves. Conclude that any binary tree with t leaves must have height at least ⌈lg t⌉.

Problem 12.2. Consider a positive integer k. Let t = ⌊lg k⌋ and e = k − 2ᵗ. Prove that h(k) = kt + 2e, where h(k) is the function used in the proof of Theorem 12.2.1. Give an intuitive interpretation of this formula in the context of the average height of a tree with k leaves.
Problem 12.3. Give the decision trees corresponding to the algorithms for sorting
by selection (Section 2.4) and by merging (Section 7.4.1), and to quicksort (Sec-
tion 7.4.2) for the case of three items. In the latter two cases, stop the recursive calls
when only a single item remains to be "sorted".
Problem 12.4. Give a valid decision tree for sorting four items.
Problem 12.5. Give a valid decision tree for determining the median of five items.
Note that it has only 5 different verdicts, but many more leaves.
Problem 12.6. Give exact formulas for the number of comparisons carried out in
the worst case by the insertion and the selection sorting algorithms when sorting
n items. How well do these algorithms perform compared with the information-
theoretic lower bound ⌈lg 50!⌉ for n = 50?
Problem 12.8. Continuing Problem 12.7, find an explicit formula for the number
of comparisons performed in the worst case by mergesort in general, as a function
of the number n of items to be sorted. Find the smallest positive integer such that
mergesort requires more comparisons than the information-theoretic lower bound
when sorting this number of items.
Problem 12.9. Suppose we ask our sorting algorithm not merely to determine the
order of the items but also to determine which ones, if any, are equal. For example,
a verdict such as A < B < C is not acceptable: the algorithm must specify whether
B = C or B < C. Give an information-theoretic lower bound on the number of
comparisons required in the worst case to handle n items. Rework this problem if
there are three possible outcomes for each comparison: " < ", " = " and " > ".
Problem 12.10. Let T[1..n] be an array and k ≤ n an integer. The problem consists of returning in descending order the k largest items of T. Prove by an information-theoretic argument that any comparison-based deterministic algorithm that solves this problem must make at least k lg n comparisons, both in the worst case and on the average. Conclude that this must take a time in Ω(k log n). On the other hand, give an algorithm able to solve this problem in a time in O(n log k) and a space in O(k) in the worst case. Your algorithm should make no more than one sequential pass through the array T. Justify your analysis of the time and space used by your algorithm.
Problem 12.11. Give a complete decision tree for the 12-coin problem of Sec-
tion 12.2.2. Each node of the tree should specify which coins are in each pan of the
scale. The left child of an internal node gives the next measurement to make if the
balance tilts to the left, as does the middle child if the scale is balanced and the right
child if the balance tilts to the right. Omit the right-hand descendants of the root to
make the tree smaller; these can be handled by symmetry. Refer to the coins as A, B,
C, ... , L, so the root of the decision tree should read ABCD: EFGH in accordance
with Figure 12.6 and the information-theoretic reasoning of Section 12.2.2.
Problem 12.13. Continuing Problems 12.11 and 12.12, prove that the four-coin
problem cannot be solved with only two measurements unless an additional coin
known to be of the "proper" weight is available.
Problem 12.15. Let T[1..n] be a sorted array of distinct integers, some of which may be negative. Problem 7.12 asked you for an algorithm that can find an index i such that 1 ≤ i ≤ n and T[i] = i, provided such an index exists, in a time in O(log n) in the worst case. Use an information-theoretic argument to show that any comparison-based algorithm that solves this problem must take a time in Ω(log n). On the other hand, prove by an adversary argument that any comparison-based algorithm that solves this problem would require a time in Ω(n) in the worst case were it not for the restriction that the items of T be distinct.
Problem 12.16. Use an adversary argument to prove that any correct deterministic
algorithm to decide if an undirected graph is connected must ask for each pair
{i, j} of vertices whether or not there is an edge between i and j. Assume as in
Section 12.3.2 that the only questions about the graph that are allowed are of the
form "Is there an edge between vertices i and j?"
Problem 12.18. We saw that any correct comparison-based algorithm for finding
the median among n items must make at least 3(n − 1)/2 comparisons in the worst
case. Use a much simpler adversary argument to prove that when all the items are
distinct it is not possible to locate the median with certainty without looking at
each item. On the other hand, show by an example that this is not true if the items
are not distinct.
Problem 12.19. The obvious algorithm to find both the minimum and the maxi-
mum items in an array of n items takes 2n - 3 comparisons. Prove by an adver-
sary argument that any comparison-based algorithm for this problem requires at least ⌈3n/2⌉ − 2 comparisons in the worst case. (Optional: find an algorithm that
achieves this lower bound.)
Problem 12.20. Show how to use a factorization algorithm to split composite num-
bers and to decide on the primality of arbitrary numbers. (This is not to say that
the best primality test proceeds by factorization!)
Problem 12.21. Assume that large integers are represented in decimal in an array. For example, T[1..n] represents the integer Σᵢ₌₁ⁿ 10ⁿ⁻ⁱ T[i]. Give an algorithm to perform division by 4 of such integers in a time in O(n). Analyse the time taken by your algorithm. Generalize it to bases other than 10.
Problem 12.23. Prove that any strongly quadratic function is at least quadratic; see Section 12.4.1.

Problem 12.24. Continuing Problem 12.23, give an explicit example showing it was necessary to specify in the definition of a strongly quadratic function that it be eventually nondecreasing. Specifically, exhibit a function t : ℕ → ℝ≥0 such that t(an) ≥ a²t(n) for every positive integer a and every sufficiently large integer n, yet t(n) is not at least quadratic.

positive integer a and every sufficiently large integer n. Show that n² log n is strongly quadratic but not supra quadratic.
Problem 12.27. Let SQR, MLT and DIV be the problems consisting of squaring an integer of size n, of multiplying two integers of size n, and of determining the quotient when an integer of size 2n is divided by an integer of size n, respectively. Clearly, these problems are at least linear because any algorithm that solves them must take into account every bit of the operands involved. Assuming that the three problems are smooth and that MLT is strongly linear, prove that the three problems are linearly equivalent.
Hint: If 10ⁿ⁻¹ ≤ i ≤ 10ⁿ − 1, its pseudo-inverse i* is defined as 10²ⁿ⁻¹ ÷ i. For example, 36* = 27 and 27* = 37. (In practice we would probably not use base 10.) Let INV be the problem of computing the pseudo-inverse of an integer of size n.
Problem 12.38. Exhibit two very simple decision problems X and Y such that X ≤ᵀ Y, yet X ≰ₘ Y.

Problem 12.39. Exhibit two decision problems X and Y such that X ≤ᵀ Y, yet there are good reasons to believe that X ≰ₘ Y. To make this problem more interesting than Problem 12.38, your sets X and Y must be infinite and so must their complements.

Problem 12.40. Following Problem 12.39, exhibit two decision problems X and Y such that X ≤ᵀ Y, yet you can prove that X ≰ₘ Y. Again, X and Y must be infinite and so must their complements.

Problem 12.41. Consider two decision problems X and Y. Prove that if X ∈ NP and Y ≤ₘ X, then Y ∈ NP.

Problem 12.42. Continuing Problem 12.41, give convincing evidence that it is possible that X ∈ NP and Y ≤ᵀ X, yet Y ∉ NP even though Y is a decision problem.
Problem 12.43. Prove that the problem of optimal graph colouring is decision-
reducible. Specifically, consider the problems COLD, COLO and COLC introduced
in Definition 12.5.18. Prove that these three problems are polynomially Turing
equivalent.
Problem 12.44. Continuing Problem 12.43, prove that problem 3COL, also introduced in Definition 12.5.18, is polynomially Turing equivalent to COLD, COLO and COLC. You may assume the result required in Problem 12.43 and you may use the fact that 3COL is NP-complete.
Problem 12.45. Given an undirected graph G = (N, A), a clique is a set of nodes
such that there is an edge in the graph between any two nodes in the clique. (Some-
times a clique is defined as a maximal set having this property; we do not insist on
this condition.) There are three natural problems concerning cliques.
• CLQD: Given a graph G and an integer k, does there exist a clique of size k
in G?
• CLKO: Given a graph G, find the size of the largest clique in G.
• CLKC: Given a graph G, find a clique of maximum size in G.
Prove that the clique problem is decision-reducible: all three problems above are
polynomially Turing equivalent.
Problem 12.49. Prove that any two NP-complete problems are polynomially Turing equivalent.
Problem 12.50. Prove that HAMD, the problem of deciding if a graph is Hamiltonian, is NP-complete.
Problem 12.51. Prove that COLD is NP-complete; see Definition 12.5.18.
Problem 12.55. Prove that CLQD, the decision version of the clique problem introduced in Problem 12.45, is NP-complete.
Problem 12.56. You are given a collection x1, x2, ..., xn of n integers. Your task is to decide whether or not there exists a set X ⊆ {1, 2, ..., n} such that ∑_{i∈X} xi = ∑_{i∉X} xi. This is known as the PARTITION problem.
(a) Prove that this problem is NP-complete.
(b) Prove that it is decision-reducible: an oracle to solve the decision problem can
be used in polynomial time to find an appropriate X whenever it exists.
Problem 12.58. Give explicitly a nondeterministic algorithm that solves the prob-
lem of nonprimality in polynomial time.
Problem 12.59. Continuing Problems 12.36 and 12.58, give explicitly a nondeter-
ministic algorithm that solves the problem of primality in polynomial time. Anal-
yse the running time of your algorithm.
Problem 12.60. Prove that NP ⊆ PSPACE. For this note that a polynomial amount of storage is sufficient to enumerate all polynomially bounded potential certificates and to try each of them to see if at least one is adequate.
Problem 12.61. Prove that PSPACE = NPSPACE. For this, show that if s(n) ≥ lg n can be computed efficiently, any decision problem that can be solved by a nondeterministic algorithm in space s(n) can also be solved by a deterministic algorithm in space in O(s²(n)).
Problem 12.62. Prove that LOGSPACE ⊆ P. For this, note that any deterministic algorithm that finds itself twice in the same configuration loops forever; there are only 2^s different configurations when only s bits of storage are available; and 2^{k lg n} = n^k.
Problem 12.64. Prove that RP ⊆ NP. For this note that any sequence of probabilistic choices that leads an RP probabilistic algorithm to accept is convincing
evidence that the instance considered is a yes-instance.
Problem 12.65. Prove that ZPP = RP ∩ co-RP. Note the similarity with Problem 10.28.
Problem 12.66. Two graphs G = (V, A) and H = (W, B) are isomorphic if there is a correspondence between the vertices of G and those of H that preserves adjacency. Formally, G and H are isomorphic if there exists a bijective function σ : V → W such that (v1, v2) ∈ A if and only if (σ(v1), σ(v2)) ∈ B for all v1, v2 ∈ V. Even though no polynomial-time algorithm is known to decide whether or not two given graphs are isomorphic, this problem is obviously in NP since the function σ can serve as a certificate. However, it is believed that the problem of graph nonisomorphism is not in NP: what kind of succinct evidence could prove that two given graphs are not isomorphic? Nevertheless, your problem is to show that graph nonisomorphism belongs to the class IP.
Hint: If in fact G and H are isomorphic and if you present me with a graph K chosen
randomly among all graphs isomorphic to G, there is no way I can tell whether
you produced K from G or from H.
The notion of smooth problems and its application to linear reductions is orig-
inal to Brassard and Bratley (1988). The linear reduction from integer division to
integer multiplication is from Cook and Aanderaa (1969). For further informa-
tion on reductions among arithmetic problems, consult Aho, Hopcroft and Ullman
(1974). The reduction from the inversion of arbitrary nonsingular matrices to ma-
trix multiplication is from Bunch and Hopcroft (1974). If f and g are cost functions
as in Section 12.4.3, an algorithm asymptotically faster than the naive algorithm
for calculating fg is given in Fredman (1976). The linear reduction from cost func-
tion multiplication to the calculation of transitive reflexive closures is from Fischer
and Meyer (1971) and the converse reduction is from Furman (1970); together they
prove Theorem 12.4.12. In the case of cost functions whose range is restricted to {0, +∞}, Arlazarov, Dinic, Kronrod and Faradzev (1970) present an algorithm to calculate fg using a number of Boolean operations in O(n³/log n); Theorem 12.4.13 is from Fischer and Meyer (1971).
The theory of NP-completeness originated with two fundamental papers: Cook (1971) proves that SAT-CNF is NP-complete and Karp (1972) underlines the importance of this notion by presenting a large number of NP-complete problems. To be historically exact, the original statement from Cook (1971) is that X ≤_T TAUT-DNF for every X ∈ NP, where TAUT-DNF is concerned with tautologies in disjunctive normal form; however this problem is probably not NP-complete because it does not belong to NP unless NP = co-NP. A similar theory
was developed independently by Levin (1973), who used tiling problems instead
of tautologies. The idea that polynomial time is a fundamental concept came ear-
lier to Cobham (1964) and Edmonds (1965). The uncontested authority in matters of NP-completeness is Garey and Johnson (1979). A good introduction is also
provided by Hopcroft and Ullman (1979).
The term "decision-reducible" was suggested to the authors by Papadimitriou;
read Bellare and Goldwasser (1994) for more on the complexity of decision versus
search. Decision-reducibility should not be confused with the better-known no-
tion of self-reducibility according to which the solution of many problems can be
reduced to solving the same problem on smaller instances. See Naik, Ogiwara and
Selman (1993) for more detail on how decision-reducibility and self-reducibility
relate.
Probabilistic complexity classes were investigated by Gill (1977); the class RP
is from Adleman and Manders (1977). Interactive proofs and the class IP are from Goldwasser, Micali and Rackoff (1989); a similar idea was developed independently by Babai and Moran (1988). The first serious investigation of NC is from Pippenger
(1979). Quantum computing originated with Benioff (1982), Feynman (1982, 1986)
and Deutsch (1985); see also Deutsch and Jozsa (1992), Bernstein and Vazirani
(1993), Lloyd (1993), Berthiaume and Brassard (1994), Brassard (1994), Shor (1994)
and Simon (1994). An encyclopedic source of information on the menagerie of
classical complexity classes is Johnson (1990); see also Papadimitriou (1994).
Many of the problems concerning polynomial reductions in Section 12.7 are
solved in Karp (1972). Problem 12.34 is from Blum (1967). The fact that the set of
primes is in NP (Problems 12.36 and 12.59) is from Pratt (1975); more succinct
primality certificates are given by Pomerance (1987). Problems 12.48 and 12.63 are
from Brassard (1979). Part of the solution to Problem 12.52 is from Stockmeyer
(1973). Problem 12.61 is from Savitch (1970). Problem 12.66 is from Goldreich,
Micali and Wigderson (1991).
Several important computational complexity techniques have gone unmen-
tioned in this chapter. An algebraic approach to lower bounds is described in Aho,
Hopcroft and Ullman (1974), Borodin and Munro (1975) and Winograd (1980).
Although we do not know how to prove that there are no efficient algorithms for
NP-complete problems, there exist problems that are intrinsically difficult, as de-
scribed in Aho, Hopcroft and Ullman (1974). These can be solved in theory, but
it can be proved that no algorithm can solve them in practice when the instances
are of moderate size, even if it is allowed to take a time comparable to the age of
the Universe and as many bits of storage as there are elementary particles in the
known Universe; see Stockmeyer and Chandra (1979). There also exist problems
that cannot be solved by any algorithm, whatever the resources available; read
Turing (1936), Gardner and Bennett (1979) and Hopcroft and Ullman (1979) for a
discussion of these undecidable problems.
Chapter 13
Heuristic and Approximate Algorithms
already have both a red neighbour and a blue neighbour. In this case the result is
not optimal. The greedy algorithm is therefore no more than a heuristic that may
possibly, but not certainly, find an optimal solution.
Even though the heuristic may not find an optimal solution, we may hope
that in practice it will be able to find a "good" solution, not too different from the
optimum. Let us see whether this hope is justified.
First, it is not hard to show that for any graph G there is at least one ordering
of the nodes that allows the greedy algorithm to find an optimal solution. In other
words, whatever graph you are working on, there is always a chance you might be
lucky and find an optimal solution. To see this, consider any graph G and suppose
that an optimal solution requires k colours. Suppose further that by magic you
are given a way of colouring G using just k colours. Number these k colours
arbitrarily, and then number the nodes of G as follows. First number consecutively
all the nodes of G that are painted with colour 1 in the optimal solution. Continue
the sequence by numbering all those nodes that are painted with colour 2 in the
optimal solution, and so on. When you finish colour k all the nodes will have been
numbered. Between nodes of the same colour it doesn't matter which is numbered
first.
Now if you apply the greedy heuristic to the graph G considering the nodes
in order of the numbers you just assigned, it is sure to find an optimal solution.
This may not be the same as the solution you were given to start with, however.
For consider applying colour 1. You will certainly be able to paint all the nodes
that had colour 1 in the original solution; maybe you will be able to paint some
more as well. When you apply colour 2, some nodes that had this colour in the
original solution may already be painted with colour 1. However there are sure to
be one or more nodes that had colour 2 in the original solution that have not been
painted. (Problem 13.2 asks you to justify this remark.) You will be able to paint
all these with colour 2, and maybe some more nodes as well. (The presence of
extra nodes of colour 1 cannot make it impossible to paint an unpainted node with
colour 2.) Continuing in this way, when you finish colour 1 you will have painted
at least as many nodes as had colour 1 in the original optimal solution; when you
finish colour 2 you will have painted in all at least as many nodes as had colour 1
or colour 2 in the original solution; and so on, until when you finish colour k you
will have painted at least as many nodes as had colours 1, 2, .. ., k in the original
solution: in other words you will have painted all the nodes. Thus you have found
a solution using just k colours, which is optimal.
On the negative side, there are graphs that make this heuristic as bad as you
choose. More precisely, there are graphs that can be coloured with just k colours for
which, if you are unlucky, the heuristic will find a solution using c colours where
c / k is as large as you please. To see this, consider a graph with 2n nodes numbered
from 1 to 2n. When i is odd, node i is adjacent to all the even-numbered nodes
except node i + 1; when i is even, node i is adjacent to all the odd-numbered nodes
except node i - 1. Figure 13.2 shows such a graph for the case n = 4. This graph
is bipartite: the nodes can be divided into two sets N1 and N2 (the odd and even-
numbered nodes respectively, in this example) such that every edge joins a node
in N1 to a node in N2. Such a graph can always be coloured with just two colours.
For example, we might paint the odd-numbered nodes red and the even-numbered
nodes blue. The greedy heuristic will find this optimal solution if it tries to paint
nodes in the order 1, 3,...,2n - 1, 2,4,...,2n, for example. On the other hand, if it
looks at nodes in the natural order 1, 2,. . ., 2n - 1, 2n, then it is easy to see that it
finds a solution requiring n colours: nodes 1 and 2 can be painted with colour 1,
then nodes 3 and 4 must be painted with a new colour 2, nodes 5 and 6 must be
painted with a new colour 3, and so on. By choosing n sufficiently large, we can
make this solution as bad as we please.
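To make the heuristic concrete, here is a minimal Python sketch (the function names and the dictionary-of-sets graph representation are our own conventions). Run on the graph of Figure 13.2 with n = 4, it uses four colours in the natural order but only two colours in the lucky order.

    def greedy_colouring(nodes, adjacent):
        # Assign to each node, taken in the given order, the lowest-numbered
        # colour not already used by one of its neighbours.
        colour = {}
        for u in nodes:
            used = {colour[v] for v in adjacent[u] if v in colour}
            c = 1
            while c in used:
                c += 1
            colour[u] = c
        return colour

    def bad_bipartite(n):
        # The 2n-node bipartite graph of Figure 13.2: when i is odd, node i is
        # adjacent to every even-numbered node except i+1, and symmetrically.
        adj = {i: set() for i in range(1, 2 * n + 1)}
        for i in range(1, 2 * n + 1, 2):          # i odd
            for j in range(2, 2 * n + 1, 2):      # j even
                if j != i + 1:
                    adj[i].add(j)
                    adj[j].add(i)
        return adj

    adj = bad_bipartite(4)
    natural = greedy_colouring(range(1, 9), adj)             # uses 4 colours
    lucky = greedy_colouring([1, 3, 5, 7, 2, 4, 6, 8], adj)  # uses 2 colours
    print(max(natural.values()), max(lucky.values()))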
For example, suppose our problem concerns six towns with the following dis-
tance matrix.
    From \ To     2     3     4     5     6
        1         3    10    11     7    25
        2               8    12     9    26
        3                     9     4    20
        4                           5    15
        5                                18
In this instance the optimal tour has length 58. This can be achieved with a tour
that visits nodes 1, 2, 3, 6, 4, and 5 in that order before returning to the starting
point at node 1.
One obvious greedy heuristic consists of starting at an arbitrary node, and
then choosing at each step to visit the nearest remaining unvisited node. In the
example, if we start at node 1, then the nearest unvisited node is node 2. From
node 2 the nearest unvisited node is node 3, and so on. After visiting the last
node we come back to the starting point. The tour constructed in this way visits
nodes 1, 2, 3, 5, 4, 6 and 1, and has a total length of 60. Thus although the greedy
algorithm does not find an optimal solution in this case, it is not far wrong. With
other examples, however, it can be catastrophic; see Problem 13.4. Is it possible
to find an approximate algorithm that is guaranteed to find a reasonably good
solution? We are about to see that the answer is yes if we restrict the class of
instances considered (Section 13.2.1), and probably no otherwise (Section 13.3.2).
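Here is a minimal Python sketch of this nearest-neighbour heuristic (the data conventions are ours). Starting at node 1 on the six-town instance, it reproduces the tour 1, 2, 3, 5, 4, 6, 1 of length 60.

    # Symmetric distances of the six-town example (upper triangle from the text).
    upper = {(1, 2): 3, (1, 3): 10, (1, 4): 11, (1, 5): 7, (1, 6): 25,
             (2, 3): 8, (2, 4): 12, (2, 5): 9, (2, 6): 26,
             (3, 4): 9, (3, 5): 4, (3, 6): 20,
             (4, 5): 5, (4, 6): 15, (5, 6): 18}
    dist = {}
    for (i, j), d in upper.items():
        dist[i, j] = dist[j, i] = d

    def nearest_neighbour_tour(nodes, dist, start):
        # Repeatedly visit the nearest unvisited node, then return to the start.
        unvisited = set(nodes) - {start}
        tour = [start]
        while unvisited:
            nxt = min(unvisited, key=lambda v: dist[tour[-1], v])
            tour.append(nxt)
            unvisited.remove(nxt)
        tour.append(start)
        return tour, sum(dist[a, b] for a, b in zip(tour, tour[1:]))

    print(nearest_neighbour_tour(range(1, 7), dist, 1))
    # ([1, 2, 3, 5, 4, 6, 1], 60)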
visits each node of G exactly once but does not return to its starting point: this is
called a Hamiltonian path. Since the edge removed has a nonnegative length, the
length of this Hamiltonian path is at most h. However, the Hamiltonian path is
also a spanning tree for the graph G. If the length of a minimum spanning tree for
G is m(G), it follows that the Hamiltonian path must have length greater than or equal to m(G). Thus for any Hamiltonian cycle in G, h ≥ m(G).
Now suppose the distance matrix of G has the metric property. We illustrate
how the approximate algorithm works using the distance matrix given above. First
find a minimum spanning tree for G using either Kruskal's or Prim's algorithm;
see Sections 6.3.1 and 6.3.2. Figure 13.3 shows a minimum spanning tree for our
example. It is drawn with node 1 at the root since we are interested in tours that
start and finish at this node (but see Problem 13.8). The minimum spanning tree has
length 34. Now imagine you are an ant crawling round the outside of this figure.
You start at the root (node 1), crawl down the left-hand side of edge {1, 2} to node 2, round node 2, back up the right-hand side of edge {1, 2} to node 1, down the left-hand side of edge {1, 5} to node 5, past node 5 and down the left-hand side of edge {5, 3} to node 3, round node 3, and so on. Eventually you arrive back at the root after crawling up the right-hand sides of edges {4, 6}, {4, 5} and {5, 1}. The dotted
line in the figure illustrates your complete track.
Since the tree spans the underlying graph, you are sure to visit each node at least
once during your tour. In fact, as the example shows, you may visit some nodes
more than once: the complete tour in the figure visits nodes 1, 2, 1, 5, 3, 5, 4, 6, 4, 5 and 1 in that order. Call this tour t0. It is clear that during your tour you crawl along each edge in the spanning tree twice, once down the left-hand side and once up the right-hand side. If the length of the minimum spanning tree is m(G), therefore, the length of your tour, len(t0) say, is 2m(G).
The approximate algorithm now proceeds by cutting out duplicate nodes from
the tour. In the example, the first node revisited is node 1, which is revisited
between nodes 2 and 5. We shorten the tour by omitting this second visit to node 1.
which must be greater than or equal to zero by the metric property of the distance
matrix. Thus t1 is no longer than t0. In t1 node 5 is revisited between nodes 3 and 4. As before, we can omit the second visit to node 5, obtaining a new tour t2 that visits nodes 1, 2, 5, 3, 4, 6, 4, 5, 1. A similar argument to the previous case shows that len(t2) ≤ len(t1). Proceeding thus, we omit successively from the tour
any nodes that have been visited previously (except for the final return to node 1);
each new tour obtained is no longer than its predecessor. In the example, the final
tour obtained, that we shall call simply t, visits nodes 1, 2, 5, 3, 4, 6 and 1. Its
length is 65. In general, since len(t) ≤ len(t0) = 2m(G), and the length h of any Hamiltonian cycle in G is at least m(G), we have that len(t) ≤ 2m(G) ≤ 2h.
The length of the tour found by this approximate algorithm is therefore no more
than twice the length of the optimal tour. In the example, the length of the optimal
tour is at least 34 and at most 65.
Although the proof that the algorithm works required us to obtain t in a round-
about way, it is easy to see that in fact t is simply a list of the nodes in a minimum
spanning tree of G in preorder; see Section 9.2. Implementing the algorithm is
straightforward. For a graph with n nodes, finding a minimum spanning tree
takes a time in O(n²) using Prim's algorithm; see Section 6.3.2. Exploring this tree in preorder takes a time in O(n). The approximate algorithm therefore takes a time in Θ(n²).
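The whole approximate algorithm fits in a few lines of Python (a sketch under our own conventions: distances are kept in a symmetric dictionary indexed by pairs, as in the previous sketch; children of a tree node are visited in increasing order; and Prim's algorithm is the simple quadratic version). On the six-town example it returns the tour 1, 2, 5, 3, 4, 6, 1 of length 65, as above.

    def mst_prim(nodes, dist, root):
        # Simple O(n^2) version of Prim's algorithm; returns the tree as a
        # dictionary mapping each node to the list of its children.
        nodes = list(nodes)
        in_tree = {root}
        children = {u: [] for u in nodes}
        while len(in_tree) < len(nodes):
            u, v = min(((u, v) for u in in_tree for v in nodes if v not in in_tree),
                       key=lambda e: dist[e])
            children[u].append(v)
            in_tree.add(v)
        return children

    def approx_metric_tsp(nodes, dist, root):
        # List the nodes of a minimum spanning tree in preorder, then close the tour.
        children = mst_prim(nodes, dist, root)
        tour = []
        def preorder(u):
            tour.append(u)
            for v in sorted(children[u]):
                preorder(v)
        preorder(root)
        tour.append(root)
        return tour, sum(dist[a, b] for a, b in zip(tour, tour[1:]))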
Using a more sophisticated approximate algorithm, we can guarantee to get
within a factor 3/2 of the optimal solution; see Problem 13.10. There are many
variations on the theme of the travelling salesperson problem. For example, the
graph may be directed, so that distance(i, j) is not necessarily equal to distance(j, i),
or certain of its edges may be missing. We shall not study these variations here,
beyond noting that for the case of a directed graph no heuristic with a guaranteed
worst-case performance is known even in the metric case.
In Section 8.4 we tackled the more challenging problem of finding an optimal solu-
tion when splitting objects is not allowed: we may take an object or leave it behind,
but we may not take a fraction of an object. In this case, we saw that the greedy
algorithm can be suboptimal, which is not surprising since this version of the problem is NP-hard. Nevertheless, we saw a dynamic programming algorithm that finds an optimal solution in a time in O(nW), which may be prohibitive when W
is large. Finally, we saw a third variation on the theme in Sections 9.6.1 and 9.7.2.
This will not concern us here, but look at Problem 13.12.
Although suboptimal when splitting objects is not allowed, the greedy algo-
rithm is so efficient in terms of computing time that it may be useful if its relative
error is guaranteed to be within control. For definiteness, we state this algorithm
explicitly.
but for which the capacity of the knapsack is increased to W' = ∑_{i=1}^{t} wi. The proof of Theorem 6.5.1 applies mutatis mutandis to show that for the modified instance it is optimal to pack the first t objects. The solution of this instance is therefore opt' = ∑_{i=1}^{t} vi. But the optimal solution of the original instance cannot be larger than that of the modified instance since more value cannot be packed into a smaller knapsack when the same objects are available: opt ≤ opt'. It remains to note that greedy-knap(w, v, W) ≥ ∑_{i=1}^{t−1} vi because the greedy algorithm will put the first t − 1 objects into the knapsack before failing to add the t-th. (It may put in a few more as well.) Moreover, biggest ≥ vt, where biggest is the largest vi as calculated in approx-knap. Putting it all together, and using the fact that max(x, y) ≥ (x + y)/2, we finally obtain

    approx-knap(w, v, W) = max(greedy-knap(w, v, W), biggest)
                         ≥ (∑_{i=1}^{t−1} vi + vt)/2 = opt'/2 ≥ opt/2,
which proves that the approximate solution is within a factor 2 of the optimum,
as desired. In Section 13.5, we shall see that still better approximations can be
obtained efficiently for the knapsack problem.
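A plausible rendering of such an algorithm in Python is sketched below. The details — greedy by decreasing value per unit of weight, and taking the better of the greedy load and the single most valuable object that fits on its own — are assumptions consistent with the analysis above, not a transcription of the algorithm as originally stated.

    def greedy_knap(w, v, W):
        # Greedy by decreasing value per unit of weight; returns the value packed.
        order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
        value = 0
        for i in order:
            if w[i] <= W:
                W -= w[i]
                value += v[i]
        return value

    def approx_knap(w, v, W):
        # Keep the better of the greedy load and the single most valuable object
        # that fits on its own; this is what the factor-2 guarantee relies on.
        biggest = max((v[i] for i in range(len(w)) if w[i] <= W), default=0)
        return max(greedy_knap(w, v, W), biggest)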
Let the optimal solution to the instance with two bins be s, that is, we can load
s objects into the two bins. Suppose first, however, that instead of two bins with
capacity W each, we have just one bin with capacity 2W. Construct the optimal
solution to this new instance by putting objects into the bin in numerical order.
Suppose t objects can be loaded in this way. Clearly s < t since splitting one large
bin into two can never allow us to include more objects than before. In the optimal
solution for the instance with one large bin, let j be the smallest index such that ∑_{i=1}^{j} wi > W. The index j is well defined unless ∑_{i=1}^{n} wi ≤ W, in which case the trivial optimal solution is to put all n objects in the first bin. Using this, and the fact that ∑_{i=1}^{t} wi ≤ 2W, we obtain ∑_{i=j+1}^{t} wi < W. The situation is illustrated in
Figure 13.4.
Returning to the instance with two bins, it is therefore possible to load objects 1 to
j - 1 into the first bin and objects j + 1 to t into the second bin. However because
the objects are numbered in order of nondecreasing weight, we have
    ∑_{i=j}^{t−1} wi ≤ ∑_{i=j+1}^{t} wi,
so the greedy approximate algorithm will put objects 1 to j - 1 into the first bin,
and objects j to t - 1 into the second. Object t may possibly fit into the second bin,
too. The solution found by the greedy algorithm is therefore at least t - 1. Since
s < t, this solution is in error by at most one object.
If the n objects are not initially sorted then a time in 0 (n log n) is required
to sort them. Thereafter the greedy approximate algorithm takes a time in 0(n).
The approximate algorithm can be extended in the obvious way to the case where
k bins are available. In this case the solution it finds is never in error by more than
k - 1 items. This can be proved by an easy extension of the argument above.
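A Python sketch of this greedy procedure for k bins (the function name and the small example at the end are ours):

    def greedy_pack_count(weights, W, k):
        # Pack as many objects as possible into k bins of capacity W: take the
        # objects in nondecreasing order of weight and fill bin 1, then bin 2,
        # and so on.  Returns the number of objects packed.
        packed = 0
        remaining = sorted(weights)
        for _ in range(k):
            room = W
            while remaining and remaining[0] <= room:
                room -= remaining.pop(0)
                packed += 1
        return packed

    print(greedy_pack_count([3, 7, 2, 5, 4], 10, 2))   # packs 4 objects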
The second of the two related problems asks, given n objects, how many bins
are needed to store them all. It is tempting to try the obvious variant of the ap-
proximate algorithm described above: take the objects in order of nondecreasing
weight, put as many of them as possible into the first bin, then into the second bin,
and so on, and count how many bins are necessary to store all the n objects.
Let b be the optimal number of bins required, and let s be the solution found
by this approximate algorithm. This time it is not true that the absolute error s - b
is bounded by a constant; however it is true that s is less than a constant multiple
Thus the objects whose weight is less than W are packed two by two into k/2 bins,
while the remaining k objects occupy one bin each, for a total of 3k/2 bins. In this
case the difference between the optimal solution b = k and the solution s = 3k/2
found by the approximate algorithm can be made as large as we please by choosing
k large enough. On the other hand, for this family of instances the ratio s/b = 3/2
is constant.
In general it can be shown that for this approximate greedy algorithm s ≤ 2 + (17/10) b.
A better approximate algorithm is obtained if objects are considered in order of nonincreasing weight. Now we take each object in turn and try to add it to bin 1; if it will not fit, we try to add it to bin 2, and so on; if it will not fit in any of the bins used so far, we start a new bin with the next highest number. Observe that for the instance discussed in the previous paragraph, this algorithm finds the optimal packing. In general for this approximate greedy algorithm s ≤ 4 + (11/9) b. The proofs of the bounds given in this paragraph are not simple.
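In Python, this first-fit decreasing rule can be sketched as follows (our own conventions):

    def first_fit_decreasing(weights, W):
        # Consider objects in nonincreasing order of weight; put each one into
        # the first bin with enough room, opening a new bin when none has room.
        bins = []                      # bins[i] = remaining room in bin i
        for w in sorted(weights, reverse=True):
            for i, room in enumerate(bins):
                if w <= room:
                    bins[i] -= w
                    break
            else:
                bins.append(W - w)     # start a new bin containing this object
        return len(bins)

    print(first_fit_decreasing([5, 5, 4, 4, 3, 3], 8))   # 3 bins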
efficient approximate algorithm can guarantee a fixed upper bound on the absolute
error of its solutions unless P = NP; see Section 13.3.1. A spectacular example of
how the same optimization problem may give rise to two quite different approxi-
mation problems is presented in Section 13.4.
Consider an optimization problem and let opt (X) denote the value of an optimal
solution to instance X. For example, if we consider the graph colouring problem
of Section 13.1.1 and if G is a graph, opt(G) denotes the chromatic number of G:
the smallest number of colours sufficient to colour the vertices of G so that no two
adjacent vertices are assigned the same colour. On instance X, an approximate algorithm will find some value, denoted õpt(X) to distinguish it from the true optimum opt(X), that may be suboptimal but is required to be feasible. For example, if graph G can be coloured with five colours but no less, then opt(G) = 5 and an algorithm that returns õpt(G) = 7 is suboptimal yet acceptable, because it is possible to colour G with seven colours provided there are at least this many vertices. An algorithm that returned õpt(G) = 4, on the other hand, would be incorrect because G cannot be coloured with four colours. In most cases, requiring feasibility corresponds to requiring õpt(X) ≥ opt(X) for minimization problems and õpt(X) ≤ opt(X) for maximization problems. In practice, we may want the optimal or approximate solution itself rather than merely its value or cost: we may want an actual assignment of colours to the nodes of G using no more than õpt(G) colours.
However, these problems are often equivalent by virtue of decision-reducibility:
see Problem 12.43.
Let c and ε be positive constants. To each optimization problem P, there correspond absolute and relative approximation problems. Assume for simplicity that all feasible solutions to instances of problem P are strictly positive. The c-absolute approximation problem, denoted c-abs-P, is the problem of finding, for any instance X, a feasible solution õpt(X) whose absolute error compared to the optimal solution opt(X) is at most c: |õpt(X) − opt(X)| ≤ c.
former, in the sense that the c-absolute approximate metric travelling salesperson problem is just as hard as the exact problem no matter how large we are willing to choose c. The unconstrained travelling salesperson problem is harder still to approximate: even the ε-relative approximate travelling salesperson problem is as hard as the exact problem no matter how large we choose ε. To prove these results, we use the notion of polynomial reductions seen in Section 12.5.2 to show how the exact problem could be solved efficiently if only we knew how to solve the corresponding approximation problem efficiently.
Proof Let c be a positive constant and consider an arbitrary algorithm to solve c-abs-MTSP, the c-absolute approximate metric travelling salesperson problem. Consider an instance of MTSP represented by a symmetric n × n integer matrix M that respects the triangle inequality. Let opt(M) be the length of an optimal tour on this instance. Construct a new instance M' by multiplying each entry of M by ⌊c⌋ + 1. It is clear that M' also satisfies the triangle inequality and that it is symmetric; hence it defines a legitimate instance of MTSP. It is equally clear that any optimal tour of the cities according to distance matrix M is also optimal according to M', and vice versa, except that the length of the tour is ⌊c⌋ + 1 times greater according to M'. Therefore, opt(M') = (⌊c⌋ + 1) opt(M). Consider now the result õpt(M') of running our assumed c-absolute approximate algorithm on M'. By definition, |õpt(M') − opt(M')| ≤ c.
There are many other problems for which it is NP-hard to find c-absolute approximate solutions, no matter how large c is allowed to be. Among these are the knapsack problem and the maximum clique problem; see Problems 13.15 and 13.16. This is the case for all problems that allow "scaling up": if any instance can be transformed efficiently into another whose optimal solution is ⌊c⌋ + 1 times larger, and if the optimal solution is a positive integer, then it is just as hard to find c-absolute approximate solutions as to find optimal solutions.
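In code, such a scaling argument is a thin wrapper around the assumed approximation algorithm. In the sketch below, scale_instance and solve_c_absolute are hypothetical parameters standing for the entry-wise scaling of an instance and for the assumed c-absolute algorithm of a minimization problem with positive integer optima; the names are ours.

    import math

    def exact_via_c_absolute(instance, c, scale_instance, solve_c_absolute):
        # scale_instance(instance, m) must multiply every cost by m, and hence
        # the optimal value by exactly m.
        m = math.floor(c) + 1
        approx = solve_c_absolute(scale_instance(instance, m), c)
        # Feasibility gives m*opt <= approx, and the c-absolute guarantee gives
        # approx <= m*opt + c < m*(opt + 1), so the floor recovers opt exactly.
        return approx // m

For MTSP, scale_instance simply multiplies every entry of the distance matrix by m, exactly as in the proof above.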
Proof Let ε be a positive constant and consider an arbitrary algorithm to solve ε-rel-TSP, the ε-relative approximate travelling salesperson problem. Consider an instance of the Hamiltonian decision problem HAMD given by a graph G = (N, A). Let the number of nodes in G be n and assume without loss of generality that N = {1, 2, ..., n}. Construct an instance M of the travelling salesperson problem as follows.

    M[i, j] = 1            if {i, j} ∈ A
              2 + ⌈nε⌉     otherwise
Let opt(M) denote an optimal solution to the travelling salesperson problem M and let õpt(M) denote the approximation returned by our assumed ε-relative approximate algorithm. By definition, õpt(M) ≤ (1 + ε) opt(M).
• If there is a Hamiltonian cycle in G, this cycle defines a tour for the travelling salesperson problem that uses only edges of length 1. Hence there is a solution of length n, which is clearly optimal. In this case,

    õpt(M) ≤ (1 + ε) opt(M) = (1 + ε) n.

• If there are no Hamiltonian cycles in G, any tour for the travelling salesperson must use at least one edge of length 2 + ⌈nε⌉ in addition to n − 1 edges of length at least 1 each, for a total length of at least 2 + ⌈nε⌉ + (n − 1) > (1 + ε) n. Therefore,

    õpt(M) ≥ opt(M) > (1 + ε) n.

Thus G is Hamiltonian if and only if õpt(M) ≤ (1 + ε) n, so the assumed ε-relative approximate algorithm could be used to solve HAMD in polynomial time.
Note that the instance of the travelling salesperson problem constructed in the
above proof is not metric. If there is an edge in G between vertices i and k and
between vertices k and j but not between vertices i and j, and if n > 1/ε,
Proof Recall from just before Theorem 12.5.9 that TSPD denotes the travelling salesperson decision problem. We know from Problem 12.47 that the travelling salesperson problem is decision-reducible in the sense that TSP ≤_T TSPD. On the other hand, we know from Problem 12.50 that the Hamiltonian cycle problem is NP-complete. By definition of NP-completeness and from the obvious fact that TSPD belongs to NP, it follows that TSPD ≤_T HAMD. We have just shown that HAMD ≤_T ε-rel-TSP. We reach the desired conclusion by transitivity of polynomial reductions.
There are many other problems for which it is NP-hard to find ε-relative approximate solutions, no matter how large ε is allowed to be (subject to ε < 1 in the case of maximization problems). Among these are the minimum cluster problem (see Section 13.4 below), the maximum clique problem and the problem of finding the chromatic number of a graph. In fact the latter two problems are known to
be even harder than this to approximate: assuming P ≠ NP, no polynomial-time algorithm can find a clique in an n-node graph that is guaranteed to be within a factor n^δ of the optimal for δ < 1/6, and the same holds for optimal graph colouring with δ < 1/14. On the other hand, there are optimization problems such as the metric
Although it is equivalent to maximize the total cost of the cross edges or to mini-
mize the total cost of the internal edges, consider the following two optimization
problems.
• MAX-CUT is the problem of maximizing the total cost of the cross edges over
all partitions of N.
• MIN-CLUSTER is the problem of minimizing the total cost of the internal edges
over all partitions of N.
Proof Let G = (N, A) be a graph and c : A → ℝ be a cost function. Consider the following greedy approximate algorithm. Initially, N1, N2 and N3 are empty; they will form a partition of N by the time the algorithm terminates. Consider each node of G in turn. Add it to the cluster that causes the smallest increase in the cost of the internal edges, and thus the largest increase in the cost of the cross edges. On the graph in Figure 13.5, the algorithm first puts nodes a, b and c into N1, N2 and N3,
respectively. It then considers node d. Adding it to cluster N1, N2 or N3 would increase the cost of internal edges by 3, 2 or 3, respectively: thus the algorithm adds it to cluster N2, which becomes {b, d}. Finally, adding node e to cluster N1, N2 or N3 would increase the cost of internal edges by 6, 7 or 8, respectively: thus the algorithm adds it to cluster N1, which becomes {a, e}. The solution returned by the algorithm is N1 = {a, e}, N2 = {b, d} and N3 = {c}, whose cost in terms of
cross edges is 27, better than 87% of the optimal solution 31.
Before we prove that the cost of cross edges returned by this algorithm is
never less than two-thirds of the optimum, it is useful to give explicit code for this
approximate algorithm. Here, sum accumulates the total cost of all edges in G and
clstr accumulates the cost of all internal edges in the approximate solution that is
chosen. The desired cost of all cross edges is given by the difference between sum
and clstr, which is computed at the end of the algorithm.
    sum ← 0; clstr ← 0
    N1, N2, N3 ← ∅
    for each u ∈ N do
        mincost ← +∞
        for i ← 1 to 3 do
            cost ← 0
            for each v ∈ Ni do
                if {u, v} ∈ A then cost ← cost + c({u, v})
            sum ← sum + cost
            if cost < mincost then mincost ← cost
                                   k ← i
        Nk ← Nk ∪ {u}
        clstr ← clstr + mincost
    return sum − clstr
For each node u, let z1, z2 and z3 be the values accumulated in cost when i = 1, 2 and 3, respectively; this is the increase in the total cost of the internal edges that would be incurred by adding u to the corresponding cluster. The algorithm adds u to the cluster Nk for which zk is smallest and adds that value zk to clstr. All three z's are added to sum. Therefore, each time round the outer loop, the value of sum
is increased by at least three times the increase in clstr. Since both sum and clstr are initialized to zero, it follows that sum ≥ 3 × clstr at the end of the algorithm. The total cost of the cross edges in the solution discovered by the algorithm is thus sum − clstr ≥ sum − sum/3 = (2/3) sum. But the total cost opt(G, c) of the cross edges in an optimal solution cannot be smaller than the cost of the cross edges in the approximate solution found by the algorithm, and it cannot be greater than the total cost sum of all the edges in the graph. It follows that sum − clstr ≥ (2/3) sum ≥ (2/3) opt(G, c).
Proof We shall in fact prove that 3COL ≤_T ε-rel-MIN-CLUSTER, where 3COL is the problem encountered in Section 12.5.4 of deciding if a given graph can be painted with three colours. The desired result follows along the lines of Corollary 13.3.3 from the fact that 3COL is NP-complete (Theorem 12.5.19) and MIN-CLUSTER is decision-reducible. We leave the details to the reader.
To prove that 3COL ≤_T ε-rel-MIN-CLUSTER, let ε be a positive constant and consider an arbitrary algorithm to solve ε-rel-MIN-CLUSTER. Let G = (N, A) be
a graph; we would like to know if it can be coloured with three colours. Consider
the complete graph K on node set N; in this graph there is an edge between u and
v for each distinct u and v in N. Define the following cost function on the edges of K.

    c({u, v}) = 1             if {u, v} ∉ A
                (1 + ε) n²    if {u, v} ∈ A
Let opt(K, c) denote an optimal solution to the minimum cluster problem on graph K with cost function c, and let õpt(K, c) denote the approximation returned by our assumed ε-relative approximate algorithm. By definition, õpt(K, c) ≤ (1 + ε) opt(K, c).
In contrast, there are problems for which arbitrarily good ε-relative approximations can be obtained. An approximation scheme is an algorithm that takes as input, in addition to the instance itself, an upper bound ε on the acceptable relative error. Even though it is natural to expect the algorithm to work harder when ε is smaller, it is best if the tolerance can be reduced at a reasonable cost in computing time. We say that the approximation scheme is fully polynomial if a time in O(p(n, 1/ε)) is sufficient in the worst case to find ε-relative approximations for instances of size n, where p is some fixed polynomial in two variables.
There are many NP-hard problems for which fully polynomial approximation schemes cannot exist unless P = NP and there are others for which they are known to exist. Here we give one example of each situation.
Clearly, algorithm BP runs in a time polynomial in the size of the instance since 1/ε = n + 1 in this case. By definition of ε-relative approximations, the solution õpt returned by BPapprox(w, W, 1/(n + 1)) must be such that

    opt ≤ õpt ≤ (1 + 1/(n + 1)) opt = opt + opt/(n + 1).

But opt/(n + 1) < 1 since it is surely possible to store the objects in n separate bins. Therefore,

    opt ≤ õpt < opt + 1,

and so opt = ⌊õpt⌋ since opt is an integer. This completes the proof that BP finds an exact solution in polynomial time in the worst case. This is impossible if P ≠ NP since the bin packing problem is NP-hard.
and the solution is then found in V[n, W]. (Return to Section 8.4 for details.)
A similar approach uses a table U[1..n, 0..M], where M is an upper bound on the optimal value that we can carry in the knapsack, again with one row for each object available, but now with one column for each possible total value that can fit inside the knapsack. We can obtain the bound M by running our 1/2-relative approximate algorithm and multiplying its answer by 2. (The value opt' defined in Section 13.2.2 when we proved that our approximate algorithm returns a value that is at least half the optimum is slightly better and easier to compute.) This time, U[i, j] gives the minimum weight of the objects we can transport to reap exactly value j if we only take objects among the first i. This table is built one entry at a time using the rule

    U[i, j] = min(U[i − 1, j], U[i − 1, j − vi] + wi).
Out-of-bound values are taken to be +∞ (or any value larger than W) with the exception of U[0, 0], which is taken to be 0. Intuitively, this rule says that to reap value j with the first i objects, we may either not use object i at all, in which case the load weighs at least U[i − 1, j], or we may add object i to a collection of objects among the first i − 1 whose total value was j − vi and thus whose total weight was at least U[i − 1, j − vi] before the addition of object i. Once the table is constructed, the solution is given by the largest j such that U[n, j] ≤ W. This approach takes a time in O(n log n + nM) since a time in O(n log n) is needed to calculate the upper bound M, each of the nM entries of U takes constant time to fill in, and a time in O(M) is spent at the end to scan the n-th row of U. Table U can also be used to determine not only the value of the optimal load, but also its composition, much as in Section 8.4.
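Here is a Python sketch of this table-filling approach (the function name and data conventions are ours); it returns both the optimal value and the composition of the load.

    import math

    def knapsack_by_value(w, v, W, M):
        # U[i][j] = minimum weight needed to reap exactly value j using only
        # objects 1..i; row 0 encodes the out-of-bound convention U[0, 0] = 0.
        n = len(w)
        INF = math.inf
        U = [[INF] * (M + 1) for _ in range(n + 1)]
        U[0][0] = 0
        for i in range(1, n + 1):
            for j in range(M + 1):
                U[i][j] = U[i - 1][j]                              # skip object i
                if j >= v[i - 1] and U[i - 1][j - v[i - 1]] + w[i - 1] < U[i][j]:
                    U[i][j] = U[i - 1][j - v[i - 1]] + w[i - 1]    # add object i
        best = max(j for j in range(M + 1) if U[n][j] <= W)
        # Walk the table backwards to recover the composition of the load.
        chosen, j = [], best
        for i in range(n, 0, -1):
            if U[i][j] != U[i - 1][j]:        # object i was used to reach value j
                chosen.append(i)              # objects numbered 1..n as in the text
                j -= v[i - 1]
        return best, sorted(chosen)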
You may wonder why anyone would use this approach, which is slightly more
complicated and perhaps less natural than the dynamic programming algorithm
of Section 8.4. The point is that this new algorithm is preferable when the values
are smaller than the weights, since the time it requires depends on the total value of
the optimal solution rather than the total weight capacity of the knapsack. As we
shall see, we can force the values-but not the weights-to be small provided we
are satisfied with an approximate solution.
Similarly,

    opt = ∑_{i∈X*} vi ≥ ∑_{i∈X'} vi = õpt                          (13.1)

since X* is optimal for the original instance and X' is feasible. By definition of v'_i we have vi ≥ k v'_i > vi − k. Putting it all together,

    õpt = ∑_{i∈X'} vi ≥ k ∑_{i∈X'} v'_i ≥ k ∑_{i∈X*} v'_i
        > ∑_{i∈X*} (vi − k) ≥ opt − kn.                            (13.2)

Equations 13.1 and 13.2 say that õpt is an ε-relative approximation to opt.
It may seem at first that a proper choice of k cannot be made until we know
the value of the optimal solution, but this is not the case thanks to our 1/2-relative
approximate algorithm approx-knap. To summarize, the approximation scheme
proceeds as follows:
It remains to see how long it takes to compute this approximation. The first step takes a time in O(n log n) to compute the 1/2-relative approximation. If ε < 2n/A, it takes a time in O(nM) to obtain the exact solution by dynamic programming. This is in O(n²/ε) since nM = 2nA ≤ 4n²/ε. If ε ≥ 2n/A, the time taken by the other steps is analysed as follows. The second step is negligible. The third step takes a time in O(nM'), where M' = ⌈2A/k⌉ is the upper bound on the optimal solution of the modified instance used by the dynamic programming algorithm. This is also in O(n²/ε) since k ≥ εA/(2n). The last step is negligible. In conclusion, this approximation scheme takes a time in O(n²/ε), which is indeed fully polynomial. Another scheme is known that can find an ε-relative approximate solution to the knapsack problem in a time in O(n log n + n/ε²); yet another takes a time in O(n log(1/ε) + 1/ε⁴).
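The step-by-step statement of the scheme is not reproduced above, so the Python sketch below is only one plausible rendering: the choice k = 1 + ⌊εA/(2n)⌋, where A is the value returned by the 1/2-relative greedy algorithm, and the reuse of the two earlier sketches (passed in as parameters) are our assumptions, chosen to be consistent with the analysis above.

    import math

    def scaling_scheme(w, v, W, eps, approx_knap, knapsack_by_value):
        # approx_knap is the 1/2-relative greedy algorithm of Section 13.2.2 and
        # knapsack_by_value the dynamic programming algorithm of Section 13.5.2.
        n = len(w)
        A = approx_knap(w, v, W)               # opt/2 <= A <= opt
        if A == 0:
            return 0, []                       # nothing fits
        if eps < 2 * n / A:
            # Small eps: solve the original instance exactly; M = 2A >= opt.
            return knapsack_by_value(w, v, W, 2 * A)
        k = 1 + math.floor(eps * A / (2 * n))
        v_scaled = [vi // k for vi in v]       # v'_i = floor(v_i / k)
        M_scaled = math.ceil(2 * A / k)        # value bound for the scaled instance
        _, chosen = knapsack_by_value(w, v_scaled, W, M_scaled)
        # Evaluate the chosen set X' with the ORIGINAL values: this is the
        # eps-relative approximation denoted õpt in the text.
        return sum(v[i - 1] for i in chosen), chosen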
13.6 Problems
Problem 13.1. Give an efficient algorithm to determine whether a graph can be
painted with just two colours, and if so how to do it.
Problem 13.2. In Section 13.1.1, while proving that the greedy heuristic can always
find an optimal solution, we remarked that "When you apply colour 2, some nodes
that had this colour in the original solution may already have been painted with
colour 1. However there are sure to be one or more nodes that had colour 2 in the
original solution that have not been painted." Justify this remark.
Problem 13.3. Show that any planar graph (one that you can draw on a sheet of
paper in such a way that none of the edges cross) can be painted using at most four
colours.
Problem 13.4. Show that the greedy heuristic from Section 13.1.2 can be arbitrarily bad: as a function of a parameter α > 1, construct an explicit instance of the travelling salesperson problem on which the heuristic finds a tour at least α times longer than the optimum.
Problem 13.6. Consider the complete undirected graph with 8 nodes and the
following distance matrix.
    From \ To     2     3     4     5     6     7     8
        1        41    19    99    83   108   120   140
        2              35    88    96   121   137   151
        3                    80    70    95   108   127
        4                          53    66    87    86
        5                                26    42    57
        6                                      22    34
        7                                            27
This distance matrix has the metric property. Use the algorithm of Section 13.1.2
to obtain an approximate solution to the travelling salesperson problem for this
graph. You can do this without a computer. If you do have a machine available,
use an exhaustive search to obtain the exact solution.
Problem 13.7. The approximate algorithm of Section 13.1.2 begins by finding a
minimum spanning tree of the graph. Is there a good reason for preferring either
Kruskal's algorithm or Prim's algorithm in this context?
Problem 13.8. To illustrate the approximate algorithm of Section 13.1.2, after find-
ing a minimum spanning tree of the graph we arbitrarily chose node 1 as the root of
the tree. However any other node would serve as well. Using the example of Sec-
tion 13.1.2, explore what happens if you choose another node as the root. Does the
left-to-right order of the branches make a difference to the approximation found?
Problem 13.9. In some instances it is possible to find a shorter optimal tour for the
travelling salesperson if he is allowed to pass through the same town more than
once. Give an explicit example illustrating this. On the other hand, show that if
the distance matrix has the metric property then it is never advantageous to pass
through the same town more than once.
Problem 13.10. We saw in Section 13.1.2 an efficient approximate algorithm for the
metric travelling salesperson problem that is guaranteed to find a solution within
a factor 2 of the optimum. Give another efficient algorithm that is guaranteed to
find a solution within a factor 3/2 of the optimum.
Problem 13.12. We saw in Section 13.2.2 that the greedy algorithm of Section 8.4
can be arbitrarily bad at solving the knapsack problem. Show that this is not the case
when we consider the variation on the knapsack theme studied in Sections 9.6.1
and 9.7.2: the greedy algorithm is guaranteed to return a solution within a factor 2
of the optimum if we can use as many copies as we wish of each available object.
Problem 13.13. You are given 9 objects whose weights are respectively 2,2,2, 3,3,
4, 5, 6 and 9, and a number of bins of capacity 12. Use the approximate algorithms
of Section 13.2.3 to estimate (a) what is the most objects you can pack into 2 bins,
and (b) how many bins are needed to pack all 9 objects. Find the optimal solutions
and compare these to your approximate answers.
Problem 13.15. For every constant c, prove that it is just as hard to find a c-abso-
lute approximate solution to the knapsack problem as to find an exact solution.
Problem 13.16. Prove that finding a c-absolute approximate solution to the maximum clique problem is NP-hard for every positive constant c. For this, prove that CLKO ≤_T c-abs-CLKO for all c, where CLKO was defined in Problem 12.45. You may use the results of Problems 12.45 and 12.55.
Problem 13.17. Continuing Problem 13.16, prove that finding an ε-relative approximate solution to the maximum clique problem is NP-hard for every positive constant ε smaller than 1. Be warned that this is very difficult.
Problem 13.18. Failing Problem 13.17, prove that for any positive constants ε and δ smaller than 1, finding an ε-relative approximation to the maximum clique problem is polynomially equivalent to finding a δ-relative approximation. In symbols, prove that ε-rel-CLKO ≡_T δ-rel-CLKO.
Problem 13.19. Following Problem 13.3, prove that it is easy to compute 1-abso-
lute as well as 1/3-relative approximations to the problem of painting a planar
graph with the minimum number of colours. On the other hand, prove that better
approximations would be as hard to compute as the optimal solution.
Problem 13.20. Prove that finding an ε-relative approximate solution to the bin packing problem of Section 13.2.3 is NP-hard for any ε < 1/2. You may use without proof the fact that PARTITION is NP-complete; see Problem 12.56.
Problem 13.22. Continuing Problem 13.21, prove that it is as hard to find an ε-relative approximation to the chromatic number of general graphs as it is to find the exact solution for all ε > 0.
Problem 13.23. Prove that algorithm MAX-CUT-approx from the proof of Theo-
rem 13.4.1, which is guaranteed to find a 1/3-relative approximation to the MAX-CUT
problem, can yield arbitrarily bad relative approximations to the MIN-CLUSTER
problem. For this, show how to assign positive integer costs to the edges of the complete graph on four nodes as functions of an arbitrary α > 1 so that the approximate solution found by the algorithm for the MIN-CLUSTER problem is at least α times greater than the optimum.
Problem 13.24. The maximum cut and minimum cluster problems can be gener-
alized in the obvious way to the case where we wish to create k clusters for any
constant k > 2. This gives rise to the problems k-MAX-CUT and k-MIN-CLUSTER.
Although we introduced these problems with k = 3 to make the proof of Theo-
rem 13.4.2 easier, it is more usual to define MAX-CUT and MIN-CLUSTER with k = 2.
(a) Give an efficient 1/k-relative approximate algorithm for k-MAX-CUT for all k ≥ 2.
(b) For all k ≥ 3 and ε > 0, prove that finding an ε-relative approximate solution to k-MIN-CLUSTER is as hard as finding an exact solution.
Problem 13.26. Give an elementary proof that there cannot exist a fully polynomial approximation scheme for the maximum clique problem, unless P = NP. This is obvious in the light of Problem 13.17, but no elementary proofs are known even for the existence of a positive ε < 1 for which finding an ε-relative approximate solution to the maximum clique problem is NP-hard. You may use the fact that the maximum clique problem is NP-hard.
Problem 13.27. Consider the following instance of the knapsack problem. There
are four objects whose weights are respectively 2,5,6 and 7 units, and whose values
are 1,3,4 and 5. We can carry a maximum load of 11 units of weight.
(a) Apply the 1/2-relative approximate greedy algorithm from Section 13.2.2 to this instance. Deduce an upper bound on the value of the optimal solution. (There are two ways to obtain this upper bound: one is obvious and one is clever.)
(b) Now that you have an upper bound on the value of the optimal solution, use it
to apply the dynamic programming algorithm given in Section 13.5.2 to find
an optimal solution. Give a table resembling that of Figure 8.4. Determine
not only the optimal value that can be carried, but also the list of objects that
should be packed.
References
BERGE, Claude (1958), Théorie des graphes et ses applications, Dunod; 2nd edition, 1967. Translated as The Theory of Graphs and Its Applications, Methuen, 1962.
BERGE, Claude (1970), Graphes et hypergraphes, Dunod. Translated as Graphs and Hypergraphs, North Holland, 1973.
BERLEKAMP, Elwyn R., John H. CONWAY and Richard K. GUY (1982), Winning Ways for Your Mathematical Plays; Volume 1: Games in General, Academic Press.
BERLINER, Hans J. (1980), "Backgammon computer program beats world champion", Artifi-
cial Intelligence, vol. 14, pp. 205-220.
BERNSTEIN, Ethan and Umesh V. VAZIRANI (1993), "Quantum complexity theory", Proceed-
ings of the 25th Annual ACM Symposium on Theory of Computing, pp. 11-20.
BERTHIAUME, Andre and Gilles BRASSARD (1995), "Oracle quantum computing", Journal of
Modern Optics, vol. 41, no. 12, pp. 2521-2535.
BISHOP, Errett (1972), "Aspects of constructivism", 10th Holiday Mathematics Symposium,
New Mexico State University, Las Cruces.
BITTON, Dina, David J. DEWITT, David K. HSIAO and Jaishankar MENON (1984), "A taxonomy
of parallel sorting", Computing Surveys, vol. 16, no. 3, pp. 287-318.
BLUM, Leonore, Manuel BLUM and Mike SHUB (1986), "A simple unpredictable pseudo-
random number generator", SIAM Journal on Computing, vol. 15, no. 2, pp. 364-383.
BLUM, Manuel (1967), "A machine independent theory of the complexity of recursive func-
tions", Journalof the ACM, vol. 14, no. 2, pp. 322-336.
BLUM, Manuel, Robert W. FLOYD, Vaughan R. PRATT, Ronald L. RIVEST and Robert E. TARJAN
(1972), "Time bounds for selection", Journal of Computer and System Sciences, vol. 7, no. 4,
pp. 448-461.
BLUM, Manuel and Silvio MICALI (1984), "How to generate cryptographically strong se-
quences of pseudo-random bits", SIAM Journal on Computing, vol. 13, no. 4, pp. 850-864.
BORODIN, Allan B. and J. Ian MUNRO (1975), The Computational Complexity of Algebraic and
Numeric Problems, American Elsevier.
BORŮVKA, Otokar (1926), "O jistém problému minimálním", Práce Moravské Přírodovědecké Společnosti, vol. 3, pp. 37-58.
BRASSARD, Gilles (1979), "A note on the complexity of cryptography", IEEE Transactions on
Information Theory, vol. IT-25, no. 2, pp. 232-233.
BRASSARD, Gilles (1985), "Crusade for a better notation", ACM Sigact News, vol. 17, no. 1,
pp. 60-64.
BRASSARD, Gilles (1988), Modern Cryptology: A Tutorial, Lecture Notes in Computer Science,
vol. 325, Springer-Verlag.
BRASSARD, Gilles (1994), "Cryptology column -Quantum computing: The end of classical
cryptography?", ACM Sigact News, vol. 25, no. 4, pp. 15-21.
BRASSARD, Gilles and Paul BRATLEY (1988), Algorithmics: Theory and Practice, Prentice-Hall.
BRASSARD, Gilles, Sophie MONET and Daniel ZUFFELLATO (1986), "L'arithmetique des tres
grands entiers", TSI: Technique et Science Informatiques, vol. 5, no. 2, pp. 89-102.
BRATLEY, Paul, Bennett L. Fox and Linus E. SCHRAGE (1983), A Guide to Simulation, Springer-
Verlag; 2nd edition, 1987.
BRENT, Richard P. (1974), "The parallel evaluation of general arithmetic expressions", Journal
of the ACM, vol. 21, no. 2, pp. 201-206.
BRESSOUD, David M. (1989), Factorizationand Primality Testing, Springer-Verlag.
BRIGHAM, E. Oran (1974), The Fast FourierTransform, Prentice-Hall.
BROWN, Mark R. (1978), "Implementation and analysis of binomial queue algorithms", SIAM
Journal on Computing, vol. 7, no. 3, pp. 298-319.
BUNCH, James R. and John E. HOPCROFT (1974), "Triangular factorization and inversion by
fast matrix multiplication", Mathematics of Computation, vol. 28, no. 125, pp. 231-236.
BUNEMAN, Peter and Leon LEVY (1980), "The Towers of Hanoi problem", Information Pro-
cessing Letters, vol. 10, nos. 4-5, pp. 243-244.
CALINGER, Ronald (ed.) (1982), Classics of Mathematics, Moore Publishing Co.
CARASSO, Claude (1971), Analyse numerique, Lidec.
CARLSSON, Svante (1986), Heaps, doctoral dissertation, Department of Computer Science,
Lund University, Sweden.
CARLSSON, Svante (1987a), "Average case results on heapsort", BIT, vol. 27, pp. 2-17.
CARLSSON, Svante (1987b), "The deap-A double-ended heap to implement double-ended
priority queues", Information ProcessingLetters, vol. 26, no. 1, pp. 33-36.
CARTER, J. Larry and Mark N. WEGMAN (1979), "Universal classes of hash functions", Journal
of Computer and System Sciences, vol. 18, no. 2, pp. 143-154.
CELIS, Pedro, Per-Ake LARSON and J. Ian MUNRO (1985), "Robin Hood hashing", Proceedings
of the 26th Annual Symposium on Foundations of Computer Science, pp. 281-288.
CHANG, Lena and James F. KORsH (1976), "Canonical coin changing and greedy solutions",
Journal of the ACM, vol. 23, no. 3, pp. 418-422.
CHERITON, David and Robert E. TARJAN (1976), "Finding minimum spanning trees", SIAM
Journal on Computing, vol. 5, no. 4, pp. 724-742.
CHIN, Francis Y., John LAM and I-Ngo CHEN (1982), "Efficient parallel algorithms for some
graph problems", Communications of the ACM, vol. 25, no. 9, pp. 659-665.
CHRISTOFIDES, Nicos (1975), Graph Theory: An Algorithmic Approach, Academic Press.
CHRISTOFIDES, Nicos (1976), "Worst-case analysis of a new heuristic for the traveling sales-
man problem", Research Report no. 388, Management Sciences, Carnegie-Mellon Univer-
sity, Pittsburgh, PA.
COBHAM, Alan (1964), "The intrinsic computational difficulty of functions", Proceedings of
the 1964 Congress on Logic, Mathematics and the Methodology of Science, North-Holland,
pp. 24-30.
COHEN, Henri and Arjen K. LENSTRA (1987), "Implementation of a new primality test",
Mathematics of Computation, vol. 48, no. 177, pp. 103-121.
COLE, Richard (1988), "Parallel merge sort", SIAM Journal on Computing, vol. 17, no. 4,
pp. 770-785.
COOK, Steven A. (1971), "The complexity of theorem-proving procedures", Proceedings of the
3rd Annual ACM Symposium on Theory of Computing, pp. 151-158.
COOK, Steven A. and Staal O. AANDERAA (1969), "On the minimum complexity of functions", Transactions of the American Mathematical Society, vol. 142, pp. 291-314.
COOLEY, James W., Peter A. W. LEWIS and Peter D. WELCH (1967), "History of the fast Fourier
transform", Proceedings of the IEEE, vol. 55, pp. 1675-1679.
COOLEY, James W. and John W. TUKEY (1965), "An algorithm for the machine calculation of
complex Fourier series", Mathematics of Computation, vol. 19, no. 90, pp. 297-301.
COPPERSMITH, Don and Shmuel WINOGRAD (1990), "Matrix multiplication via arithmetic
progressions", Journalof Symbolic Computation, vol. 9, pp. 251-280.
CORMEN, Thomas H., Charles E. LEISERSON, and Ronald L. RIVEST (1990), Introduction to
Algorithms, MIT Press and McGraw-Hill.
COUVREUR, Chantal and Jean-Jacques QUISQUATER (1982), "An introduction to fast genera-
tion of large prime numbers", Philips Journal of Research, vol. 37, pp. 231-264; errata (1983),
ibid., vol. 38, p. 77.
CURTISS, John H. (1956), "A theoretical comparison of the efficiencies of two classical methods
and a Monte Carlo method for computing one component of the solution of a set of linear
algebraic equations", in Symposium on Monte Carlo Methods, H. A. Meyer (ed.), Wiley,
pp. 191-233.
DAMGÅRD, Ivan B., Peter LANDROCK and Carl POMERANCE (1993), "Average case error estimates for the strong probable prime test", Mathematics of Computation, vol. 61, no. 203,
pp. 177-194.
DANIELSON, G. C. and C. LANCZOS (1942), "Some improvements in practical Fourier analysis
and their application to X-ray scattering from liquids", Journal of the Franklin Institute,
vol. 233, pp. 365-380, 435-452.
DE BRUIJN, Nicolaas G. (1961), Asymptotic Methods in Analysis, North-Holland.
DENNING, Dorothy E. R. (1983), Cryptography and Data Security, Addison-Wesley.
DEUTSCH, David (1985), "Quantum theory, the Church-Turing principle and the universal
quantum computer", Proceedings of the Royal Society, London, vol. A400, pp. 97-117.
DEUTSCH, David and Richard JOZSA (1992), "Rapid solution of problems by quantum com-
putation", Proceedingsof the Royal Society, London, vol. A439, pp. 553-558.
DEVROYE, Luc (1986), Non-Uniform Random Variate Generation, Springer-Verlag.
DEWDNEY, Alexander K. (1984), "Computer recreations - Yin and yang: Recursion and
iteration, the Tower of Hanoi and the Chinese rings", Scientific American, vol. 251, no. 5,
pp. 19-28.
DEYONG, Lewis (1977), Playboy's Book of Backgammon, Playboy Press.
DIFFIE, Whitfield and Martin E. HELLMAN (1976), "New directions in cryptography", IEEE
Transactions on Information Theory, vol. IT-22, no. 6, pp. 644-654.
DIJKSTRA, Edsger W. (1959), "A note on two problems in connexion with graphs", Numerische
Mathematik, vol. 1, pp. 269-271.
DIXON, John D. (1981), "Asymptotically fast factorization of integers", Mathematics of Com-
putation, vol. 36, no. 153, pp. 255-260.
DROMEY, R. G. (1982), How to Solve It by Computer, Prentice-Hall.
DUNCAN, Ralph (1990), "A survey of parallel computer architectures", Computer, vol. 23,
no. 2, pp. 5-16.
EDMONDS, Jack (1965), "Paths, trees, and flowers", Canadian Journal of Mathematics, vol. 17,
no. 3, pp. 449-467.
EDMONDS, Jack (1971), "Matroids and the greedy algorithm", Mathematical Programming,
vol. 1, pp. 127-136.
ELKIES, Noam D. (1988), "On A⁴ + B⁴ + C⁴ = D⁴", Mathematics of Computation, vol. 51, no. 184,
pp. 825-835.
ERDŐS, Paul and Carl POMERANCE (1986), "On the number of false witnesses for a composite
number", Mathematics of Computation, vol. 46, no. 173, pp. 259-279.
EVEN, Shimon (1980), Graph Algorithms, Computer Science Press.
EVES, Howard (1983), An Introduction to the History of Mathematics, 5th edition, Saunders
College Publishing.
FEYNMAN, Richard (1982), "Simulating physics with computers", International Journal of
Theoretical Physics, vol. 21, nos. 6/7, pp. 467-488.
FEYNMAN, Richard (1986), "Quantum mechanical computers", Foundations of Physics, vol. 16,
no. 6, pp. 507-531; originally appeared in Optics News, February 1985.
FISCHER, Michael J. and Albert R. MEYER (1971), "Boolean matrix multiplication and transitive
closure", Proceedings of the 12th Annual IEEE Symposium on Switching and Automata Theory,
pp. 129-131.
FLAJOLET, Philippe (1985), "Approximate counting: A detailed analysis", BIT, vol. 25,
pp. 113-134.
FLAJOLET, Philippe and G. Nigel MARTIN (1985), "Probabilistic counting algorithms for data
base applications", Journal of Computer and System Sciences, vol. 31, no. 2, pp. 182-209.
FLOYD, Robert W. (1962), "Algorithm 97: Shortest path", Communications of the ACM, vol. 5,
no. 6, p. 345.
FOX, Bennett L. (1986), "Algorithm 647: Implementation and relative efficiency of quasi-
random sequence generators", ACM Transactions on Mathematical Software, vol. 12, no. 4,
pp. 362-376.
FREDMAN, Michael L. (1976), "New bounds on the complexity of the shortest path problem",
SIAM Journal on Computing, vol. 5, no. 1, pp. 83-89.
FREDMAN, Michael L. and Robert E. TARJAN (1987), "Fibonacci heaps and their use in im-
proved network optimization algorithms", Journal of the ACM, vol. 34, no. 3, pp. 596-615.
FREDMAN, Michael L. and Dan E. WILLARD (1990), "BLASTING through the information theo-
retic barrier with FUSION TREES", Proceedings of the 22nd Annual ACM Symposium on Theory
of Computing, pp. 1-7.
FREIVALDS, Rūsiņš (1977), "Probabilistic machines can use less running time", Proceedings of
Information Processing '77, pp. 839-842.
FREIVALDS, Rūsiņš (1979), "Fast probabilistic algorithms", Proceedings of the 8th Symposium
on the Mathematical Foundations of Computer Science, Lecture Notes in Computer Science,
vol. 74, Springer-Verlag.
FURMAN, M. E. (1970), "Application of a method of fast multiplication of matrices in the
problem of finding the transitive closure of a graph" (in Russian), Doklady Akademii Nauk
SSSR, vol. 194, p. 524.
GALIL, Zvi and Giuseppe F. ITALIANO (1991), "Data structures and algorithms for disjoint set
union problems", Computing Surveys, vol. 23, no. 3, pp. 319-344.
GARDNER, Martin (1959), The Scientific American Book of Mathematical Puzzles and Diversions,
Simon and Schuster.
GARDNER, Martin (1977), "Mathematical games: A new kind of cipher that would take mil-
lions of years to break", Scientific American, vol. 237, no. 2, pp. 120-124.
GARDNER, Martin and Charles H. BENNETT (1979), "Mathematical games: The random num-
ber omega bids fair to hold the mysteries of the universe", Scientific American, vol. 241,
no. 5, pp. 20-34.
GAREY, Michael R., Ronald L. GRAHAM and David S. JOHNSON (1977), "The complexity of
computing Steiner minimal trees", SIAM Journal on Applied Mathematics, vol. 32,
pp. 835-859.
GAREY, Michael R. and David S. JOHNSON (1976), "Approximation algorithms for combina-
torial problems: An annotated bibliography", in Algorithms and Complexity: Recent Results
and New Directions, J. F. Traub (ed.), Academic Press, pp. 41-52.
GAREY, Michael R. and David S. JOHNSON (1979), Computers and Intractability: A Guide to the
Theory of NP-Completeness, W. H. Freeman.
GIBBONS, Alan and Wojciech RYTTER (1988), Efficient Parallel Algorithms, Cambridge Univer-
sity Press.
GILBERT, Edgar N. and Edward F. MOORE (1959), "Variable length encodings", Bell System
Technical Journal, vol. 38, no. 4, pp. 933-968.
GILL, John (1977), "Computational complexity of probabilistic Turing machines", SIAM Jour-
nal on Computing, vol. 6, pp. 675-695.
GODBOLE, Sadashiva S. (1973), "On efficient computation of matrix chain products", IEEE
Transactions on Computers, vol. C-22, no. 9, pp. 864-866.
GOEMANS, Michel X. and David P. WILLIAMSON (1994), ".878-Approximation algorithms for
MAX CUT and MAX 2SAT", Proceedings of the 26th Annual ACM Symposium on Theory of
Computing, pp. 422-431.
GOLDREICH, Oded, Silvio MICALI and Avi WIGDERSON (1991), "Proofs that yield nothing but
their validity, or all languages in NP have zero-knowledge proof systems", Journal of the
ACM, vol. 38, pp. 691-729.
GOLDWASSER, Shafi and Joe KILIAN (1986), "Almost all primes can be quickly certified",
Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pp. 316-329.
GOLDWASSER, Shafi, Silvio MICALI and Charles RACKOFF (1989), "The knowledge complexity
of interactive proof-systems", SIAM Journal on Computing, vol. 18, pp. 186-208.
GOLOMB, Solomon W. and Leonard D. BAUMERT (1965), "Backtrack programming", Journal
of the ACM, vol. 12, no. 4, pp. 516-524.
GONDRAN, Michel and Michel MINOUX (1979), Graphes et algorithmes, Eyrolles. Translated as
Graphs and Algorithms, Wiley, 1984.
GONNET, Gaston H. and Ricardo BAEZA-YATES (1984), Handbook of Algorithms and Data
Structures, Addison-Wesley; 2nd edition, 1991.
GONNET, Gaston H. and J. Ian MUNRO (1986), "Heaps on heaps", SIAM Journal on Computing,
vol. 15, no. 4, pp. 964-971.
GOOD, Irving J. (1968), "A five-year plan for automatic chess", in Machine Intelligence 2,
E. Dale and D. Michie (eds), American Elsevier.
GRAHAM, Ronald L. and Pavol HELL (1985), "On the history of the minimum spanning tree
problem", Annals of the History of Computing, vol. 7, no. 1, pp. 43-57.
GREENE, Daniel H. and Donald E. KNUTH (1981), Mathematics for the Analysis of Algorithms,
Birkhäuser.
GRIES, David (1981), The Science of Programming, Springer-Verlag.
GRIES, David and Gary LEVIN (1980), "Computing Fibonacci numbers (and similarly defined
functions) in log time", Information Processing Letters, vol. 11, no. 2, pp. 68-69.
GUIBAS, Leonidas J. and Robert SEDGEWICK (1978), "A dichromatic framework for balanced
trees", Proceedings of the 19th Annual Symposium on Foundations of Computer Science,
pp. 8-21.
GUY, Richard K. (1981), Unsolved Problems in Number Theory, Springer-Verlag.
HALL, A. (1873), "On an experimental determination of π", Messenger of Mathematics, vol. 2,
pp. 113-114.
HAMMERSLEY, John M. and David C. HANDSCOMB (1965), Monte Carlo Methods; reprinted by
Chapman and Hall, 1979.
HARDY, Godfrey H. and Edward M. WRIGHT (1938), An Introduction to the Theory of Numbers,
Oxford University Press; 5th edition, 1979.
HAREL, David (1987), Algorithmics: The Spirit of Computing, Addison-Wesley; 2nd edition,
1992.
LEWIS, Harry R. and Larry DENENBERG (1991), Data Structures & Their Algorithms, Harper
Collins Publishers.
LEWIS, Harry R. and Christos H. PAPADIMITRIOU (1978), "The efficiency of algorithms", Sci-
entific American, vol. 238, no. 1, pp. 96-109.
LLOYD, Seth (1993), "A potentially realizable quantum computer", Science, vol. 261, 17
September, pp. 1569-1571.
LUEKER, George S. (1980), "Some techniques for solving recurrences", Computing Surveys,
vol. 12, no. 4, pp. 419-436.
MANBER, Udi (1989), Introduction to Algorithms: A Creative Approach, Addison-Wesley.
MARSH, D. (1970), "Memo functions, the graph traverser, and a simple control situation", in
Machine Intelligence 5, B. Meltzer and D. Michie (eds), American Elsevier and Edinburgh
University Press, pp. 281-300.
MAURER, Ueli M. (1995), "Fast generation of prime numbers and secure public-key crypto-
graphic parameters", Journal of Cryptology, vol. 8, no. 3.
MCCARTY, Carl P. (1978), "Queen squares", The American Mathematical Monthly, vol. 85, no. 7,
pp. 578-580.
MCDIARMID, Colin J. H. and Bruce A. REED (1989), "Building heaps fast", Journal of Algo-
rithms, vol. 10, no. 3, pp. 352-365.
MEHLHORN, Kurt (1984a), Data Structures and Algorithms 1: Sorting and Searching, Springer-
Verlag.
MEHLHORN, Kurt (1984b), Data Structures and Algorithms 2: Graph Algorithms and NP-Com-
pleteness, Springer-Verlag.
MEHLHORN, Kurt (1984c), Data Structures and Algorithms 3: Multi-Dimensional Searching and
Computational Geometry, Springer-Verlag.
MERKLE, Ralph C. (1978), "Secure communications over insecure channels", Communications
of the ACM, vol. 21, pp. 294-299.
METROPOLIS, Nicholas and Stanislaw ULAM (1949), "The Monte Carlo method", Journal of
the American Statistical Association, vol. 44, no. 247, pp. 335-341.
MICHIE, Donald (1968), "'Memo' functions and machine learning", Nature, vol. 218,
pp. 19-22.
MILLER, Gary L. (1976), "Riemann's hypothesis and tests for primality", Journal of Computer
and System Sciences, vol. 13, no. 3, pp. 300-317.
MONIER, Louis (1980), "Evaluation and comparison of two efficient probabilistic primality
testing algorithms", Theoretical Computer Science, vol. 12, pp. 97-108.
MORET, Bernard M. E. and Henry D. SHAPIRO (1991), Algorithms from P to NP; Volume I:
Design & Efficiency, Benjamin/Cummings.
MORRIS, Robert (1978), "Counting large numbers of events in small registers", Communica-
tions of the ACM, vol. 21, no. 10, pp. 840-842.
NAIK, Ashish V., Mitsunori OGIWARA and Alan L. SELMAN (1993), "P-selective sets, and reduc-
ing search to decision vs. self-reducibility", Proceedings of the 8th Annual IEEE Conference
on Structure in Complexity Theory, pp. 52-64.
NELSON, C. Greg and Derek C. OPPEN (1980), "Fast decision procedures based on congruence
closure", Journal of the ACM, vol. 27, pp. 356-364.
NEMHAUSER, George (1966), Introduction to Dynamic Programming, Wiley.
NIEVERGELT, Jurg and Klaus HINRICHS (1993), Algorithms and Data Structures with Applica-
tions to Graphics and Geometry, Prentice-Hall.
REINGOLD, Edward M., Jurg NIEVERGELT and Narsingh DEO (1977), Combinatorial Algorithms:
Theory and Practice, Prentice-Hall.
RICE, John A. (1988), Mathematical Statistics and Data Analysis, Duxbury Press; 2nd edition,
1995.
RIVEST, Ronald L. and Robert W. FLOYD (1973), "Bounds on the expected time for median
computations", in Combinatorial Algorithms, R. Rustin (ed.), Algorithmics Press, pp. 69-76.
RIVEST, Ronald L., Adi SHAMIR and Leonard M. ADLEMAN (1978), "A method for obtaining
digital signatures and public-key cryptosystems", Communications of the ACM, vol. 21,
no. 2, pp. 120-126.
ROBSON, John M. (1973), "An improved algorithm for traversing binary trees without aux-
iliary stack", Information Processing Letters, vol. 2, no. 1, pp. 12-14.
ROSEN, Kenneth H. (1991), Discrete Mathematics and Its Applications, 2nd edition, McGraw-
Hill.
ROSENTHAL, Arnie and Anita GOLDNER (1977), "Smallest augmentation to biconnect a graph",
SIAM Journal on Computing, vol. 6, no. 1, pp. 55-66.
RUNGE, Carl D. T. and Hermann KÖNIG (1924), "Vorlesungen über numerisches Rechnen",
Die Grundlehren der Mathematischen Wissenschaften, vol. 11, Springer-Verlag, Berlin,
pp. 211-237.
SAHNI, Sartaj (1975), "Approximate algorithms for the 0/1 knapsack problem", Journal of the
ACM, vol. 22, no. 1, pp. 115-124.
SAHNI, Sartaj and Ellis HOROWITZ (1978), "Combinatorial problems: Reducibility and ap-
proximation", Operations Research, vol. 26, no. 4, pp. 718-759.
SAVITCH, Walter J. (1970), "Relationship between nondeterministic and deterministic tape
classes", Journal of Computer and System Sciences, vol. 4, pp. 177-192.
SCHAFFER, Russel and Robert SEDGEWICK (1993), "The analysis of heapsort", Journal of
Algorithms, vol. 15, no. 1, pp. 76-100.
SCHARLAU, Winfried and Hans OPOLKA (1985), From Fermat to Minkowski: Lectures on the
Theory of Numbers and Its Historical Development, Springer-Verlag.
SCHNEIER, Bruce (1994), Applied Cryptography: Protocols, Algorithms, and Source Code in C,
Wiley.
SCHÖNHAGE, Arnold and Volker STRASSEN (1971), "Schnelle Multiplikation grosser Zahlen",
Computing, vol. 7, pp. 281-292.
SCHWARTZ, Eugene S. (1964), "An optimal encoding with minimum longest code and total
number of digits", Information and Control, vol. 7, no. 1, pp. 37-44.
SCHWARTZ, J. (1978), "Probabilistic algorithms for verification of polynomial identities",
Technical Report no. 604, Computer Science Department, Courant Institute, New York
University.
SEDGEWICK, Robert (1983), Algorithms, Addison-Wesley; 2nd edition, 1988.
SHALLIT, Jeffrey (1992), "Randomized algorithms in 'primitive' cultures, or what is the oracle
complexity of a dead chicken", ACM Sigact News, vol. 23, no. 4, pp. 77-80; see also ibid.
(1993), vol. 24, no. 1, pp. 1-2.
SHAMIR, Adi (1979), "Factoring numbers in O(log n) arithmetic steps", Information Processing
Letters, vol. 8, no. 1, pp. 28-31.
SHOR, Peter W. (1994), "Algorithms for quantum computation: Discrete logarithms and fac-
toring", Proceedings of the 35th Annual Symposium on Foundations of Computer Science,
pp. 124-134.
SIMMONS, Gustavus J. (ed.) (1992), Contemporary Cryptology: The Science of Information In-
tegrity, IEEE Press.
SIMON, Daniel R. (1994), "On the power of quantum computation", Proceedings of the 35th
Annual Symposium on Foundations of Computer Science, pp. 116-123.
SLEATOR, Daniel D. and Robert E. TARJAN (1985), "Self-adjusting binary search trees", Journal
of the ACM, vol. 32, pp. 652-686.
SLOANE, Neil J. A. (1973), A Handbook of Integer Sequences, Academic Press.
SOBOL', Il'ia M. (1974), The Monte Carlo Method, 2nd edition, University of Chicago Press.
SOLOMON, Herbert (1978), Geometric Probability, SIAM.
SOLOVAY, Robert and Volker STRASSEN (1977), "A fast Monte-Carlo test for primality", SIAM
Journal on Computing, vol. 6, no. 1, pp. 84-85; erratum (1978), ibid., vol. 7, no. 1, p. 118.
STANDISH, Thomas A. (1980), Data Structure Techniques, Addison-Wesley.
STINSON, Douglas R. (1985), An Introduction to the Design and Analysis of Algorithms, The
Charles Babbage Research Centre, St. Pierre, Manitoba; 2nd edition, 1987.
STINSON, Douglas R. (1995), Cryptography: Theory and Practice, CRC Press.
STOCKMEYER, Larry J. (1973), "Planar 3-colorability is polynomial complete", ACM Sigact
News, vol. 5, no. 3, pp. 19-25.
STOCKMEYER, Larry J. and Ashok K. CHANDRA (1979), "Intrinsically difficult problems",
Scientific American, vol. 240, no. 5, pp. 140-159.
STONE, Harold S. (1972), Introduction to Computer Organization and Data Structures, McGraw-
Hill.
STRASSEN, Volker (1969), "Gaussian elimination is not optimal", Numerische Mathematik,
vol. 13, pp. 354-356.
TARJAN, Robert E. (1972), "Depth-first search and linear graph algorithms", SIAM Journal
on Computing, vol. 1, no. 2, pp. 146-160.
TARJAN, Robert E. (1975), "On the efficiency of a good but not linear set merging algorithm",
Journal of the ACM, vol. 22, no. 2, pp. 215-225.
TARJAN, Robert E. (1981), "A unified approach to path problems", Journal of the ACM, vol. 28,
no. 3, pp. 577-593.
TARJAN, Robert E. (1983), Data Structures and Network Algorithms, SIAM.
TARJAN, Robert E. (1985), "Amortized computational complexity", SIAM Journal on Algebraic
and Discrete Methods, vol. 6, no. 2, pp. 306-318.
TUCKER, Lewis W. and George G. ROBERTSON (1988), "Architecture and applications of the
connection machine", Computer, vol. 21, no. 8, pp. 26-38.
TURING, Alan M. (1936), "On computable numbers with an application to the Entschei-
dungsproblem", Proceedings of the London Mathematical Society, vol. 2, no. 42, pp. 230-265.
TURK, J. W. M. (1982), "Fast arithmetic operations on numbers and polynomials", in Lenstra
and Tijdeman (1982), pp. 43-54.
URBANEK, Friedrich J. (1980), "An O(log n) algorithm for computing the nth element of the
solution of a difference equation", Information Processing Letters, vol. 11, no. 2, pp. 66-67.
VAN LEEUWEN, Jan (ed.) (1990), Handbook of Theoretical Computer Science; Volume A: Algorithms
and Complexity, Elsevier and MIT Press.
VAZIRANI, Umesh V. (1986), Randomness, Adversaries, and Computation, doctoral dissertation,
Computer Science, University of California, Berkeley, CA.
VAZIRANI, Umesh V. (1987), "Efficiency considerations in using semi-random sources", Pro-
ceedings of the 19th Annual ACM Symposium on Theory of Computing, pp. 160-168.
VERMA, Rakesh M. (1994), "A general method and a master theorem for divide-and-conquer
recurrences with applications", Journal of Algorithms, vol. 16, pp. 67-79.
VICKERY, C. W. (1956), "Experimental determination of eigenvalues and dynamic influence
coefficients for complex structures such as airplanes", in Symposium on Monte Carlo
Methods, H. A. Meyer (ed.), Wiley, pp. 145-146.
VON NEUMANN, John (1951), "Various techniques used in connection with random digits",
Journal of Research of the National Bureau of Standards, Applied Mathematics Series, vol. 3,
pp. 36-38.
VUILLEMIN, Jean (1978), "A data structure for manipulating priority queues", Communica-
tions of the ACM, vol. 21, no. 4, pp. 309-315.
WAGNER, Robert A. and Michael J. FISCHER (1974), "The string-to-string correction problem",
Journal of the ACM, vol. 21, no. 1, pp. 168-173.
WARSHALL, Stephen (1962), "A theorem on Boolean matrices", Journal of the ACM, vol. 9,
no. 1, pp. 11-12.
WARUSFEL, André (1961), Les nombres et leurs mystères, Éditions du Seuil.
WEGMAN, Mark N. and J. Larry CARTER (1981), "New hash functions and their use in au-
thentication and set equality", Journal of Computer and System Sciences, vol. 22, no. 3,
pp. 265-279.
WILLIAMS, Hugh (1978), "Primality testing on a computer", Ars Combinatoria, vol. 5,
pp. 127-185.
WILLIAMS, John W. J. (1964), "Algorithm 232: Heapsort", Communications of the ACM, vol. 7,
no. 6, pp. 347-348.
WINOGRAD, Shmuel (1980), Arithmetic Complexity of Computations, SIAM.
WINTER, Pavel (1987), "Steiner problem in networks: A survey", Networks, vol. 17, no. 2,
pp. 129-167.
WOOD, Derick (1993), Data Structures, Algorithms, and Performance, Addison-Wesley.
WRIGHT, J. W. (1975), "The change-making problem", Journal of the ACM, vol. 22, no. 1,
pp. 125-128.
YAO, Andrew C.-C. (1975), "An O(|E| log log |V|) algorithm for finding minimum spanning
trees", Information Processing Letters, vol. 4, no. 1, pp. 21-23.
YAO, Andrew C.-C. (1982), "Theory and applications of trapdoor functions", Proceedings of
the 23rd Annual Symposium on Foundations of Computer Science, pp. 80-91.
YAO, Frances F. (1980), "Efficient dynamic programming using quadrangle inequalities",
Proceedings of the 12th Annual ACM Symposium on Theory of Computing, pp. 429-435.
YOUNGER, Daniel H. (1967), "Recognition of context-free languages in time n³", Information
and Control, vol. 10, no. 2, pp. 189-208.
ZIPPEL, Richard E. (1979), Probabilistic Algorithms for Sparse Polynomials, doctoral dissertation,
Massachusetts Institute of Technology, Cambridge, MA.