Intro To Graph Theory

Contents

Acknowledgments

1 Introduction to Graph Theory
1.1 Graphs and digraphs
1.2 Subgraphs and other graph types
1.3 Representing graphs using matrices
1.4 Isomorphic graphs
1.5 New graphs from old
1.6 Common applications
1.7 Application: finite automata
1.8 Problems

2 Graph Algorithms
2.1 Representing graphs in a computer
2.2 Graph searching
2.3 Weights and distances
2.4 Dijkstra's algorithm
2.5 Bellman-Ford algorithm
2.6 Floyd-Roy-Warshall algorithm
2.7 Johnson's algorithm
2.8 Problems

3 Trees and Forests
3.1 Definitions and examples
3.2 Properties of trees
3.3 Minimum spanning trees
3.4 Binary trees
3.5 Huffman codes
3.6 Tree traversals
3.7 Problems

4 Tree Data Structures
4.1 Priority queues
4.2 Binary heaps
4.3 Binomial heaps
4.4 Binary search trees
4.5 AVL trees
4.6 Problems

5 Distance and Connectivity
5.1 Paths and distance
5.2 Vertex and edge connectivity
5.3 Menger's theorem
5.4 Whitney's Theorem
5.5 Centrality of a vertex
5.6 Network reliability
5.7 Problems

6 Optimal Graph Traversals

7 Planar Graphs
7.1 Planarity and Euler's Formula
7.2 Kuratowski's Theorem
7.3 Planarity algorithms

8 Graph Coloring
8.1 Vertex coloring
8.2 Edge coloring
8.3 Applications of graph coloring

9 Network Flows
9.1 Flows and cuts
9.2 Ford-Fulkerson theorem
9.3 Edmonds and Karp's algorithm
9.4 Goldberg and Tarjan's algorithm

10 Random Graphs
10.1 Network statistics
10.2 Binomial random graph model
10.3 Erdős-Rényi model
10.4 Small-world networks
10.5 Scale-free networks
10.6 Problems

11 Graph Problems and Their LP Formulations

A Asymptotic Growth

B GNU Free Documentation License
1. APPLICABILITY AND DEFINITIONS
2. VERBATIM COPYING
3. COPYING IN QUANTITY
4. MODIFICATIONS
5. COMBINING DOCUMENTS
6. COLLECTIONS OF DOCUMENTS
7. AGGREGATION WITH INDEPENDENT WORKS
8. TRANSLATION
9. TERMINATION
10. FUTURE REVISIONS OF THIS LICENSE
11. RELICENSING
ADDENDUM: How to use this License for your documents

Bibliography

Index
Acknowledgments
Fidel Barrera-Cruz: reported typos in Chapter 3. See changeset 101. Suggested
making a note about disregarding the direction of edges in undirected graphs. See
changeset 277.
Daniel Black: reported a typo in Chapter 1. See changeset 61.
Kevin Brintnall: reported typos in the definitions of iadj(v) and oadj(v); see changesets 240 and 242. Reported a typo in the solution to Example 1.12(2); see changeset 246.
Aaron Dutle: reported a typo in Figure 1.14. See changeset 125.
Peter L. Erdős (http://www.renyi.hu/~elp): informed us of the reference [71] on the Havel-Hakimi theorem for directed graphs.
Noel Markham: reported a typo in Algorithm 2.5. See changeset 131 and Issue 2.
Caroline Melles: clarified the definitions of various graph types (weighted graphs, multigraphs, and weighted multigraphs); clarified the definitions of degree, isolated vertices, and pendants, using the butterfly graph with 5 vertices (see Figure 1.9) to illustrate these definitions; clarified the definitions of trails, closed paths, and cycles; see changeset 448. Rearranged some material in Chapter 1 to make the reading flow better and added a few missing definitions; see changeset 584. Clarifications about the unweighted and weighted degree of a vertex in a multigraph; a notational convention about a graph being simple unless otherwise stated; an example on graph minors; see changeset 617.
Pravin Paratey: simplified the sentence formation in the definition of digraphs; see changeset 714 and Issue 7.
The world map in Figure 2.15 was adapted from an SVG image file from Wikipedia. The original SVG file was accessed on 2010-10-01 at http://en.wikipedia.org/wiki/File:Worldmap_location_NED_50m.svg.
Chapter 1
Introduction to Graph Theory
Our journey into graph theory starts with a puzzle that was solved over 250 years ago by Leonhard Euler (1707-1783). The Pregel River flowed through the town of Königsberg, which is present-day Kaliningrad in Russia. Two islands protruded from the river. On either side of the mainland, two bridges joined one side of the mainland with one island and a third bridge joined the same side of the mainland with the other island. A bridge connected the two islands. In total, seven bridges connected the two islands with both sides of the mainland. A popular exercise among the citizens of Königsberg was determining if it was possible to cross each bridge exactly once during a single walk. For historical perspectives on this puzzle and Euler's solution, see Gribkovskaia et al. [87] and Hopkins and Wilson [100].
To visualize this puzzle in a slightly different way, consider Figure 1.1. Imagine that points a and c are either side of the mainland, with points b and d being the two islands. Place the tip of your pencil on any of the points a, b, c, d. Can you trace all the lines in the figure exactly once, without lifting your pencil? This puzzle is known as the seven bridges of Königsberg. Euler solved it in 1735, and with his solution he laid the foundation of what is now known as graph theory.
1.1 Graphs and digraphs

[Figure: (a) the plot of a function f(x); (b) a scatterplot.]
Everyone knows what a curve is, until he has studied enough mathematics to become confused through the countless number of possible exceptions.
Felix Klein
We start by calling a graph what some call an unweighted, undirected graph without
multiple edges.
Definition 1.1. A graph G = (V, E) is an ordered pair of finite sets. Elements of V are called vertices or nodes, and elements of E ⊆ V^(2) are called edges or arcs. We refer to V as the vertex set of G, with E being the edge set. The cardinality of V is called the order of G, and |E| is called the size of G. We usually disregard any direction of the edges and consider (u, v) and (v, u) as one and the same edge in G. In that case, G is referred to as an undirected graph.
One can label a graph by attaching labels to its vertices. If (v1, v2) ∈ E is an edge of a graph G = (V, E), we say that v1 and v2 are adjacent vertices. For ease of notation, we write the edge (v1, v2) as v1v2. The edge v1v2 is also said to be incident with the vertices v1 and v2.
E = {ab, ba, ae, ea, bc, cb, be, eb, cd, dc, de, ed}.    (1.1)
sage: G = Graph({"a": ["b","e"], "b": ["a","c","e"], "c": ["b","d"],
...              "d": ["c","e"], "e": ["a","b","d"]})
sage: G
Graph on 5 vertices
sage: G.vertices()
['a', 'b', 'c', 'd', 'e']
sage: G.edges(labels=False)
[('a', 'b'), ('a', 'e'), ('b', 'e'), ('c', 'b'), ('c', 'd'), ('e', 'd')]
The graph G is undirected, meaning that we do not impose direction on any edges.
Without any direction on the edges, the edge ab is the same as the edge ba. That is why
G.edges() returns six edges instead of the 12 edges listed in (1.1).
(2) Let adj(v) be the set of all vertices that are adjacent to v. Then we have
adj(a) = {b, e}
adj(b) = {a, c, e}
adj(c) = {b, d}
adj(d) = {c, e}
adj(e) = {a, b, d}.
The vertices adjacent to v are also referred to as its neighbors. We can use the function
G.neighbors() to list all the neighbors of each vertex.
sage: G.neighbors("a")
['b', 'e']
sage: G.neighbors("b")
['a', 'c', 'e']
sage: G.neighbors("c")
['b', 'd']
sage: G.neighbors("d")
['c', 'e']
sage: G.neighbors("e")
['a', 'b', 'd']
(3) Taking the cardinalities of the above five sets, we get |adj(a)| = |adj(c)| =
|adj(d)| = 2 and |adj(b)| = |adj(e)| = 3. Thus a, c and d have the smallest number
of adjacent vertices, while b and e have the largest number of adjacent vertices.
(4) If all the edges in G are removed, the result is still a graph, although one without any edges. By definition, the edge set of any graph is a subset of V^(2). Removing all edges of G leaves us with the empty set ∅, which is a subset of every set.
(5) Say we remove all of the vertices from the graph in Figure 1.3 and in the process all edges are removed as well. The result is that both the vertex set and the edge set are empty. This is a special graph known as the empty or null graph.
Example 1.3. Consider the illustration in Figure 1.4. Does Figure 1.4 represent a
graph? Why or why not?
Solution. If V = {a, b, c} and E = {aa, bc}, it is clear that E ⊆ V^(2). Then (V, E) is a graph. The edge aa is called a self-loop of the graph. In general, any edge of the form vv is a self-loop.
In Figure 1.3, the edges ae and ea represent one and the same edge. If we do not consider the direction of the edges in the graph of Figure 1.3, then the graph has six edges. However, if the direction of each edge is taken into account, then there are 12 edges as listed in (1.1). The following definition captures the situation where the direction of the edges is taken into account.
A directed edge is an edge such that one vertex incident with it is designated as the head vertex and the other incident vertex is designated as the tail vertex. In this case, the edge is said to be directed from its tail to its head, and a graph each of whose edges is directed is called a directed graph or digraph.
1.1.1 Multigraphs
This subsection presents a larger class of graphs. For simplicity of presentation, in this book we shall usually assume that a graph is not a multigraph. In other words, when you read a property of graphs later in the book, it will be assumed (unless stated explicitly otherwise) that the graph is not a multigraph. However, as multigraphs and weighted graphs are very important in many applications, we will try to keep them in the back of our mind. When appropriate, we will add as a remark how an interesting property of ordinary graphs extends to the multigraph or weighted graph case.

An important class of graphs consists of those graphs having multiple edges between pairs of vertices. A multigraph is a graph in which multiple edges are allowed between a pair of vertices. A multi-undirected graph is a multigraph that is undirected. Similarly, a multidigraph is a directed multigraph.
As we indicated above, a graph may have weighted edges.
Definition 1.4. A weighted graph is a graph G = (V, E) in which each edge is assigned a real number, called the weight of that edge.
The illustration in Figure 1.1 is actually a multigraph, a graph with multiple edges, called the Königsberg graph.
Definition 1.5. A multigraph G = (V, E) comes with an incidence function

i : E → V^(2).    (1.2)

An orientation on G is a head function

h : E → V    (1.3)

where h(e) ∈ i(e) for all e ∈ E. The element v = h(e) is called the head of i(e). If G has no self-loops, then i(e) is a set having exactly two elements, denoted i(e) = {h(e), t(e)}. The element v = t(e) is called the tail of i(e). For self-loops, we set t(e) = h(e). A multigraph with an orientation can therefore be described as the 4-tuple (V, E, i, h). In other words, G = (V, E, i, h) is a multidigraph. Figure 1.5 illustrates a weighted multigraph.
[Figure 1.5: a weighted multigraph on the vertices v1, ..., v5.]
However, we always assume that E ⊆ R × V^(2), where the R-component is called the weight of the edge.
The unweighted degree deg(v) of a vertex v of a weighted multigraph is the sum of the unweighted indegree and the unweighted outdegree of v:

deg(v) = deg⁺(v) + deg⁻(v).

Loops are counted twice.
Similarly, there is the set of in-neighbors

iadj(v) = {w ∈ V | i(e) = {v, w} and h(e) = v for some e ∈ E}

and the set of out-neighbors

oadj(v) = {w ∈ V | i(e) = {v, w} and h(e) = w for some e ∈ E}.

Define the adjacency of v to be the union of these:

adj(v) = iadj(v) ∪ oadj(v).

It is clear that deg⁺(v) = |iadj(v)| and deg⁻(v) = |oadj(v)|.
The weighted indegree of a vertex v ∈ V counts the weights of edges going into v:

wdeg⁺(v) = Σ_{e ∈ E : h(e) = v} w_e.

The weighted outdegree of a vertex v ∈ V counts the weights of edges going out of v:

wdeg⁻(v) = Σ_{e ∈ E : v ∈ i(e) = {v, v′}, h(e) = v′} w_e.

Here w_e denotes the weight of the edge e.
The weighted degree of a vertex of a weighted multigraph is the sum of the weighted
indegree and the weighted outdegree of that vertex. In other words, it is the sum of
the weights of the edges incident to that vertex, regarding the graph as an undirected
weighted graph.
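To make these definitions concrete, here is a small Python sketch that computes the weighted indegree and outdegree from a list of (tail, head, weight) triples; the edge list and vertex names are made up for illustration.

# Directed, weighted edges as (tail, head, weight) triples;
# this small example graph is hypothetical.
edges = [("v1", "v2", 3), ("v2", "v3", 1), ("v3", "v1", 2), ("v1", "v1", 1)]

def wdeg_in(v):
    # Sum the weights of edges whose head is v.
    return sum(w for (_, h, w) in edges if h == v)

def wdeg_out(v):
    # Sum the weights of edges whose tail is v.
    return sum(w for (t, _, w) in edges if t == v)

print(wdeg_in("v1"), wdeg_out("v1"))  # 3 4; the self-loop counts in both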
1.1.2 Simple graphs
Our life is frittered away by detail. . . . Simplify, simplify. Instead of three meals a day, if
it be necessary eat but one; instead of a hundred dishes, five; and reduce other things in
proportion.
Henry David Thoreau, Walden, 1854, Chapter 2: Where I Lived, and What I Lived For
A simple graph is a graph with no self-loops and no multiple edges. Figure 1.6 illustrates a simple graph and its digraph version, together with a multidigraph version of the Königsberg graph. The edges of a digraph can be visually represented as directed arrows, similar to the digraph in Figure 1.6(b) and the multidigraph in Figure 1.6(c). The digraph in Figure 1.6(b) has the vertex set {a, b, c} and the edge set {ab, bc, ca}. There is an arrow from vertex a to vertex b, hence ab is in the edge set. However, there is no arrow from b to a, so ba is not in the edge set of the graph in Figure 1.6(b). The family Sh(n) of Shannon multigraphs is illustrated in Figure 1.7 for integers 2 ≤ n ≤ 7. These graphs are named after Claude E. Shannon (1916-2001). Each Shannon multigraph consists of three vertices, giving rise to a total of three distinct unordered pairs. Two of these pairs are connected by ⌊n/2⌋ edges and the third pair of vertices is connected by ⌊(n + 1)/2⌋ edges.
[Figure 1.6: (a) a simple graph; (b) its digraph version; (c) a multidigraph version of the Königsberg graph.]

[Figure 1.7: the Shannon multigraphs Sh(n) for n = 2, ..., 7.]
Notational convention. Unless stated otherwise, all graphs are simple graphs in the remainder of this book.
Definition 1.6. For any vertex v in a graph G = (V, E), the cardinality of adj(v) is
called the degree of v and written as deg(v) = | adj(v)|. The degree of v counts the
number of vertices in G that are adjacent to v. If deg(v) = 0, then v is not incident to
any edge and we say that v is an isolated vertex. If G has no loops and deg(v) = 1, then
v is called a pendant.
Some examples would put the above definition in concrete terms. Consider again the graph in Figure 1.4. Note that no vertices are isolated. Even though vertex a is not adjacent to any vertex other than a itself, note that deg(a) = 2 and so by definition a is not isolated. Furthermore, each of b and c is a pendant. For the house graph in Figure 1.3, we have deg(b) = 3. For the graph in Figure 1.6(b), we have deg(b) = 2.
If V ≠ ∅ and E = ∅, then G is a graph consisting entirely of isolated vertices. From Example 1.2 we know that the vertices a, c, d in Figure 1.3 have the smallest degree in the graph of that figure, while b, e have the largest degree.

The minimum degree among all vertices in G is denoted δ(G), whereas the maximum degree is written as Δ(G). Thus, if G denotes the graph in Figure 1.3, then we have δ(G) = 2 and Δ(G) = 3. In the following Sage session, we construct the digraph in Figure 1.6(b) and compute its maximum and minimum vertex degrees.
sage: G = DiGraph({"a": "b", "b": "c", "c": "a"})
sage: G
Digraph on 3 vertices
sage: G.degree("a")
2
sage: G.degree("b")
2
sage: G.degree("c")
2
Theorem 1.7, the handshaking lemma, states that if G = (V, E) is a graph, then the sum of the vertex degrees equals twice the number of edges: Σ_{v∈V} deg(v) = 2|E|. Thinking of the vertices as people in a room and of each edge as a handshake, every handshake contributes two to the sum because each hand is counted as well (the sum in the theorem is over all vertices). To interpret Theorem 1.7 in a slightly different way within the context of the same room of people, there is an even number of people who shook hands with an odd number of other people. This consequence of Theorem 1.7 is recorded in the following corollary.
Corollary 1.8. A graph G = (V, E) contains an even number of vertices with odd
degrees.
Proof. Partition V into two disjoint subsets: Ve is the subset of V that contains only vertices with even degrees, and Vo is the subset of V with only vertices of odd degrees. That is, V = Ve ∪ Vo and Ve ∩ Vo = ∅. From Theorem 1.7, we have

Σ_{v∈V} deg(v) = Σ_{v∈Ve} deg(v) + Σ_{v∈Vo} deg(v) = 2|E|.

As Σ_{v∈V} deg(v) and Σ_{v∈Ve} deg(v) are both even, so is

Σ_{v∈Vo} deg(v) = Σ_{v∈V} deg(v) − Σ_{v∈Ve} deg(v).

Since every term of the latter sum is odd, the number of terms |Vo| must be even.
As E ⊆ V^(2), E can be the empty set, in which case the total degree of G = (V, E) is zero. Where E ≠ ∅, the total degree of G is greater than zero. By Theorem 1.7, the total degree of G is nonnegative and even. This result is an immediate consequence of Theorem 1.7 and is captured in the following corollary.
If G = (V, E) is an r-regular graph with n vertices and m edges, it is clear by definition
of r-regular graphs that the total degree of G is rn. By Theorem 1.7 we have 2m = rn
and therefore m = rn/2. This result is captured in the following corollary.
Corollary 1.10. If G = (V, E) is an r-regular graph having n vertices and m edges,
then m = rn/2.
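These results are easy to check experimentally. A quick Sage sketch, using the 3-regular Petersen graph (any graph would do for Theorem 1.7), verifies both the handshaking lemma and Corollary 1.10:

sage: G = graphs.PetersenGraph()   # a 3-regular graph on 10 vertices
sage: sum(G.degree(v) for v in G.vertices()) == 2 * G.size()
True
sage: G.size() == 3 * G.order() / 2
True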
1.2 Subgraphs and other graph types

We now consider several common types of graphs. Along the way, we also present basic properties of graphs that could be used to distinguish different types of graphs.
Let G be a multigraph as in Definition 1.5, with vertex set V(G) and edge set E(G). Consider a graph H such that V(H) ⊆ V(G) and E(H) ⊆ E(G). Furthermore, if e ∈ E(H) and i(e) = {u, v}, then u, v ∈ V(H). Under these conditions, H is called a subgraph of G.
1.2.1 Walks, trails, and paths

I like long walks, especially when they are taken by people who annoy me.
Noel Coward
If u and v are two vertices in a graph G, a u-v walk is an alternating sequence of vertices and edges starting with u and ending at v. Consecutive vertices and edges are incident. Formally, a walk W of length n ≥ 0 can be defined as

W : v0, e1, v1, e2, v2, ..., v_{n−1}, e_n, v_n

where each edge e_i = v_{i−1}v_i and the length n refers to the number of (not necessarily distinct) edges in the walk. The vertex v0 is the starting vertex of the walk and v_n is the end vertex, so we refer to W as a v0-v_n walk. The trivial walk is the walk of length n = 0 in which the start and end vertices are one and the same vertex. If the graph has no multiple edges then, for brevity, we omit the edges in a walk and usually write the walk as the following sequence of vertices:

W : v0, v1, v2, ..., v_{n−1}, v_n.
For the graph in Figure 1.8, an example of a walk is an a-e walk: a, b, c, b, e. In other
words, we start at vertex a and travel to vertex b. From b, we go to c and then back to
b again. Then we end our journey at e. Notice that consecutive vertices in a walk are
adjacent to each other. One can think of vertices as destinations and edges as footpaths,
say. We are allowed to have repeated vertices and edges in a walk. The number of edges
in a walk is called its length. For instance, the walk a, b, c, b, e has length 4.
[Figure 1.8: a graph on the vertices a, b, c, d, e, f, g.]
Remove from E1 all v1v_k such that v_{j1} ≠ v_k. This results in a reduced edge set E2 of n² − n − (n − 2) elements and we now have the path P1 : v1, v_{j1} of length 1. Repeat the same process for v_{j1}v_{j2} ∈ E2 to obtain a reduced edge set E3 of n² − n − 2(n − 2) elements and a path P2 : v1, v_{j1}, v_{j2} of length 2. In general, let Pr : v1, v_{j1}, v_{j2}, ..., v_{jr} be a path of length r < n and let E_{r+1} be our reduced edge set of n² − n − r(n − 2) elements. Repeat the above process until we have constructed a path P_{n−1} : v1, v_{j1}, v_{j2}, ..., v_{j(n−1)} of length n − 1 with reduced edge set E_n of n² − n − (n − 1)(n − 2) elements. Adding another vertex to P_{n−1} means going back to a vertex that was previously visited, because P_{n−1} already contains all vertices of V.
A walk of length n ≥ 3 whose start and end vertices are the same is called a closed walk. A trail of length n ≥ 3 whose start and end vertices are the same is called a closed trail. A path of length n ≥ 3 whose start and end vertices are the same is called a closed path or a cycle (with apologies for slightly abusing terminology).³ For example, the walk a, b, c, e, a in Figure 1.8 is a closed path. A path whose length is odd is called odd, otherwise it is referred to as even. Thus the walk a, b, e, a in Figure 1.8 is a cycle. It is easy to see that if you remove any edge from a cycle, then the resulting walk contains no closed walks. An Euler subgraph of a graph G is either a cycle or an edge-disjoint union of cycles in G. An example of a closed walk which is not a cycle is given in Figure 1.9.

[Figure 1.9: the butterfly graph with 5 vertices, illustrating a closed walk that is not a cycle.]

³A cycle in a graph is sometimes also called a circuit. Since that terminology unfortunately conflicts with the closely related notion of a circuit of a matroid, we do not use it here.
sage: g = Graph({"a": ["b","e"], "b": ["a","g","e","c"],
...              "c": ["b","e","d"], "d": ["c","f"], "e": ["f","a","b","c"],
...              "f": ["g","d","e"], "g": ["b","f"]})
sage: g.is_connected()
True
sage: g.shortest_path("g", "d")
['g', 'f', 'd']
This shows that g, f, d is a shortest path from g to d. In fact, any other g-d path has
length greater than 2, so we can say that g, f, d is the shortest path between g and d.
Remark 1.16. We will explain Dijkstra's algorithm in Chapter 2; it is one of the best algorithms for finding shortest paths between two vertices in a connected graph. What is very remarkable is that, at the present state of knowledge, finding the shortest path from a vertex v to a particular (but arbitrarily given) vertex w appears to be as hard as finding the shortest path from a vertex v to all other vertices in the graph!
Trees are a special type of graph used in modelling structures that have some form of hierarchy. For example, the hierarchy within an organization can be drawn as a tree structure, similar to the family tree in Figure 1.10. Formally, a tree is an undirected graph that is connected and has no cycles. If one vertex of a tree is specially designated as the root vertex, then the tree is called a rooted tree. Chapter 3 covers trees in more detail.
[Figure 1.10: a family tree with vertices grandma; mum, uncle, aunt; me, sister, brother, cousin1, cousin2.]
1.2.2 Subgraphs, complete graphs, and bipartite graphs
Let G be a graph with vertex set V(G) and edge set E(G). Suppose we have a graph H such that V(H) ⊆ V(G) and E(H) ⊆ E(G). Furthermore, suppose the incidence function i of G, when restricted to E(H), has image in V(H)^(2). Then H is a subgraph of G. In this situation, G is referred to as a supergraph of H.

Starting from G, one can obtain its subgraph H by deleting edges and/or vertices from G. Note that when a vertex v is removed from G, then all edges incident with v are also removed. If V(H) = V(G), then H is called a spanning subgraph of G. In Figure 1.11, let G be the left-hand side graph and let H be the right-hand side graph. Then it is clear that H is a spanning subgraph of G. To obtain a spanning subgraph from a given graph, we delete edges from the given graph.
[Figure 1.11: (a) a graph G; (b) a spanning subgraph of G.]
|V^(2)| = n(n − 1)/2.    (1.4)

Thus for any simple graph G with n vertices, its total number of edges |E(G)| is bounded above by

|E(G)| ≤ n(n − 1)/2.    (1.5)
Figure 1.12 shows complete graphs each of whose total number of vertices is bounded by 1 ≤ n ≤ 5. The complete graph K1 has one vertex with no edges. It is also called the trivial graph.
[Figure 1.12: the complete graphs Kn for n = 1, ..., 5.]
Now suppose G is a simple graph with n vertices and k connected components, where component i has n_i vertices. One first shows that

Σ_{i=1}^{k} n_i² ≤ n² − (k − 1)(2n − k).    (1.6)

(This result holds true for any nonempty but finite set of positive integers.) Note that Σ_i n_i = n and by (1.5) each component i has at most n_i(n_i − 1)/2 edges. Apply (1.6) to get

Σ_i n_i(n_i − 1)/2 = (1/2) Σ_i n_i² − (1/2) Σ_i n_i
                   ≤ (1/2)(n² − 2nk + k² + 2n − k) − (1/2)n
                   = (n − k)(n − k + 1)/2

as required.
[Figure 1.13: the cycle graphs Cn for n = 3, 4, 5, 6.]
The complete bipartite graph K_{1,n} is also called the star graph. Figure 1.14 shows a bipartite graph together with the complete bipartite graphs K_{4,3} and K_{3,3}, and the star graph K_{1,4}.

[Figure 1.14: (a) a bipartite graph; (b) K_{4,3}; (c) K_{3,3}; (d) the star graph K_{1,4}.]
1.3 Representing graphs using matrices

An m × n matrix A can be represented as

A = [ a11  a12  ···  a1n ]
    [ a21  a22  ···  a2n ]
    [  ⋮    ⋮    ⋱    ⋮  ]
    [ am1  am2  ···  amn ]

The positive integers m and n are the row and column dimensions of A, respectively. The entry in row i column j is denoted a_ij. Where the dimensions of A are clear from context, A is also written as A = [a_ij].
Representing a graph as a matrix is very inefficient in some cases and not so in
other cases. Imagine you walk into a large room full of people and you consider the
handshaking graph discussed in connection with Theorem 1.7. If not many people
shake hands in the room, it is a waste of time recording all the handshakes and also all
the non-handshakes. This is basically what the adjacency matrix does. In this kind
of sparse graph situation, it would be much easier to simply record the handshakes as
a Python dictionary.4 This section requires some concepts and techniques from linear
algebra, especially matrix theory. See introductory texts on linear algebra and matrix
theory [19] for coverage of such concepts and techniques.
⁴A Python dictionary is basically an indexed set. See the reference manual at http://www.python.org for further details.
1.3.1 Adjacency matrix

Let G be an undirected graph with vertices V = {v1, ..., vn} and edge set E. The adjacency matrix of G is the n × n matrix A = [a_ij] defined by

a_ij = 1 if v_i v_j ∈ E,
       0 otherwise.

As G is an undirected graph, A is a symmetric matrix. That is, A is a square matrix such that a_ij = a_ji.
Now let G be a directed graph with vertices V = {v1, ..., vn} and edge set E. The (0, 1, −1)-adjacency matrix of G is the n × n matrix A = [a_ij] defined by

a_ij =  1 if v_i v_j ∈ E,
       −1 if v_j v_i ∈ E,
        0 otherwise.
In general, the adjacency matrix of a digraph is not symmetric, while that of an undirected graph is symmetric.
More generally, if G is an undirected multigraph with edge e_ij = v_i v_j having multiplicity w_ij, or a weighted graph with edge e_ij = v_i v_j having weight w_ij, then we can define the (weighted) adjacency matrix A = [a_ij] by

a_ij = w_ij if v_i v_j ∈ E,
       0    otherwise.
For example, Sage allows you to easily compute a weighted adjacency matrix.
sage: G = Graph(sparse=True, weighted=True)
sage: G.add_edges([(0,1,1), (1,2,2), (0,2,3), (0,3,4)])
sage: M = G.weighted_adjacency_matrix(); M
[0 1 3 4]
[1 0 2 0]
[3 2 0 0]
[4 0 0 0]
Bipartite case

Suppose G = (V, E) is an undirected bipartite graph and V = V1 ∪ V2 is the partition of the vertices into n1 vertices in V1 and n2 vertices in V2, so |V| = n1 + n2. Then the adjacency matrix A of G can be realized as the block matrix

A = [ 0   A1 ]
    [ A2  0  ]

where A1 is an n1 × n2 matrix and A2 is an n2 × n1 matrix. Since G is undirected, A2 = A1^T. The matrix A1 is called a reduced adjacency matrix or a bi-adjacency matrix (the literature also uses the terms transfer matrix or the ambiguous term adjacency matrix).
Tanner graphs

If H is an m × n (0, 1)-matrix, then the Tanner graph of H is the bipartite graph G = (V, E) whose set of vertices V = V1 ∪ V2 is partitioned into two sets: V1 corresponding to the m rows of H and V2 corresponding to the n columns of H. For any i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, there is an edge ij ∈ E if and only if the (i, j)-th entry of H is 1. This matrix H is sometimes called the reduced adjacency matrix or the check matrix of the Tanner graph. Tanner graphs are used in the theory of error-correcting codes. For example, Sage allows you to easily compute such a bipartite graph from its matrix.

sage: H = Matrix([(1,1,1,0,0), (0,0,1,0,1), (1,0,0,1,1)])
sage: B = BipartiteGraph(H)
sage: B.reduced_adjacency_matrix()
[1 1 1 0 0]
[0 0 1 0 1]
[1 0 0 1 1]
sage: B.plot(graph_border=True)
Let B = A^k with entries b_rj, and consider C = A^{k+1} = AB, whose entries are

c_ij = Σ_{r=1}^{p} a_ir b_rj

for i, j = 1, 2, ..., p. Note that a_ir is the number of edges from v_i to v_r, and b_rj is the number of v_r-v_j walks of length k. Any edge from v_i to v_r can be joined with any v_r-v_j walk to create a walk v_i, v_r, ..., v_j of length k + 1. Then for each r = 1, 2, ..., p, the value a_ir b_rj counts the number of v_i-v_j walks of length k + 1 with v_r being the second vertex in the walk. Thus c_ij counts the total number of v_i-v_j walks of length k + 1.
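This walk-counting property is easy to check in Sage: entries of the k-th power of the adjacency matrix count walks of length k. A small sketch using the cycle graph C4:

sage: G = graphs.CycleGraph(4)
sage: A = G.adjacency_matrix()
sage: A^2          # entry (i, j) counts the walks of length 2 from vi to vj
[2 0 2 0]
[0 2 0 2]
[2 0 2 0]
[0 2 0 2]
sage: (A^3)[0, 1]  # walks of length 3 from vertex 0 to vertex 1
4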
1.3.2 Incidence matrix
The relationship between edges and vertices provides a very strong constraint on the
data structure, much like the relationship between points and blocks in a combinatorial
design or points and lines in a finite plane geometry. This incidence structure gives rise
to another way to describe a graph using a matrix.
Let G be a digraph with edge set E = {e1 , . . . , em } and vertex set V = {v1 , . . . , vn }.
The incidence matrix of G is the n × m matrix B = [b_ij] defined by

b_ij = −1 if v_i is the tail of e_j,
        1 if v_i is the head of e_j,
        2 if e_j is a self-loop at v_i,
        0 otherwise.    (1.7)
Each column of B corresponds to an edge and each row corresponds to a vertex. The
definition of incidence matrix of a digraph as contained in expression (1.7) is applicable
to digraphs with self-loops as well as multidigraphs.
For the undirected case, let G be an undirected graph with edge set E = {e1, ..., em} and vertex set V = {v1, ..., vn}. The unoriented incidence matrix of G is the n × m matrix B = [b_ij] defined by

b_ij = 1 if v_i is incident to e_j,
       2 if e_j is a self-loop at v_i,
       0 otherwise.
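As a version-independent sketch, we can build the unoriented incidence matrix of a loopless graph directly from this definition in Sage (the triangle below is just an example):

sage: G = Graph({0: [1, 2], 1: [2]})   # a triangle, so no self-loops
sage: V = G.vertices(); E = G.edges(labels=False)
sage: B = matrix(ZZ, len(V), len(E), lambda i, j: 1 if V[i] in E[j] else 0)
sage: B
[1 1 0]
[1 0 1]
[0 1 1]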
1.3.3 Laplacian matrix

The Laplacian matrix of a graph G with vertices V = {v1, ..., vn} is the n × n matrix L = [ℓ_ij] defined by

ℓ_ij = −1  if i ≠ j and v_i v_j ∈ E,
        d_i if i = j,
        0   otherwise,

where d_i = deg(v_i) is the degree of vertex v_i.
Sage allows you to compute the Laplacian matrix of a graph:
sage: G = Graph({1:[2,4], 2:[1,4], 3:[2,6], 4:[1,3], 5:[4,2], 6:[3,1]})
sage: G.laplacian_matrix()
[ 3 -1  0 -1  0 -1]
[-1  4 -1 -1 -1  0]
[ 0 -1  3 -1  0 -1]
[-1 -1 -1  4 -1  0]
[ 0 -1  0 -1  2  0]
[-1  0 -1  0  0  2]
The Laplacian matrix has many remarkable properties; it is discussed further in Chapter 5.
1.3.4 Distance matrix
Recall that the distance (or geodesic distance) d(v, w) between two vertices v, w ∈ V in a connected graph G = (V, E) is the number of edges in a shortest path connecting them. The n × n matrix [d(v_i, v_j)] is the distance matrix of G. Sage helps you to compute the distance matrix of a graph:

sage: G = Graph({1:[2,4], 2:[1,4], 3:[2,6], 4:[1,3], 5:[4,2], 6:[3,1]})
sage: d = [[G.distance(i, j) for i in range(1, 7)] for j in range(1, 7)]
sage: matrix(d)
[0 1 2 1 2 1]
[1 0 1 1 1 2]
[2 1 0 1 2 1]
[1 1 1 0 1 2]
[2 1 2 1 0 3]
[1 2 1 2 3 0]
The distance matrix is an important quantity which allows one to better understand
the connectivity of a graph. Distance and connectivity will be discussed in more detail
in Chapters 5 and 10.
1.4 Isomorphic graphs

Determining whether or not two graphs are, in some sense, the same is a hard but important problem. Two graphs G and H are isomorphic if there is a bijection f : V(G) → V(H) such that whenever uv ∈ E(G) then f(u)f(v) ∈ E(H). The function f is an isomorphism between G and H. Otherwise, G and H are non-isomorphic. If G and H are isomorphic, we write G ≅ H.
[Figure 1.18: three graphs: (a) C6; (b) G1; (c) G2.]
Example 1.21. Consider the graphs in Figure 1.18. Which pair of graphs are isomorphic, and which two graphs are non-isomorphic?
Solution. If G is a Sage graph, one can use the method G.is_isomorphic() to determine
whether or not the graph G is isomorphic to another graph. The following Sage session
illustrates how to use G.is_isomorphic().
sage: C6 = Graph({"a": ["b","c"], "b": ["a","d"], "c": ["a","e"],
...               "d": ["b","f"], "e": ["c","f"], "f": ["d","e"]})
sage: G1 = Graph({1: [2,4], 2: [1,3], 3: [2,6], 4: [1,5],
...               5: [4,6], 6: [3,5]})
sage: G2 = Graph({"a": ["d","e"], "b": ["c","f"], "c": ["b","f"],
...               "d": ["a","e"], "e": ["a","d"], "f": ["b","c"]})
sage: C6.is_isomorphic(G1)
True
sage: C6.is_isomorphic(G2)
False
sage: G1.is_isomorphic(G2)
False
Thus, for the graphs C6 , G1 and G2 in Figure 1.18, C6 and G1 are isomorphic, but G1
and G2 are not isomorphic.
An important notion in graph theory is the idea of an invariant. An invariant is an object f = f(G) associated to a graph G which has the property

G ≅ H ⟹ f(G) = f(H).

For example, the number of vertices of a graph, f(G) = |V(G)|, is an invariant.
1.4.1 Adjacency matrices
for i ∈ {1, 2} do
    A_i ← adjacency matrix of G_i
    p_i ← permutation equivalence class of A_i
    A′_i ← lexicographically maximal element of p_i
if A′_1 = A′_2 then
    return True
return False
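Equivalently, one can search over vertex permutations directly. The following Python sketch (function name is ours) is a brute-force isomorphism test on adjacency matrices; it runs in factorial time, so it is only usable for very small graphs:

from itertools import permutations

def are_isomorphic(A1, A2):
    # Brute-force test: is there a relabeling of vertices taking the
    # adjacency matrix A1 (a list of lists) to A2?
    n = len(A1)
    if n != len(A2):
        return False
    for p in permutations(range(n)):
        if all(A1[i][j] == A2[p[i]][p[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

# Two labelings of the path graph on 3 vertices:
print(are_isomorphic([[0,1,0],[1,0,1],[0,1,0]],
                     [[0,1,1],[1,0,0],[1,0,0]]))  # True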
1.4.2 Degree sequence
Let G be a graph with n vertices. The degree sequence of G is the ordered n-tuple of the vertex degrees of G arranged in non-increasing order.

The degree sequence of G may contain the same degrees, repeated as often as they occur. For example, the degree sequence of C6 is 2, 2, 2, 2, 2, 2 and the degree sequence of the house graph in Figure 1.3 is 3, 3, 2, 2, 2. If n ≥ 3, then the cycle graph Cn has the degree sequence

2, 2, 2, ..., 2    (n copies of 2).

For n ≥ 2, the path graph Pn has the degree sequence

2, 2, 2, ..., 2, 1, 1    (n − 2 copies of 2).

For positive integer values of n and m, the complete graph Kn has the degree sequence

n − 1, n − 1, n − 1, ..., n − 1    (n copies of n − 1)

and the complete bipartite graph K_{m,n} has the degree sequence

n, n, n, ..., n, m, m, m, ..., m    (m copies of n and n copies of m).
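Sage computes degree sequences directly. A quick check of the examples above, using Sage's graph generators:

sage: graphs.CycleGraph(6).degree_sequence()
[2, 2, 2, 2, 2, 2]
sage: graphs.CompleteGraph(5).degree_sequence()
[4, 4, 4, 4, 4]
sage: graphs.CompleteBipartiteGraph(3, 4).degree_sequence()
[4, 4, 4, 3, 3, 3, 3]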
Theorem 1.23. Erdős & Gallai. A nonincreasing sequence (d1, d2, ..., dn) of nonnegative integers is the degree sequence of a simple graph if and only if Σ_i d_i is even and

Σ_{i=1}^{k} d_i ≤ k(k − 1) + Σ_{j=k+1}^{n} min(d_j, k)

for all 1 ≤ k ≤ n − 1.
As noted above, Theorem 1.23 is an existence result showing that something exists without providing a construction of the object under consideration. Havel [95] and Hakimi [93, 94] independently provided an algorithmic approach that allows for constructing a simple graph with a given degree sequence. See Sierksma and Hoogeveen [171] for a coverage of seven criteria for a sequence of integers to be graphic. See Erdős et al. [71] for an extension of the Havel-Hakimi theorem to digraphs.
Theorem 1.24. Havel 1955 [95] & Hakimi 1962-3 [93, 94]. Consider the nonincreasing sequence S1 = (d1, d2, ..., dn) of nonnegative integers, where n ≥ 2 and d1 ≥ 1. Then S1 is graphical if and only if the sequence

S2 = (d2 − 1, d3 − 1, ..., d_{d1+1} − 1, d_{d1+2}, ..., dn)

is graphical.
Proof. Suppose S2 is graphical. Let G2 = (V2, E2) be a graph of order n − 1 with vertex set V2 = {v2, v3, ..., vn} such that

deg(v_i) = d_i − 1 if 2 ≤ i ≤ d1 + 1,
           d_i     if d1 + 2 ≤ i ≤ n.

Construct a new graph G1 with degree sequence S1 as follows. Add another vertex v1 to V2 and add to E2 the edges v1 v_i for 2 ≤ i ≤ d1 + 1. It is clear that deg(v1) = d1 and deg(v_i) = d_i for 2 ≤ i ≤ n. Thus G1 has the degree sequence S1.
On the other hand, suppose S1 is graphical and let G1 be a graph with degree sequence S1 such that

(i) The graph G1 has the vertex set V(G1) = {v1, v2, ..., vn} and deg(v_i) = d_i for i = 1, ..., n.

(ii) The degree sum of all vertices adjacent to v1 is a maximum.

To obtain a contradiction, suppose v1 is not adjacent to vertices having degrees d2, d3, ..., d_{d1+1}. Then there exist vertices v_i and v_j with d_j > d_i such that v1 v_i ∈ E(G1) but v1 v_j ∉ E(G1). As d_j > d_i, there is a vertex v_k such that v_j v_k ∈ E(G1) but v_i v_k ∉ E(G1). Replacing the edges v1 v_i and v_j v_k with v1 v_j and v_i v_k, respectively, results in a new graph H whose degree sequence is S1. However, the graph H is such that the degree sum of vertices adjacent to v1 is greater than the corresponding degree sum in G1, contradicting property (ii) in our choice of G1. Consequently, v1 is adjacent to d1 other vertices of largest degree. Then S2 is graphical because G1 − v1 has degree sequence S2.
The proof of Theorem 1.24 can be adapted into an algorithm to determine whether
or not a sequence of nonnegative integers can be realized by a simple graph. If G is
a simple graph, the degree of any vertex in V (G) cannot exceed the order of G. By
the handshaking lemma (Theorem 1.7), the sum of all terms in the sequence cannot be
odd. Once the sequence passes these two preliminary tests, we then adapt the proof of
Theorem 1.24 to successively reduce the original sequence to a smaller sequence. These
ideas are summarized in Algorithm 1.2.
Algorithm 1.2: Havel-Hakimi test for sequences realizable by simple graphs.
Input: A nonincreasing sequence S = (d1, d2, ..., dn) of nonnegative integers, where n ≥ 2.
Output: True if S is realizable by a simple graph; False otherwise.
1   if Σ_i d_i is odd then
2       return False
3   while True do
4       if min(S) < 0 then
5           return False
6       if max(S) = 0 then
7           return True
8       if max(S) > length(S) − 1 then
9           return False
10      S ← (d2 − 1, d3 − 1, ..., d_{d1+1} − 1, d_{d1+2}, ..., d_{length(S)})
11      sort S in nonincreasing order
We now show that Algorithm 1.2 determines whether or not a sequence of integers is realizable by a simple graph. Our input is a sequence S = (d1, d2, ..., dn) arranged in non-increasing order, where each d_i ≥ 0. The first test as contained in the if block, otherwise known as a conditional, on line 1 uses the handshaking lemma (Theorem 1.7). During the first run of the while loop, the conditional on line 4 ensures that the sequence S only consists of nonnegative integers. At the conditional on line 6, we know that S is arranged in non-increasing order and has nonnegative integers. If this conditional holds true, then S is a sequence of zeros and it is realizable by a graph with only isolated vertices. Such a graph is simple by definition. The conditional on line 8 uses the following property of simple graphs: if G is a simple graph, then the degree of each vertex of G is less than the order of G. By the time we reach line 10, we know that S has n terms, max(S) > 0, and 0 ≤ d_i ≤ n − 1 for all i = 1, 2, ..., n. After applying line 10, S is a sequence of n − 1 terms with max(S) > 0 and 0 ≤ d_i ≤ n − 2 for all i = 1, 2, ..., n − 1. In general, after k rounds of the while loop, S is a sequence of n − k terms with max(S) > 0 and 0 ≤ d_i ≤ n − k − 1 for all i = 1, 2, ..., n − k. And after n − 1 rounds of the while loop, the resulting sequence has one term whose value is zero. In other words, eventually Algorithm 1.2 produces a sequence with a negative term or a sequence of zeros.
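A direct Python translation of Algorithm 1.2 (a sketch; the function name is ours):

def is_graphical(seq):
    # Havel-Hakimi test from Algorithm 1.2: return True exactly when
    # the sequence is realizable by a simple graph.
    S = sorted(seq, reverse=True)
    if sum(S) % 2 == 1:            # handshaking lemma (line 1)
        return False
    while True:
        if min(S) < 0:             # line 4
            return False
        if max(S) == 0:            # line 6: all zeros, realizable
            return True
        if max(S) > len(S) - 1:    # line 8: a degree exceeds order - 1
            return False
        d = S[0]                   # lines 10-11: reduce and re-sort
        S = sorted([x - 1 for x in S[1:d + 1]] + S[d + 1:], reverse=True)

print(is_graphical([3, 3, 2, 2, 2]))  # True: the house graph
print(is_graphical([4, 4, 4, 1, 1]))  # False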
1.4.3 Invariants revisited
In some cases, one can distinguish non-isomorphic graphs by considering graph invariants. For instance, the graphs C6 and G1 in Figure 1.18 are isomorphic, so they have the same number of vertices and edges. Also, G1 and G2 in Figure 1.18 are non-isomorphic because the former is connected, while the latter is not connected.
are non-isomorphic, one could show that they have different values for a given graph
invariant. The following list contains some items to check off when showing that two
graphs are non-isomorphic:
1. the number of vertices,
2. the number of edges,
3. the degree sequence,
4. the length of a geodesic path,
5. the length of the longest path,
6. the number of connected components of a graph.
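Continuing the Sage session from Example 1.21, item 6 alone already separates G1 and G2 (a quick check, assuming G1 and G2 as defined there):

sage: G1.is_connected()
True
sage: G2.is_connected()
False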
1.5 New graphs from old

This section provides a brief survey of operations on graphs to obtain new graphs from old graphs. Such graph operations include unions, products, edge addition, edge deletion, vertex addition, and vertex deletion. Several of these are briefly described below.
1.5.1 Union, intersection, and join
The disjoint union of graphs is defined as follows. For two graphs G1 = (V1, E1) and G2 = (V2, E2) with disjoint vertex sets, their disjoint union is the graph

G1 ∪ G2 = (V1 ∪ V2, E1 ∪ E2).

For example, Figure 1.19 shows the vertex disjoint union of the complete bipartite graph K_{1,5} with the wheel graph W4. The adjacency matrix A of the disjoint union of two graphs G1 and G2 is the diagonal block matrix obtained from the adjacency matrices A1 and A2, respectively. Namely,

A = [ A1  0 ]
    [ 0   A2 ]
Sage can compute graph unions, as the following example shows.
sage: G1 = Graph({1:[2,4], 2:[1,3], 3:[2,6], 4:[1,5], 5:[4,6], 6:[3,5]})
sage: G2 = Graph({7:[8,10], 8:[7,10], 9:[8,12], 10:[7,9], 11:[10,8], 12:[9,7]})
sage: G1u2 = G1.union(G2)
sage: G1u2.adjacency_matrix()
[0 1 0 1 0 0 0 0 0 0 0 0]
[1 0 1 0 0 0 0 0 0 0 0 0]
[0 1 0 0 0 1 0 0 0 0 0 0]
[1 0 0 0 1 0 0 0 0 0 0 0]
[0 0 0 1 0 1 0 0 0 0 0 0]
[0 0 1 0 1 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 1 0 1 0 1]
[0 0 0 0 0 0 1 0 1 1 1 0]
[0 0 0 0 0 0 0 1 0 1 0 1]
[0 0 0 0 0 0 1 1 1 0 1 0]
[0 0 0 0 0 0 0 1 0 1 0 0]
[0 0 0 0 0 0 1 0 1 0 0 0]
In the case where V1 = V2, G1 ∪ G2 is simply the graph consisting of all edges in G1 or in G2. In general, the union of two graphs G1 = (V1, E1) and G2 = (V2, E2) is defined as

G1 ∪ G2 = (V1 ∪ V2, E1 ∪ E2).
Figure 1.20: The union and intersection of graphs with overlapping vertex sets.
The intersection of graphs is defined as follows. For two graphs G1 = (V1, E1) and G2 = (V2, E2), their intersection is the graph

G1 ∩ G2 = (V1 ∩ V2, E1 ∩ E2).

Figure 1.20(d) illustrates the intersection of two graphs whose vertex sets overlap.

The symmetric difference of graphs is defined as follows. For two graphs G1 = (V1, E1) and G2 = (V2, E2), their symmetric difference is the graph

G1 Δ G2 = (V, E)

where V = V1 Δ V2 and the edge set is given by

E = (E1 Δ E2) \ {uv | u ∈ V1 ∩ V2 or v ∈ V1 ∩ V2}.
29
5
5
3
7
4
1
(a) G1
(b) G2
(c) G1 G2
[Figure 1.22: the wheel graphs Wn for n = 4, ..., 9.]
1.5.2 Vertex deletion, edge deletion, and contraction

[Figure 1.23: vertex deletion: (a) G; (b) G − {b}; (c) G − {a, b}; (d) G − {a, b, e}; (e) G − {a, b, c, d, e}.]
Figure 1.24 shows a sequence of graphs resulting from edge deletion. Unlike vertex deletion, when an edge is deleted the vertices incident on that edge are left intact.

[Figure 1.24: edge deletion: (a) G; (b) G − {ac}.]
Proof. First, assume that e = uv is a bridge of G. Suppose for contradiction that e lies on a cycle

C : u, v, w1, w2, ..., wk, u.

Then G − e contains a u-v path u, wk, ..., w2, w1, v. Let u1, v1 be any two vertices in G − e. By hypothesis, G is connected, so there is a u1-v1 path P in G. If e does not lie on P, then P is also a path in G − e so that u1, v1 are connected, which contradicts our assumption of e being a bridge. On the other hand, if e lies on P, then express P as

u1, ..., u, v, ..., v1  or  u1, ..., v, u, ..., v1.

Now

u1, ..., u, wk, ..., w2, w1, v, ..., v1  or  u1, ..., v, w1, w2, ..., wk, u, ..., v1

is a u1-v1 walk in G − e, so u1 and v1 are again connected in G − e, contradicting the assumption that e is a bridge. Hence a bridge cannot lie on a cycle.
To contract an edge e = uv of G, we replace u and v with a single new vertex v_e and define the two edge sets

E1 = {wx ∈ E | {w, x} ∩ {u, v} = ∅},
E2 = {v_e w | uw ∈ E \ {e} or vw ∈ E \ {e}}.

The edge contraction G/e is the graph with vertex set (V \ {u, v}) ∪ {v_e} and edge set E1 ∪ E2.
Let G be the wheel graph W6 in Figure 1.25(a) and consider the edge contraction G/ab,
where ab is the gray colored edge in that figure. Then the edge set E1 denotes all those
edges in G each of which is not incident on a, b, or both a and b. These are precisely
those edges that are colored red. The edge set E2 means that we consider those edges in
G each of which is incident on exactly one of a or b, but not both. The blue colored edges
in Figure 1.25(a) are precisely those edges that E2 suggests for consideration. The result
of the edge contraction G/ab is the wheel graph W5 in Figure 1.25(b). Figures 1.25(a)
to 1.25(f) present a sequence of edge contractions that starts with W6 and repeatedly
contracts it to the trivial graph K1 .
[Figure 1.25: a sequence of edge contractions reducing W6 to K1: (a) G1; (b) G2 = G1/ab with vab = g; (c) G3 = G2/cg with vcg = h; (d) G4 = G3/dh with vdh = i; (e) G5 = G4/fi with vfi = j; (f) G6 = G5/ej.]
1.5.3 Complements
The complement of a simple graph has the same vertices, but exactly those edges that are not in the original graph. In other words, if Gc = (V, Ec) is the complement of G = (V, E), then two distinct vertices v, w ∈ V are adjacent in Gc if and only if they are not adjacent in G. We also write the complement of G as G̅. The sum of the adjacency matrix of G and that of Gc is the matrix with 1's everywhere, except for 0's on the main diagonal. A simple graph that is isomorphic to its complement is called a self-complementary graph. Let H be a subgraph of G. The relative complement of G and H is the edge deletion subgraph G − E(H). That is, we delete from G all edges in H. Sage can compute edge complements, as the following example shows.
sage: G = Graph({1:[2,4], 2:[1,4], 3:[2,6], 4:[1,3], 5:[4,2], 6:[3,1]})
sage: Gc = G.complement()
sage: EG = Set(G.edges(labels=False)); EG
{(1, 2), (4, 5), (1, 4), (2, 3), (3, 6), (1, 6), (2, 5), (3, 4), (2, 4)}
sage: EGc = Set(Gc.edges(labels=False)); EGc
{(1, 5), (2, 6), (4, 6), (1, 3), (5, 6), (3, 5)}
sage: EG.difference(EGc) == EG
True
sage: EGc.difference(EG) == EGc
True
sage: EG.intersection(EGc)
{}
Suppose G = (V, E) is self-complementary and let n = |V|. Since G and Gc together account for all n(n − 1)/2 possible edges and |E(G)| = |E(Gc)|, we have

|E(G)| = (1/2) · n(n − 1)/2 = n(n − 1)/4.

Then 4 | n(n − 1), with one of n and n − 1 being even and the other odd. If n is even, n − 1 is odd so gcd(4, n − 1) = 1, hence by [170, Theorem 1.9] we have 4 | n and so n = 4k for some nonnegative k ∈ Z. If n − 1 is even, use a similar argument to conclude that n = 4k + 1 for some nonnegative k ∈ Z.
Theorem 1.27. A graph and its complement cannot both be disconnected.

Proof. If G is connected, then we are done. Without loss of generality, assume that G is disconnected and let Gc be the complement of G. Let u, v be vertices in G. If u, v are in different components of G, then they are adjacent in Gc. If both u, v belong to some component Ci of G, let w be a vertex in a different component Cj of G. Then u, w are adjacent in Gc, and similarly for v and w. That is, u and v are connected in Gc and therefore Gc is connected.
1.5.4 Cartesian product

The Cartesian product G □ H of graphs G and H is a graph such that the vertex set of G □ H is the Cartesian product

V(G □ H) = V(G) × V(H).

Any two vertices (u, u′) and (v, v′) are adjacent in G □ H if and only if either

1. u = v and u′ is adjacent with v′ in H; or
2. u′ = v′ and u is adjacent with v in G.

The edge set of G □ H is

E(G □ H) = (V(G) × E(H)) ∪ (E(G) × V(H)).
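Sage implements this product, and we can verify the vertex and edge counts that follow from the definition (see also Problem 1.14); a quick sketch:

sage: G = graphs.CompleteGraph(3)
sage: H = graphs.PathGraph(3)
sage: P = G.cartesian_product(H)
sage: P.order() == G.order() * H.order()
True
sage: P.size() == G.order() * H.size() + G.size() * H.order()
True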
[Figure 1.26: (a) K3; (b) P3; (c) the Cartesian product K3 □ P3.]

[Figure 1.27: the hypercube graphs Qn for n = 1, 2, 3, 4.]
Figure 1.28: The 2-mesh M (3, 4) and the 3-mesh M (3, 2, 3).
1.5.5 Graph minors
1.6 Common applications

[Figure 1.29: (a) 2,4,4-trimethylheptane; (b) naphthalene.]

A graph whose vertices represent the atoms of a molecule and whose edges represent chemical bonds is called a molecular graph; two such examples are illustrated in Figure 1.29. Below we
list a few common problems arising in applications of graph theory. See Foulds [77] and
Walther [187] for surveys of applications of graph theory in science, engineering, social
sciences, economics, and operations research.
- If the edge weights are all nonnegative, find a cheapest closed path which contains all the vertices. This is related to the famous traveling salesman problem and is further discussed in Chapters 2 and 6.

- Find a walk that visits each vertex, but contains as few edges as possible and contains no cycles. This type of problem is related to spanning trees and is discussed in further detail in Chapter 3.

- Determine which vertices are more central than others. This is connected with various applications to social network analysis and is covered in more detail in Chapters 5 and 10. An example of a social network is shown in Figure 1.30, which illustrates the marriage ties among Renaissance Florentine families [29]. Note that one family has been removed because its inclusion would create a disconnected graph.
[Figure 1.30: marriage ties among Renaissance Florentine families: Pazzi, Ginori, Salviati, Albizzi, Medici, Acciaiuol, Lambertes, Barbadori, Ridolfi, Strozzi, Castellan, Bischeri, Peruzzi, Tornabuon, Guadagni.]
1.7 Application: finite automata
[Table 1.1: Transition table of a simple vending machine.]
1.7.1 Finite automata
[Figure: a digraph representation of the vending machine in Table 1.1, with edges labeled by the coins inserted (20, 50, $1).]
δ(1, a) = 1,   δ(1, b) = 2,   δ(2, a) = 2,   δ(2, b) = 2.
Example 1.34. Let A = (Q, Σ, δ, Q0, F) be defined by Q = {1, 2}, Σ = {a, b}, Q0 = {1}, F = {2}, and the transition function given by

δ(1, a) = 1,   δ(1, a) = 2,   δ(2, a) = 2,   δ(2, b) = 2.

Figure 1.34 shows a digraph representation of A. Note that δ(1, a) = 1 and δ(1, a) = 2. It follows by definition that A is an NFA.
The special language L(A) is also referred to as a regular language. Referring back to Example 1.33, any string accepted by A has zero or more occurrences of a, followed by exactly one b, and finally zero or more occurrences of a or b. We can describe this language using the regular expression a*b(a|b)*.
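Transition functions like this one are easy to run in code. A minimal Python sketch simulating the DFA of Example 1.33 (the dictionary encoding is ours):

def dfa_accepts(delta, start, accept, w):
    # Run a deterministic automaton on the string w.
    q = start
    for ch in w:
        q = delta[(q, ch)]
    return q in accept

# The DFA of Example 1.33: states {1, 2}, start state 1, accepting {2}.
delta = {(1, "a"): 1, (1, "b"): 2, (2, "a"): 2, (2, "b"): 2}
print(dfa_accepts(delta, 1, {2}, "aab"))  # True: matches a*b(a|b)*
print(dfa_accepts(delta, 1, {2}, "aa"))   # False: no b occurs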
For NFAs, we can similarly define a transition function operating on finite strings. Each input is a string over Σ and the transition function returns a subset of Q. Formally, our transition function for NFAs operating on finite strings is the map

δ̂ : Q × Σ* → 2^Q.

Let q ∈ Q and let w = xa, where x ∈ Σ* and a ∈ Σ. The input symbol a can be interpreted as being the very last symbol in the string w. Then x is interpreted as being the substring of w excluding the symbol a. In the case of the empty string, we have δ̂(q, ε) = {q}. For the inductive case, assume that δ̂(q, x) = {p1, p2, ..., pk} where each p_i ∈ Q. Then

δ̂(q, w) = ∪_{i=1}^{k} δ(p_i, a).

It may happen that for some state p_i, there are no transitions from p_i with input a. We cater for this possibility by writing δ(p_i, a) = ∅.
1.7.2 Simulating an NFA by a DFA

Any NFA can be simulated by a DFA. One way of accomplishing this is to allow the DFA to keep track of all the states that the NFA can be in after reading an input symbol. The formal proof depends on this construction of an equivalent DFA and then showing that the language of the DFA is the same as that of the NFA.

Theorem 1.35. Determinize an NFA. If A is a nondeterministic finite-state automaton, then there exists a deterministic finite-state automaton A′ such that L(A) = L(A′).
δ̂({q1, q2, ..., qi}, s) = ∪_{k=1}^{i} δ(qk, s) = {p1, p2, ..., pj}.    (1.10)

For any input string w, we now show by induction on the length of w that

δ′(q0′, w) = [q1, q2, ..., qi]  if and only if  δ̂(Q0, w) = {q1, q2, ..., qi}.    (1.11)

For the basis step, let |w| = 0 so that w = ε. Then it is clear that

δ′(q0′, w) = δ′(q0′, ε) = [Q0]  and  δ̂(Q0, w) = δ̂(Q0, ε) = Q0.

Next, assume for induction that statement (1.11) holds for all strings of length less than or equal to m ≥ 0. Let w be a string of length m and let a ∈ Σ so that |wa| = m + 1. Then δ′(q0′, wa) = δ′(δ′(q0′, w), a). By our inductive hypothesis, we have

δ′(q0′, w) = [p1, p2, ..., pj]  if and only if  δ̂(Q0, w) = {p1, p2, ..., pj}
Theorem 1.35 tells us that any NFA corresponds to some DFA that accepts the same language. For this reason, the theorem is said to provide us with a procedure for determinizing NFAs. The actual procedure itself is contained in the proof of the theorem, although it must be noted that the procedure is inefficient since it potentially yields transitions from states that are unreachable from the initial state. If q ∈ Q′ is a state of A′ that is unreachable from q0′, then there are no input strings w such that δ′(q0′, w) = q. Such unreachable states are redundant insofar as they do not affect L(A′). Another inefficiency of the procedure in the proof of Theorem 1.35 is the problem of state space explosion. As Q′ = 2^Q is the power set of Q, the resulting DFA can potentially have exponentially more states than the NFA it is simulating. In the worst case, each element of Q′ is a state of the resulting DFA that is reachable from q0′ = [Q0]. The best-case scenario is when each state of the DFA is a singleton, hence the DFA has the same number of states as its corresponding NFA. However, according to the procedure in the proof of Theorem 1.35, we generate all the possible 2^n states of the DFA, where n = |Q|. After considering all the transitions whose starting states are singletons, we then consider all transitions starting from each of the remaining 2^n − n elements in Q′. In the best case, none of those remaining 2^n − n states are reachable from q0′, hence it is redundant to generate transitions starting at each of those 2^n − n states. Example 1.36 concretizes our discussion.
Example 1.36. Use the procedure in Theorem 1.35 to determinize the NFA in Figure 1.35.

[Figure 1.35: an NFA with states 1, 2, 3 over the alphabet {a, b, c}; Table 1.2: its transition table.]
We apply (1.10) to construct all the possible transitions of A′. These transitions are contained in Table 1.3. Using those transitions, we obtain the digraph representation in Figure 1.36, from which it is clear that the states [1], [2], [3], and [1, 2] are the only states reachable from the initial state q0′ = [1].

[Table 1.3: transition table of a deterministic version of the NFA in Figure 1.35.]

The remaining states [1, 3], [2, 3], and [1, 2, 3] are not reachable from q0′ = [1]. In other words, starting at q0′ = [1] there are no input strings that would result in a transition to any of [1, 3], [2, 3], and [1, 2, 3]. Therefore these states, and the transitions starting from them, can be deleted from Figure 1.36 without affecting the language of A′. Figure 1.37 shows an equivalent DFA with redundant states removed.
Figure 1.36: A DFA accepting the same language as the NFA in Figure 1.35.
Figure 1.37: A DFA equivalent to that in Figure 1.36, with redundant states removed.
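The reachability observation above suggests generating only the subsets that are actually reachable from the start set, instead of all 2^n of them. A Python sketch of this on-the-fly subset construction (the names and the toy NFA are ours, not the automaton of Figure 1.35):

def determinize(alphabet, delta, start_set, accept):
    # Subset construction (Theorem 1.35), generating only subsets of
    # NFA states reachable from the start set. delta maps
    # (state, symbol) -> set of states; missing keys mean no transition.
    start = frozenset(start_set)
    dfa_delta, todo, seen = {}, [start], {start}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & set(accept)}
    return seen, dfa_delta, start, dfa_accept

# A toy NFA with states {1, 2, 3}, start set {1}, accepting {3}:
nfa = {(1, "a"): {1, 2}, (1, "b"): {2}, (2, "c"): {3}}
states, _, _, accepting = determinize("abc", nfa, {1}, {3})
print(len(states), sorted(sorted(S) for S in accepting))  # 5 [[3]]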
1.8 Problems

A problem left to itself dries up or goes rotten. But fertilize a problem with a solution: you'll hatch out dozens.
N. F. Simpson, A Resounding Tinkle, 1958
[Figure: a graph of road distances between the German cities Frankfurt, Mannheim, Karlsruhe, Augsburg, Munich, Kassel, Nuremberg, Stuttgart, Würzburg, and Erfurt.]
1.5. If G is a simple graph of order n > 0, show that deg(v) < n for all v ∈ V(G).
1.6. Let G be a graph of order n and size m. Then G is called an overfull graph if m > Δ(G) · ⌊n/2⌋. If m = Δ(G) · ⌊n/2⌋ + 1, then G is said to be just overfull. It can be shown that overfull graphs have odd order. Equivalently, let G be of odd order. We can define G to be overfull if m > Δ(G) · (n − 1)/2, and G is just overfull if m = Δ(G) · (n − 1)/2 + 1. Find an overfull graph and a graph that is just overfull. Some basic results on overfull graphs are presented in Chetwynd and Hilton [49].
1.7. Fix a positive integer n and denote by Γ(n) the number of simple graphs on n vertices. Show that

Γ(n) = 2^(n choose 2) = 2^(n(n−1)/2).
1.8. Let G be an undirected graph whose unoriented incidence matrix is Mu and whose
oriented incidence matrix is Mo .
(a) Show that the sum of the entries in any row of Mu is the degree of the
corresponding vertex.
(b) Show that the sum of the entries in any column of Mu is equal to 2.
(c) If G has no self-loops, show that each column of Mo sums to zero.
1.9. Let G be a loopless digraph and let M be its incidence matrix.
(a) If r is a row of M, show that the number of occurrences of −1 in r counts
the outdegree of the vertex corresponding to r. Show that the number of
occurrences of 1 in r counts the indegree of the vertex corresponding to r.
(b) Show that each column of M sums to 0.
1.10. Let G be a digraph and let M be its incidence matrix. For any row r of M, let m
be the frequency of −1 in r, let p be the frequency of 1 in r, and let t be twice the
frequency of 2 in r. If v is the vertex corresponding to r, show that the degree of
v is deg(v) = m + p + t.
1.11. Let G be an undirected graph without self-loops and let M be its oriented incidence matrix. Show that the Laplacian matrix L of G satisfies L = M M^T,
where M^T is the transpose of M.
1.12. Let J1 denote the incidence matrix of G1 and let J2 denote the incidence matrix of
G2. Find matrix theoretic criteria on J1 and J2 which hold if and only if G1 ≅ G2.
In other words, find the analog of Theorem 1.22 for incidence matrices.
1.13. Show that the complement of an edgeless graph is a complete graph.
1.14. Let G □ H be the Cartesian product of two graphs G and H. Show that
|E(G □ H)| = |V(G)| · |E(H)| + |E(G)| · |V(H)|.
1.15. In 1751, Leonhard Euler posed a problem to Christian Goldbach, a problem that
now bears the name Euler's polygon division problem. Given a plane convex
polygon having n sides, how many ways are there to divide the polygon into triangles using only diagonals? For our purposes, we consider only regular polygons
    C_k = \frac{4k-2}{k+1} C_{k-1}.
1.16. A graph is said to be planar if it can be drawn on the plane in such a way that
no two edges cross each other. For example, the complete graph Kn is planar for
n = 1, 2, 3, 4, but K5 is not planar (see Figure 1.12). Draw a planar version of K4 as
m,   the modulus,     0 < m
a,   the multiplier,  0 ≤ a < m
c,   the increment,   0 ≤ c < m
X0,  the seed,        0 ≤ X0 < m

where the value X0 is also referred to as the starting value. Then iterate the
relation

    X_{n+1} = (a X_n + c) mod m,    n ≥ 0

and halt when the relation produces the seed X0 or when it produces an integer
Xk such that Xk = Xi for some 0 ≤ i < k. The resulting sequence
    S = (X0, X1, . . . , Xn)

is called a linear congruential sequence. Define a graph theoretic representation
of S as follows: let the vertex set be V = {X0, X1, . . . , Xn} and let the edge set
be E = {Xi Xi+1 | 0 ≤ i < n}. The resulting graph G = (V, E) is called the
linear congruential graph of the linear congruential sequence S. See chapter 3 of
Knuth [119] for other techniques for generating random numbers.
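The iteration itself is easy to express in code. Below is a minimal Python sketch (the function name lcg_sequence is our own choice for illustration, not from the text), following the halting rule stated above:

def lcg_sequence(m, a, c, X0):
    # Iterate X_{n+1} = (a*X_n + c) mod m, halting as soon as the relation
    # produces the seed X0 or any previously seen value X_i.
    S = [X0]
    seen = {X0}
    while True:
        X = (a * S[-1] + c) % m
        if X in seen:
            return S
        S.append(X)
        seen.add(X)

# Parameters (i) from part (a) below: m = 10, a = c = X0 = 7.
print(lcg_sequence(10, 7, 7, 7))    # prints [7, 6, 9, 0]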
(a) Compute the linear congruential sequences Si with the following parameters:
(i) S1: m = 10, a = c = X0 = 7
(ii) S2: m = 10, a = 5, c = 7, X0 = 0
(iii) S3: m = 10, a = 3, c = 7, X0 = 2
(iv) S4: m = 10, a = 2, c = 5, X0 = 3
(b) Let Gi be the linear congruential graph of Si. Draw each of the graphs Gi.
Draw the graph resulting from the union ⋃i Gi.
1.19. We want to generate a random bipartite graph whose first and second partitions
have n1 and n2 vertices, respectively. Describe and present pseudocode to generate
the required random bipartite graph. What is the worst-case runtime of your
algorithm? Modify your algorithm to account for a third parameter m that specifies
the number of edges in the resulting bipartite graph.
1.20. Describe and present pseudocode to generate a random regular graph. What is the
worst-case runtime of your algorithm?
1.21. The Cantor-Schröder-Bernstein theorem states that if A, B are sets and we have
an injection f : A → B and an injection g : B → A, then there is a bijection
between A and B, thus proving that A and B have the same cardinality. Here
we use bipartite graphs and other graph theoretic concepts to prove the Cantor-Schröder-Bernstein theorem. The full proof can be found in Yegnanarayanan [199].
(a) Is it possible for A and B to be bipartitions of V and yet satisfy A ∩ B ≠ ∅?
(b) Now assume that A ∩ B = ∅ and define a bipartite graph G = (V, E) with A
and B being the two partitions of V, where for any x ∈ A and y ∈ B we have
xy ∈ E if and only if either f(x) = y or g(y) = x. Show that deg(v) = 1 or
deg(v) = 2 for each v ∈ V.
(c) Let C be a component of G and let A′ ⊆ A and B′ ⊆ B contain all vertices
in the component C. Show that |A′| = |B′|.
1.22. Fermat's little theorem states that if p is prime and a is an integer not divisible
by p, then p divides a^p − a. Here we cast the problem within the context of graph
theory and prove it using graph theoretic concepts. The full proof can be found in
Heinrich and Horak [96] and Yegnanarayanan [199].
(a) Let G = (V, E) be a graph with V being the set of all sequences (a1, a2, . . . , ap)
of integers 1 ≤ ai ≤ a with aj ≠ ak for some j ≠ k. Show that G has a^p − a
vertices.
(b) Define the edge set of G as follows. If u, v ∈ V such that u = (u1, u2, . . . , up)
and v = (up, u1, . . . , u_{p−1}), then uv ∈ E. Show that each component of G is a
cycle of length p.
(c) Show that G has (a^p − a)/p components.
1.23. For the finite automaton in Figure 1.32, identify the following:
(a) The set of states Q.
(b) The alphabet Σ.
(c) The transition function δ : Q × Σ → Q.
1.24. The cycle graph Cn is a 2-regular graph. If 2 < r < n/2, unlike the cycle graph
there are various realizations of an r-regular graph; see Figure 1.41 for the case of
r = 3 and n = 10. The k-circulant graph on n vertices can be considered as an
intermediate graph between Cn and a k-regular graph. Let k and n be positive
(Figure 1.41: three realizations of a 3-regular graph on 10 vertices. A companion figure shows k-circulant graphs for (a) k = 4, (b) k = 6, (c) k = 8.)
Chapter 2
Graph Algorithms
Graph algorithms have many applications. Suppose you are a salesman with a product
you would like to sell in several cities. To determine the cheapest travel route from
city to city, you must effectively search a graph having weighted edges for the cheapest
route visiting each city once. Each vertex denotes a city you must visit and each edge
has a weight indicating either the distance from one city to another or the cost to travel
from one city to another.
Shortest path algorithms are some of the most important algorithms in algorithmic
graph theory. In this chapter, we first examine several common graph traversal algorithms and some basic data structures underlying these algorithms. A data structure is
a combination of methods for structuring a collection of data (e.g. vertices and edges)
and protocols for accessing the data. We then consider a number of common shortest
path algorithms, which rely in one way or another on graph traversal techniques and
basic data structures for organizing and managing vertices and edges.
2.1 Representing graphs in a computer
In section 1.3, we discussed how to use matrices for representing graphs and digraphs. If
A = [aij] is an m × n matrix, the adjacency matrix representation of a graph would require
representing all the mn entries of A. Alternative graph representations exist that are
much more efficient than representing all entries of a matrix. The graph representation
used can be influenced by the size of a graph or the purpose of the representation. Section 2.1.1 discusses the adjacency list representation that can result in less storage space
requirement than the adjacency matrix representation. The graph6 format discussed in
section 2.1.3 provides a compact means of storing graphs for archival purposes.
2.1.1 Adjacency lists
A list is a sequence of objects. Unlike sets, a list may contain multiple copies of the same
object. Each object in a list is referred to as an element of the list. A list L of n ≥ 0
elements is written as L = [a1, a2, . . . , an], where the i-th element ai can be indexed
as L[i]. In case n = 0, the list L = [ ] is referred to as the empty list.
equivalent if they both contain the same elements at exactly the same positions.
Define the adjacency lists of a graph as follows. Let G be a graph with vertex set
V = {v1 , v2 , . . . , vn }. Assign to each vertex vi a list Li containing all the vertices that
are adjacent to vi . The list Li associated with vi is referred to as the adjacency list of
vi . Then Li = [ ] if and only if vi is an isolated vertex. We say that Li is the adjacency
list of vi because any permutation of the elements of Li results in a list that contains
the same vertices adjacent to vi . We are mainly concerned with the neighbors of vi , but
disregard the position where each neighbor is located in Li. If each adjacency list Li
contains si elements, where 0 ≤ si ≤ n, we say that Li has length si. The adjacency
list representation of the graph G requires that we represent ∑i si = 2|E(G)| ≤ n²
elements in a computer's memory, since each edge appears twice in the adjacency list
representation. An adjacency list is explicit about which vertices are adjacent to a vertex
and implicit about which vertices are not adjacent to that same vertex. Without knowing
the graph G, given only the adjacency lists L1, L2, . . . , Ln, we can reconstruct G. For example,
Figure 2.1 shows a graph and its adjacency list representation.
(Figure 2.1: a graph on eight vertices, together with its adjacency lists:)

L1 = [2, 8]    L5 = [6, 8]
L2 = [1, 6]    L6 = [2, 5, 8]
L3 = [4]       L7 = [ ]
L4 = [3]       L8 = [1, 5, 6]
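Adjacency lists are also easy to build programmatically. The following is a minimal Python sketch, assuming vertices are numbered 1, . . . , n and the graph is given by an edge list (the edges below are read off Figure 2.1):

def adjacency_lists(n, edges):
    # One list per vertex; isolated vertices keep the empty list.
    L = {i: [] for i in range(1, n + 1)}
    for u, v in edges:
        # Each edge uv is recorded twice: v in L[u] and u in L[v].
        L[u].append(v)
        L[v].append(u)
    return L

edges = [(1, 2), (1, 8), (2, 6), (3, 4), (5, 6), (5, 8), (6, 8)]
for i, Li in adjacency_lists(8, edges).items():
    print(i, Li)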
vertices are adjacent if their corresponding sets are disjoint. Draw the (5, 2)-Kneser
graph and find its order and adjacency lists. In general, if n and k are positive, what is
the order of the (n, k)-Kneser graph?
Solution. The (5, 2)-Kneser graph is the graph whose vertices are the 2-subsets
{1, 2}, {1, 3}, {1, 4}, {1, 5}, {2, 3}, {2, 4}, {2, 5}, {3, 4}, {3, 5}, {4, 5}
of {1, 2, 3, 4, 5}. That is, each vertex of the (5, 2)-Kneser graph is a 2-combination of the
set {1, 2, 3, 4, 5} and therefore the graph itself has order \binom{5}{2} = \frac{5!}{2!\,3!} = 10. The edges of
this graph are
({1, 3}, {2, 4}), ({2, 4}, {1, 5}), ({2, 4}, {3, 5}), ({1, 3}, {4, 5}), ({1, 3}, {2, 5})
({3, 5}, {1, 4}), ({3, 5}, {1, 2}), ({1, 4}, {2, 3}), ({1, 4}, {2, 5}), ({4, 5}, {2, 3})
({4, 5}, {1, 2}), ({1, 5}, {2, 3}), ({1, 5}, {3, 4}), ({3, 4}, {1, 2}), ({3, 4}, {2, 5})
from which we obtain the following adjacency lists:

L_{1,2} = [{3, 4}, {3, 5}, {4, 5}]    L_{2,4} = [{1, 3}, {1, 5}, {3, 5}]
L_{1,3} = [{2, 4}, {2, 5}, {4, 5}]    L_{2,5} = [{1, 3}, {3, 4}, {1, 4}]
L_{1,4} = [{2, 3}, {3, 5}, {2, 5}]    L_{3,4} = [{1, 2}, {1, 5}, {2, 5}]
L_{1,5} = [{2, 4}, {3, 4}, {2, 3}]    L_{3,5} = [{2, 4}, {1, 2}, {1, 4}]
L_{2,3} = [{1, 5}, {1, 4}, {4, 5}]    L_{4,5} = [{1, 3}, {1, 2}, {2, 3}]
The (5, 2)-Kneser graph itself is shown in Figure 2.2. Using Sage, we have
sage: K = graphs.KneserGraph(5, 2); K
Kneser graph with parameters 5,2: Graph on 10 vertices
sage: for v in K.vertices():
...       print(v, K.neighbors(v))
...
({4, 5}, [{1, 3}, {1, 2}, {2, 3}])
({1, 3}, [{2, 4}, {2, 5}, {4, 5}])
({2, 5}, [{1, 3}, {3, 4}, {1, 4}])
({2, 3}, [{1, 5}, {1, 4}, {4, 5}])
({3, 4}, [{1, 2}, {1, 5}, {2, 5}])
({3, 5}, [{2, 4}, {1, 2}, {1, 4}])
({1, 4}, [{2, 3}, {3, 5}, {2, 5}])
({1, 5}, [{2, 4}, {3, 4}, {2, 3}])
({1, 2}, [{3, 4}, {3, 5}, {4, 5}])
({2, 4}, [{1, 3}, {1, 5}, {3, 5}])
If n and k are positive integers, then the (n, k)-Kneser graph has

    \binom{n}{k} = \frac{n(n-1) \cdots (n-k+1)}{k!}

vertices.
We can categorize a graph G = (V, E) as dense or sparse based upon its size. A dense
graph has size |E| that is close to |V|², i.e. |E| = Θ(|V|²), in which case it is feasible to
represent G as an adjacency matrix. The size of a sparse graph is much less than |V|²,
i.e. |E| = o(|V|²), which renders the adjacency matrix representation unsuitable. For
a sparse graph, an adjacency list representation can require less storage space than an
adjacency matrix representation of the same graph.
(Figure 2.2: the (5, 2)-Kneser graph.)
2.1.2 Edge lists
Lists can also be used to store the edges of a graph. To create an edge list L for a graph
G, if uv is an edge of G then we let uv or the ordered pair (u, v) be an element of L. In
general, let
v0 v1 , v2 v3 , . . . , vk vk+1
be all the edges of G, where k is even. Then the edge list of G is given by
L = [v0 v1 , v2 v3 , . . . , vk vk+1 ].
In some cases, it is desirable to have the edges of G be in contiguous list representation.
If the edge list L of G is as given above, the contiguous edge list representation of the
edges of G is
[v0 , v1 , v2 , v3 , . . . , vk , vk+1 ].
That is, if 0 ≤ i ≤ k is even then vi v_{i+1} is an edge of G.
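A short Python sketch of the conversion between the two representations (the data below is hypothetical, for illustration only):

edge_list = [(1, 2), (2, 3), (3, 4)]
# Flatten the edge list into the contiguous representation described above.
contiguous = [v for e in edge_list for v in e]        # [1, 2, 2, 3, 3, 4]
# Recover the edges: for each even index i, (L[i], L[i+1]) is an edge.
recovered = list(zip(contiguous[0::2], contiguous[1::2]))
assert recovered == edge_list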
2.1.3 The graph6 format
The graph formats graph6 and sparse6 were developed by Brendan McKay [139] at
The Australian National University as a compact way to represent graphs. These two
formats use bit vectors and printable characters of the American Standard Code for
Information Interchange (ASCII) encoding scheme. The 64 printable ASCII characters
used in graph6 and sparse6 are those ASCII characters with decimal codes from 63 to
126, inclusive, as shown in Table 2.1. This section covers only the graph6 format.
For the full specification of both the graph6 and sparse6 formats, see McKay [139].
Bit vectors
Before discussing how graph6 and sparse6 represent graphs using printable ASCII characters, we first present the encoding schemes used by these two formats.
56
binary    decimal  glyph      binary    decimal  glyph
0111111   63       ?          1011111   95       _
1000000   64       @          1100000   96       `
1000001   65       A          1100001   97       a
1000010   66       B          1100010   98       b
1000011   67       C          1100011   99       c
1000100   68       D          1100100   100      d
1000101   69       E          1100101   101      e
1000110   70       F          1100110   102      f
1000111   71       G          1100111   103      g
1001000   72       H          1101000   104      h
1001001   73       I          1101001   105      i
1001010   74       J          1101010   106      j
1001011   75       K          1101011   107      k
1001100   76       L          1101100   108      l
1001101   77       M          1101101   109      m
1001110   78       N          1101110   110      n
1001111   79       O          1101111   111      o
1010000   80       P          1110000   112      p
1010001   81       Q          1110001   113      q
1010010   82       R          1110010   114      r
1010011   83       S          1110011   115      s
1010100   84       T          1110100   116      t
1010101   85       U          1110101   117      u
1010110   86       V          1110110   118      v
1010111   87       W          1110111   119      w
1011000   88       X          1111000   120      x
1011001   89       Y          1111001   121      y
1011010   90       Z          1111010   122      z
1011011   91       [          1111011   123      {
1011100   92       \          1111100   124      |
1011101   93       ]          1111101   125      }
1011110   94       ^          1111110   126      ~

Table 2.1: The 64 printable ASCII characters used by graph6 and sparse6.
A bit vector is, as its name suggests, a vector whose elements are 1s and 0s. It can be represented as a list
of bits, e.g. E can be represented as the ASCII bit vector [1, 0, 0, 0, 1, 0, 1]. For brevity,
we write a bit vector in a compact form such as 1000101. The length of a bit vector
is its number of bits. The most significant bit of a bit vector v is the bit position with
the largest value among all the bit positions in v. Similarly, the least significant bit is
the bit position in v having the least value among all the bit positions in v. The least
significant bit of v is usually called the parity bit because when v is interpreted as an
integer the parity bit determines whether the integer is even or odd. Reading 1000101
from left to right, the first bit 1 is the most significant bit, followed by the second bit 0
which is the second most significant bit, and so on all the way down to the seventh bit
1 which is the least significant bit.
The order in which we process the bits of a bit vector

    v = b_{n-1} b_{n-2} \cdots b_1 b_0        (2.1)

is from the most significant bit b_{n-1} down to the least significant bit b_0. The integer
representation of v is obtained by evaluating the polynomial

    \sum_{i=0}^{n-1} b_i x^i        (2.2)

at x = 2. See problem 2.2 for discussion of an efficient method to compute the integer
representation of a bit vector.
position         0    1    2    3    4    5    6
bit value        1    0    0    0    1    0    1
position value   2^0  2^1  2^2  2^3  2^4  2^5  2^6

position         0    1    2    3    4    5    6
bit value        1    0    0    0    1    0    1
position value   2^6  2^5  2^4  2^3  2^2  2^1  2^0
    N(n) = n + 63,    if 0 ≤ n ≤ 62.

Let M = [a_{i,j}] be the adjacency matrix of G and consider the bit vector

    v = c_2 c_3 \cdots c_n,

where ci denotes the entries a_{0,i} a_{1,i} \cdots a_{i-1,i} in column i of M. Then the graph6 representation of G is N(n)R(v), where R(v) and N(n) are as in (2.3) and (2.4), respectively.
That is, N(n) encodes the order of G and R(v) encodes the edges of G.
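The encoding is easy to experiment with. Below is a minimal Python sketch of the scheme just described, restricted to graphs of order n ≤ 62; the function name graph6 and the edge-set input format are our own choices for illustration, not part of McKay's specification.

def graph6(n, edges):
    # N(n): a single character with ASCII code n + 63, valid for 0 <= n <= 62.
    assert 0 <= n <= 62
    # Build the bit vector v = c_2 c_3 ... c_n, where column i contributes
    # the upper-triangle entries a_{0,i} ... a_{i-1,i} of the adjacency matrix.
    E = {(min(u, v), max(u, v)) for u, v in edges}
    bits = [1 if (j, i) in E else 0 for i in range(1, n) for j in range(i)]
    # Pad v on the right with zeros to a multiple of six bits.
    bits += [0] * (-len(bits) % 6)
    # R(v): map each six-bit group, read as a big-endian integer, to an
    # ASCII code in the range 63..126 (see Table 2.1).
    chars = []
    for k in range(0, len(bits), 6):
        group = bits[k:k + 6]
        value = sum(b << (5 - pos) for pos, b in enumerate(group))
        chars.append(chr(value + 63))
    return chr(n + 63) + "".join(chars)

# The path graph 0--1--2--3 on four vertices.
print(graph6(4, [(0, 1), (1, 2), (2, 3)]))    # prints Ch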
2.2 Graph searching
Errors, like straws, upon the surface flow;
He who would search for pearls must dive below.
John Dryden, All for Love, 1678
This section discusses two fundamental algorithms for graph traversal: breadth-first
search and depth-first search. The word search used in describing these two algorithms
is rather misleading. It would be more accurate to describe them as algorithms for
constructing trees using the adjacency information of a given graph. However, the names
breadth-first search and depth-first search are entrenched in the literature on graph
theory and computer science. From here on, we use these two names as given above,
bearing in mind their intended purposes.
2.2.1 Breadth-first search
Breadth-first search (BFS) is a strategy for running through the vertices of a graph. It
was presented by Moore [146] in 1959 within the context of traversing mazes. Lee [130]
independently discovered the same algorithm in 1961 in his work on routing wires on
circuit boards. In the physics literature, BFS is also known as a burning algorithm in
view of the analogy of a fire burning and spreading through an area, a piece of paper,
fabric, etc.
The basic BFS algorithm can be described as follows. Starting from a given vertex
v of a graph G, we first explore the neighborhood of v by visiting all vertices that are
adjacent to v. We then apply the same strategy to each of the neighbors of v. The
strategy of exploring the neighborhood of a vertex is applied to all vertices of G. The
result is a tree rooted at v and this tree is a subgraph of G. Algorithm 2.1 presents a
general template for the BFS strategy. The tree resulting from the BFS algorithm is
called a breadth-first search tree.
Algorithm 2.1: A general breadth-first search template.
Input: A directed or undirected graph G = (V, E) of order n > 0. A vertex s
from which to start the search. The vertices are numbered from 1 to
n = |V |, i.e. V = {1, 2, . . . , n}.
Output: A list D of distances of all vertices from s. A tree T rooted at s.
1   Q ← [s]                  /* queue of nodes to visit */
2   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
3   D[s] ← 0
4   T ← [ ]
5   while length(Q) > 0 do
6       v ← dequeue(Q)
7       for each w ∈ adj(v) do
8           if D[w] = ∞ then
9               D[w] ← D[v] + 1
10              enqueue(Q, w)
11              append(T, vw)
12  return (D, T)
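For concreteness, here is a direct Python rendering of Algorithm 2.1; it is a sketch under the same conventions as the pseudocode, with adj[v] listing the neighbors of v and float("inf") standing in for ∞:

from collections import deque

INF = float("inf")

def bfs(n, adj, s):
    Q = deque([s])              # queue of nodes to visit
    D = [INF] * (n + 1)         # D[v] = distance from s; index 0 unused
    D[s] = 0
    T = []                      # edges of the breadth-first search tree
    while Q:
        v = Q.popleft()         # dequeue
        for w in adj[v]:
            if D[w] == INF:
                D[w] = D[v] + 1
                Q.append(w)     # enqueue
                T.append((v, w))
    return D, T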
The breadth-first search algorithm makes use of a special type of list called a queue.
This is analogous to a queue of people waiting in line to be served. A person may enter
the queue by joining the rear of the queue. The person who is in the queue the longest
amount of time is served first, followed by the person who has waited the second longest
time, and so on. Formally, a queue Q is a list of elements. At any time, we only have
60
access to the first element of Q, known as the front or start of the queue. We insert
a new element into Q by appending the new element to the rear or end of the queue.
The operation of removing the front of Q is referred to as dequeue, while the operation
of appending to the rear of Q is called enqueue. That is, a queue implements a first-in
first-out (FIFO) protocol for adding and removing elements. As with lists, the length of
a queue is its total number of elements.
(Figures 2.3 and 2.4: the stages of breadth-first search applied to an undirected graph and to a digraph.)
The i-th element D[i] counts the number of edges in T between the vertices s and vi. In
other words, D[i] is the length of the s-vi path in T. It can be shown that D[i] = ∞ for
some i if and only if G is disconnected. After one application of Algorithm 2.1, it may happen that
D[i] = ∞ for at least one vertex vi ∈ V. To traverse those vertices that are unreachable
from s, again we apply Algorithm 2.1 on G with starting vertex vi. Repeat this algorithm
as often as necessary until all vertices of G are visited. The result may be a tree that
contains all the vertices of G or a collection of trees, each of which contains a subset of
V (G). Figures 2.3 and 2.4 present BFS trees resulting from applying Algorithm 2.1 on
an undirected graph and a digraph, respectively.
Theorem 2.2. The worst-case time complexity of Algorithm 2.1 is O(|V | + |E|).
Proof. Without loss of generality, we can assume that G = (V, E) is connected. The
initialization steps in lines 1 to 4 take O(|V|) time. After initialization, all but one
vertex are labelled ∞. Line 8 ensures that each vertex is enqueued at most once and
hence dequeued at most once. Each of enqueuing and dequeuing takes constant time.
The total time devoted to queue operations is O(|V|). The adjacency list of a vertex
is scanned after dequeuing that vertex, so each adjacency list is scanned at most once.
Summing the lengths of the adjacency lists, we have Θ(|E|) and therefore we require
O(|E|) time to scan the adjacency lists. After the adjacency list of a vertex is scanned,
at most k edges are added to the list T, where k is the length of the adjacency list under
consideration. Like queue operations, appending to a list takes constant time, hence we
require O(|E|) time to build the list T. Therefore, BFS runs in O(|V| + |E|) time.
Theorem 2.3. For the list D resulting from Algorithm 2.1, let s be a starting vertex
and let v be a vertex such that D[v] ≠ ∞. Then D[v] is the length of any shortest path
from s to v.
Proof. It is clear that D[v] = ∞ if and only if there are no paths from s to v. Let v be
a vertex such that D[v] ≠ ∞. As v can be reached from s by a path of length D[v], the
length d(s, v) of any shortest s-v path satisfies d(s, v) ≤ D[v]. Use induction on d(s, v) to
show that equality holds. For the base case s = v, we have d(s, v) = D[v] = 0 since the
trivial path has length zero. Assume for induction that if d(s, v) = k, then d(s, v) = D[v].
Let d(s, u) = k + 1 with the corresponding shortest s-u path being (s, v1, v2, . . . , vk, u).
By our induction hypothesis, (s, v1, v2, . . . , vk) is a shortest path from s to vk of length
d(s, vk) = D[vk] = k. In other words, D[vk] < D[u] and the while loop spanning lines 5
to 11 processes vk before processing u. The graph under consideration has the edge vk u.
When examining the adjacency list of vk, BFS reaches u (if u is not reached earlier) and
so D[u] ≤ k + 1. Hence, D[u] = k + 1 and therefore d(s, u) = D[u] = k + 1.
In the proof of Theorem 2.3, we used d(u, v) to denote the length of the shortest path
from u to v. This shortest path length is also known as the distance from u to v, and
will be discussed in further detail in section 2.3 and Chapter 5. The diameter diam(G)
of a graph G = (V, E) is defined as

    diam(G) = \max_{u,v \in V,\, u \neq v} d(u, v).        (2.5)

Using the above definition, to find the diameter we first determine the distance between
each pair of distinct vertices, then we compute the maximum of all such distances.
Breadth-first search is a useful technique for finding the diameter: we simply run breadthfirst search from each vertex. An interesting application of the diameter appears in the
small-world phenomenon [117, 144, 192], which contends that a certain special class of
sparse graphs have low diameter.
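As a sketch of this technique, the following function computes the diameter by running the bfs function given after Algorithm 2.1 from every vertex; it assumes that bfs and INF are in scope and that the graph is connected, so every distance is finite:

def diameter(n, adj):
    best = 0
    for s in range(1, n + 1):
        D, _ = bfs(n, adj, s)
        # D[0] is unused, so only consider D[1:].
        best = max(best, max(D[1:]))
    return best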
2.2.2 Depth-first search
(Figure 2.5: a knight's position on an 8 × 8 chessboard, a knight's tour from that position, and a graph representation of the tour.)
path, hoping to make further progress. Repeat this strategy until a tour is found or until
we have exhausted all possible moves. The above strategy for finding a knight's tour
is an example of depth-first search, sometimes called backtracking. Figure 2.5(b) shows
a knight's tour with the starting position as shown in Figure 2.5(a); and Figure 2.5(c)
is a graph representation of this tour. The black-filled nodes indicate the endpoints
of the tour. A more interesting question is: What is the number of knight's tours
on an 8 × 8 chessboard? Loebbing and Wegener [136] announced in 1996 that this
number is 33,439,123,484,294. The answer was later corrected by McKay [140] to be
13,267,364,410,532. See [66] for a discussion of the knight's tour and its relationship to
mathematics.
Algorithm 2.2: A general depth-first search template.
Input: A directed or undirected graph G = (V, E) of order n > 0. A vertex s
from which to start the search. The vertices are numbered from 1 to
n = |V |, i.e. V = {1, 2, . . . , n}.
Output: A list D of distances of all vertices from s. A tree T rooted at s.
1   S ← [s]                  /* stack of nodes to visit */
2   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
3   D[s] ← 0
4   T ← [ ]
5   while length(S) > 0 do
6       v ← pop(S)
7       for each w ∈ adj(v) do
8           if D[w] = ∞ then
9               D[w] ← D[v] + 1
10              push(S, w)
11              append(T, vw)
12  return (D, T)
Algorithm 2.2 formalizes the above description of depth-first search. The tree resulting from applying DFS on a graph is called a depth-first search tree. The general
structure of this algorithm bears close resemblance to Algorithm 2.1. A significant difference is that instead of using a queue to structure and organize vertices to be visited,
DFS uses another special type of list called a stack . To understand how elements of a
stack are organized, we use the analogy of a stack of cards. A new card is added to
the stack by placing it on top of the stack. Any time we want to remove a card, we
are only allowed to remove the top-most card of the stack. A list
L = [a1 , a2 , . . . , ak ] of k elements is a stack when we impose the same rules for element
insertion and removal. The top and bottom of the stack are L[k] and L[1], respectively.
The operation of removing the top element of the stack is referred to as popping the
element off the stack. Inserting an element into the stack is called pushing the element
onto the stack. In other words, a stack implements a last-in first-out (LIFO) protocol
for element insertion and removal, in contrast to the FIFO policy of a queue. We also
use the term length to refer to the number of elements in the stack.
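Since only the container changes, a Python sketch of Algorithm 2.2 is the bfs function given earlier with the queue replaced by a list used as a stack:

INF = float("inf")

def dfs(n, adj, s):
    S = [s]                     # stack of nodes to visit
    D = [INF] * (n + 1)
    D[s] = 0
    T = []                      # edges of the depth-first search tree
    while S:
        v = S.pop()             # pop the top element off the stack
        for w in adj[v]:
            if D[w] == INF:
                D[w] = D[v] + 1
                S.append(w)     # push w onto the stack
                T.append((v, w))
    return D, T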
The depth-first search Algorithm 2.2 can be analyzed similarly to how we analyzed
Algorithm 2.1. Just as BFS is applicable to both directed and undirected graphs, we
can also have undirected graphs and digraphs as input to DFS.
(Figures 2.6 and 2.7: the stages of depth-first search applied to an undirected graph and to a digraph.)
For the case of an undirected graph, line 7 of Algorithm 2.2 considers all vertices adjacent to the current
vertex v. In case the input graph is directed, we replace w ∈ adj(v) on line 7 with
w ∈ oadj(v) to signify that we only want to consider the out-neighbors of v. If any
neighbors (respectively, out-neighbors) of v are labelled ∞, we know that we have
not explored any paths starting from any of those vertices. So we label each of those
unexplored vertices with a positive integer and push them onto the stack S, where
they will wait for later processing. We also record the paths leading from v to each of
those unvisited neighbors, i.e. the edges vw for each vertex w ∈ adj(v) (respectively,
w ∈ oadj(v)) are appended to the list T. The test on line 8 ensures that we do not push
onto S any vertices on the path that leads to v. When we resume another round of the
while loop that starts on line 5, the previous vertex v has been popped off S and the
neighbors (respectively, out-neighbors) of v have been pushed onto S. To explore a path
starting at v, we choose any unexplored neighbor of v by popping an element off S and
repeat the for loop starting on line 7.
order to traverse all vertices of the input graph. The output of DFS consists of two lists
D and T : T is a tree rooted at the starting vertex s; and each D[i] counts the length
of the s-vi path in T . Figures 2.6 and 2.7 show the DFS trees resulting from running
Algorithm 2.2 on an undirected graph and a digraph, respectively. The worst-case time
complexity of DFS can be analyzed using an argument similar to that in Theorem 2.2.
Arguing along the same lines as in the proof of Theorem 2.3, we can also show that the
list D returned by DFS contains lengths of any shortest paths from the starting vertex
s to any other vertex in the tree T .
(Figure 2.8: the Petersen graph.)
Example 2.4. In 1898, Julius Petersen published [157] a graph that now bears his name:
the Petersen graph shown in Figure 2.8. Compare the search trees resulting from running
breadth- and depth-first searches on the Petersen graph with starting vertex 0.
Solution. The Petersen graph in Figure 2.8 can be constructed and searched as follows.
sage: g = graphs.PetersenGraph(); g
Petersen graph: Graph on 10 vertices
sage: list(g.breadth_first_search(0))
[0, 1, 4, 5, 2, 6, 3, 9, 7, 8]
sage: list(g.depth_first_search(0))
[0, 5, 8, 6, 9, 7, 2, 3, 4, 1]
From the above Sage session, we see that starting from vertex 0 breadth-first search
yields the edge list
[01, 04, 05, 12, 16, 43, 49, 57, 58]
and depth-first search produces the corresponding edge list
[05, 58, 86, 69, 97, 72, 23, 34, 01].
Our results are illustrated in Figure 2.9.
(Figure 2.9: the BFS and DFS trees of the Petersen graph, each rooted at vertex 0.)
2.2.3 Connectivity of a graph
Both BFS and DFS can be used to determine if an undirected graph is connected. Let
G = (V, E) be an undirected graph of order n > 0 and let s be an arbitrary vertex
of G. We initialize a counter c ← 1 to mean that we are starting our exploration at
s, hence we have already visited one vertex, i.e. s. We apply either BFS or DFS,
treating G and s as input to any of these algorithms. Each time we visit a vertex that
was previously unvisited, we increment the counter c. At the end of the algorithm, we
compare c with n. If c = n, we know that we have visited all vertices of G and conclude
that G is connected. Otherwise, we conclude that G is disconnected. This procedure is
summarized in Algorithm 2.3.
Note that Algorithm 2.3 uses the BFS template of Algorithm 2.1, with some minor
changes. Instead of initializing the list D with n = |V| copies of ∞, we use n copies of
0. Each time we have visited a vertex w, we make the assignment D[w] ← 1, instead
of incrementing the value D[v] of w's parent vertex and assigning that value to D[w]. At
the end of the while loop, we have the equality c = \sum_{d \in D} d. The value of this sum
could be used in the test starting from line 12. However, the value of the counter c
is incremented immediately after we have visited an unvisited vertex. An advantage is
that we do not need to perform a separate summation outside of the while loop. To
use the DFS template for determining graph connectivity, we simply replace the queue
implementation in Algorithm 2.3 with a stack implementation (see problem 2.19).
Algorithm 2.3: Determining whether an undirected graph is connected.
Input: An undirected graph G = (V, E) of order n > 0. A vertex s from which to start the search.
Output: True if G is connected; False otherwise.

1   Q ← [s]                  /* queue of nodes to visit */
2   D ← [0, 0, . . . , 0]    /* n copies of 0 */
3   D[s] ← 1
4   c ← 1
5   while length(Q) > 0 do
6       v ← dequeue(Q)
7       for each w ∈ adj(v) do
8           if D[w] = 0 then
9               D[w] ← 1
10              c ← c + 1
11              enqueue(Q, w)
12  if c = |V| then
13      return True
14  return False
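A Python sketch of Algorithm 2.3, using the same counter-based bookkeeping:

from collections import deque

def is_connected(n, adj, s=1):
    Q = deque([s])
    visited = [False] * (n + 1)
    visited[s] = True
    c = 1                       # s itself has been visited
    while Q:
        v = Q.popleft()
        for w in adj[v]:
            if not visited[w]:
                visited[w] = True
                c += 1
                Q.append(w)
    return c == n               # all n vertices reached?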
2.3 Weights and distances
than or equal to the distance from a to c. The latter principle is known as the triangle
inequality. In summary, given three vertices u, v, w in a graph G, the distance function
d on G satisfies the following property.
Lemma 2.5. Path distance as metric function. Let G = (V, E) be a graph with
weight function w : E → R. Define a distance function d : V × V → R given by

    d(u, v) = \begin{cases}
        \infty, & \text{if there are no paths from } u \text{ to } v,\\
        \min\{ w(W) \mid W \text{ is a } u\text{-}v \text{ walk} \}, & \text{otherwise}.
    \end{cases}

Then d is a metric on V if it satisfies the following properties:
1. Nonnegativity: d(u, v) ≥ 0 with d(u, v) = 0 if and only if u = v.
2. Symmetry: d(u, v) = d(v, u).
3. Triangle inequality: d(u, v) + d(v, w) ≥ d(u, w).
The pair (V, d) is called a metric space, where the word metric refers to the distance
function d. Any graphs we consider are assumed to have finite sets of vertices. For this
reason, (V, d) is also known as a finite metric space. The distance matrix D = [d(vi, vj)]
of a connected graph is the distance matrix of its finite metric space. The topic of metric
spaces is covered in further detail in topology texts such as Runde [164] and Shirali and
Vasudeva [169]. See Buckley and Harary [40] for an in-depth coverage of the distance
concept in graph theory.
Many different algorithms exist for computing a shortest path in a weighted graph.
Some only work if the graph has no negative weight cycles. Some assume that there is a
single start or source vertex. Some compute the shortest paths from any vertex to any
other and also detect if the graph has a negative weight cycle. No matter what algorithm
is used for the special case of nonnegative weights, the length of the shortest path can
neither equal nor exceed the order of the graph.
Lemma 2.6. Fix a vertex v in a connected graph G = (V, E) of order n = |V|. If there
are no negative weight cycles in G, then there exists a shortest path from v to any other
vertex w ∈ V that uses at most n − 1 edges.
Proof. Suppose that G contains no negative weight cycles. Observe that at most n − 1
edges are required to construct a path from v to any vertex w (Proposition 1.11). Let P
denote such a path:

    P : v0 = v, v1, v2, . . . , vk = w.

Since G has no negative weight cycles, the weight of P is no less than the weight of
P′, where P′ is the same as P except that all cycles have been removed. Thus, we can
remove all cycles from P and obtain a v-w path P′ of no greater weight. Since the final
path is acyclic, it must have no more than n − 1 edges.
Having defined weights and distances, we are now ready to discuss shortest path
algorithms for weighted graphs. The breadth-first search Algorithm 2.1 can be applied
where each edge has unit weight. Moving on to the general case of graphs with positive
edge weights, algorithms for determining shortest paths in such graphs can be classified
as weight-setting or weight-correcting [81].
Algorithm 2.4: A general template for shortest path algorithms.

1   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
2   C ← list of candidate vertices to visit
3   while length(C) > 0 do
4       select v ∈ C
5       C ← remove(C, v)
6       for each u ∈ adj(v) do
7           if D[u] > D[v] + w(vu) then
8               D[u] ← D[v] + w(vu)
9               P[u] ← v
10              if u ∉ C then
11                  add u to C
12  return (D, P)
A weight-setting method traverses a graph and assigns weights that, once assigned, remain unchanged for the duration of the algorithm. Weight-setting algorithms cannot deal with negative weights. On the other
hand, a weight-correcting method is able to change the value of a weight many times
while traversing a graph. In contrast to a weight-setting algorithm, a weight-correcting
algorithm is able to deal with negative weights, provided that the weight sum of any
cycle is nonnegative. The term negative cycle refers to the weight sum s of a cycle such
that s < 0. Some algorithms halt upon detecting a negative cycle; examples of such
algorithms include the Bellman-Ford and Johnson's algorithms.
Algorithm 2.4 is a general template for many shortest path algorithms. With a tweak
here and there, one could modify it to suit the problem at hand. Note that w(vu) is the
weight of the edge vu. If the input graph is undirected, line 6 considers all the neighbors
of v. For digraphs, we are interested in out-neighbors of v and accordingly we replace
u ∈ adj(v) in line 6 with u ∈ oadj(v). The general flow of Algorithm 2.4 follows the
same pattern as depth-first and breadth-first searches.
2.4 Dijkstra's algorithm
Dijkstra's algorithm [58], discovered by E. W. Dijkstra in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with nonnegative
edge weights. The algorithm is a generalization of breadth-first search. Imagine that
the vertices of a weighted graph represent cities and edge weights represent distances
between pairs of cities connected by a direct road. Dijkstra's algorithm can be used to
find a shortest route from a fixed city to any other city.
Let G = (V, E) be a (di)graph with nonnegative edge weights. Fix a start or source
vertex s ∈ V. Dijkstra's Algorithm 2.5 performs a number of steps, basically one step
for each vertex in V. First, we initialize a list D with n copies of ∞ and then assign 0 to
D[s]. The purpose of the symbol ∞ is to denote the largest possible value. The list D is
to store the distances of all shortest paths from s to any other vertices in G, where we
take the distance of s to itself to be zero. The list P of parent vertices is initially empty
and the queue Q is initialized to all vertices in G. We now consider each vertex in Q,
removing any vertex after we have visited it. The while loop starting on line 5 runs until
we have visited all vertices. Line 6 chooses which vertex to visit, preferring a vertex v
whose distance value D[v] from s is minimal. After we have determined such a vertex v,
we remove it from the queue Q to signify that we have visited v. The for loop starting
on line 8 adjusts the distance values of each neighbor u of v such that u is also in Q. If
G is directed, we only consider out-neighbors of v that are also in Q. The conditional
starting on line 9 is where the adjustment takes place. The expression D[v] + w(vu)
sums the distance from s to v and the distance from v to u. If this total sum is less than
the distance D[u] from s to u, we assign this lesser distance to D[u] and let v be the
parent vertex of u. In this way, we are choosing a neighbor vertex that results in minimal
distance from s. Each pass through the while loop decreases the number of elements in
Q by one without adding any elements to Q. Eventually, we would exit the while loop
and the algorithm returns the lists D and P .
Algorithm 2.5: A general template for Dijkstra's algorithm.
Input: An undirected or directed graph G = (V, E) that is weighted and has no
self-loops. The order of G is n > 0. A vertex s V from which to start
the search. Vertices are numbered from 1 to n, i.e. V = {1, 2, . . . , n}.
Output: A list D of distances such that D[v] is the distance of a shortest path
from s to v. A list P of vertex parents such that P [v] is the parent of v,
i.e. v is adjacent from P [v].
1   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
2   D[s] ← 0
3   P ← [ ]
4   Q ← V                    /* list of nodes to visit */
5   while length(Q) > 0 do
6       find v ∈ Q such that D[v] is minimal
7       Q ← remove(Q, v)
8       for each u ∈ adj(v) ∩ Q do
9           if D[u] > D[v] + w(vu) then
10              D[u] ← D[v] + w(vu)
11              P[u] ← v
12  return (D, P)
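For concreteness, here is a sketch of Dijkstra's algorithm in Python. It uses a binary heap (Python's heapq module) rather than the simple list of Algorithm 2.5, so it corresponds to the binary heap row of Table 2.5; adj[v] is assumed to be a list of (u, w) pairs, one for each edge vu of weight w.

import heapq

INF = float("inf")

def dijkstra(n, adj, s):
    D = [INF] * (n + 1)
    D[s] = 0
    P = {}                        # parent of each settled vertex
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)    # v with minimal D[v]
        if d > D[v]:
            continue              # stale heap entry; v already settled
        for u, w in adj[v]:
            if D[u] > D[v] + w:
                D[u] = D[v] + w
                P[u] = v
                heapq.heappush(heap, (D[u], u))
    return D, P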
(Figure 2.10: the application of Dijkstra's algorithm, step by step, to a weighted digraph on the vertices v1, . . . , v5 with source v1; panel (a) shows the input graph and panel (f) the final shortest paths graph.)
v1        v2         v3        v4         v5
(0, −)    (∞, −)     (∞, −)    (∞, −)     (∞, −)
          (10, v1)   (3, v1)   (11, v3)   (5, v3)
          (7, v3)               (9, v2)

Table 2.4: Distance and parent vertex of each vertex, as updated by Dijkstra's algorithm.
Example 2.7. Apply Dijkstra's algorithm to the graph in Figure 2.10(a), with starting
vertex v1.
Solution. Dijkstra's Algorithm 2.5 applied to the graph in Figure 2.10(a) yields the
sequence of intermediary graphs shown in Figure 2.10, culminating in the final shortest
paths graph of Figure 2.10(f) and Table 2.4. For any column vi in the table, each 2-tuple
represents the distance and parent vertex of vi. As we move along the graph, processing
vertices according to Dijkstra's algorithm, the distance and parent vertex of a column
are updated. The underlined 2-tuple represents the final distance and parent vertex
produced by Dijkstra's algorithm. From Table 2.4, we have the following shortest paths
and distances:
v1-v2 : v1, v3, v2          d(v1, v2) = 7
v1-v3 : v1, v3              d(v1, v3) = 3
v1-v4 : v1, v3, v2, v4      d(v1, v4) = 9
v1-v5 : v1, v3, v5          d(v1, v5) = 5
Intermediary vertices for a u-v path are obtained by starting from v and working backward
using the parent of v, then the parent of that parent, and so on.
Dijkstra's algorithm is an example of a greedy algorithm. Whenever it tries to find the
next vertex, it chooses only that vertex that minimizes the total weight so far. Greedy
algorithms may not produce the best possible result. However, as the following theorem
shows, Dijkstra's algorithm does indeed produce shortest paths.
Theorem 2.8. Correctness of Algorithm 2.5. Let G = (V, E) be a weighted
(di)graph with a nonnegative weight function w. When Dijkstra's algorithm is applied to
G with source vertex s ∈ V, the algorithm terminates with D[u] = d(s, u) for all u ∈ V.
Furthermore, if D[v] ≠ ∞ and v ≠ s, then s = u1, u2, . . . , uk = v is a shortest s-v path
such that u_{i−1} = P[ui] for i = 2, 3, . . . , k.
Proof. If G is disconnected, then any v ∈ V that cannot be reached from s has distance
D[v] = ∞ upon algorithm termination. Hence, it suffices to consider the case where G
is connected. Let V = {s = v1, v2, . . . , vn}, write Vi = {v1, v2, . . . , vi}, and use induction
on i to show that after visiting vi we have

    D[v] = d(s, v)    for all v ∈ Vi.        (2.6)

For i = 1, equality holds. Assume for induction that (2.6) holds for some 1 ≤ i ≤ n − 1,
so that now our task is to show that (2.6) holds for i + 1. To verify D[v_{i+1}] = d(s, v_{i+1}),
note that by our inductive hypothesis,

    D[v_{i+1}] = min{d(s, v) + w(vu) | v ∈ Vi and u ∈ adj(v) ∩ (Q \ Vi)}

and respectively

    D[v_{i+1}] = min{d(s, v) + w(vu) | v ∈ Vi and u ∈ oadj(v) ∩ (Q \ Vi)}

if G is directed. Therefore, D[v_{i+1}] = d(s, v_{i+1}).
Let v ∈ V such that D[v] ≠ ∞ and v ≠ s. We now construct an s-v path. When
Algorithm 2.5 terminates, we have D[v] = D[v1] + w(v1 v), where P[v] = v1 and d(s, v) =
d(s, v1) + w(v1 v). This means that v1 is the second-to-last vertex in a shortest s-v path.
By repeated application of this process using the parent list P, we eventually produce a
shortest s-v path s = vm, v_{m−1}, . . . , v1, v, where P[vi] = v_{i+1} for i = 1, 2, . . . , m − 1.
To analyze the worst case time complexity of Algorithm 2.5, note that initializing D
takes O(n) and initializing Q takes O(n), for a total of O(n) devoted to initialization.
Each extraction of a vertex v with minimal D[v] requires O(n) since we search through
the entire list Q to determine the minimum value, for a total of O(n²). Each insertion
into D requires constant time and the same holds for insertion into P, and at most
O(|E|) such insertions are performed over the whole run. In the worst case, Dijkstra's
Algorithm 2.5 therefore has running time O(n² + |E|) = O(n²).
Can we improve the run time of Dijkstra's algorithm? The time complexity of Dijkstra's algorithm depends on its implementation. With a simple list implementation as
presented in Algorithm 2.5, we have a worst case time complexity of O(n²), where n is
the order of the graph under consideration. Let m be the size of the graph. Table 2.5
presents time complexities of Dijkstra's algorithm for various implementations. Of all
four implementations in this table, the heap implementations are much more
efficient than the list implementation presented in Algorithm 2.5. A heap is a type of
tree, a topic which will be covered in Chapter 3. Of all the heap implementations in
Table 2.5, the Fibonacci heap implementation [80] yields the best runtime. Chapter 4
discusses how to use trees for efficient implementations of priority queues via heaps.
Implementation   Time complexity
list             O(n²)
binary heap      O((n + m) ln n)
k-ary heap       O((kn + m) ln n / ln k)
Fibonacci heap   O(n ln n + m)

Table 2.5: Worst-case time complexities of various implementations of Dijkstra's algorithm.
2.5 Bellman-Ford algorithm
A disadvantage of Dijkstra's Algorithm 2.5 is that it cannot handle graphs with negative
edge weights. The Bellman-Ford algorithm computes single-source shortest paths in
a weighted graph or digraph, where some of the edge weights may be negative. This
algorithm is a modification of the one published in 1957 by Richard E. Bellman [21] and
that by Lester Randolph Ford, Jr. [76] in 1956. Shimbel [168] independently discovered
the same method in 1955, and Moore [146] in 1959. In contrast to the greedy approach
that Dijkstra's algorithm takes, i.e. searching for the cheapest path, the Bellman-Ford
algorithm searches over all edges and keeps track of the shortest one found as it searches.
The Bellman-Ford Algorithm 2.6 runs in time O(mn), where m and n are the size
and order of an input graph, respectively.
Algorithm 2.6: The Bellman-Ford algorithm.

1   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
2   D[s] ← 0
3   P ← [ ]
4   for i ← 1, 2, . . . , n − 1 do
5       for each edge uv ∈ E do
6           if D[v] > D[u] + w(uv) then
7               D[v] ← D[u] + w(uv)
8               P[v] ← u
9   for each edge uv ∈ E do
10      if D[v] > D[u] + w(uv) then
11          return False
12  return (D, P)
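A Python sketch of Algorithm 2.6, with the graph given as a list of (u, v, w) triples, one per weighted edge:

INF = float("inf")

def bellman_ford(n, edges, s):
    D = [INF] * (n + 1)
    D[s] = 0
    P = {}
    for _ in range(n - 1):            # n - 1 rounds of updates
        for u, v, w in edges:
            if D[u] + w < D[v]:
                D[v] = D[u] + w
                P[v] = u
    for u, v, w in edges:             # one extra pass detects negative cycles
        if D[u] + w < D[v]:
            return False
    return D, P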
To see this, note that the initialization on lines 1 to 3 takes O(n). Each of the n − 1 rounds of the for loop starting on line 4 takes
O(m), for a total of O(mn) time. Finally, the for loop starting on line 9 takes O(m).
The loop starting on line 4 performs at most n − 1 updates of the distance D[v] of
each head of an edge. Many graphs have sizes that are less than n − 1, resulting in
a number of redundant rounds of updates. To avoid such redundancy, we could add
an extra check in the outer loop spanning lines 4 to 8 to immediately terminate that
outer loop after any round that did not result in an update of any D[v]. Algorithm 2.7
presents a modification of the Bellman-Ford Algorithm 2.6 that avoids redundant rounds
of updates.
2.6 Floyd-Roy-Warshall algorithm
The shortest distance between two points is not a very interesting journey.
R. Goldberg
Let D be a weighted digraph of order n and size m. Dijkstra's Algorithm 2.5 and
the Bellman-Ford Algorithm 2.6 can be used to determine shortest paths from a single
source vertex to all other vertices of D. To determine a shortest path between each pair
of distinct vertices in D, we repeatedly apply either of these algorithms to each vertex
of D. Such repeated application of Dijkstra's and the Bellman-Ford algorithms results
in algorithms that run in time O(n³) and O(n²m), respectively.
The Floyd-Roy-Warshall algorithm (FRW), or the Floyd-Warshall algorithm, is an
algorithm for finding shortest paths in a weighted, directed graph. Like the Bellman-Ford algorithm, it allows for negative edge weights and detects a negative weight cycle
if one exists.
Algorithm 2.7: The Bellman-Ford algorithm with checks for redundant updates.
Input: An undirected or directed graph G = (V, E) that is weighted and has no
self-loops. Negative edge weights are allowed. The order of G is n > 0. A
vertex s V from which to start the search. Vertices are numbered from
1 to n, i.e. V = {1, 2, . . . , n}.
Output: A list D of distances such that D[v] is the distance of a shortest path
from s to v. A list P of vertex parents such that P [v] is the parent of v,
i.e. v is adjacent from P [v]. If G has negative-weight cycles, then return
False. Otherwise, return D and P .
1   D ← [∞, ∞, . . . , ∞]    /* n copies of ∞ */
2   D[s] ← 0
3   P ← [ ]
4   for i ← 1, 2, . . . , n − 1 do
5       updated ← False
6       for each edge uv ∈ E do
7           if D[v] > D[u] + w(uv) then
8               D[v] ← D[u] + w(uv)
9               P[v] ← u
10              updated ← True
11      if updated = False then
12          exit the loop
13  for each edge uv ∈ E do
14      if D[v] > D[u] + w(uv) then
15          return False
16  return (D, P)
Assuming that there are no negative weight cycles, a single execution of the FRW algorithm will find the shortest paths between all pairs of vertices. It was
discovered independently by Bernard Roy [163] in 1959, Robert Floyd [75] in 1962, and
by Stephen Warshall [188] in 1962.
In some sense, the FRW algorithm is an example of dynamic programming, which
allows one to break the computation into simpler steps using some sort of recursive
procedure. The rough idea is as follows. Temporarily label the vertices of a weighted
digraph G as V = {1, 2, . . . , n} with n = |V(G)|. Let W = [w(i, j)] be the weight matrix
of G, where

    w(i, j) = \begin{cases}
        w(ij), & \text{if } ij \in E(G),\\
        0, & \text{if } i = j,\\
        \infty, & \text{otherwise}.
    \end{cases}        (2.7)
Let Pk(i, j) be a shortest path from i to j such that its intermediate vertices are in
{1, 2, . . . , k}. Let Dk(i, j) be the weight (or distance) of Pk(i, j). If no such i-j
paths exist, define Pk(i, j) = ∅ and Dk(i, j) = ∞ for all k ∈ {1, 2, . . . , n}. If k = 0,
then P0(i, j) : i, j since no intermediate vertices are allowed in the path and hence
D0(i, j) = w(i, j). In other words, if i and j are adjacent, a shortest i-j path is the
edge ij itself and the weight of this path is simply the weight of ij. Now consider
Pk(i, j) for k > 0. Either Pk(i, j) passes through k or it does not. If k is not on the
path Pk(i, j), then the intermediate vertices of Pk(i, j) are in {1, 2, . . . , k − 1}, as are
the vertices of P_{k−1}(i, j). In case Pk(i, j) contains the vertex k, then Pk(i, j) traverses
k exactly once by the definition of path. The i-k subpath in Pk(i, j) is a shortest i-k
path whose intermediate vertices are drawn from {1, 2, . . . , k − 1}, which is also the set
of intermediate vertices for the k-j subpath in Pk(i, j). That is, to obtain Pk(i, j), we
take the union of the paths P_{k−1}(i, k) and P_{k−1}(k, j). We compute the weight Dk(i, j)
of Pk(i, j) using the expression

    Dk(i, j) = \begin{cases}
        w(i, j), & \text{if } k = 0,\\
        \min\{ D_{k-1}(i, j),\; D_{k-1}(i, k) + D_{k-1}(k, j) \}, & \text{if } k > 0.
    \end{cases}        (2.8)
The key to the Floyd-Roy-Warshall algorithm lies in exploiting expression (2.8). If
n = |V|, then this is an O(n³) time algorithm. For comparison, the Bellman-Ford algorithm has complexity O(|V| · |E|), which is O(n³) time for dense graphs. However,
Bellman-Ford only yields the shortest paths emanating from a single vertex. To achieve
comparable output, we would need to iterate Bellman-Ford over all vertices, which would
be an O(n⁴) time algorithm for dense graphs. Except possibly for sparse graphs, Floyd-Roy-Warshall is better than an iterated implementation of Bellman-Ford. Note that
Pk(i, k) = P_{k−1}(i, k) and Pk(k, i) = P_{k−1}(k, i), consequently Dk(i, k) = D_{k−1}(i, k) and
Dk(k, i) = D_{k−1}(k, i). This observation allows us to replace Pk(i, j) with P(i, j) for
k = 1, 2, . . . , n. The final results of P(i, j) and D(i, j) are the same as Pn(i, j) and
Dn(i, j), respectively. Algorithm 2.8 summarizes the above discussion into an algorithmic presentation.
Like the Bellman-Ford algorithm, the Floyd-Roy-Warshall algorithm can also detect
the presence of negative weight cycles. If G is a weighted digraph without self-loops,
by (2.7) we have D(i, i) = 0 for i = 1, 2, . . . , n. Any path p starting and ending at i
could only improve upon the initial weight of 0 if the weight sum of p is less than zero, i.e.
a negative weight cycle. Upon termination of Algorithm 2.8, if D(i, i) < 0, we conclude
that there is a path starting and ending at i whose weight sum is negative.
Algorithm 2.8: The Floyd-Roy-Warshall algorithm.

1   n ← |V|
2   P ← [aij], an n × n zero matrix
3   D ← W = [w(i, j)]
4   for k ← 1, 2, . . . , n do
5       for i ← 1, 2, . . . , n do
6           for j ← 1, 2, . . . , n do
7               if D[i, j] > D[i, k] + D[k, j] then
8                   P[i, j] ← k
9                   D[i, j] ← D[i, k] + D[k, j]
10  return (P, D)
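A Python sketch of Algorithm 2.8, operating on the weight matrix W of (2.7) with vertices indexed 0, . . . , n − 1 and float("inf") used for the ∞ entries of W:

def floyd_roy_warshall(W):
    n = len(W)
    D = [row[:] for row in W]         # D starts out as a copy of W
    P = [[0] * n for _ in range(n)]   # an n x n zero matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][j] > D[i][k] + D[k][j]:
                    P[i][j] = k
                    D[i][j] = D[i][k] + D[k][j]
    return P, D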
Example 2.9. Section 1.6 briefly presented the concept of molecular graphs in chemistry. The Wiener number of a molecular graph was first published in 1947 by Harold
Wiener [195], who used it in chemistry to study properties of alkanes. Other applications [92] of the Wiener number to chemistry are now known. If G = (V, E) is a
connected graph with vertex set V = {v1, v2, . . . , vn}, then the Wiener number W of G is
defined by

    W(G) = \sum_{i<j} d(vi, vj)        (2.9)

where d(vi, vj) is the distance from vi to vj. What is the Wiener number of the molecular
graph in Figure 2.13?
Solution. Consider the molecular graph in Figure 2.13 as directed with unit weight.
To compute the Wiener number of this graph, use the Floyd-Roy-Warshall algorithm to
obtain a distance matrix D = [di,j ], where di,j is the distance from vi to vj , and apply the
definition (2.9). The distance matrix resulting from the Floyd-Roy-Warshall algorithm
is

    M = \begin{pmatrix}
    0 & 2 & 1 & 2 & 3 & 2 & 4\\
    2 & 0 & 1 & 2 & 3 & 2 & 4\\
    1 & 1 & 0 & 1 & 2 & 1 & 3\\
    2 & 2 & 1 & 0 & 1 & 2 & 2\\
    3 & 3 & 2 & 1 & 0 & 1 & 1\\
    2 & 2 & 1 & 2 & 1 & 0 & 2\\
    4 & 4 & 3 & 2 & 1 & 2 & 0
    \end{pmatrix}.
Sum all entries in the upper (or lower) triangular part of M to obtain the Wiener number
W = 42.
This verifies our computation above. See Gutman et al. [92] for a survey of some results
concerning the Wiener number.
2.6.1 Transitive closure
The transitive closure G* answers an important question about G: if u and v are two distinct
vertices of G, are they connected by a path of length ≥ 1?
To compute the transitive closure of G, we let each edge of G be of unit weight and
apply the Floyd-Roy-Warshall Algorithm 2.8 on G. By Proposition 1.11, for any i-j path
in G we have D[i, j] < n, and if there are no paths from i to j in G, we have D[i, j] = ∞.
This procedure for computing transitive closure runs in time O(n³).
Modifying the Floyd-Roy-Warshall algorithm slightly, we obtain an algorithm for
computing transitive closure that, in practice, is more efficient than Algorithm 2.8 in
terms of time and space. Instead of using the operations min and + as is the case in the
Floyd-Roy-Warshall algorithm, we replace these operations with the logical operations
∨ (logical OR) and ∧ (logical AND), respectively. For i, j, k = 1, 2, . . . , n, define Tk(i, j) = 1
if there is an i-j path in G with all intermediate vertices belonging to {1, 2, . . . , k}, and
Tk(i, j) = 0 otherwise. Thus, the edge ij belongs to the transitive closure G* if and only
if Tn(i, j) = 1. The definition of Tk(i, j) can be cast in the form of a recursive definition
as follows. For k = 0, we have

    T0(i, j) = \begin{cases}
        0, & \text{if } i \neq j \text{ and } ij \notin E,\\
        1, & \text{if } i = j \text{ or } ij \in E
    \end{cases}

and for k > 0, we have

    Tk(i, j) = T_{k-1}(i, j) ∨ ( T_{k-1}(i, k) ∧ T_{k-1}(k, j) ).

We need not use the subscript k at all and instead let T be a boolean matrix such that
T[i, j] = 1 if and only if there is an i-j path in G, and T[i, j] = 0 otherwise. Using
this simplification, the procedure is summarized in Algorithm 2.9.
Algorithm 2.9: Transitive closure via a boolean variant of Floyd-Roy-Warshall.

1   n ← |V|
2   T ← adjacency matrix of G
3   for k ← 1, 2, . . . , n do
4       for i ← 1, 2, . . . , n do
5           for j ← 1, 2, . . . , n do
6               T[i, j] ← T[i, j] ∨ (T[i, k] ∧ T[k, j])
7   return T
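A Python sketch of Algorithm 2.9, with T given as a boolean matrix (a list of lists) indexed from 0:

def transitive_closure(T):
    n = len(T)
    T = [row[:] for row in T]     # work on a copy of the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T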
2.7 Johnson's algorithm
The shortest distance between two points is under construction.
Noelie Altito
Let G = (V, E) be a sparse digraph with edge weights but no negative cycles. Johnson's
algorithm [109] finds a shortest path between each pair of vertices in G. First published
in 1977 by Donald B. Johnson, the main insight of Johnson's algorithm is to combine
the technique of edge reweighting with the Bellman-Ford and Dijkstra's algorithms. The
Bellman-Ford algorithm is first used to ensure that G has no negative cycles. Next,
we reweight edges in such a manner as to preserve shortest paths. The final stage
makes use of Dijkstra's algorithm for computing shortest paths between all vertex pairs.
Pseudocode for Johnson's algorithm is presented in Algorithm 2.10. With a Fibonacci
heap implementation of the minimum-priority queue, the time complexity for sparse
graphs is O(|V|² log |V| + |V| · |E|), where n = |V| is the number of vertices of the
original graph G.
To prove the correctness of Algorithm 2.10, we need to show that the new set of edge
weights produced by ŵ must satisfy the following properties:

1. The reweighted edges preserve shortest paths. That is, let p be a u-v path for
u, v ∈ V. Then p is a shortest weighted path using weight function w if and only
if p is also a shortest weighted path using weight function ŵ.
Algorithm 2.10: Johnson's algorithm.

1   s ← a vertex not in V
2   V′ ← V ∪ {s}
3   E′ ← E ∪ {sv | v ∈ V}
4   G′ ← digraph (V′, E′) with weight w(sv) = 0 for all v ∈ V
5   if BellmanFord(G′, w, s) = False then
6       return False
7   d ← distance list returned by BellmanFord(G′, w, s)
8   for each edge uv ∈ E′ do
9       ŵ(uv) ← w(uv) + d[u] − d[v]
10  for each u ∈ V do
11      (D̂, P̂) ← distances and parents returned by Dijkstra(G, ŵ, u)
12      P[u] ← P̂
13      for each v ∈ V do
14          D[u, v] ← D̂[v] + d[v] − d[u]
15  return (D, P)
2. The graph G has a negative cycle using weight function w if and only if G has a
negative cycle using ŵ.

3. The new weights are nonnegative: ŵ(uv) ≥ 0 for all uv ∈ E.

Proof. Write δ and δ̂ for the shortest path weights derived from w and ŵ, respectively.
To prove part 1, we need to show that w(p) = δ(v0, vk) if and only if ŵ(p) = δ̂(v0, vk).
Let p : v0, v1, . . . , vk be any v0-vk path and write h(v) = d[v] for the distances computed
by Bellman-Ford in Algorithm 2.10, so that ŵ(uv) = w(uv) + h(u) − h(v). Then

    ŵ(p) = \sum_{i=1}^{k} ŵ(v_{i-1} v_i)
         = \sum_{i=1}^{k} ( w(v_{i-1} v_i) + h(v_{i-1}) − h(v_i) )
         = \sum_{i=1}^{k} w(v_{i-1} v_i) + \sum_{i=1}^{k} ( h(v_{i-1}) − h(v_i) )
         = w(p) + h(v0) − h(vk).

Since h(v0) and h(vk) do not depend on the particular v0-vk path, p minimizes w if and
only if p minimizes ŵ. Hence w(p) = δ(v0, vk) if and only if ŵ(p) = δ̂(v0, vk).
To prove part 2, consider any cycle c : v0, v1, . . . , vk where v0 = vk. Using the proof
of part 1, we have

    ŵ(c) = w(c) + h(v0) − h(vk) = w(c),

thus showing that c is a negative cycle using ŵ if and only if it is a negative cycle using
w.
2.8 Problems
I believe that a scientist looking at nonscientific problems is just as dumb as the next guy.
Richard Feynman
2.1. The Euclidean algorithm is one of the oldest known algorithms. Given two positive
integers a and b with a ≥ b, let a mod b be the remainder obtained upon dividing
a by b. The Euclidean algorithm determines the greatest common divisor gcd(a, b)
of a and b. The procedure is summarized in Algorithm 2.11. Refer to Chabert [45]
for a history of algorithms from ancient to modern times.
(a) Implement Algorithm 2.11 in Sage and use your implementation to compute
the greatest common divisors of various pairs of integers. Use the built-in
Sage command gcd to verify your answer.
Algorithm 2.11: The Euclidean algorithm.

1   x ← a
2   y ← b
3   while y ≠ 0 do
4       r ← x mod y
5       x ← y
6       y ← r
7   return x
(b) Modify Algorithm 2.11 to compute the greatest common divisor of any pair
of integers.
2.2. Given a polynomial p(x) = an x^n + a_{n-1} x^{n-1} + \cdots + a1 x + a0 of degree n, we can use
Horner's method [101] to efficiently evaluate p at a specific value x = x0. Horner's
method evaluates p(x) by expressing the polynomial as

    p(x) = \sum_{i=0}^{n} ai x^i = (\cdots (an x + a_{n-1}) x + \cdots) x + a0.

The procedure is summarized as follows:

1   b ← an
2   for i ← n − 1, n − 2, . . . , 0 do
3       b ← b x0 + ai
4   return b
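A Python sketch of Horner's method, with coefficients listed from an down to a0:

def horner(coeffs, x0):
    b = coeffs[0]                  # b <- a_n
    for a in coeffs[1:]:           # i = n-1, n-2, ..., 0
        b = b * x0 + a
    return b

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x0 = 3.
print(horner([2, -6, 2, -1], 3))   # prints 5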
2.3. Let G = (V, E) be an undirected graph, let s ∈ V, and let D be the list of distances
resulting from running Algorithm 2.1 with G and s as input. Show that G is
connected if and only if D[v] ≠ ∞ for each v ∈ V.
2.4. Show that the worst-case time complexity of depth-first search Algorithm 2.2 is
O(|V | + |E|).
2.5. Let D be the list of distances returned by Algorithm 2.2, let s be a starting vertex,
and let v be a vertex such that D[v] ≠ ∞. Show that D[v] is the length of any
shortest path from s to v.
2.6. Consider the graph in Figure 2.10 as undirected. Run this undirected version
through Dijkstra's algorithm with starting vertex v1.
(Figure: a weighted graph on the vertices v1, . . . , v5.)
for each e ∈ L do
    if E = e then
        return True
return False

low ← 0
high ← |L| − 1
while low ≤ high do
    mid ← ⌊(low + high)/2⌋
    if i = L[mid] then
        return True
    if i < L[mid] then
        high ← mid − 1
    else
        low ← mid + 1
return False
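For experimentation, the bisection scheme above translates to Python as follows; binary_search is an illustrative name, and L is assumed to be sorted in nondecreasing order.

def binary_search(L, i):
    """Return True if i occurs in the sorted list L, via repeated bisection."""
    low, high = 0, len(L) - 1
    while low <= high:
        mid = (low + high) // 2
        if i == L[mid]:
            return True
        if i < L[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return False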
2.11. Let G be a simple undirected graph having distance matrix D = [d(vi, vj)], where
d(vi, vj) ∈ R denotes the shortest distance from vi ∈ V(G) to vj ∈ V(G). If
vi = vj, we set d(vi, vj) = 0. For each pair of distinct vertices (vi, vj), we have
d(vi, vj) = d(vj, vi). The i-j entry of D is also written as di,j and denotes the entry
in row i and column j.
(a) The total distance td(u) of a fixed vertex u ∈ V(G) is the sum of distances
from u to each vertex in G:

td(u) = Σ_{v∈V(G)} d(u, v).
(c) Would equations (2.10) and (2.11) hold if G is not connected, or if G is directed?
2.12. The following result is from Yeh and Gutman [200]. Let G1 and G2 be graphs with
orders ni = |V(Gi)| and sizes mi = |E(Gi)|, respectively.

(a) If each of G1 and G2 is connected, show that the Wiener number of the
Cartesian product G1 □ G2 is

W(G1 □ G2) = n2² W(G1) + n1² W(G2).

(b) If G1 and G2 are arbitrary graphs, show that the Wiener number of the join
G1 + G2 is

W(G1 + G2) = n1² − n1 + n2² − n2 + n1n2 − m1 − m2.
2.13. The following results originally appeared in Entringer et al. [67] and have been independently
rediscovered many times since.

(a) If Pn is the path graph on n ≥ 1 vertices, show that the Wiener number of
Pn is W(Pn) = (1/6) n(n² − 1).

(b) If Cn is the cycle graph on n ≥ 3 vertices, show that the Wiener number of
Cn is

W(Cn) = (1/8) n(n² − 1),  if n is odd,
W(Cn) = (1/8) n³,         if n is even.

(c) If Kn is the complete graph on n vertices, show that its Wiener number is
W(Kn) = (1/2) n(n − 1).

(d) Show that the Wiener number of the complete bipartite graph Km,n is

W(Km,n) = mn + m(m − 1) + n(n − 1).
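To experiment with the identities in problems 2.11 to 2.13, the following Sage sketch may help. It assumes G is connected; wiener_number is our illustrative name, and Sage also provides a built-in wiener_index method.

def wiener_number(G):
    """Sum of distances over unordered pairs of vertices of a connected graph."""
    D = G.distance_matrix()
    n = D.nrows()
    return sum(D[i, j] for i in range(n) for j in range(i + 1, n))

# sage: wiener_number(graphs.PathGraph(5))   # expect (1/6)*5*(25-1) = 20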
2.14. Consider the world map of major capital cities in Figure 2.15.
(a) Run breadth- and depth-first searches over the graph in Figure 2.15 and compare your results.
(b) Convert the graph in Figure 2.15 to a digraph as follows. Let 0 ≤ τ ≤ 1 be a
fixed threshold probability and let V = {v1, . . . , vn} be the vertex set of the
graph. For each edge vivj, let 0 ≤ p ≤ 1 be its orientation probability and
define the directedness dir(vi, vj) by

dir(vi, vj) = vivj  if p ≤ τ,
dir(vi, vj) = vjvi  otherwise.
That is, dir(vi , vj ) takes the endpoints of an undirected edge vi vj and returns
a directed version of this edge. The result is either the directed edge vi vj
or the directed edge vj vi . Use the above procedure to convert the graph of
Figure 2.15 to a digraph, and run breadth- and depth-first searches over the
resulting digraph.
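A possible Sage sketch of the orientation procedure follows. The name orient_edges and the use of random.random() to sample the orientation probability p are our assumptions, not prescribed by the problem.

import random

def orient_edges(G, tau):
    """Orient each edge of an undirected Sage graph G: keep u->v when the
    sampled orientation probability p is at most the threshold tau."""
    D = DiGraph()
    D.add_vertices(G.vertices())
    for u, v, w in G.edges():
        p = random.random()
        if p <= tau:
            D.add_edge(u, v, w)
        else:
            D.add_edge(v, u, w)
    return D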
[Figure 2.15: a graph on the capital cities Bangkok, Beijing, Berlin, Brasilia, Buenos Aires, Lima, London, Madrid, Moscow, New Delhi, Ottawa, Pretoria, Sydney, Tokyo, and Washington DC; a companion drawing shows the same graph with intercity distances as edge weights.]
2.15. Various efficient search techniques exist that cater for special situations. Some of
these are covered in chapter 6 of Knuth [120] and chapters 14–18 of Sedgewick [165].
Investigate an algorithm for, and the time complexity of, trie search. Hashing techniques
can result in searches that run in O(1) time. Furthermore, hashing has important
applications outside of the searching problem, a case in point being cryptology.
Investigate how hashing can be used to speed up searches. For further information
on hashing and its application to cryptology, see Menezes et al. [142], Stinson [173],
or Trappe and Washington [179].
2.16. In addition to searching, there is the related problem of sorting a list according to
an ordering relation. If the given list L = [e1, e2, . . . , en] consists of real numbers,
we want to order the elements in nondecreasing order. Bubble sort is a basic sorting
algorithm that can be used to sort a list of real numbers, indeed any collection of
objects that can be ordered according to an ordering relation. During each pass
through the list L from left to right, we consider ei and its right neighbor ei+1. If
ei ≤ ei+1, then we move on to consider ei+1 and its right neighbor ei+2. If ei > ei+1,
then we swap these two values around in the list and then move on to consider ei+1
and its right neighbor ei+2. Each successive pass pushes to the right end an element
that is the next largest in comparison to the previous largest element pushed to
the right end. Hence the name "bubble sort" for the algorithm. Algorithm 2.15
summarizes our discussion.
Algorithm 2.15: Bubble sort.
Input: A list L of n > 1 elements that can be ordered using the "less than or equal to" relation ≤.
Output: The same list as L, but sorted in nondecreasing order.

for i ← n, n − 1, . . . , 2 do
    for j ← 2, 3, . . . , i do
        if L[j − 1] > L[j] then
            swap the values of L[j − 1] and L[j]
return L
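As a point of comparison with the pseudocode, here is a direct Python transcription of Algorithm 2.15; bubble_sort is our name, and the 1-indexed pseudocode is translated to 0-indexed Python.

def bubble_sort(L):
    """In-place bubble sort of the list L, following Algorithm 2.15."""
    n = len(L)
    for i in range(n - 1, 0, -1):
        for j in range(1, i + 1):
            if L[j - 1] > L[j]:
                L[j - 1], L[j] = L[j], L[j - 1]
    return L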
2.17. Selection sort works as follows. On the first scan of the list L from left to right,
among the elements L[2], . . . , L[n] we find the smallest element
and exchange it with L[1]. On the second scan, we find the smallest element
among L[3], . . . , L[n] and exchange that smallest element with L[2]. In general,
during the i-th scan we find the smallest element among L[i + 1], . . . , L[n] and
exchange that with L[i]. At the end of the i-th scan, the element L[i] is in its final
position and would not be processed again. When the index reaches i = n, the list
would have been sorted in nondecreasing order. The procedure is summarized in
Algorithm 2.17.
(a) Analyze the worst-case runtime of Algorithm 2.17 and compare your result to
the worst-case runtime of the bubble sort Algorithm 2.15.
(b) Modify Algorithm 2.17 to sort elements in nonincreasing order.
(c) Line 6 of Algorithm 2.17 assumes that among L[i+1], L[i+2], . . . , L[n] there is
a smallest element L[k] such that L[i] > L[k], hence we perform the swap. It is
possible that L[i] < L[k], obviating the need to carry out the value swapping.
Modify Algorithm 2.17 to take account of our discussion.
Algorithm 2.17: Selection sort.

1   for i ← 1, 2, . . . , n − 1 do
2       min ← i
3       for j ← i + 1, i + 2, . . . , n do
4           if L[j] < L[min] then
5               min ← j
6       swap the values of L[min] and L[i]
7   return L
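A direct Python transcription of Algorithm 2.17, with the same always-swap behavior that part (c) asks you to improve; selection_sort is our illustrative name.

def selection_sort(L):
    """In-place selection sort of the list L, following Algorithm 2.17 (0-indexed)."""
    n = len(L)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if L[j] < L[m]:
                m = j
        L[i], L[m] = L[m], L[i]
    return L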
2.18. In addition to bubble and selection sort, other algorithms exist whose runtime is
more efficient than these two basic sorting algorithms. Chapter 5 of Knuth [120] describes
various efficient sorting techniques. See also chapters 8–13 of Sedgewick [165].

(a) Investigate and provide pseudocode for insertion sort and compare its runtime
efficiency with that of selection sort. Compare the similarities and differences
between insertion and selection sort.

(b) Shellsort is a variation on insertion sort that can speed up the runtime of
insertion sort. Describe and provide pseudocode for shellsort. Compare the
runtime of shellsort with that of insertion sort.
L ← []
T ← empty stack
c ← 1
n ← |S|
for i ← 0, 1, . . . , n − 1 do
    if S[i + 1] is a left bracket then
        append(L, c)
        push (S[i + 1], c) onto T
        c ← c + 1
    if S[i + 1] is a right bracket then
        if T is empty then
            return
        (left, d) ← pop(T)
        if left matches S[i + 1] then
            append(L, d)
        else
            return
if T is empty then
    return L
return
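A Python sketch of the bracket-matching procedure above; the name match_brackets and the use of None as the failure value are our choices.

def match_brackets(S):
    """Pair each bracket of the string S with a match number, as in the
    pseudocode above; return None if the brackets are unbalanced."""
    pairs = {")": "(", "]": "[", "}": "{"}
    L, T, c = [], [], 1
    for ch in S:
        if ch in "([{":
            L.append(c)
            T.append((ch, c))
            c += 1
        elif ch in ")]}":
            if not T:
                return None
            left, d = T.pop()
            if left != pairs[ch]:
                return None
            L.append(d)
    return L if not T else None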
If x is an operator, we pop the stack E twice: the result of the first pop is assigned to
b, while the result of the second pop is assigned to a. Compute the infix expression
a x b and push the result onto E. However, if x is an operand, we push x onto E.
Iterate the above process until P is empty, at which point the top of E contains the
evaluation of A. Refer to Algorithm 2.19 for pseudocode of the above discussion.
(a) Prove the correctness of Algorithm 2.19.
(b) What is the worst-case runtime of Algorithm 2.19?
(c) Modify Algorithm 2.19 to support the exponentiation operator.
Algorithm 2.19: Evaluate arithmetic expressions in reverse Polish notation.
Input: A Polish stack P containing an arithmetic expression in reverse Polish
notation.
Output: An evaluation of the arithmetic expression represented by P .
E ← empty stack
v ← NULL
while P is not empty do
    x ← pop(P)
    if x is an operator then
        b ← pop(E)
        a ← pop(E)
        if x is the addition operator then
            v ← a + b
        else if x is the subtraction operator then
            v ← a − b
        else if x is the multiplication operator then
            v ← a · b
        else if x is the division operator then
            v ← a / b
        else
            exit algorithm with error
        push(E, v)
    else
        push(E, x)
v ← pop(E)
return v
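Here is a compact Python sketch in the spirit of Algorithm 2.19, assuming the expression arrives as a list of tokens processed left to right, which matches the order in which the Polish stack P is popped; the names eval_rpn and ops are illustrative.

import operator

def eval_rpn(P):
    """Evaluate a reverse Polish expression given as a token list,
    e.g. [3, 4, "+", 2, "*"] evaluates to 14."""
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    E = []
    for x in P:
        if x in ops:
            b = E.pop()
            a = E.pop()
            E.append(ops[x](a, b))
        else:
            E.append(x)
    return E.pop()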
2.23. Figure 2.5 provides a knight's tour for the knight piece with initial position as
in Figure 2.5(a). By rotating the chessboard in Figure 2.5(b) by 90n degrees for
positive integer values of n, we obtain another knight's tour that, when represented
as a graph, is isomorphic to the graph in Figure 2.5(c).
(a) At the beginning of the 18th century, de Montmort and de Moivre provided
the following strategy [11, p.176] to solve the knight's tour problem on an
8 × 8 chessboard. Divide the board into an inner 4 × 4 square and an outer
shell of two squares deep, as shown in Figure 2.16(a). Place a knight on a
square in the outer shell and move the knight piece around that shell, always
in the same direction, filling the outer shell before entering the inner square.
Figure 2.16: De Montmort and de Moivre's solution strategy for the 8 × 8 knight's tour
problem.
[Figure: (a) A 6 × 6 chessboard. (b) An 8 × 8 chessboard.]
[Figure: placements of n mutually nonattacking queens on an n × n board for (a) n = 4 and (b) n = 8.]
[Figure 2.20: the 2 × 2, 3 × 3, and 4 × 4 grid graphs.]
(2.12)

Each vertex (i, j) is adjacent to any of the following vertices provided that expression (2.12)
is satisfied: the vertex (i − 1, j) immediately to its left, the vertex
(i + 1, j) immediately to its right, the vertex (i, j + 1) immediately above it, or
the vertex (i, j − 1) immediately below it. Figure 2.20 illustrates some examples
of grid graphs. The 1 × 1 grid graph is the trivial graph K1.
(a) Fix a positive integer n > 1. Describe and provide pseudocode of an algorithm
to generate all nonisomorphic n × n grid graphs. What is the worst-case
runtime of your algorithm?

(b) How many n × n grid graphs are there? How many of those graphs are
nonisomorphic to each other?
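For experimenting with grid graphs, here is a minimal sketch listing the edges of an m × n grid graph; the function name grid_graph is illustrative, and Sage itself provides graphs.GridGraph.

def grid_graph(m, n):
    """Edges of the m x n grid graph on vertices (i, j), 0 <= i < m, 0 <= j < n."""
    E = []
    for i in range(m):
        for j in range(n):
            if i + 1 < m:
                E.append(((i, j), (i + 1, j)))  # edge to the vertex on the right
            if j + 1 < n:
                E.append(((i, j), (i, j + 1)))  # edge to the vertex above
    return E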
Chapter 3
Trees and Forests
In section 1.2.1, we briefly touched upon trees and provided examples of how trees could
be used to model hierarchical structures. This chapter provides an in-depth study of
trees, their properties, and various applications. After defining trees and related concepts in section 3.1, we then present various basic properties of trees in section 3.2.
Each connected graph G has an underlying subgraph called a spanning tree that contains all the vertices of G. Spanning trees are discussed in section 3.3 together with
various common algorithms for finding spanning trees. We then discuss binary trees in
section 3.4, followed by an application of binary trees to coding theory in section 3.5.
Whereas breadth- and depth-first searches are general methods for traversing a graph,
trees require specialized techniques in order to visit their vertices, a topic that is taken
up in section 3.6.
3.1
Definitions and examples
Recall that a path in a graph G = (V, E) whose start and end vertices are the same is
called a cycle. We say G is acyclic, or a forest, if it has no cycles. In a forest, a vertex
of degree one is called an endpoint or a leaf . Any vertex that is not a leaf is called an
internal vertex. A connected forest is a tree. In other words, a tree is a connected graph
without cycles, in which each edge is a bridge. A forest can also be considered as a collection of trees.
A rooted tree T is a tree with a specified root vertex v0 , i.e. exactly one vertex has
been specially designated as the root of T . However, if G is a rooted tree with root
vertex v0 having degree one, then by convention we do not call v0 an endpoint or a leaf.
The depth depth(v) of a vertex v in T is its distance from the root. The height height(T )
of T is the length of a longest path starting from the root vertex, i.e. the height is the
maximum depth among all vertices of T. It follows by definition that depth(v) = 0 if and
only if v is the root of T, height(T) = 0 if and only if T is the trivial graph, depth(v) ≥ 0
for all v ∈ V(T), and height(T) ≤ diam(T).
The Unix, in particular Linux, filesystem hierarchy can be viewed as a tree (see
Figure 3.1). As shown in Figure 3.1, the root vertex is designated with the forward
slash, which is also referred to as the root directory. Other examples of trees include the
organism classification tree in Figure 3.2, the family tree in Figure 3.3, and the expression
tree in Figure 3.4.
A directed tree is a digraph which would be a tree if the directions on the edges
were ignored. A rooted tree can be regarded as a directed tree since we can imagine an
edge uv for u, v V being directed from u to v if and only if v is further away from v0
than u is. If uv is an edge in a rooted tree, then we call v a child vertex with parent u.
Directed trees are pervasive in theoretical computer science, as they are useful structures
for describing algorithms and relationships between objects in certain datasets.
[Figure 3.1: the Linux filesystem hierarchy as a tree rooted at the directory /.]
[Figure 3.2: a classification tree of organisms.]

[Figure 3.3: the Bernoulli family tree.]
T = DiGraph({
    "v": ["a", "w"], "w": ["x", "y"],
    "x": ["c", "b"], "y": ["z", "d"],
    "z": ["f", "e"]})
for v in T.vertex_iterator():
    print(v),
e d f w v y x z
for e in T.edge_iterator():
    print("%s%s" % (e[0], e[1])),
va vw yd yz xc xb ze zf
Each vertex in a binary tree has at most 2 children. Use this definition to test whether
or not a graph is a binary tree.
sage: T.is_tree()
True
sage: def is_bintree1(G):
...       for v in G.vertex_iterator():
...           if len(G.neighbors_out(v)) > 2:
...               return False
...       return True
sage: is_bintree1(T)
True
Here's another way to test for binary trees. Let T be an undirected rooted tree. Each
vertex in a binary tree has a maximum degree of 3. If the root vertex is the only vertex
with degree 2, then T is a binary tree. (Problem 3.5 asks you to prove this result.) We
can use this test because the root vertex v of T is the only vertex with two children.
sage: def is_bintree2(G):
...       if G.is_tree() and max(G.degree()) == 3 and G.degree().count(2) == 1:
...           return True
...       return False
sage: is_bintree2(T.to_undirected())
True
As x is the root vertex of the branch we want to cut off from T, we could use breadth-
or depth-first search to determine all the children of x. We then delete x and its children
from T.

sage: T2 = copy(T)
sage: # using breadth-first search
sage: V = list(T.breadth_first_search("x")); V
['x', 'c', 'b']
sage: T.delete_vertices(V)
sage: for v in T.vertex_iterator():
...       print(v),
a e d f w v y z
sage: for e in T.edge_iterator():
...       print("%s%s" % (e[0], e[1])),
wy va vw yd yz ze zf
sage: # using depth-first search
sage: V = list(T2.depth_first_search("x")); V
['x', 'b', 'c']
sage: T2.delete_vertices(V)
sage: for v in T2.vertex_iterator():
...       print(v),
a e d f w v y z
sage: for e in T2.edge_iterator():
...       print("%s%s" % (e[0], e[1])),
wy va vw yd yz ze zf
The resulting graph T is a binary tree because each vertex has at most two children.
sage: T
Digraph on 8 vertices
sage: is_bintree1(T)
True
Notice that the test defined in the function is_bintree2 can no longer be used to test
whether or not T is a binary tree, because T now has two vertices, i.e. v and w, each of
which has degree two.

A new tree can be built from trees T1, T2, . . . , Tn by joining a new vertex v to the root vi
of each Ti; the resulting tree has vertex set {v} ∪ ⋃_i V(Ti) and edge set

⋃_i ({vvi} ∪ E(Ti)).
The following game is a variant of the Shannon switching game, due to Edmonds and
Lehman. We follow the description in Oxley's survey [156]. Recall that a minimal edge
cut of a graph is also called a bond of the graph. The following two-person game is played
on a connected graph G = (V, E). Two players Alice and Bob alternately tag elements
of E. Alice's goal is to tag the edges of a spanning tree, while Bob's goal is to tag the
edges of a bond. If we think of this game in terms of a communication network, then
Bob's goal is to separate the network into pieces that are no longer connected to each
other, while Alice is aiming to reinforce edges of the network to prevent their destruction.
Each move for Bob consists of destroying one edge, while each move for Alice involves
securing an edge against destruction. The next result characterizes winning strategies
on G. The full proof can be found in Oxley [156]. See Rasmussen [160] for optimization
algorithms for solving similar games.
[Figure: a tree formed by joining a new vertex v to the roots of the trees T1, T2, . . . , Tn.]
3.2
Properties of trees
All theory, dear friend, is grey, but the golden tree of actual life springs ever green.
Johann Wolfgang von Goethe, Faust, part 1, 1808
By Theorem 1.25, each edge of a tree is a bridge. Removing any edge of a tree partitions
the tree into two components, each of which is a subtree of the original tree. The following
results provide further basic characterizations of trees.
Theorem 3.6. Any tree T = (V, E) has size |E| = |V| − 1.

Proof. This follows by induction on the number of vertices. By definition, a tree has
no cycles. We need to show that any tree T = (V, E) has size |E| = |V| − 1. For the
base case |V| = 1, there are no edges. Assume for induction that the result holds for
all trees with at most k vertices, where k ≥ 1. Let T = (V, E) be a tree having k + 1 vertices.
Remove an edge from T, but not the vertices it is incident to. This disconnects T into
two components T1 = (V1, E1) and T2 = (V2, E2), where |E| = |E1| + |E2| + 1 and
|V| = |V1| + |V2| (and possibly one of the Ei is empty). Each Ti is a tree satisfying the
conditions of the induction hypothesis. Therefore,

|E| = |E1| + |E2| + 1
    = (|V1| − 1) + (|V2| − 1) + 1
    = |V| − 1,

as required.
Suppose instead that T = (V, E) is acyclic with k connected components
T1 = (V1, E1), . . . , Tk = (Vk, Ek). Each component is a tree, so that

Σ_{i=1}^{k} |Ei| = Σ_{i=1}^{k} (|Vi| − 1) = |V| − k.
This contradicts part (2) unless k = 1. Therefore, T is connected.
(3) =⇒ (4): If removing an edge e ∈ E leaves T = (V, E) connected, then T′ =
(V, E′) is a tree, where E′ = E − e. However, this means that |E′| = |E| − 1 = |V| − 1 − 1 =
|V| − 2, which contradicts part (3). Therefore e is a cut set.

(4) =⇒ (1): From part (2) we know that T has no cycles and from part (3) we
know that T is connected. Conclude by the definition of trees that T is a tree.
Theorem 3.8. Let T = (V, E) be a tree and let u, v ∈ V be distinct vertices. Then T
has exactly one u-v path.

Proof. Suppose for contradiction that

P : v0 = u, v1, v2, . . . , vk = v

and

Q : w0 = u, w1, w2, . . . , wℓ = v

are two distinct u-v paths. Then P and Q have a common vertex x, which is possibly
x = u. For some i ≥ 0 and some j ≥ 0 we have vi = x = wj, but vi+1 ≠ wj+1. Let
y be the first vertex after x such that y belongs to both P and Q. (It is possible that
y = v.) We now have two distinct x-y paths that have only x and y in common. Taken
together, these two x-y paths result in a cycle, contradicting our hypothesis that T is a
tree. Therefore T has only one u-v path.
Theorem 3.9. If T = (V, E) is a graph, then the following are equivalent:

1. T is a tree.

2. For any new edge e, the join T + e has exactly one cycle.
Proof. (1) =⇒ (2): Let e = uv be a new edge connecting u, v ∈ V. Suppose that

P : v0 = w, v1, v2, . . . , vk = w

and

P′ : v0′ = w, v1′, v2′, . . . , vℓ′ = w

are two cycles in T + e. If either P or P′ does not contain e, say P does not contain e,
then P is a cycle in T. Let u = v0 and let v = v1. The edge (v0 = w, v1) is a u-v path,
and the sequence v = v1, v2, . . . , vk = w = u, taken in reverse order, is another u-v path.
This contradicts Theorem 3.8.

We may now suppose that P and P′ both contain e. Then P contains a subpath
P0 = P − e (which is not closed) that is the same as P except it lacks the edge from u
to v. Likewise, P′ contains a subpath P0′ = P′ − e (which is not closed) that is the same
as P′ except it lacks the edge from u to v. By Theorem 3.8, these u-v paths P0 and P0′
must be the same. This forces P and P′ to be the same, which proves part (2).
(2) =⇒ (1): Part (2) implies that T is acyclic. (Otherwise, it is trivial to make
two cycles by adding an extra edge.) We must show T is connected. Suppose T is
disconnected. Let u be a vertex in one component, T1 say, of T and v a vertex in another
component, T2 say, of T. Adding the edge e = uv does not create a cycle (if it did then
T1 and T2 would not be disjoint), which contradicts part (2).
Taking together the results in this section, we have the following characterizations of
trees.
Theorem 3.10. Basic characterizations of trees. If T = (V, E) is a graph with n
vertices, then the following statements are equivalent:
1. T is a tree.
2. T contains no cycles and has n − 1 edges.

3. T is connected and has n − 1 edges.
4. Every edge of T is a cut set.
5. For any pair of distinct vertices u, v V , there is exactly one u-v path.
6. For any new edge e, the join T + e has exactly one cycle.
Let G = (V1, E1) be a graph and T = (V2, E2) a subgraph of G that is a tree. As in
part (6) of Theorem 3.10, we see that adding just one edge in E1 − E2 to T will create
a unique cycle in G. Such a cycle is called a fundamental cycle of G. The set of such
fundamental cycles of G depends on T.

The following result essentially says that if a tree has at least one edge, then the tree
has at least two vertices each of which has degree one. In other words, each tree of order
≥ 2 has at least two pendants.
Theorem 3.11. Every nontrivial tree has at least two leaves.
Proof. Let T be a nontrivial tree of order m and size n. Consider the degree sequence
d1, d2, . . . , dm of T, where d1 ≤ d2 ≤ ··· ≤ dm. As T is nontrivial and connected, we have
m ≥ 2 and di ≥ 1 for i = 1, 2, . . . , m. If T has less than two leaves, then d1 ≥ 1 and
di ≥ 2 for 2 ≤ i ≤ m, hence

Σ_{i=1}^{m} di ≥ 1 + 2(m − 1) = 2m − 1.    (3.1)

On the other hand,

Σ_{i=1}^{m} di = 2n = 2(m − 1) = 2m − 2,

which contradicts inequality (3.1). Conclude that T has at least two leaves.
Theorem 3.12. If T is a tree of order m and G is a graph with minimum degree
δ(G) ≥ m − 1, then T is isomorphic to a subgraph of G.

Proof. Use an inductive argument on the number of vertices. The result holds for m = 1
because K1 is a subgraph of every nontrivial graph. The result also holds for m = 2
since K2 is a subgraph of any graph with at least one edge.

Let m ≥ 3, let T1 be a tree of order m − 1, and let H be a graph with δ(H) ≥ m − 2.
Assume for induction that T1 is isomorphic to a subgraph of H. We need to show that
if T is a tree of order m and G is a graph with δ(G) ≥ m − 1, then T is isomorphic to a
subgraph of G. Towards that end, consider a leaf v of T and let u be a vertex of T such
that u is adjacent to v. Then T − v is a tree of order m − 1 and δ(G) ≥ m − 1 > m − 2.
Apply the inductive hypothesis to see that T − v is isomorphic to a subgraph T′ of G.
Let u′ be the vertex of T′ that corresponds to the vertex u of T under an isomorphism.
Since deg(u′) ≥ m − 1 and T′ has m − 2 vertices distinct from u′, it follows that u′ is
adjacent to some w ∈ V(G) such that w ∉ V(T′). Therefore T is isomorphic to the
graph obtained by adding the edge u′w to T′.
Example 3.13. Consider a positive integer n. The Euler phi function φ(n) counts the
number of integers a, with 1 ≤ a ≤ n, such that gcd(a, n) = 1. The Euler phi sequence
of n is obtained by repeatedly iterating φ with initial iteration value n. Continue
iterating and stop as soon as the output is 1. The number of terms generated by the
iteration, including the initial iteration value n and the final value of 1, is the length of
the Euler phi sequence of n.

(a) Let s0 = n, s1, s2, . . . , sk = 1 be the Euler phi sequence of n and produce a digraph G
of this sequence as follows. The vertex set of G is V = {s0 = n, s1, s2, . . . , sk = 1}
and the edge set of G is E = {si si+1 | 0 ≤ i < k}. Produce the digraphs of the Euler
phi sequences of 15, 22, 33, 35, 69, and 72. Construct the union of all such digraphs
and describe the resulting graph structure.

(b) For each n = 1, 2, . . . , 1000, compute the length of the Euler phi sequence of n and
plot the resulting pairs on one set of axes.
Solution. The Euler phi sequence of 15 is

15,  φ(15) = 8,  φ(8) = 4,  φ(4) = 2,  φ(2) = 1.
The Euler phi sequences of 22, 33, 35, 69, and 72 can be similarly computed to obtain
their respective digraph representations. The union of all such digraphs is a directed tree
rooted at 1, as shown in Figure 3.11(a). Figure 3.11(b) shows a scatterplot of n versus
the length of the Euler phi sequence of n.
[Figure 3.11: (a) the union of the digraphs of the Euler phi sequences of 15, 22, 33, 35, 69, and 72; (b) a scatterplot of n versus the length of the Euler phi sequence of n.]
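To reproduce these computations, a small Sage sketch suffices; it uses Sage's built-in euler_phi, while the function name euler_phi_sequence is ours.

def euler_phi_sequence(n):
    """Iterate Euler's phi function from n down to 1."""
    seq = [n]
    while seq[-1] != 1:
        seq.append(euler_phi(seq[-1]))
    return seq

# sage: euler_phi_sequence(15)
# [15, 8, 4, 2, 1]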
3.3
Minimum spanning trees

Given a connected graph G, we can obtain a spanning tree of G by repeatedly choosing
a random edge and deleting it, so long as each deletion leaves
the resulting edge-deletion subgraph connected. Thus eventually the above procedure
results in a spanning tree of G. Our discussion is summarized in Algorithm 3.1.
Algorithm 3.1: Randomized spanning tree construction.
Input: A connected graph G.
Output: A spanning tree of G.
T ← G
while T is not a tree do
    e ← random edge of T
    if T − e is connected then
        T ← T − e
return T
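A Sage sketch of Algorithm 3.1; random_spanning_tree is our illustrative name, and it uses the fact that a connected graph on n vertices with n − 1 edges is a tree.

import random

def random_spanning_tree(G):
    """Randomly delete non-bridge edges of a connected graph until a tree remains."""
    T = G.copy()
    while T.size() > T.order() - 1:
        e = random.choice(T.edges())
        T.delete_edge(e)
        if not T.is_connected():
            T.add_edge(e)   # e was a bridge; put it back
    return T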
3.3.1
Kruskal's algorithm

In 1956, Joseph B. Kruskal published [123] a procedure for constructing a minimum
spanning tree of a connected weighted graph G = (V, E). Now known as Kruskal's
algorithm, the procedure runs in O(|E| log |E|) time with a suitable implementation. Variants
of Kruskal's algorithm include the algorithm by Prim [159] and that by Loberman and
Weinberger [135].
Kruskal's algorithm belongs to the class of greedy algorithms. As will be explained
below, when constructing a minimum spanning tree Kruskal's algorithm considers only
the edge having minimum weight among all available edges. Given a weighted nontrivial
graph G = (V, E) that is connected, let w : E → R be the weight function of G. The
first stage is creating a skeleton of the tree T that is initially set to be a graph without
edges, i.e. T = (V, ∅). The next stage involves sorting the edges of G by weight in
nondecreasing order. In other words, we label the edges of G as follows:

E = {e1, e2, . . . , en}

where n = |E| and w(e1) ≤ w(e2) ≤ ··· ≤ w(en). Now consider each edge ei for
i = 1, 2, . . . , n. We add ei to the edge set of T provided that ei does not result in T
having a cycle. The only way adding ei = ui vi to T would create a cycle is if both ui and
vi were endpoints of edges (not necessarily distinct) in the same connected component
of T . As long as the acyclic condition holds with the addition of a new edge to T , we
add that new edge. Following the acyclic test, we also test whether the (updated) graph
T is a spanning tree of G. As G is a graph of order |V|, apply Theorem 3.10 to see that if T
has size |V| − 1, then it is a spanning tree of G. Algorithm 3.2 provides pseudocode of
our discussion of Kruskal's algorithm. When the algorithm halts, it returns a minimum
spanning tree of G. The correctness of Algorithm 3.2 is proven in Theorem 3.14.
Algorithm 3.2: Kruskal's algorithm.
Input: A connected weighted graph G = (V, E) with weight function w.
Output: A minimum spanning tree of G.
m ← |V|
T ← ∅
sort E = {e1, e2, . . . , en} by weight so that w(e1) ≤ w(e2) ≤ ··· ≤ w(en)
for i ← 1, 2, . . . , n do
    if ei ∉ E(T) and T ∪ {ei} is acyclic then
        T ← T ∪ {ei}
        if |T| = m − 1 then
            return T
Theorem 3.14. Kruskal's Algorithm 3.2 returns a minimum spanning tree of its input
graph G.

Proof. Let T be the spanning tree returned by Algorithm 3.2, with edges e1, e2, . . . , em−1
chosen in that order, so that T has weight

w(T) = Σ_{i=1}^{m−1} w(ei).
Suppose for contradiction that T is not a minimum spanning tree of G. Among all the
minimum spanning trees of G, let H be a minimum spanning tree of G such that H has
the most edges in common with T. As T and H are distinct subgraphs of G,
T has at least one edge not belonging to H. Let ei ∈ E(T) be the first such edge not in
H. Construct the graph G′ = H + ei obtained by adding the edge ei to H. Note that
G′ has exactly one cycle C. Since T is acyclic, there exists an edge e′ ∈ E(C) such that
e′ is not in T. Construct the graph T0 = G′ − e′ obtained by deleting the edge e′ from
G′. Then T0 is a spanning tree of G with

w(T0) = w(H) + w(ei) − w(e′)

and w(H) ≤ w(T0), hence w(e′) ≤ w(ei). By Kruskal's Algorithm 3.2, ei is an edge of
minimum weight such that {e1, e2, . . . , ei−1} ∪ {ei} is acyclic. Furthermore, the subgraph
{e1, e2, . . . , ei−1, e′} of H is acyclic. Thus we have w(ei) = w(e′) and w(T0) = w(H), and
so T0 is a minimum spanning tree of G. By construction, T0 has more edges in common
with T than H has with T, in contradiction of our hypothesis.
def kruskal(G):
    """
    Implements Kruskal's algorithm to compute a MST of a graph.

    INPUT:
        G - a connected edge-weighted graph or digraph
            whose vertices are assumed to be 0, 1, ..., n-1.

    OUTPUT:
        T - a minimum weight spanning tree.

    If G is not explicitly edge-weighted then the algorithm
    assumes all edge weights are 1. The tree T returned is
    a weighted graph, even if G is not.

    EXAMPLES:
        sage: A = matrix([[0,1,2,3],[0,0,2,1],[0,0,0,3],[0,0,0,0]])
        sage: G = DiGraph(A, format="adjacency_matrix", weighted=True)
        sage: TE = kruskal(G); TE.edges()
        [(0, 1, 1), (0, 2, 2), (1, 3, 1)]
        sage: G.edges()
        [(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 2), (1, 3, 1), (2, 3, 3)]
        sage: G = graphs.PetersenGraph()
        sage: TE = kruskal(G); TE.edges()
        [(0, 1, 1), (0, 4, 1), (0, 5, 1), (1, 2, 1), (1, 6, 1), (2, 3, 1),
         (2, 7, 1), (3, 8, 1), (4, 9, 1)]

    TODO:
        Add verbose option to make steps more transparent.
        (Useful for teachers and students.)
    """
    T_vertices = G.vertices()  # a list of the form range(n)
    T_edges = []
    E = G.edges()  # a list of triples
    # start ugly hack
    Er = [list(x) for x in E]
    E0 = []
    for x in Er:
        x.reverse()
        E0.append(x)
    E0.sort()
    E = []
    for x in E0:
        x.reverse()
        E.append(tuple(x))
    # end ugly hack to get E sorted by weight
    for x in E:  # find edges of T
        TV = flatten(T_edges)
        u = x[0]
        v = x[1]
        if not (u in TV and v in TV):
            T_edges.append([u, v])
    # find adj mat of T
    if G.weighted():
        AG = G.weighted_adjacency_matrix()
    else:
        AG = G.adjacency_matrix()
    GV = G.vertices()
    n = len(GV)
    AT = []
    for i in GV:
        rw = [0] * n
        for j in GV:
            if [i, j] in T_edges:
                rw[j] = AG[i][j]
        AT.append(rw)
    AT = matrix(AT)
    return Graph(AT, format="adjacency_matrix", weighted=True)
Here is an example. We start with the grid graph. This is implemented in Sage such
that the vertices are given by the coordinates of the grid the graph lies on, as opposed
to 0, 1, . . . , n − 1. Since the above implementation of Kruskal's algorithm assumes that
the vertices are V = {0, 1, . . . , n − 1}, we first redefine the graph so that it is suitable for
running Kruskal's algorithm on it.
sage: G = graphs.GridGraph([4,4])
sage: A = G.adjacency_matrix()
sage: G = Graph(A, format="adjacency_matrix", weighted=True)
sage: T = kruskal(G); T.edges()
[(0, 1, 1), (0, 4, 1), (1, 2, 1), (1, 5, 1), (2, 3, 1), (2, 6, 1), (3, 7, 1),
 (4, 8, 1), (5, 9, 1), (6, 10, 1), (7, 11, 1), (8, 12, 1), (9, 13, 1),
 (10, 14, 1), (11, 15, 1)]
3.3.2
Prim's algorithm

Like Kruskal's algorithm, Prim's algorithm uses a greedy approach to computing a minimum
spanning tree of a connected weighted graph G = (V, E), where n = |V| and
m = |E|. The algorithm was developed in 1930 by the Czech mathematician V. Jarník [105]
and later independently by R. C. Prim [159] and E. W. Dijkstra [58]. However, Prim was
the first to present an implementation that runs in time O(n²). Using 2-heaps, the runtime
can be reduced [115] to O(m log n). With a Fibonacci heap implementation [79, 80],
the runtime can be reduced even further to O(m + n log n).
Pseudocode of Prim's algorithm is given in Algorithm 3.3. For each v ∈ V, cost[v]
denotes the minimum weight among all edges connecting v to a vertex in the tree T,
and parent[v] denotes the parent of v in T. During the algorithm's execution, vertices v
that are not in T are organized in the minimum-priority queue Q, prioritized according
to cost[v]. Lines 1 to 3 set each cost[v] to a number that is larger than any weight in
the graph G, usually written ∞. The parent of each vertex is set to NULL because we
have not yet started constructing the MST T. In lines 4 to 6, we choose an arbitrary
vertex r from V and mark that vertex as the root of T. The minimum-priority queue
is set to be all vertices from V. We set cost[r] to zero, making r the only vertex so far
with a cost that is < ∞. During the first execution of the while loop from lines 7 to 12,
r is the first vertex to be extracted from Q and processed. Line 8 extracts a vertex u
from Q based on the key cost, thus moving u to the vertex set of T. Line 9 considers all
vertices adjacent to u. In an undirected graph, these are the neighbors of u; in a digraph,
we replace adj(u) with the out-neighbors oadj(u). The loop updates the cost and
parent fields of each vertex v adjacent to u that is not in T. If parent[v] ≠ NULL, then
cost[v] < ∞ and cost[v] is the weight of an edge connecting v to some vertex already in T.
Lines 13 to 14 construct the edge set of the minimum spanning tree and return this edge
set. The proof of correctness of Algorithm 3.3 is similar to the proof of Theorem 3.14.
Figure 3.13 shows the minimum spanning tree rooted at vertex 1 obtained by running
Prim's algorithm over a digraph; Figure 3.14 shows the corresponding tree rooted at
vertex 5 of an undirected graph.
Algorithm 3.3: Prim's algorithm.
Input: A weighted connected graph G = (V, E) with weight function w.
Output: A minimum spanning tree T of G.
1   for each v ∈ V do
2       cost[v] ← ∞
3       parent[v] ← NULL
4   r ← arbitrary vertex of V
5   cost[r] ← 0
6   Q ← V
7   while Q ≠ ∅ do
8       u ← extractMin(Q)
9       for each v ∈ adj(u) do
10          if v ∈ Q and w(u, v) < cost[v] then
11              parent[v] ← u
12              cost[v] ← w(u, v)
13  T ← {(v, parent[v]) | v ∈ V − {r}}
14  return T
def prim(G):
    """
    Implements Prim's algorithm to compute a MST of a graph.

    INPUT:
        G - a connected graph.

    OUTPUT:
        T - a minimum weight spanning tree.

    REFERENCES:
        http://en.wikipedia.org/wiki/Prim%27s_algorithm
    """
    T_vertices = [0]  # assumes G.vertices() = range(n)
    T_edges = []
    E = G.edges()  # a list of triples
    V = G.vertices()
    # start ugly hack to sort E
    Er = [list(x) for x in E]
    E0 = []
    for x in Er:
        x.reverse()
        E0.append(x)
    E0.sort()
    E = []
    for x in E0:
        x.reverse()
        E.append(tuple(x))
    # end ugly hack to get E sorted by weight
    for x in E:
        u = x[0]
        v = x[1]
        if u in T_vertices and not (v in T_vertices):
            T_edges.append([u, v])
            T_vertices.append(v)
    # found T_vertices, T_edges
    # find adj mat of T
    if G.weighted():
        AG = G.weighted_adjacency_matrix()
    else:
        AG = G.adjacency_matrix()
    GV = G.vertices()
    n = len(GV)
    AT = []
    for i in GV:
        rw = [0] * n
        for j in GV:
            if [i, j] in T_edges:
                rw[j] = AG[i][j]
        AT.append(rw)
    AT = matrix(AT)
    return Graph(AT, format="adjacency_matrix", weighted=True)
sage: A = matrix([[0,1,2,3], [3,0,2,1], [2,1,0,3], [1,1,1,0]])
sage: G = DiGraph(A, format="adjacency_matrix", weighted=True)
sage: E = G.edges(); E
[(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 0, 3), (1, 2, 2), (1, 3, 1), (2, 0, 2),
 (2, 1, 1), (2, 3, 3), (3, 0, 1), (3, 1, 1), (3, 2, 1)]
sage: prim(G)
Multi-graph on 4 vertices
sage: prim(G).edges()
[(0, 1, 1), (0, 2, 2), (1, 3, 1)]
[Figures 3.13 and 3.14: minimum spanning trees computed by Prim's algorithm on a weighted digraph and on a weighted undirected graph.]
3.3.3
Borůvka's algorithm

Borůvka's algorithm [31, 32] is a procedure for finding a minimum spanning tree in a
weighted connected graph G = (V, E) for which all edge weights are distinct. It was first
published in 1926 by Otakar Borůvka but subsequently rediscovered by many others,
including Choquet [50] and Florek et al. [74]. If G has order n = |V| and size m = |E|,
it can be shown that Borůvka's algorithm runs in time O(m log n).
Algorithm 3.4: Borůvka's algorithm.
Input: A weighted connected graph G = (V, E) with weight function w. All the edge weights of G are distinct.
Output: A minimum spanning tree T of G.

n ← |V|
T ← the graph on the vertex set V with no edges
while |E(T)| < n − 1 do
    for each component T′ of T do
        e′ ← edge of minimum weight that leaves T′
        E(T) ← E(T) ∪ {e′}
return T
[Figure: successive rounds of Borůvka's algorithm merging the components of a weighted graph.]
def which_index(x, L):
    """
    Returns the index of the list in L that contains x, or None if
    x appears in no list.

    The 0-th element in
        Lx = [L.index(S) for S in L if x in S]
    almost works, but if the list is empty then Lx[0]
    throws an exception.

    EXAMPLES:
        sage: L = [[1,2,3],[4,5],[6,7,8]]
        sage: which_index(3, L)
        0
        sage: which_index(4, L)
        1
        sage: which_index(7, L)
        2
        sage: which_index(9, L)
        sage: which_index(9, L) == None
        True
    """
    for S in L:
        if x in S:
            return L.index(S)
    return None
def boruvka(G):
    """
    Implements Boruvka's algorithm to compute a MST of a graph.

    INPUT:
        G - a connected edge-weighted graph with distinct weights.

    OUTPUT:
        T - a minimum weight spanning tree.

    REFERENCES:
        http://en.wikipedia.org/wiki/Boruvka%27s_algorithm
    """
    T_vertices = []  # assumes G.vertices() = range(n)
    T_edges = []
    T = Graph()
    E = G.edges()  # a list of triples
    V = G.vertices()
    # start ugly hack to sort E
    Er = [list(x) for x in E]
    E0 = []
    for x in Er:
        x.reverse()
        E0.append(x)
    E0.sort()
    E = []
    for x in E0:
        x.reverse()
        E.append(tuple(x))
    # end ugly hack to get E sorted by weight
    for e in E:
        # create about |V|/2 edges of T "cheaply"
        TV = T.vertices()
        if not (e[0] in TV) or not (e[1] in TV):
            T.add_edge(e)
    for e in E:
        # connect the "cheapest" components to get T
        C = T.connected_components_subgraphs()
        VC = [S.vertices() for S in C]
        if not (e in T.edges()) and (which_index(e[0], VC) != which_index(e[1], VC)):
            if T.is_connected():
                break
            T.add_edge(e)
    return T
3.4
Binary trees

A binary tree is a rooted tree with at most two children per parent. Each child is
designated as either a left-child or a right-child. Thus binary trees are also 2-ary trees.
Some examples of binary trees are illustrated in Figure 3.16. Given a vertex v in a
binary tree T of height h, the left subtree of v comprises the subtree that spans
the left-child of v and all of this child's descendants. The notion of a right subtree of a
binary tree is similarly defined. Each of the left and right subtrees of v is itself a binary
tree of height at most h − 1. If v is the root vertex, then each of its left and right subtrees
has height at most h − 1, and at least one of these subtrees has height equal to h − 1.
[Figure 3.16: examples of binary trees.]
Theorem 3.16 provides a useful upper bound on the order of a binary tree of a given
height. This upper bound is stated in the following corollary.

Corollary 3.17. A binary tree of height h has at most 2^{h+1} − 1 vertices.
We now count the number of possible binary trees on n vertices. Let bn be the number
of binary trees of order n. For n = 0, we set b0 = 1. The trivial graph is the only binary
tree with one vertex, hence b1 = 1. Suppose n > 1 and let T be a binary tree on n
vertices. Then the left subtree of T has order i for some 0 ≤ i ≤ n − 1 and the right subtree has
n − 1 − i vertices. As there are bi possible left subtrees and b_{n−1−i} possible right subtrees,
T has a total of bi·b_{n−1−i} different combinations of left and right subtrees. Summing from
i = 0 to i = n − 1, we have

bn = Σ_{i=0}^{n−1} bi b_{n−1−i}.    (3.2)

Expression (3.2) is known as the Catalan recursion, and the number bn is the n-th Catalan
number, which we know from problem 1.15 can be expressed in the closed form

bn = (1/(n + 1)) · C(2n, n).    (3.3)

Figures 3.17 to 3.19 enumerate all the different binary trees on 2, 3, and 4 vertices,
respectively.
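For a quick numerical check of (3.3), the Catalan numbers can be computed directly in Python; catalan is an illustrative name, and math.comb requires Python 3.8 or later.

from math import comb

def catalan(n):
    """n-th Catalan number via the closed form (3.3)."""
    return comb(2 * n, n) // (n + 1)

# [catalan(n) for n in range(5)] == [1, 1, 2, 5, 14]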
[Figures 3.17 and 3.18: the binary trees on 2 vertices and on 3 vertices.]
For example,

b1 = 1,  b2 = 2,  b3 = 5,  b4 = 14,
which are rather small and of manageable size if we want to explicitly enumerate all
different binary trees with the above orders. However, from n = 4 onwards the value
of bn increases very fast. Instead of enumerating all the bn different binary trees of a
specified order n, a related problem is generating a random binary tree of order n. That
is, we consider the set B as a sample space of bn different binary trees on n vertices,
and choose a random element from B. Such a random element can be generated using
Algorithm 3.5. The list parent holds all vertices with fewer than two children; each vertex
can be considered as a candidate parent to which we can add a child. An element of
parent is a two-tuple (v, k), where the vertex v currently has k children.
128
(a)
(b)
(f)
(c)
(g)
(k)
(d)
(h)
(l)
(e)
(i)
(m)
(j)
(n)
Algorithm 3.5: Random binary tree.

if n = 1 then
    return K1
v ← 0
T ← null graph
add v to T
parent ← [(v, 0)]
for i ← 1, 2, . . . , n − 1 do
    (v, k) ← remove random element from parent
    if k < 1 then
        add (v, k + 1) to parent
    add edge (v, i) to T
    add (i, 0) to parent
return T
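A Python sketch of Algorithm 3.5, returning the tree as a list of edges; random_binary_tree is our name for it.

import random

def random_binary_tree(n):
    """Generate a random binary tree on vertices 0, ..., n-1 as an edge list."""
    edges = []
    parent = [(0, 0)]  # (vertex, number of children so far)
    for i in range(1, n):
        v, k = parent.pop(random.randrange(len(parent)))
        if k < 1:
            parent.append((v, k + 1))  # v can still accept one more child
        edges.append((v, i))
        parent.append((i, 0))
    return edges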
3.4.1
Binary codes
What is a code?
A code is a rule for converting data in one format, or well-defined tangible representation,
into sequences of symbols in another format. The finite set of symbols used is called the
alphabet. We shall identify a code as a finite set of symbols which are the image of the
alphabet under this conversion rule. The elements of this set are referred to as codewords.
For example, using the ASCII code, the letters in the English alphabet get converted
into numbers in the set {0, 1, . . . , 255}. If these numbers are written in binary, then
each codeword of a letter has length 8, i.e. eight bits. In this way, we can reformat or
encode a string into a sequence of binary symbols, i.e. 0's and 1's. Encoding is one
direction of the conversion process. Decoding is the reverse process, converting these sequences
of code-symbols back into information in the original format.
Codes are used for:

Economy. Sometimes this is called entropy encoding, since there is an entropy
function which describes how much information a channel (with a given error rate)
can carry, and such codes are designed to maximize entropy as best as possible. In
this case, in addition to simply being given an alphabet A, one might be given a
weighted alphabet, i.e. an alphabet for which each symbol a ∈ A is associated with
a nonnegative number wa ≥ 0 (in practice, this number represents the probability
that the symbol a occurs in a typical word).

Reliability. Such codes are called error-correcting codes, since such codes are designed
to communicate information over a noisy channel in such a way that the
errors in transmission are likely to be correctable.

Security. Such codes are called cryptosystems. In this case, the inverse of the
coding function c : A → B is designed to be computationally infeasible. In other
words, the coding function c is designed to be a trapdoor function.

Other codes are merely simpler ways to communicate information (e.g. flag semaphores,
color codes, genetic codes, braille codes, musical scores, chess notation, football diagrams,
and so on) and have little or no mathematical structure. We shall not study them.
Basic definitions

If every word in the code has the same length, the code is called a block code. If a
code is not a block code, then it is called a variable-length code. A prefix-free code is a
code (typically one of variable-length) with the property that there is no valid codeword
in the code that is a prefix or start of any other codeword. This is the prefix-free
condition.

One example of a prefix-free code is the ASCII code. Another example is

00, 01, 100.

On the other hand, a non-example is the code

00, 01, 010, 100
since the second codeword is a prefix of the third one. Another non-example is Morse
code, recalled in Table 3.1, where we use 0 for · (dit) and 1 for − (dah). For
example, consider the Morse code for the letter a and the Morse code for the letter w. These
codewords violate the prefix-free condition.
Table 3.1: Morse code.

A  01      N  10
B  1000    O  111
C  1010    P  0110
D  100     Q  1101
E  0       R  010
F  0010    S  000
G  110     T  1
H  0000    U  001
I  00      V  0001
J  0111    W  011
K  101     X  1001
L  0100    Y  1011
M  11      Z  1100
Gray codes

We begin with some history.² Frank Gray (1887–1969) wrote about the so-called Gray
codes in a 1951 paper published in the Bell System Technical Journal and then in 1953
patented a device (used for television sets) based on his paper. However, the idea of
a binary Gray code appeared earlier. In fact, it appeared in an earlier patent (one by
Stibitz in 1943). It was also used in the French engineer E. Baudot's telegraph machine
of 1878 and in a French booklet by L. Gros, published in 1872, on the solution to the
Chinese ring puzzle.
The term Gray code is ambiguous. It is actually a large family of sequences of
n-tuples. Let Zm = {0, 1, . . . , m − 1}. More precisely, an m-ary Gray code of length
n (called a binary Gray code when m = 2) is a sequence of all possible (i.e. N = mⁿ)
n-tuples

g1, g2, . . . , gN

where

• each gi ∈ Zmⁿ, and

• gi and gi+1 differ by 1 in exactly one coordinate.

In other words, an m-ary Gray code of length n is a particular way to order the set of
all mⁿ n-tuples whose coordinates are taken from Zm. From the transmission/communication
perspective, this sequence has two advantages:

• It is easy and fast to produce the sequence, since successive entries differ in only
one coordinate.

• An error is relatively easy to detect, since we can compare an n-tuple with the
previous one. If they differ in more than one coordinate, we conclude that an error
was made.

²This history comes from an unpublished section 7.2.1.1 (Generating all n-tuples) in volume 4 of
Donald Knuth's The Art of Computer Programming.
Consider the graph whose vertices are the binary n-tuples, regarded as points of Rⁿ,
and whose edges are those line segments in Rⁿ connecting two neighboring vertices, i.e.
two vertices that differ in exactly one coordinate. A binary Gray code of length n can
be regarded as a path on the hypercube graph Qn that visits each vertex of the cube
exactly once. In other words, a binary Gray code of length n may be identified with a
Hamiltonian path on the graph Qn. For example, Figure 3.20 illustrates a Hamiltonian
path on Q3.
where Γrev means the Gray code taken in reverse order. For instance, we have

Γ0 = [ ],
Γ1 = [0], [1],
Γ2 = [0, 0], [0, 1], [1, 1], [1, 0],

and so on. This is a nice procedure for creating the entire list at once, which gets very
long very fast. An implementation of the reflected Gray code using Python is given
below.
def graycode(length, modulus):
    """
    Returns the n-tuple reflected Gray code mod m.

    EXAMPLES:
        sage: graycode(2, 4)
        [[0, 0],
         [1, 0],
         [2, 0],
         [3, 0],
         [3, 1],
         [2, 1],
         [1, 1],
         [0, 1],
         [0, 2],
         [1, 2],
         [2, 2],
         [3, 2],
         [3, 3],
         [2, 3],
         [1, 3],
         [0, 3]]
    """
    n, m = length, modulus
    F = range(m)
    if n == 1:
        return [[i] for i in F]
    L = graycode(n - 1, m)
    M = []
    for j in F:
        M = M + [ll + [j] for ll in L]
    k = len(M)
    Mr = [0] * m
    for i in range(m - 1):
        i1 = i * int(k / m)  # this requires Python 3.0 or Sage
        i2 = (i + 1) * int(k / m)
        Mr[i] = M[i1:i2]
    Mr[m - 1] = M[(m - 1) * int(k / m):]
    for i in range(m):
        if is_odd(i):
            Mr[i].reverse()
    M0 = []
    for i in range(m):
        M0 = M0 + Mr[i]
    return M0
Consider the reflected binary code of length 8, i.e. Γ8. This has 2⁸ = 256 codewords.
Sage can easily create the list plot of the coordinates (x, y), where x is an integer j ∈ Z256
that indexes the codewords in Γ8 and the corresponding y is the j-th codeword in Γ8
converted to decimal. This will give us some idea of how the Gray code looks in some
sense. The plot is given in Figure 3.21.

What if we only want to compute the i-th Gray codeword in the Gray code of length
n? Can it be computed quickly without computing the entire list? At least in the case of
the reflected binary Gray code, there is a very simple way to do this, starting from the
binary representation of the index.
[Figure 3.21: list plot of the reflected binary Gray code of length 8.]
def graycodeword(m, n):
    s = bin(m)
    k = len(s)
    F = GF(2)
    b = [F(0)] * n
    for i in range(2, k):
        b[n - k + i] = F(int(s[i]))
    return vector(b)
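As an aside, the k-th codeword of the reflected binary Gray code can also be obtained without generating the whole list, via the standard XOR trick below. This sketch is ours and may differ from the text's indexing conventions.

def gray(k):
    """The k-th codeword of the reflected binary Gray code, as an integer."""
    return k ^ (k >> 1)

# [format(gray(k), "03b") for k in range(8)]
# ['000', '001', '011', '010', '110', '111', '101', '100']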
3.5
Huffman codes
An alphabet A is a finite set whose elements are referred to as symbols. A word (or string
or message) over A is a finite sequence of symbols in A, and the length of the word is
the number of symbols it comprises.
3.5.1
Tree representation
[Figure: the binary tree associated to the block code with codewords 00, 01, 10, 11.]
all of its descendants are not in C. Next, remove all labels which do not correspond to
codewords in C. The resulting labeled graph is the tree associated to the binary code C.
For visualizing the construction of Huffman codes later, it is important to see that
we can reverse this construction to start from such a binary tree and recover a binary
code from it. The codewords are determined by the following rules:
• The root node gets the empty codeword.

• Each left-ward branch gets a 0 appended to the end of its parent's label. Each right-ward
branch gets a 1 appended to the end.
3.5.2
Consider now a weighted alphabet (A, p), where p : A → [0, 1] satisfies Σ_{a∈A} p(a) = 1,
and a code c : A → B. In other words, p is a probability distribution on A. Think
of p(a) as the probability that the symbol a arises in a typical message. The average
word length L(c) is³

L(c) = Σ_{a∈A} p(a) |c(a)|.

³In probability terminology, this is the expected value E(X) of the random variable X, which assigns
to a randomly selected symbol in A the length of the associated codeword in c.
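As a small illustration, with the weights p and the code c stored as Python dictionaries, the average word length can be computed as follows; average_word_length is an illustrative name.

def average_word_length(p, c):
    """Average codeword length sum of p[a] * len(c[a]) over the alphabet."""
    return sum(p[a] * len(c[a]) for a in p)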
3.5.3
Huffman coding

The Huffman code construction is based on the second property in Lemma 3.22. Using
this property, in 1952 David Huffman [103] presented an optimal prefix-free binary code,
which has since been named Huffman code.

Here is the recursive/inductive construction of a Huffman code. We shall regard the
binary Huffman code as a tree, as described above. Suppose that the weighted alphabet
(A, p) has n symbols. We assume inductively that there is an optimal prefix-free binary
code for any weighted alphabet (A′, p′) having fewer than n symbols.

Huffman's rule 1. Let a, a′ ∈ A be symbols with the smallest weights. Construct a new
weighted alphabet with a, a′ replaced by the single symbol a* = aa′ and having
weight p(a*) = p(a) + p(a′). All other symbols and weights remain unchanged.

Huffman's rule 2. For the code (A′, p′) above, if a* is encoded as the binary string s,
then the encoded binary string for a is s0 and the encoded binary string for a′ is
s1.

The above two rules tell us how to inductively build the tree representation for the
Huffman code of (A, p) up from its leaves (associated to the low-weight symbols).
• Find two different symbols of lowest weight, a and a′. If two such symbols do not
exist, stop. Replace the weighted alphabet with the new weighted alphabet as in
Huffman's rule 1.

• Add two nodes (labeled with a and a′, respectively) to the tree, with parent a* (see
Huffman's rule 1).

• If there are no remaining symbols in A, label the parent a* with the empty codeword and
stop. Otherwise, go to the first step.
These ideas are captured in Algorithm 3.6, which outlines steps to construct a binary
tree corresponding to the Huffman code of an alphabet. Line 2 initializes a minimum-priority
queue Q with the symbols in the alphabet A. Line 3 creates an empty binary
tree that will be used to represent the Huffman code corresponding to A. The for loop
from lines 4 to 10 repeatedly extracts from Q two elements a and b of minimum weight.
We then create a new vertex z for the tree T and also let a and b be vertices of T. The
weight W[z] of z is the sum of the weights of a and b. We let z be the parent of a and b,
and insert the new edges za and zb into T. The newly created vertex z is now inserted
into Q with priority W[z]. After n − 1 rounds of the for loop, the priority queue has only
one element in it, namely the root r of the binary tree T. We extract r from Q (line 11)
and return it together with T (line 12).
Algorithm 3.6: Binary tree representation of Huffman codes.
Input: An alphabet A of n symbols. A weight list W of size n such that W[i] is the weight of ai ∈ A.
Output: A binary tree T representing the Huffman code of A and the root r of T.
1   n ← |A|
2   Q ← A    /* minimum priority queue */
3   T ← empty tree
4   for i ← 1, 2, . . . , n − 1 do
5       a ← extractMin(Q)
6       b ← extractMin(Q)
7       z ← node with left child a and right child b
8       add the edges za and zb to T
9       W[z] ← W[a] + W[b]
10      insert z into priority queue Q
11  r ← extractMin(Q)
12  return (T, r)
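A compact Python sketch of the construction in Algorithm 3.6, using the heapq module as the minimum-priority queue and nested tuples in place of an explicit tree; the names huffman_tree and tiebreak are ours.

import heapq
from itertools import count

def huffman_tree(weights):
    """Build a Huffman tree from a dict {symbol: weight}; returns nested
    tuples (left, right) with the symbols at the leaves."""
    tiebreak = count()  # avoids comparing trees when weights are equal
    Q = [(w, next(tiebreak), sym) for sym, w in weights.items()]
    heapq.heapify(Q)
    while len(Q) > 1:
        wa, _, a = heapq.heappop(Q)
        wb, _, b = heapq.heappop(Q)
        heapq.heappush(Q, (wa + wb, next(tiebreak), (a, b)))
    return Q[0][2]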
The runtime analysis of Algorithm 3.6 depends on the implementation of the priority
queue Q. Suppose Q is a simple unsorted list. The initialization on line 2 requires O(n)
time. The for loop from lines 4 to 10 is executed exactly n − 1 times. Searching Q to
determine the element of minimum weight requires time at most O(n), so determining two
elements of minimum weight requires time O(2n). The for loop therefore requires time O(2n²),
which is also the time requirement for the algorithm. An efficient implementation of
the priority queue Q, e.g. as a binary minimum heap, can lower the running time of
Algorithm 3.6 down to O(n log₂ n).
Algorithm 3.6 represents the Huffman code of an alphabet as a binary tree T rooted
at r. For an illustration of the process of constructing a Huffman tree, see Figure 3.23.
To determine the actual encoding of each symbol in the alphabet, we feed T and r to
Algorithm 3.7 to obtain the encoding of each symbol. Starting from the root r, whose
designated label is the empty string ε, the algorithm traverses the vertices of T in a
breadth-first search fashion.
[Figure 3.23: the step-by-step construction of a Huffman tree.]
If v is an internal vertex with label e, the label of its left-child
is the concatenation e0, and for the right-child of v we assign the label e1. If v
happens to be a leaf vertex, we take its label to be its Huffman encoding. The Huffman
encoding assigned to a symbol of an alphabet is not unique: either of the two children of
an internal vertex can be designated as the left- (respectively, right-) child. The runtime
of Algorithm 3.7 is O(|V|), where V is the vertex set of T.
Algorithm 3.7: Huffman encoding of an alphabet.
Input: A binary tree T representing the Huffman code of an alphabet A. The
root r of T .
Output: A list H representing a Huffman code of A, where H[ai ] corresponds to
a Huffman encoding of ai A.
H ← []    /* list of Huffman encodings */
Q ← [r]    /* queue of vertices */
while length(Q) > 0 do
    root ← dequeue(Q)
    if root is a leaf then
        H[root] ← label of root
    else
        a ← left child of root
        b ← right child of root
        enqueue(Q, a)
        enqueue(Q, b)
        label of a ← label of root + 0
        label of b ← label of root + 1
return H
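Continuing the sketch given after Algorithm 3.6, the codewords can be read off the nested-tuple tree in the manner of Algorithm 3.7, here recursively rather than with an explicit queue; the resulting codewords depend on which child is designated left or right.

def huffman_encoding(tree, prefix=""):
    """Read codewords off a nested-tuple Huffman tree: left edges append '0',
    right edges append '1'. Symbols are assumed not to be tuples."""
    if not isinstance(tree, tuple):      # leaf: tree is a symbol
        return {tree: prefix or "0"}
    left, right = tree
    H = huffman_encoding(left, prefix + "0")
    H.update(huffman_encoding(right, prefix + "1"))
    return H

# Example: huffman_encoding(huffman_tree({"a": 5, "b": 2, "c": 1}))
# might give {"c": "00", "b": "01", "a": "1"}.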
3.6
Tree traversals
In computer science, tree traversal refers to the process of examining each vertex in a tree
data structure. Starting at the root of an ordered tree T , we can traverse the vertices of
T in one of various ways.
A level-order traversal of an ordered tree T examines the vertices in increasing order
of depth, with vertices of equal depth being examined according to their prescribed
order. One way to think about level-order traversal is to consider vertices of T having
the same depth as being ordered from left to right in decreasing order of importance.
[Figure 3.24: binary tree representation of an alphabet and its Huffman encodings.]
If [v1, v2, . . . , vn] lists the vertices from left to right at depth k, a decreasing order of
importance can be realized by assigning each vertex a numeric label using a labelling
function L : V(T) → R such that L(v1) < L(v2) < ··· < L(vn). In this way, a vertex
with a lower numeric label is examined prior to a vertex with a higher numeric label. A
level-order traversal of T , whose vertices of equal depth are prioritized according to L,
is an examination of the vertices of T from top to bottom, left to right. As an example,
the level-order traversal of the tree in Figure 3.25 is
42, 4, 15, 2, 3, 5, 7, 10, 11, 12, 13, 14.
Our discussion is formalized in Algorithm 3.8, whose general structure mimics that of
breadth-first search. For this reason, level-order traversal is also known as breadth-first
traversal. Each vertex is enqueued and dequeued exactly once. The while loop is executed
n times, hence we have a runtime of O(n). Another name for level-order traversal is
top-down traversal, because we first visit the root node and then work our way down the
tree, increasing the depth as we move downward.
Pre-order traversal is a traversal of an ordered tree using a general strategy similar
to depth-first search. For this reason, pre-order traversal is also referred to as depth-first
traversal. Parents are visited prior to their respective children and siblings are visited
according to their prescribed order. The pseudocode for pre-order traversal is presented
in Algorithm 3.9. Note the close resemblance to Algorithm 3.8; the only significant
change is to use a stack instead of a queue. Each vertex is pushed and popped exactly
once, so the while loop is executed n times, resulting in a runtime of O(n). Using
Algorithm 3.9, a pre-order traversal of the tree in Figure 3.25 is
42, 4, 2, 3, 10, 11, 14, 5, 12, 13, 15, 7.
Whereas pre-order traversal lists a vertex v the first time we visit it, post-order
traversal lists v the last time we visit it. In other words, children are visited prior to their
respective parents, with siblings being visited in their prescribed order. The prefix "pre" in pre-order traversal means "before", i.e. visit parents before visiting children. On the
[Figure 3.25: an ordered tree, rooted at the vertex 42, used to illustrate the traversal algorithms.]
Algorithm 3.8: Level-order traversal.
Input: An ordered tree T on n > 0 vertices.
Output: A list of the vertices of T in level-order.
L ← []
Q ← empty queue
r ← root of T
enqueue(Q, r)
while length(Q) > 0 do
    v ← dequeue(Q)
    append(L, v)
    [u1, u2, . . . , uk] ← ordering of children of v
    for i ← 1, 2, . . . , k do
        enqueue(Q, ui)
return L
Algorithm 3.9: Pre-order traversal.
Input: An ordered tree T on n > 0 vertices.
Output: A list of the vertices of T in pre-order.
L ← []
S ← empty stack
r ← root of T
push(S, r)
while length(S) > 0 do
    v ← pop(S)
    append(L, v)
    [u1, u2, . . . , uk] ← ordering of children of v
    for i ← k, k − 1, . . . , 1 do
        push(S, ui)
return L
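As a sanity check, here is a short Python sketch of both traversals. The adjacency map below, from each vertex to its ordered list of children, is an assumption encoding a tree consistent with the traversal orders quoted in the text for Figure 3.25.

from collections import deque

children = {42: [4, 15], 4: [2, 3, 5], 15: [7], 3: [10, 11], 5: [12, 13], 11: [14]}

def level_order(root):
    # Algorithm 3.8: a FIFO queue yields the vertices level by level.
    L, Q = [], deque([root])
    while Q:
        v = Q.popleft()
        L.append(v)
        Q.extend(children.get(v, []))
    return L

def pre_order(root):
    # Algorithm 3.9: same structure with a stack; children are pushed
    # right-to-left so the left-most child is popped (visited) first.
    L, S = [], [root]
    while S:
        v = S.pop()
        L.append(v)
        S.extend(reversed(children.get(v, [])))
    return L

print(level_order(42))   # [42, 4, 15, 2, 3, 5, 7, 10, 11, 12, 13, 14]
print(pre_order(42))     # [42, 4, 2, 3, 10, 11, 14, 5, 12, 13, 15, 7]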
other hand, the prefix "post" in post-order traversal means "after", i.e. visit parents after having visited their children. The pseudocode for post-order traversal is presented
in Algorithm 3.10, whose general structure bears close resemblance to Algorithm 3.9.
The while loop of the former is executed n times because each vertex is pushed and
popped exactly once, resulting in a runtime of O(n). The post-order traversal of the tree
in Figure 3.25 is
2, 10, 14, 11, 3, 12, 13, 5, 4, 7, 15, 42.
Algorithm 3.10: Post-order traversal.
Input: An ordered tree T on n > 0 vertices.
Output: A list of the vertices of T in post-order.
L ← []
S ← empty stack
r ← root of T
push(S, r)
while length(S) > 0 do
    if top(S) is unmarked then
        mark top(S)
        [u1, u2, . . . , uk] ← ordering of children of top(S)
        for i ← k, k − 1, . . . , 1 do
            push(S, ui)
    else
        v ← pop(S)
        append(L, v)
return L
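The marking device in Algorithm 3.10 translates directly into Python by storing a flag alongside each stacked vertex; the children map repeats the example tree from the previous sketch and remains an assumption for illustration.

children = {42: [4, 15], 4: [2, 3, 5], 15: [7], 3: [10, 11], 5: [12, 13], 11: [14]}

def post_order(root):
    # An unmarked vertex is re-pushed as marked, followed by its children;
    # a marked vertex is emitted when popped.
    L, S = [], [(root, False)]
    while S:
        v, marked = S.pop()
        if marked:
            L.append(v)
        else:
            S.append((v, True))
            S.extend((u, False) for u in reversed(children.get(v, [])))
    return L

print(post_order(42))   # [2, 10, 14, 11, 3, 12, 13, 5, 4, 7, 15, 42]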
Instead of traversing a tree T from top to bottom, as is the case with level-order traversal, we can reverse the direction and traverse a tree from bottom to top. In bottom-up traversal, we first visit all the leaves of T and consider the subtree T1 obtained by deleting those leaves. We then recursively perform bottom-up traversal of T1 by visiting all of its leaves, obtaining the subtree T2 that results from deleting the leaves of T1. We apply bottom-up traversal to T2 and its vertex-deletion subtrees until we have visited all vertices, including the root. The result is the procedure for bottom-up traversal presented in Algorithm 3.11. The algorithm first initializes a list C in which C[i] counts the children of vertex i; this takes O(m) time, where m = |E(T)|. It then extracts all the leaves of T and adds them to the queue Q. Finally, it repeatedly applies bottom-up traversal to subtrees of T. As each vertex is enqueued and dequeued exactly once, the two loops together run in time O(n), and therefore Algorithm 3.11 has a runtime of O(n + m). As an example, a bottom-up traversal of the tree in Figure 3.25 is

2, 7, 10, 12, 13, 14, 15, 5, 11, 3, 4, 42.
Yet another common tree traversal technique is called in-order traversal. However, in-order traversal is only applicable to binary trees, whereas the traversal techniques we considered above can be applied to any tree with at least one vertex.
Algorithm 3.11: Bottom-up traversal.
Input: An ordered tree T on n > 0 vertices.
Output: A list of the vertices of T in bottom-up order.
Q ← empty queue
r ← root of T
C ← [0, 0, . . . , 0]    /* n copies of 0 */
for each edge (u, v) ∈ E(T) do
    C[u] ← C[u] + 1
R ← empty queue
enqueue(R, r)
while length(R) > 0 do
    v ← dequeue(R)
    for each w ∈ children(v) do
        if C[w] = 0 then
            enqueue(Q, w)
        else
            enqueue(R, w)
L ← []
while length(Q) > 0 do
    v ← dequeue(Q)
    append(L, v)
    if v ≠ r then
        C[parent(v)] ← C[parent(v)] − 1
        if C[parent(v)] = 0 then
            u ← parent(v)
            enqueue(Q, u)
return L
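Below is a Python sketch of Algorithm 3.11 on the same example tree; the children and parent maps are assumptions standing in for the edge set of T.

from collections import deque

children = {42: [4, 15], 4: [2, 3, 5], 15: [7], 3: [10, 11], 5: [12, 13], 11: [14]}
parent = {c: p for p, cs in children.items() for c in cs}

def bottom_up(root):
    # C[v] counts the children of v that have not yet been emitted.
    C = {v: len(children.get(v, [])) for v in parent.keys() | {root}}
    Q, R = deque(), deque([root])
    while R:                          # breadth-first pass collecting leaves
        v = R.popleft()
        for w in children.get(v, []):
            (Q if C[w] == 0 else R).append(w)
    L = []
    while Q:
        v = Q.popleft()
        L.append(v)
        if v != root:
            C[parent[v]] -= 1
            if C[parent[v]] == 0:     # parent becomes a leaf of the next subtree
                Q.append(parent[v])
    return L

print(bottom_up(42))   # [2, 7, 10, 12, 13, 14, 15, 5, 11, 3, 4, 42]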
Algorithm 3.12: In-order traversal.
Input: A binary tree T on n > 0 vertices.
Output: A list of the vertices of T in in-order.
L ← []
S ← empty stack
v ← root of T
while True do
    if v ≠ NULL then
        push(S, v)
        v ← left-child of v
    else
        if length(S) = 0 then
            exit the loop
        v ← pop(S)
        append(L, v)
        v ← right-child of v
return L
Given a binary tree T having at least one vertex, in-order traversal recursively traverses the left subtree of the root, then visits the root itself, and finally recursively traverses the right subtree. Notice the symmetry in this description: the root is visited between the in-order traversals of its two subtrees, and for this reason in-order traversal is sometimes referred to as symmetric traversal. Our discussion
is summarized in Algorithm 3.12. In the latter algorithm, if a vertex does not have a
left-child, then the operation of finding its left-child returns NULL. The same holds when
the vertex does not have a right-child. Since each vertex is pushed and popped exactly
once, it follows that in-order traversal runs in time O(n). Using Algorithm 3.12, an
in-order traversal of the tree in Figure 3.24(b) is
0000, 000, 0001, 00, 001, 0, 01, , 10, 1, 11.
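The iterative in-order procedure is easy to express in Python. The Node class below and the reconstruction of the label tree of Figure 3.24(b) are assumptions made for this example.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def in_order(root):
    # Algorithm 3.12: descend along left-children, then emit a vertex
    # and move to its right child.
    L, S, v = [], [], root
    while True:
        if v is not None:
            S.append(v)
            v = v.left
        else:
            if not S:
                return L
            v = S.pop()
            L.append(v.key)
            v = v.right

t = Node("", Node("0", Node("00", Node("000", Node("0000"), Node("0001")),
                            Node("001")), Node("01")),
         Node("1", Node("10"), Node("11")))
print(in_order(t))
# ['0000', '000', '0001', '00', '001', '0', '01', '', '10', '1', '11']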
3.7 Problems

When solving problems, dig at the roots instead of just hacking at the leaves.
Anthony J. D'Angelo, The College Blue Book
(b) Explain and provide pseudocode of an algorithm for constructing all spanning trees of the n × n grid graph, where n > 0.

(d) Describe and provide pseudocode of an algorithm to generate a random spanning tree of the n × n grid graph. What is the worst-case runtime of your algorithm?
3.8. Theorem 3.4 shows how to recursively construct a new tree from a given collection
of trees, hence it can be considered as a recursive definition of trees. To prove theorems based upon recursive definitions, we use a proof technique called structural
induction. Let S(C) be a statement about the collection of structures C, each of
which is defined by a recursive definition. In the base case, prove S(C) for the
basis structure(s) C. For the inductive case, let X be a structure formed using
the recursive definition from the structures Y1 , Y2 , . . . , Yk . Assume for induction
that the statements S(Y1 ), S(Y2 ), . . . , S(Yk ) hold and use the inductive hypotheses
S(Yi ) to prove S(X). Hence conclude that S(X) is true for all X. Apply structural
induction to show that any graph constructed using Theorem 3.4 is indeed a tree.
3.9. In Kruskal's Algorithm 3.2, line 5 requires that the addition of a new edge to T does not result in T having a cycle. A tree by definition has no cycles. Suppose line 5 is changed to:

if ei ∉ E(T) and T ∪ {ei} is a tree then

With this change, explain why Algorithm 3.2 would return a minimum spanning tree or why the algorithm would fail to do so.
3.10. This problem is concerned with improving the runtime of Kruskal's Algorithm 3.2. Explain how to use a priority queue to obviate the need for sorting the edges by weight. Investigate the union-find data structure. Explain how to use union-find to ensure that the addition of each edge results in an acyclic graph.
3.11. Figure 3.26 shows a weighted version of the Chvátal graph, which has 12 vertices and 24 edges. Use this graph as input to Kruskal's, Prim's, and Borůvka's algorithms and compare the resulting minimum spanning trees.
3.12. Algorithm 3.1 presents a randomized procedure to construct a spanning tree of a
given connected graph via repeated edge deletion.
(a) Describe and present pseudocode of a randomized algorithm to grow a spanning tree via edge addition.
(b) Would Algorithm 3.1 still work if the input graph G has self-loops or multiple
edges? Explain why or why not. If not, modify Algorithm 3.1 to handle the
case where G has self-loops and multiple edges.
(c) Repeat the previous exercise for Kruskal's, Prim's, and Borůvka's algorithms.
[Figure 3.26: a weighted version of the Chvátal graph.]
Algorithm 3.13: Random spanning tree of the complete graph Kn.
Input: A positive integer n.
Output: A random spanning tree of Kn.
if n = 1 then
    return K1
P ← random permutation of V
T ← null tree
for i ← 1, 2, . . . , n − 1 do
    j ← random element from {0, 1, . . . , i − 1}
    add edge (P[j], P[i]) to T
return T
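A minimal Python sketch of Algorithm 3.13 follows; it relies on the standard library's random module for the permutation and the random index, and random_spanning_tree is a name invented for this example.

import random

def random_spanning_tree(n):
    # Join each vertex, in a random order, to a uniformly chosen
    # earlier vertex; the result is a spanning tree of K_n.
    P = list(range(n))
    random.shuffle(P)                # random permutation of the vertices
    T = []
    for i in range(1, n):
        j = random.randrange(i)      # random element of {0, ..., i-1}
        T.append((P[j], P[i]))
    return T

print(random_spanning_tree(5))   # e.g. [(2, 0), (2, 4), (0, 1), (4, 3)]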
3.13. Algorithm 3.13 constructs a random spanning tree of the complete graph Kn on
n > 0 vertices. Its runtime is dependent on efficient algorithms for obtaining a
random permutation of a set of objects, and choosing a random element from a
given set.
(a) Describe and analyze the runtime of a procedure to construct a random permutation of a set of nonnegative integers.
(b) Describe an algorithm for randomly choosing an element of a set of nonnegative integers. Analyze the runtime of this algorithm.
(c) Taking into consideration the previous two algorithms, what is the runtime of
Algorithm 3.13?
3.14. We want to generate a random undirected, connected simple graph on n vertices
and having m edges. Start by generating a random spanning tree T of Kn . Then
add random edges to T until the requirements are satisfied.
(a) Present pseudocode to realize the above procedure. What is the worst-case
runtime of your algorithm?
(b) Modify your algorithm to handle the case where m < n − 1. Why must m ≥ n − 1?

(c) Modify your algorithm to handle the case where each edge has a weight within the closed interval [α, β].
You can find this on the Internet or in the literature. Part of this exercise is finding this frequency
distribution yourself.
One way to think about the Collatz conjecture is to consider the digraph G produced by considering (ai, T(ai)) as a directed edge of G. Then the Collatz conjecture can be rephrased to say that there is some integer k > 0 such that (ak, T(ak)) = (2, 1) is a directed edge of G. The graph obtained in this manner is called the Collatz graph of T(n). Given a collection of positive integers ν1, ν2, . . . , νk, let Gi be the Collatz graph of the function T with initial iteration value νi. Then the union of the Gi is the directed tree

\[ \bigcup_i G_i. \]
[Figure: a portion of the Collatz graph.]
3.27. The following result [145] was independently discovered in the late 1980s by Merris and McKay, and is known as the Merris-McKay theorem. Let T be a tree of order n and let L be its Laplacian matrix having eigenvalues λ1, λ2, . . . , λn. Show that the Wiener number of T is

\[ W(T) = n \sum_{i=1}^{n-1} \frac{1}{\lambda_i}. \]
3.28. For each of the algorithms below: (i) justify whether or not it can be applied
to multigraphs or multidigraphs; (ii) if not, modify the algorithm so that it is
applicable to multigraphs or multidigraphs.
(a) Randomized spanning tree construction Algorithm 3.1.

(b) Kruskal's Algorithm 3.2.

(c) Prim's Algorithm 3.3.

(d) Borůvka's Algorithm 3.4.
3.29. Section 3.6 provides iterative algorithms for the following tree traversal techniques:
(a) Level-order traversal: Algorithm 3.8.
(b) Pre-order traversal: Algorithm 3.9.
(c) Post-order traversal: Algorithm 3.10.
(d) Bottom-up traversal: Algorithm 3.11.
(e) In-order traversal: Algorithm 3.12.
Rewrite each of the above as recursive algorithms.
3.30. In cryptography, the Merkle signature scheme [143] was introduced in 1987 as an
alternative to traditional digital signature schemes such as the Digital Signature
Algorithm or RSA. Buchmann et al. [39] and Szydlo [174] provide efficient algorithms for speeding up the Merkle signature scheme. Investigate this scheme and
how it uses binary trees to generate digital signatures.
3.31. Consider the finite alphabet A = {a1, a2, . . . , ar}. If C is a subset of A∗, then we say that C is an r-ary code and call r the radix of the code. McMillan's theorem [141], first published in 1956, relates codeword lengths to unique decipherability. In particular, let C = {c1, c2, . . . , cn} be an r-ary code where each ci has length ℓi. If C is uniquely decipherable, McMillan's theorem states that the codeword lengths ℓi must satisfy Kraft's inequality

\[ \sum_{i=1}^{n} \frac{1}{r^{\ell_i}} \le 1. \]
\[ L_n = \begin{cases} 2, & \text{if } n = 0,\\ 1, & \text{if } n = 1,\\ L_{n-1} + L_{n-2}, & \text{if } n > 1. \end{cases} \]
Chapter 4
Tree Data Structures

4.1 Priority queues
A priority queue is essentially a queue data structure with various accompanying rules
regarding how to access and manage elements of the queue. Recall from section 2.2.1
that an ordinary queue Q has the following basic accompanying functions for accessing
and managing its elements:
dequeue(Q): Remove the front of Q.

enqueue(Q, e): Append the element e to the end of Q.
If the above three properties hold for the relation ≤, then we say that ≤ is a total order on X and that X is a totally ordered set. In all, if the key of each element of Q belongs to the same totally ordered set X, we use the total order defined on X to compare the keys of the queue elements. For example, the set Z of integers is totally ordered by the "less than or equal to" relation. If the key of each e ∈ Q is an element of Z, we use the latter relation to compare the keys of elements of Q. In the case of an ordinary queue, the key of each queue element is its position index.
To extract from a priority queue Q an element of lowest priority, we need to define the notion of smallest priority or key. Let pi be the priority or key assigned to element ei of Q. Then pmin is the lowest key if pmin ≤ p for any element key p. The element with corresponding key pmin is the minimum priority element. Based upon the notion of key comparison, we define two operations on a priority queue:

insert(Q, e, p): Insert into Q the element e with key p.

extractMin(Q): Extract from Q an element having the smallest priority.
Q ← []
for i ← 1, 2, . . . , n do
    e ← dequeue(L)
    insert(Q, e, e)
for i ← 1, 2, . . . , n do
    e ← extractMin(Q)
    enqueue(L, e)
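In Python, the heapq module provides the insert and extract-minimum operations used above, so the sorting loop can be sketched as follows; pq_sort is a hypothetical name chosen for this example.

import heapq

def pq_sort(L):
    # Insert every element into a priority queue (a binary heap),
    # then repeatedly extract the minimum.
    Q = []
    for e in L:
        heapq.heappush(Q, e)     # insert(Q, e, e): each element is its own key
    return [heapq.heappop(Q) for _ in range(len(Q))]

print(pq_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]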
4.1.1 Sequence implementation

4.2 Binary heaps
A binary heap stores the elements of a priority queue at the internal vertices of a binary tree T, with external vertices or leaves being place-holders. The tree T satisfies two further properties:
1. A relational property specifying the relative ordering and placement of queue elements.
2. A structural property that specifies the structure of T .
The relational property of T can be expressed as follows:
Definition 4.1. Heap-order property. Let T be a binary tree and let v be a vertex of T other than the root. If p is the parent of v and these vertices have corresponding keys κp and κv, respectively, then κp ≤ κv.
The heap-order property is defined in terms of the total order used to compare the
keys of the internal vertices. Taking the total order to be the ordinary less than or
equal to relation, it follows from the heap-order property that the root of T is always
the vertex with a minimum key. Similarly, if the total order is the usual greater than
or equal to relation, then the root of T is always the vertex with a maximum key. In
general, if ≤ is a total order defined on the keys of T and u and v are vertices of T, we say that u is less than or equal to v if and only if κu ≤ κv. Furthermore, u is said to be a minimum vertex of T if and only if κu ≤ κv for all vertices v of T. From our discussion
above, the root is always a minimum vertex of T and is said to be at the top of the
heap, from which we derive the name heap for this data structure.
Another consequence of the heap-order property becomes apparent when we trace
out a path from the root of T to any internal vertex. Let r be the root of T and let v be
any internal vertex of T . If r, v0 , v1 , . . . , vn , v is an r-v path with corresponding keys
κr, κv0, κv1, . . . , κvn, κv

then we have

κr ≤ κv0 ≤ κv1 ≤ · · · ≤ κvn ≤ κv.
In other words, the keys encountered on the path from r to v are arranged in nondecreasing order.
The structural property of T is used to enforce that T be of as small a height as
possible. Before stating the structural property, we first define the level of a binary tree.
Recall that the depth of a vertex in T is its distance from the root. Level i of a binary
tree T refers to all vertices of T that have the same depth i. We are now ready to state
the heap-structure property.
Definition 4.2. Heap-structure property. Let T be a binary tree with height h. Then T satisfies the heap-structure property if T is nearly a complete binary tree. That is, each level i with 0 ≤ i ≤ h − 1 has 2^i vertices, whereas level h has at most 2^h vertices. The vertices at level h are filled from left to right.
If a binary tree T satisfies both the heap-order and heap-structure properties, then
T is referred to as a binary heap. By insisting that T satisfy the heap-order property,
we are able to determine the minimum vertex of T in constant time O(1). Requiring
that T also satisfy the heap-structure property allows us to determine the last vertex
of T . The last vertex of T is identified as the right-most internal vertex of T having
the greatest depth. Figure 4.1 illustrates various examples of binary heaps. The heap-structure property together with Theorem 3.16 results in the following corollary on the height of a binary heap.
[Figure 4.1: various examples of binary heaps.]
Corollary 4.3. A binary heap T with n internal vertices has height h = ⌈lg(n + 1)⌉.

Proof. Level h − 1 has at least one internal vertex. Apply Theorem 3.16 to see that T has at least

\[ 2^{(h-2)+1} - 1 + 1 = 2^{h-1} \]

internal vertices. On the other hand, level h − 1 has at most 2^(h−1) internal vertices. Another application of Theorem 3.16 shows that T has at most

\[ 2^{(h-1)+1} - 1 = 2^h - 1 \]

internal vertices. Thus n is bounded by

\[ 2^{h-1} \le n \le 2^h - 1. \]

Taking logarithms of each side in the latter bound results in

\[ \lg(n + 1) \le h \le \lg n + 1 \]

and the corollary follows.
[Figure 4.2: sequence representations of the binary heaps in Figure 4.1.]
4.2.1 Sequence representation
Any binary heap can be represented as a binary tree. Each vertex in the tree must know
about its parent and its two children. However, a more common approach is to represent
a binary heap as a sequence such as a list, array, or vector. Let T be a binary heap
consisting of n internal vertices and let L be a list of n elements. The root vertex is
represented as the list element L[0]. For each index i, the children of L[i] are L[2i + 1]
and L[2i + 2], and the parent of L[i] is L[⌊(i − 1)/2⌋].
With a sequence representation of a binary heap, each vertex need not know about
its parent and children. Such information can be obtained via simple arithmetic on
sequence indices. For example, the binary heaps in Figure 4.1 can be represented as the
corresponding lists in Figure 4.2. Note that it is not necessary to store the leaves of T
in the sequence representation.
4.2.2 Insertion
We now consider the problem of inserting a vertex v into a binary heap T . If T is empty,
inserting a vertex simply involves the creation of a new internal vertex. We let that
new internal vertex be v and let its two children be leaves. The resulting binary heap
augmented with v has exactly one internal vertex and satisfies both the heap-order and
heap-structure properties, as shown in Figure 4.3. In other words, any binary heap with
one internal vertex trivially satisfies the heap-order property.
[Figure 4.3: inserting a vertex into the empty binary heap.]
[Figure 4.4: inserting a vertex into a binary heap; panels (a)-(h) illustrate the sift-up process.]

Algorithm 4.2: Inserting a vertex into a binary heap.
Input: A binary heap T, given in sequence representation, having n internal vertices, and a vertex v to be inserted.
Output: The heap T augmented with v so that the heap-order property holds.
i ← n
while i > 0 do
    p ← ⌊(i − 1)/2⌋
    if T[p] ≤ v then
        exit the loop
    else
        T[i] ← T[p]
        i ← p
T[i] ← v
return T
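The sift-up loop maps directly onto a Python list; heap_insert is a hypothetical helper written for this sketch.

def heap_insert(T, v):
    # Place v at the end of the sequence representation, then percolate
    # it toward the root until its parent's key is no larger.
    T.append(v)
    i = len(T) - 1
    while i > 0:
        p = (i - 1) // 2         # index of the parent
        if T[p] <= v:
            break
        T[i] = T[p]              # shift the parent down one level
        i = p
    T[i] = v
    return T

H = []
for x in [23, 6, 17, 4, 2]:
    heap_insert(H, x)
print(H)   # a valid minimum binary heap: [2, 4, 17, 23, 6]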
4.2.3 Deleting the minimum vertex
The process for deleting the minimum vertex of a binary heap bears some resemblance
to that of inserting a new internal vertex into the heap. Having removed the minimum
vertex, we must then ensure that the resulting binary heap satisfies the heap-order
property. Let T be a binary heap. By the heap-order property, the root of T has a
key that is minimum among all keys of internal vertices in T . If the root r of T is the
only internal vertex of T , i.e. T is the trivial binary heap, we simply remove r and T now
becomes the empty binary heap or the trivial tree, for which the heap-order property
vacuously holds. Figure 4.5 illustrates the case of removing the root of a binary heap
having one internal vertex.
[Figure 4.5: removing the root of a binary heap having exactly one internal vertex.]
We then swap the root with the child of r that has minimum key among all of r's children. The key comparison and swapping
continue until the heap-order property holds for T . In the worst case, r would percolate
all the way down to the level that is immediately above the last level after undergoing a
number of swaps that is proportional to the height of T . Therefore, deleting the minimum
vertex of T can be achieved in time O(lg n). Figure 4.6 illustrates the deletion of the
minimum vertex of a binary heap with at least two internal vertices and the resulting
sift-down process that percolates vertices down through various levels of the heap in order
to maintain the heap-order property. Algorithm 4.3 summarizes our discussion of the
process for extracting the minimum vertex of T while also ensuring that T satisfies the
heap-order property. The pseudocode is adapted from the C implementation of binary
heaps in Howard [102]. With some minor changes, Algorithm 4.3 can be used to change
the key of the root vertex and maintain the heap-order property for the resulting binary
tree.
Algorithm 4.3: Extract the minimum vertex of a binary heap.
Input: A binary heap T , given in sequence representation, having n > 1 internal
vertices.
Output: Extract the minimum vertex of T . With one vertex removed, T must
satisfy the heap-order property.
root ← T[0]
n ← n − 1
v ← T[n]
i ← 0
j ← 0
while True do
    left ← 2i + 1
    right ← 2i + 2
    if left < n and T[left] ≤ v then
        if right < n and T[right] ≤ T[left] then
            j ← right
        else
            j ← left
    else if right < n and T[right] ≤ v then
        j ← right
    else
        T[i] ← v
        exit the loop
    T[i] ← T[j]
    i ← j
return root
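A Python rendering of the extraction procedure follows; as in Algorithm 4.3, the last element is moved into the root's position and then sifted down. The function name extract_min is an assumption for this sketch.

def extract_min(T):
    # T is a sequence-represented minimum binary heap with at least one element.
    root = T[0]
    v = T.pop()                  # remove the last vertex
    n = len(T)
    if n == 0:
        return root
    i = 0
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        j = i
        if left < n and T[left] < v:
            j = left
        if right < n and T[right] < (T[j] if j != i else v):
            j = right
        if j == i:               # heap-order restored
            T[i] = v
            break
        T[i] = T[j]              # promote the smaller child
        i = j
    return root

H = [2, 4, 17, 23, 6]
print(extract_min(H), H)   # 2 [4, 6, 17, 23]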
4.2.4 Constructing a binary heap
[Figure 4.6: extracting the minimum vertex of a binary heap; panels (a)-(h) illustrate the sift-down process.]
Since each insertion requires O(lg n) time, the method of binary heap construction via successive insertion of each of the n vertices requires O(n lg n) time. It turns out we can do a bit better and achieve the same result in linear time.
Algorithm 4.4: Heapify a binary tree.
Input: A binary tree T , given in sequence representation, having n > 1 internal
vertices.
Output: The binary tree T heapified so that it satisfies the heap-order property.
for i ← ⌊n/2⌋ − 1, . . . , 0 do
    v ← T[i]
    j ← 0
    while True do
        left ← 2i + 1
        right ← 2i + 2
        if left < n and T[left] ≤ v then
            if right < n and T[right] ≤ T[left] then
                j ← right
            else
                j ← left
        else if right < n and T[right] ≤ v then
            j ← right
        else
            T[i] ← v
            exit the while loop
        T[i] ← T[j]
        i ← j
return T
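The same sift-down, applied bottom-up as in Algorithm 4.4, gives a linear-time heap construction; Python's built-in heapq.heapify performs an equivalent operation, and the names below are assumptions for this sketch.

def sift_down(T, i, n):
    # Percolate T[i] down within T[0:n] until heap-order holds.
    v = T[i]
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        j = i
        if left < n and T[left] < v:
            j = left
        if right < n and T[right] < min(v, T[j]):
            j = right
        if j == i:
            T[i] = v
            return
        T[i] = T[j]
        i = j

def heapify(T):
    # Sift down every internal vertex, deepest subtrees first.
    n = len(T)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(T, i, n)
    return T

print(heapify([23, 17, 13, 4, 19]))   # [4, 17, 13, 23, 19]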
Any vertex of T with sequence index beyond n − 1 is a leaf. In other words, if a vertex has index ≥ ⌊n/2⌋, then its children have indices ≥ n and are therefore leaves. Conclude that the internal vertices with indices

⌊n/2⌋, ⌊n/2⌋ + 1, ⌊n/2⌋ + 2, . . . , n − 1    (4.1)

have only leaves for children.
Each subtree rooted at a vertex with a smaller index can then be heapified by performing a sift-down on this subtree. Once we have heapified all subtrees rooted at T[i] for 0 ≤ i ≤ ⌊n/2⌋ − 1, the resulting tree T is a binary heap. Our discussion is summarized in Algorithm 4.4.
Earlier in this section, we claimed that Algorithm 4.4 can be used to construct a
binary heap in worst-case linear time. To prove this, let T be a binary tree satisfying the
heap-structure property and having n internal vertices. By Corollary 4.3, T has height h = ⌈lg(n + 1)⌉. We perform a sift-down for at most 2^i vertices of depth i, where each sift-down for a subtree rooted at a vertex of depth i takes O(h − i) time. Then the total time for Algorithm 4.4 is

\[ O\Big( \sum_{0 \le i < h} 2^i (h - i) \Big) = O\Big( 2^h \sum_{0 \le i < h} \frac{h - i}{2^{h-i}} \Big) = O\Big( 2^h \sum_{k > 0} \frac{k}{2^k} \Big) = O(2^{h+1}) = O(n). \]

4.3 Binomial heaps
We are given two binary heaps T1 and T2 and we want to merge them into a single heap. We could start by successively extracting the minimum element from T2 and inserting that minimum element into T1. If T1 and T2 have m and n elements, respectively, we would perform n extractions from T2 totalling

\[ O\Big( \sum_{0 < k \le n} \lg k \Big) \]

time, and inserting all of the extracted elements from T2 into T1 requires a total runtime of

\[ O\Big( \sum_{n \le k < n+m} \lg k \Big). \qquad (4.2) \]

Approximating the combined sums by the integral

\[ \int_{0}^{n+m} \lg k \, dk = \left. \frac{k \ln k - k}{\ln 2} \right|_{k=0}^{k=n+m} + C \]

for some constant C, the above method of successive extraction and insertion therefore has a total runtime of

\[ O\left( \frac{(n+m) \ln(n+m) - n - m}{\ln 2} \right) \]

for merging two binary heaps.

Alternatively, we could slightly improve the latter runtime for merging T1 and T2 by successively extracting the last internal vertex of T2. The whole process of extracting all elements from T2 in this way takes O(n) time, and inserting each of the extracted elements into T1 still requires the runtime in expression (4.2). We approximate the sum in (4.2) by

\[ \int_{n}^{n+m} \lg k \, dk = \left. \frac{k \ln k - k}{\ln 2} \right|_{k=n}^{k=n+m} + C \]

for some constant C. Therefore the improved extraction and insertion method requires

\[ O\left( n + \frac{(n+m) \ln(n+m) - n \ln n - m}{\ln 2} \right) \]

time for merging two binary heaps.
4.3.1 Binomial trees
A binomial heap can be considered as a collection of binomial trees. The binomial tree
of order k is denoted Bk and defined recursively as follows:
1. The binomial tree of order 0 is the trivial tree.
2. The binomial tree of order k > 0 is a rooted tree, where from left to right the children of the root of Bk are roots of Bk−1, Bk−2, . . . , B0.

Various examples of binomial trees are shown in Figure 4.7. The binomial tree Bk can also be defined as follows. Let T1 and T2 be two copies of Bk−1 with root vertices r1 and r2, respectively. Then Bk is obtained by letting, say, r1 be the left-most child of r2. Lemma 4.4 lists various basic properties of binomial trees. Property (3) of Lemma 4.4 uses the binomial coefficient, whence Bk derives its name.
Lemma 4.4. Basic properties of binomial trees. Let Bk be a binomial tree of
order k ≥ 0. Then the following properties hold:

1. The order of Bk is 2^k.

2. The height of Bk is k.

3. For 0 ≤ i ≤ k, Bk has $\binom{k}{i}$ vertices at depth i.

4. The root of Bk is the only vertex with maximum degree ∆(Bk) = k. If the children of the root are numbered k − 1, k − 2, . . . , 0 from left to right, then child i is the root of the subtree Bi.
Proof. We use induction on k. The base case for each of the above properties is B0 ,
which trivially holds.
(1) By our inductive hypothesis, Bk−1 has order 2^(k−1). Since Bk is comprised of two copies of Bk−1, conclude that Bk has order 2^(k−1) + 2^(k−1) = 2^k.
[Figure 4.7: the binomial trees B0, B1, B2, B3, B4, B5.]
(2) The binomial tree Bk is comprised of two copies of Bk−1, the root of one copy being the left-most child of the root of the other copy. Then the height of Bk is one greater than the height of Bk−1. By our inductive hypothesis, Bk−1 has height k − 1 and therefore Bk has height (k − 1) + 1 = k.
(3) Denote by D(k, i) the number of vertices of depth i in Bk. As Bk is comprised of two copies of Bk−1, a vertex at depth i in Bk−1 appears once in Bk at depth i and a second time at depth i + 1. By our inductive hypothesis,

\[ D(k, i) = D(k-1, i) + D(k-1, i-1) = \binom{k-1}{i} + \binom{k-1}{i-1} = \binom{k}{i}, \]

where we used Pascal's formula, which states that

\[ \binom{n+1}{r} = \binom{n}{r-1} + \binom{n}{r} \]

for any positive integers n and r with r ≤ n.
(4) This property follows from the definition of Bk .
Corollary 4.5. If a binomial tree has order n ≥ 0, then the degree of any vertex i is bounded by deg(i) ≤ lg n.

Proof. Apply properties (1) and (4) of Lemma 4.4.
4.3.2 Binomial heaps

In 1978, Jean Vuillemin [186] introduced binomial heaps as a data structure for implementing priority queues. Mark R. Brown [37, 38] subsequently extended Vuillemin's work, providing detailed analysis of binomial heaps and introducing an efficient implementation.
A binomial heap H can be considered as a collection of binomial trees. Each vertex
in H has a corresponding key and all vertex keys of H belong to a totally ordered set
having total order ≤. The heap also satisfies the following binomial heap properties:
Heap-order property. Let Bk be a binomial tree in H. If v is a vertex of Bk other than the root and p is the parent of v, with corresponding keys κv and κp, respectively, then κp ≤ κv.

Root-degree property. For any integer k ≥ 0, H contains at most one binomial tree whose root has degree k.
If H is comprised of the binomial trees Bk0 , Bk1 , . . . , Bkn for nonnegative integers ki ,
we can consider H as a forest made up of the trees Bki . We can also represent H as a tree
in the following way. List the binomial trees of H as Bk0 , Bk1 , . . . , Bkn in nondecreasing
order of root degrees, i.e. the root of Bki has order less than or equal to the root of Bkj
if and only if ki ≤ kj. The root of H is the root of Bk0 and the root of each Bki has
for its child the root of Bki+1 . Both the forest and tree representations are illustrated in
Figure 4.8 for the binomial heap comprised of the binomial trees B0 , B1 , B3 .
The binary representation of n requires 1 + ⌊lg n⌋ bits, hence

\[ n = \sum_{i=0}^{\lfloor \lg n \rfloor} b_i 2^i. \]

Apply property (1) of Lemma 4.4 to see that the binomial tree Bi is in H if and only if the i-th bit is bi = 1. Conclude that H has at most 1 + ⌊lg n⌋ binomial trees.
4.3.3
Let H be a binomial heap comprised of the binomial trees Bk0, Bk1, . . . , Bkn, where the root of Bki has order less than or equal to the root of Bkj if and only if ki ≤ kj.
Denote by rki the root of the binomial tree Bki . If v is a vertex of H, denote by
child[v] the left-most child of v and by sibling[v] we mean the sibling immediately to
the right of v. Furthermore, let parent[v] be the parent of v and let degree[v] denote
the degree of v. If v has no children, we set child[v] = NULL. If v is one of the roots
rki , we set parent[v] = NULL. And if v is the right-most child of its parent, then we set
sibling[v] = NULL.
The roots rk0 , rk1 , . . . , rkn can be organized as a linked list, called a root list, with
two functions for accessing the next root and the previous root. The root immediately
following rki is denoted next[rki ] = sibling[v] = rki+1 and the root immediately before rki
is written prev[rki ] = rki1 . For rk0 and rkn , we set next[rkn ] = sibling[v] = NULL and
prev[rk0 ] = NULL. We also define the function head[H] that simply returns rk0 whenever
H has at least one element, and head[H] = NULL otherwise.
Minimum vertex
To find the minimum vertex, we find the minimum among rk0 , rk1 , . . . , rkm because by
definition the root rki is the minimum vertex of the binomial tree Bki . If H has n vertices,
we need to check at most 1 + blg nc vertices to find the minimum vertex of H. Therefore
determining the minimum vertex of H takes O(lg n) time. Algorithm 4.5 summarizes
our discussion.
Algorithm 4.5: Determine the minimum vertex of a binomial heap.
Input: A binomial heap H of order n > 0.
Output: The minimum vertex of H.
u ← NULL
v ← head[H]
min ← ∞
while v ≠ NULL do
    if κv < min then
        min ← κv
        u ← v
    v ← sibling[v]
return u
Merging heaps
Recall that Bk is constructed by linking the root of one copy of Bk−1 with the root of another copy of Bk−1. When merging two binomial heaps whose roots have the same
degree, we need to repeatedly link the respective roots. The root linking procedure runs
in constant time O(1) and is rather straightforward, as presented in Algorithm 4.6.
Algorithm 4.6: Linking the roots of binomial heaps.
Input: Two copies of Bk−1, one rooted at u and the other at v.
Output: The respective roots of the two copies of Bk−1 linked, with one root becoming the parent of the other.
parent[u] ← v
sibling[u] ← child[v]
child[v] ← u
degree[v] ← degree[v] + 1
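In Python, the root-linking step can be sketched with an explicit node class carrying the usual pointer fields; BinomialNode and link are hypothetical names for this illustration.

class BinomialNode:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.child = None        # left-most child
        self.sibling = None      # next sibling to the right
        self.degree = 0

def link(u, v):
    # Make u a child of v, exactly as in Algorithm 4.6; two copies of
    # B_{k-1} rooted at u and v become a single B_k rooted at v.
    u.parent = v
    u.sibling = v.child
    v.child = u
    v.degree += 1

a, b = BinomialNode(7), BinomialNode(3)
link(a, b)                     # two B_0's become a B_1 rooted at 3
print(b.degree, b.child.key)   # 1 7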
Besides linking the roots of two copies of Bk−1, we also need to merge the root lists
of two binomial heaps H1 and H2 . The resulting merged list is sorted in nondecreasing
order of degree. Let L1 be the root list of H1 and let L2 be the root list of H2 . First
we create an empty list L. As the lists Li are already sorted in nondecreasing order of
vertex degree, we use merge sort to merge the Li into a single sorted list. The whole
procedure for merging the Li takes linear time O(n), where n = |L1| + |L2| − 1. Refer to
Algorithm 4.7 for pseudocode of the procedure just described.
Having clarified the root linking and root lists merging procedures, we are now ready
to describe a procedure for merging two nonempty binomial heaps H1 and H2 into a
single binomial heap H.

Algorithm 4.7: Merging the root lists of two binomial heaps.
Input: Root lists L1 and L2, each sorted in nondecreasing order of vertex degree.
Output: A single merged list L, sorted in nondecreasing order of degree.
i ← 1
j ← 1
L ← []
n ← |L1| + |L2| − 1
append(L1, ∞)    /* sentinel */
append(L2, ∞)    /* sentinel */
for k ← 0, 1, . . . , n do
    if deg(L1[i]) ≤ deg(L2[j]) then
        append(L, L1[i])
        i ← i + 1
    else
        append(L, L2[j])
        j ← j + 1
return L

Initially there are at most two copies of B0, one from each of
the Hi . If two copies of B0 are present, we let the root of one be the parent of the other
as per Algorithm 4.6, producing B1 as a result. From thereon, we generally have at most
three copies of Bk for some integer k > 0: one from H1 , one from H2 , and the third from
a previous merge of two copies of Bk−1. In the presence of two or more copies of Bk, we merge two copies as per Algorithm 4.6 to produce Bk+1. If Hi has ni vertices, then Hi has at most 1 + ⌊lg ni⌋ binomial trees, from which it is clear that merging H1 and H2 requires

max(1 + ⌊lg n1⌋, 1 + ⌊lg n2⌋)

steps. Letting N = max(n1, n2), we see that merging H1 and H2 takes logarithmic time O(lg N). The operation of merging two binomial heaps is presented in pseudocode as Algorithm 4.8, which is adapted from Cormen et al. [54, p. 463] and the C implementation of binomial queues in [102]. A word of warning is in order here: Algorithm 4.8 is destructive in the sense that it modifies the input heaps Hi in-place without making copies of those heaps.
Vertex insertion
Let v be a vertex with corresponding key κv and let H1 be a binomial heap of n vertices.
The single vertex v can be considered as a binomial heap H2 comprised of exactly the
binomial tree B0 . Then inserting v into H1 is equivalent to merging the heaps Hi and
can be accomplished in O(lg n) time. Refer to Algorithm 4.9 for pseudocode of this
straightforward procedure.
4.4 Binary search trees
A binary search tree (BST) is a rooted binary tree T = (V, E) having vertex weight function κ : V → R. The weight of each vertex v is referred to as its key, denoted κv. Each vertex v of T satisfies the following properties:
Left subtree property. The left subtree of v contains only vertices whose keys are at most κv. That is, if u is a vertex in the left subtree of v, then κu ≤ κv.

Right subtree property. The right subtree of v contains only vertices whose keys are at least κv. In other words, any vertex u in the right subtree of v satisfies κv ≤ κu.
Recursion property. Both the left and right subtrees of v must also be binary
search trees.
The above are collectively called the binary search tree property. See Figure 4.10 for
an example of a binary search tree. Based on the binary search tree property, we can
use in-order traversal (see Algorithm 3.12) to obtain a listing of the vertices of a binary
search tree sorted in nondecreasing order of keys.
[Figure 4.10: an example of a binary search tree.]
4.4.1 Searching
Given a BST T and a key k, we want to locate a vertex (if one exists) in T whose key is k.
The search procedure for a BST is reminiscent of the binary search algorithm discussed
in problem 2.10. We begin by examining the root v0 of T. If κv0 = k, the search is successful. However, if κv0 ≠ k then we have two cases to consider. In the first case, if k < κv0 then we search the left subtree of v0. The second case occurs when k > κv0, in which case we search the right subtree of v0. Repeat the process until a vertex v in T is found for which k = κv or the indicated subtree is empty. Whenever the target key
is different from the key of the vertex we are currently considering, we move down one
level of T . Thus if h is the height of T , it follows that searching T takes a worst-case
runtime of O(h). The above procedure is presented in pseudocode as Algorithm 4.11.
Note that if a vertex v does not have a left subtree, the operation of locating the root of v's left subtree should return NULL. A similar comment applies when v does not have
a right subtree. Furthermore, from the structure of Algorithm 4.11, if the input BST is
empty then NULL is returned. See Figure 4.11 for an illustration of locating vertices with
given keys in a BST.
Algorithm 4.11: Locate a key in a binary search tree.
Input: A binary search tree T and a target key k.
Output: A vertex in T with key k. If no such vertex exists, return NULL.
v ← root[T]
while v ≠ NULL and k ≠ κv do
    if k < κv then
        v ← leftchild[v]
    else
        v ← rightchild[v]
return v
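A direct Python counterpart of the search loop is given below; the BSTNode class is an assumption introduced for this sketch.

class BSTNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_search(root, k):
    # Walk down from the root, branching by key comparison.
    v = root
    while v is not None and k != v.key:
        v = v.left if k < v.key else v.right
    return v                   # the vertex with key k, or None

T = BSTNode(10, BSTNode(5, BSTNode(3), BSTNode(7)), BSTNode(15))
print(bst_search(T, 7).key)    # 7
print(bst_search(T, 8))        # None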
From the binary search tree property, deduce that a vertex of a BST T with minimum
key can be found by starting from the root of T and repeatedly traversing left subtrees.
When we have reached the left-most vertex v of T , querying for the left subtree of v
should return NULL. At this point, we conclude that v is a vertex with minimum key.
Each query for the left subtree moves us one level down T , resulting in a worst-case
runtime of O(h) with h being the height of T . See Algorithm 4.12 for pseudocode of the
procedure.
The procedure for finding a vertex with maximum key is analogous to that for finding
[Figures 4.11-4.13: locating target keys, minimum and maximum vertices, and successors/predecessors in a BST; panel (a) of Figure 4.13: successor of 9.]
one with minimum key. Starting from the root of T , we repeatedly traverse right subtrees
until we encounter the right-most vertex, which by the binary search tree property has
maximum key. This procedure has the same worst-case runtime of O(h). Figure 4.12
illustrates the process of locating the minimum and maximum vertices of a BST.
Algorithm 4.12: Finding a vertex with minimum key in a BST.
Input: A nonempty binary search tree T .
Output: A vertex of T with minimum key.
v ← root of T
while leftchild[v] ≠ NULL do
    v ← leftchild[v]
return v
Corresponding to the notions of left- and right-children, we can also define successors
and predecessors as follows. Suppose v is not a maximum vertex of a nonempty BST
T. The successor of v is a vertex in T distinct from v with the smallest key greater than or equal to κv. Similarly, for a vertex v that is not a minimum vertex of T, the predecessor of v is a vertex in T distinct from v with the greatest key less than or equal to κv. The notions of successors and predecessors are concerned with relative key order, not a vertex's position within the hierarchical structure of a BST. For instance, from
Figure 4.10 we see that the successor of the vertex u with key 8 is the vertex v with key
10, i.e. the root, even though v is an ancestor of u. The predecessor of the vertex a with
key 4 is the vertex b with key 3, i.e. the minimum vertex, even though b is a descendant
of a.
We now describe a method to systematically locate the successor of a given vertex. Let T be a nonempty BST and let v ∈ V(T) not be a maximum vertex of T. If v has a right subtree, then we find a minimum vertex of v's right subtree. In case v does not have a right subtree, we backtrack up one level to v's parent u = parent(v). If v is the root of the right subtree of u, we backtrack up one level again to u's parent, making the assignments v ← u and u ← parent(u). Otherwise we return v's parent. Repeat the above backtracking procedure until the required successor is found. Our discussion is summarized in Algorithm 4.13. Each time we backtrack to a vertex's parent, we move up one level, hence the worst-case runtime of Algorithm 4.13 is O(h), with h being the height of T. The procedure for finding predecessors is similar. Refer to Figure 4.13 for an illustration of locating successors and predecessors.
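The backtracking strategy of Algorithm 4.13 can be sketched in Python as follows, assuming each node carries left, right, and parent pointers; the PNode class and field names are assumptions for this example.

class PNode:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None

def minimum(v):
    # Left-most vertex of the subtree rooted at v.
    while v.left is not None:
        v = v.left
    return v

def successor(v):
    # The minimum of the right subtree if one exists; otherwise the
    # nearest ancestor reached by backtracking out of a right subtree.
    if v.right is not None:
        return minimum(v.right)
    u = v.parent
    while u is not None and v is u.right:
        v, u = u, u.parent
    return u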
4.4.2 Insertion

Inserting a vertex into T takes O(h) time, where h is the height of T. Algorithm 4.14 presents pseudocode of our discussion and Figure 4.14 illustrates how to insert a vertex into a BST.
Algorithm 4.14: Inserting a vertex into a binary search tree.
Input: A binary search tree T and a vertex x to be inserted into T .
Output: The same BST T but augmented with x.
u ← NULL
v ← root of T
while v ≠ NULL do
    u ← v
    if κx < κv then
        v ← leftchild[v]
    else
        v ← rightchild[v]
parent[x] ← u
if u = NULL then
    root[T] ← x
else
    if κx < κu then
        leftchild[u] ← x
    else
        rightchild[u] ← x
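In Python, the insertion walk can be written without parent pointers by attaching the new vertex where the walk falls off the tree; the BSTNode class and bst_insert are assumptions for this sketch.

class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, x):
    # Returns the root of the tree with x attached as a new leaf.
    if root is None:
        return x
    v = root
    while True:
        if x.key < v.key:
            if v.left is None:
                v.left = x
                return root
            v = v.left
        else:
            if v.right is None:
                v.right = x
                return root
            v = v.right

T = None
for k in [10, 5, 15, 7]:
    T = bst_insert(T, BSTNode(k))
print(T.key, T.left.key, T.left.right.key)   # 10 5 7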
4.4.3 Deletion
Whereas insertion into a BST is straightforward, removing a vertex requires much more
work. Let T be a nonempty binary search tree and suppose we want to remove v V (T )
from T . Having located the position that v occupies within T , we need to consider three
separate cases: (1) v is a leaf; (2) v has one child; (3) v has two children.
1. If v is a leaf, we simply remove v from T and the procedure is complete. The
resulting tree without v satisfies the binary search tree property.
[Figure 4.14: inserting a vertex into a binary search tree.]
Algorithm 4.15: Deleting a vertex from a binary search tree.
Input: A nonempty binary search tree T and a vertex x of T to be removed.
Output: The BST T with x removed.
u ← NULL
v ← NULL
if leftchild[x] = NULL or rightchild[x] = NULL then
    v ← x
else
    v ← successor of x
if leftchild[v] ≠ NULL then
    u ← leftchild[v]
else
    u ← rightchild[v]
if u ≠ NULL then
    parent[u] ← parent[v]
if parent[v] = NULL then
    root[T] ← u
else
    if v = leftchild[parent[v]] then
        leftchild[parent[v]] ← u
    else
        rightchild[parent[v]] ← u
if v ≠ x then
    κx ← κv
    copy v's auxiliary data into x
2. Suppose v has the single child u. Removing v would disconnect T , a situation that
can be prevented by splicing out u and letting u occupy the position previously
held by v. The resulting tree with v removed as described satisfies the binary
search tree property.
3. Finally suppose v has two children and let s and p be the successor and predecessor
of v, respectively. It can be shown that s has no left-child and p has no right-child.
We can choose to either splice out s or p. Say we choose to splice out s. Then we
remove v and let s hold the position previously occupied by v. The resulting tree
with v thus removed satisfies the binary search tree property.
The above procedure is summarized in Algorithm 4.15, which is adapted from [54, p.262].
Figure 4.15 illustrates the various cases to be considered when removing a vertex from
a BST. Note that in Algorithm 4.15, the process of finding the successor dominates the
runtime of the entire algorithm. Other operations in the algorithm take at most constant
time. Therefore deleting a vertex from a binary search tree can be accomplished in worst-case O(h) time, where h is the height of the BST under consideration.
4.5 AVL trees
To motivate the need for AVL trees, note the lack of a structural property for binary
search trees similar to the structural property for binary heaps. Unlike binary heaps,
a BST is not required to have as small a height as possible. As a consequence, any
given nonempty collection C = {v0 , v1 , . . . , vk } of weighted vertices can be represented by
various BSTs with different heights; see Figure 4.16. Some BST representations of C have
heights smaller than other BST representations of C. Those BST representations with
smaller heights can result in reduced time for basic operations such as search, insertion,
and deletion and out-perform BST representations having larger heights. To achieve
logarithmic or near-logarithmic time complexity for basic operations, it is desirable to
maintain a BST with as small a height as possible.
Adelson-Velskii and Landis [1] introduced in 1962 a criterion for constructing and maintaining binary search trees having logarithmic heights. Recall that the height of a tree is the maximum depth of the tree. Then the Adelson-Velskii-Landis criterion can be expressed as follows.
Definition 4.6. Height-balance property. Let T be a binary tree and suppose v is an internal vertex of T. Let hℓ be the height of the left subtree of v and let hr be the height of v's right subtree. Then v is said to be height-balanced if |hℓ − hr| ≤ 1. If every internal vertex of T is height-balanced, then the whole tree T is height-balanced.
Binary trees having the height-balance property are called AVL trees. The structure
of such trees is such that given any internal vertex v of an AVL tree, the heights of the
left and right subtrees of v differ by at most 1. Complete binary trees are trivial examples
of AVL trees, as are nearly complete binary trees. A less trivial example is the class of Fibonacci trees, so named because their construction bears some resemblance to how Fibonacci numbers are produced. Fibonacci trees
can be constructed recursively in the following manner. The Fibonacci tree F0 of height
0 is the trivial tree. The Fibonacci tree F1 of height 1 is a binary tree whose left and
right subtrees are both F0. For n > 1, the Fibonacci tree Fn of height n is a binary tree whose two subtrees are the Fibonacci trees Fn−1 and Fn−2.

[Figure 4.16: BST representations of the same collection of weighted vertices, having different heights; panels (a)-(d).]

[Figure 4.17: the Fibonacci trees F0, F1, F2, F3, F4, F5.]
Theorem 4.7. Logarithmic height. The height h of an AVL tree with n internal vertices is bounded by

\[ \lg(n + 1) \le h < 2 \lg n + 1. \]

Proof. Any binary tree of height h has at most 2^h leaves. From the proof of Corollary 4.3, we see that n is bounded by 2^(h−1) ≤ n ≤ 2^h − 1 and in particular n + 1 ≤ 2^h. Take the logarithm of both sides to get h ≥ lg(n + 1).
Figure 4.18: Fibonacci tree F6 with subtree heights for vertex labels.
Now instead of deriving an upper bound for h directly, we find the minimum order of an AVL tree of height h and from there derive the required upper bound for h. Let T be an AVL tree of minimum order. One subtree of T has height h − 1. The other subtree has height h − 1 or h − 2. Our objective is to construct T to have as small a number of vertices as possible. Without loss of generality, let the left and right subtrees of T have heights h − 2 and h − 1, respectively. The Fibonacci tree Fh of height h fits the above requirements for T. If N(h) denotes the number of internal vertices of Fh, then

\[ N(h) = 1 + N(h-1) + N(h-2) \qquad (4.3) \]

and, as N is strictly increasing,

\[ N(h) > N(h-2) + N(h-2) = 2 N(h-2). \qquad (4.4) \]
4.5.1 Insertion
The algorithm for insertion into a BST can be modified and extended to support insertion
into an AVL tree. Let T be an AVL tree having the binary search tree property, and v
a vertex to be inserted into T . In the trivial case, T is the null tree so inserting v into T
is equivalent to letting T be the trivial tree rooted at v. Consider now the case where T
has at least one vertex. Apply Algorithm 4.14 to insert v into T and call the resulting
augmented tree Tv . But our problem is not yet over; Tv may violate the height-balance
property. To complete the insertion procedure, we require a technique to restore, if
necessary, the height-balance property to Tv .
To see why the augmented tree Tv may not necessarily be height-balanced, let u be
the parent of v in Tv, where previously u was a vertex of T (and possibly a leaf). In the
original AVL tree T , let Pu : r = u0 , u1 , . . . , uk = u be the path from the root r of T
to u with corresponding subtree heights H(ui ) = hi for i = 0, 1, . . . , k. An effect of the
insertion is to extend the path Pu to the longer path Pv : r = u0 , u1 , . . . , uk = u, v and
possibly increase subtree heights by one. One of two cases can occur with respect to Tv .
1. Height-balanced: Tv is height-balanced so no need to do anything further. A simple
way to detect this is to consider the subtree S rooted at u, the parent of v. If S
has two children, then no height adjustment needs to take place for vertices in Pu,
hence Tv is an AVL tree (see Figure 4.19). Otherwise we perform any necessary
height adjustment for vertices in Pu , starting from uk = u and working our way
up to the root r = u0 . After adjusting the height of ui , we test to see whether
ui (with its new height) is height-balanced. If each of the ui with their new heights
are height-balanced, then Tv is height-balanced.
2. Height-unbalanced: During the height adjustment phase, it may happen that some
uj with its new height is not height-balanced. Among all such height-unbalanced
vertices, let uℓ be the first height-unbalanced vertex detected during the process of height adjustment starting from uk = u and going up towards r = u0. We need to rebalance the subtree rooted at uℓ. Then we continue on adjusting heights of the remaining vertices in Pu, also performing height-rebalancing where necessary.
Case 1 is relatively straightforward, but it is case 2 that involves much intricate work.
Figure 4.19: Augmented tree is balanced after insertion; vertex labels are heights.
We now turn to the case where inserting a vertex v into a nonempty AVL tree T
results in an augmented tree Tv that is not height-balanced. A general idea for rebalancing (and hence restoring the height-balance property to) Tv is to determine where in
Tv the height-balance property is first violated (the search phase), and then to locally
rebalance subtrees at and around the point of violation (the repair phase). A description
of the search phase follows. Let
Pv : r = u0 , u1 , . . . , uk = u, v
be the path from the root r of Tv (and hence of T ) to v. Traversing upward from v to r,
let z be the first height-unbalanced vertex. Among the children of z, let y be the child of
higher height and hence an ancestor of v. Similarly, among the children of y let x be the
child of higher height. In case a tie occurs, let x be the child of y that is also an ancestor
of v. As each vertex is an ancestor of itself, it is possible that x = v. Furthermore, x is
a grandchild of z because x is a child of y, which in turn is a child of z. The vertex z
is not height-balanced due to inserting v into the subtree rooted at y, hence the height
of y is 2 greater than its sibling (see Figure 4.20, where height-unbalanced vertices are
colored red). We have determined the location at which the height-balance property is
first violated.
Figure 4.20: Augmented tree is unbalanced after insertion; vertex labels are heights.
We now turn to the repair phase. The central question is: how are we to restore the height-balance property to the subtree rooted at z? Trinode restructuring is the process whereby the height-balance property is restored; the prefix "tri" refers to the three vertices x, y, z that are central to this process. A common name for trinode restructuring is rotation, in view of the geometric interpretation of the process. Figure 4.21 distinguishes four rotation possibilities, two of which are symmetrical to the other two. The single left rotation in Figure 4.21(a) occurs when height(x) = height(root(T0)) + 1 and is detailed in Algorithm 4.16. The single right rotation in Figure 4.21(b) occurs when height(x) = height(root(T3)) + 1; see Algorithm 4.17 for pseudocode. Figure 4.21(c) illustrates the case of a right-left double rotation, which occurs when height(root(T3)) = height(root(T0)); see Algorithm 4.18 for pseudocode to handle the rotation. The fourth case is illustrated in Figure 4.21(d) and occurs when height(root(T0)) = height(root(T3)); refer to Algorithm 4.19 for pseudocode to handle this left-right double rotation. Each of the four algorithms mentioned above runs in constant time O(1) and preserves the in-order traversal ordering of all vertices in Tv. In all, the insertion procedure is summarized in Algorithm 4.20. If h is the height of T, locating and inserting the vertex v takes worst-case O(h) time, which is also the worst-case runtime for the search-and-repair phase. Thus, letting n be the number of vertices in T, insertion takes worst-case O(lg n) time.
4.5.2 Deletion
The process of removing a vertex from an AVL tree is similar to the insertion procedure. However, instead of using the insertion algorithm for BST, we use the deletion
[Figure 4.21: the four rotations: (a) single left rotation; (b) single right rotation; (c) double rotation: right rotation of x over y, then left rotation over z; (d) double rotation: left rotation of x over y, then right rotation over z.]
Algorithm 4.16: Single left rotation.
rightchild[parent[z]] ← y
parent[y] ← parent[z]
parent[z] ← y
leftchild[y] ← z
parent[root[T1]] ← z
rightchild[z] ← root[T1]
height[z] ← 1 + max(height[root[T0]], height[root[T1]])
height[x] ← 1 + max(height[root[T2]], height[root[T3]])
height[y] ← 1 + max(height[x], height[z])
Algorithm 4.17: Single right rotation.
leftchild[parent[z]] ← y
parent[y] ← parent[z]
parent[z] ← y
rightchild[y] ← z
parent[root[T2]] ← z
leftchild[z] ← root[T2]
height[x] ← 1 + max(height[root[T0]], height[root[T1]])
height[z] ← 1 + max(height[root[T2]], height[root[T3]])
height[y] ← 1 + max(height[x], height[z])
Algorithm 4.18: Right-left double rotation.
rightchild[parent[z]] ← x
parent[x] ← parent[z]
parent[z] ← x
leftchild[x] ← z
rightchild[x] ← y
rightchild[z] ← root[T1]
parent[root[T1]] ← z
parent[y] ← x
leftchild[y] ← root[T2]
parent[root[T2]] ← y
height[z] ← 1 + max(height[root[T0]], height[root[T1]])
height[y] ← 1 + max(height[root[T2]], height[root[T3]])
height[x] ← 1 + max(height[y], height[z])
Algorithm 4.19: Left-right double rotation.
leftchild[parent[z]] ← x
parent[x] ← parent[z]
parent[z] ← x
rightchild[x] ← z
leftchild[z] ← root[T2]
parent[root[T2]] ← z
leftchild[x] ← y
parent[y] ← x
rightchild[y] ← root[T1]
parent[root[T1]] ← y
height[z] ← 1 + max(height[root[T2]], height[root[T3]])
height[y] ← 1 + max(height[root[T0]], height[root[T1]])
height[x] ← 1 + max(height[y], height[z])
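To illustrate one of the four cases, the following Python sketch performs a single left rotation on a pointer-based subtree. AVLNode, h, and rotate_left are names invented for this example, and empty subtrees are given height −1 by assumption.

def h(v):
    return v.height if v is not None else -1

class AVLNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(h(left), h(right))

def rotate_left(z):
    # z's right child y becomes the subtree root; z adopts y's left
    # subtree (T1) as its right subtree, then heights are recomputed.
    y = z.right
    z.right = y.left
    y.left = z
    z.height = 1 + max(h(z.left), h(z.right))
    y.height = 1 + max(h(y.left), h(y.right))
    return y                   # new root of the rotated subtree

z = AVLNode(10, AVLNode(5), AVLNode(20, AVLNode(15), AVLNode(30)))
r = rotate_left(z)
print(r.key, r.left.key, r.right.key)   # 20 10 30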
Algorithm 4.15 for BST to remove the target vertex from an AVL tree. The resulting tree may violate the height-balance property, which can be restored using trinode
restructuring.
Let T be an AVL tree having vertex v and suppose we want to remove v from T . In
the trivial case, T is the trivial tree whose sole vertex is v. Deleting v is simply removing
it from T so that T becomes the null tree. On the other hand, suppose T has n > 1 vertices. Apply Algorithm 4.15 to remove v from T and call the resulting tree with
v removed Tv . It is possible that Tv does not satisfy the height-balance property. To
restore the height-balance property to Tv , let u be the parent of v in T prior to deleting
v from T . Having deleted v from T , let P : r = u0 , u1 , . . . , uk = u be the path from the
root r of Tv to u. Adjust the height of u and, traversing from u up to r, perform height
adjustment to each vertex in P and where necessary carry out trinode restructuring. The
resulting algorithm is very similar to Algorithm 4.20; see Algorithm 4.21 for pseudocode.
The deletion procedure via Algorithm 4.15 requires worst-case runtime O(lg n), where
n is the number of vertices in T , and the height-adjustment process runs in worst-case
O(lg n) time as well. Thus Algorithm 4.21 has worst-case runtime of O(lg n).
4.6 Problems

No problem is so formidable that you can't walk away from it.
Charles M. Schulz
Algorithm 4.21: Deleting a vertex from an AVL tree.
Input: An AVL tree T and a vertex v of T to be deleted.
Output: The tree T with v removed and the height-balance property restored.
u ← parent[v]
delete v from T as per Algorithm 4.15
adjust the height of u
/* begin height adjustment */
x ← NULL
y ← NULL
z ← NULL
while parent[u] ≠ NULL do
    u ← parent[u]
    if leftchild[u] ≠ NULL and rightchild[u] ≠ NULL then
        hℓ ← height[leftchild[u]]
        hr ← height[rightchild[u]]
        height[u] ← 1 + max(hℓ, hr)
        if |hℓ − hr| > 1 then
            if height[rightchild[rightchild[u]]] = height[leftchild[u]] + 1 then
                z ← u
                y ← rightchild[z]
                x ← rightchild[y]
                trinode restructuring as per Algorithm 4.16
                continue with next iteration of loop
            if height[leftchild[leftchild[u]]] = height[rightchild[u]] + 1 then
                z ← u
                y ← leftchild[z]
                x ← leftchild[y]
                trinode restructuring as per Algorithm 4.17
                continue with next iteration of loop
            if height[rightchild[rightchild[u]]] = height[leftchild[u]] then
                z ← u
                y ← rightchild[z]
                x ← leftchild[y]
                trinode restructuring as per Algorithm 4.18
                continue with next iteration of loop
            if height[leftchild[leftchild[u]]] = height[rightchild[u]] then
                z ← u
                y ← leftchild[z]
                x ← rightchild[y]
                trinode restructuring as per Algorithm 4.19
                continue with next iteration of loop
    if leftchild[u] ≠ NULL then
        height[u] ← 1 + height[leftchild[u]]
        continue with next iteration of loop
    if rightchild[u] ≠ NULL then
        height[u] ← 1 + height[rightchild[u]]
        continue with next iteration of loop
4.6. Let S be a sequence of n > 1 real numbers. How can we use algorithms described
in section 4.2 to sort S?
4.7. The binary heaps discussed in section 4.2 are properly called minimum binary
heaps because the root of the heap is always the minimum vertex. A corresponding notion is that of maximum binary heaps, where the root is always the maximum
element. Describe algorithms analogous to those in section 4.2 for managing maximum binary heaps.
4.8. What is the total time required to extract all elements from a binary heap?
4.9. Numbers of the form $\binom{n}{r}$ are called binomial coefficients. They also count the number of r-combinations from a set of n objects. Algorithm 4.22 presents pseudocode to generate all the r-combinations of a set of n distinct objects. What is the worst-case runtime of Algorithm 4.22? Prove the correctness of Algorithm 4.22.
4.10. In contrast to enumerating all the r-combinations of a set of n objects, we may
only want to generate a random r-combination. Describe and present pseudocode
of a procedure to generate a random r-combination of {1, 2, . . . , n}.
4.11. A problem related to the r-combinations of the set S = {1, 2, . . . , n} is that of
generating the permutations of S. Algorithm 4.23 presents pseudocode to generate
all the permutations of S in increasing lexicographic order. Find the worst-case
runtime of this algorithm and prove its correctness.
4.12. Provide a description and pseudocode of an algorithm to generate a random permutation of {1, 2, . . . , n}.
4.13. Takaoka [175] presents a general method for combinatorial generation that runs in O(1) time. How can Takaoka's method be applied to generating combinations and permutations?
4.14. The proof of Lemma 4.4 relies on Pascal's formula, which states that for any positive integers n and r such that r ≤ n, the following identity holds:

    \binom{n+1}{r} = \binom{n}{r-1} + \binom{n}{r}.

Prove Pascal's formula.
4.15. Let m, n, r be nonnegative integers such that r ≤ n. Prove the Vandermonde convolution

    \binom{m+n}{r} = \sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k}.

The latter equation, also known as Vandermonde's identity, was already known as early as 1303 in China by Chu Shi-Chieh. Alexandre-Théophile Vandermonde independently discovered it and his result was published in 1772.
Algorithm 4.22: Generating all the r-combinations of a set of n distinct objects.
L ← [ ]
ci ← i for i = 1, 2, . . . , r
append(L, c1c2 · · · cr)
for i ← 2, 3, . . . , \binom{n}{r} do
    m ← r
    max ← n
    while cm = max do
        m ← m − 1
        max ← max − 1
    cm ← cm + 1
    cj ← cj−1 + 1 for j = m + 1, m + 2, . . . , r
    append(L, c1c2 · · · cr)
return L
Algorithm 4.23: Generating all the permutations of {1, 2, . . . , n} in increasing lexicographic order.
L ← [ ]
ci ← i for i = 1, 2, . . . , n
append(L, c1c2 · · · cn)
for i ← 2, 3, . . . , n! do
    m ← n − 1
    while cm > cm+1 do
        m ← m − 1
    k ← n
    while cm > ck do
        k ← k − 1
    swap the values of cm and ck
    p ← m + 1
    q ← n
    while p < q do
        swap the values of cp and cq
        p ← p + 1
        q ← q − 1
    append(L, c1c2 · · · cn)
return L
4.17. Let n be a positive integer. How many distinct binomial heaps having n vertices
are there?
4.18. The algorithms described in section 4.3 are formally for minimum binomial heaps
because the vertex at the top of the heap is always the minimum vertex. Describe
analogous algorithms for maximum binomial heaps.
4.19. If H is a binomial heap, what is the total time required to extract all elements
from H?
4.20. Frederickson [78] describes an O(k) time algorithm for finding the k-th smallest
element in a binary heap. Provide a description and pseudocode of Fredericksons
algorithm and prove its correctness.
4.21. Fibonacci heaps [79] allow for amortized O(1) time with respect to finding the
minimum element, inserting an element, and merging two Fibonacci heaps. Deleting the minimum element takes amortized time O(lg n), where n is the number
of vertices in the heap. Describe and provide pseudocode of the above Fibonacci
heap operations and prove the correctness of the procedures.
4.22. Takaoka [176] introduces another type of heap called a 2-3 heap. Deleting the
minimum element takes amortized O(lg n) time with n being the number of vertices
in the 2-3 heap. Inserting an element into the heap takes amortized O(1) time.
Describe and provide pseudocode of the above 2-3 heap operations. Under which
conditions would 2-3 heaps be more efficient than Fibonacci heaps?
4.23. In 2000, Chazelle [47] introduced the soft heap, which can perform common heap operations in amortized O(1) time. He then applied [46] the soft heap to realize a very efficient implementation of an algorithm for finding minimum spanning trees. In 2009, Kaplan and Zwick [114] provided a simple implementation and analysis of Chazelle's soft heap. Describe soft heaps and provide pseudocode of common heap operations. Prove the correctness of the algorithms and provide runtime analyses. Describe how to use soft heaps to realize an efficient implementation of an algorithm to produce minimum spanning trees.
4.24. Explain any differences between the binary heap-order property, the binomial heap-order property, and the binary search tree property. Can in-order traversal be used to list the vertices of a binary heap in sorted order? Explain why or why not.
4.25. Present pseudocode of an algorithm to find a vertex with maximum key in a binary
search tree.
4.26. Compare and contrast algorithms for locating minimum and maximum elements
in a list with their counterparts for a binary search tree.
4.27. Let T be a nonempty BST and suppose v ∈ V (T) is not a minimum vertex of T. If h is the height of T, describe and present pseudocode of an algorithm to find the predecessor of v in worst-case time O(h).
4.28. Let L = [v0, v1, . . . , vn] be the in-order listing of a BST T. Present an algorithm to find the successor of v ∈ V (T) in constant time O(1). How can we find the predecessor of v in constant time as well?
4.29. Modify Algorithm 4.15 to extract a minimum vertex of a binary search tree. Now
do the same to extract a maximum vertex. How can Algorithm 4.15 be modified
to extract a vertex from a binary search tree?
4.30. Let v be a vertex of a BST and suppose v has two children. If s and p are the
successor and predecessor of v, respectively, show that s has no left-child and p has
no right-child.
4.31. Let L = [e0, e1, . . . , en] be a list of n + 1 elements from a totally ordered set X with total order ≤. How can binary search trees be used to sort L?
4.32. Describe and present pseudocode of a recursive algorithm for each of the following
operations on a BST.
(a) Find a vertex with a given key.
(b) Locate a minimum vertex.
(c) Locate a maximum vertex.
(d) Insert a vertex.
4.33. Are the algorithms presented in section 4.4 able to handle a BST having duplicate
keys? If not, modify the relevant algorithm(s) to account for the case where two
vertices in a BST have the same key.
4.34. The notion of vertex level for binary trees can be extended to general rooted trees as follows. Let T be a rooted tree with n > 0 vertices and height h. Then level i of T, for 0 ≤ i ≤ h, consists of all those vertices in T that have the same depth i. If each vertex at level i has i + m children for some fixed integer m > 0, what is the number of vertices at each level of T?
4.35. Compare the search, insertion, and deletion times of AVL trees and random binary
search trees. Provide empirical results of your comparative study.
4.36. Describe and present pseudocode of an algorithm to construct a Fibonacci tree of height n for some integer n ≥ 0. Analyze the worst-case runtime of your algorithm.
4.37. The upper bound in Theorem 4.7 can be improved as follows. From the proof of the theorem, we have the recurrence relation N(h) > N(h − 1) + N(h − 2).

(a) If h ≥ 2, show that there exists some c > 0 such that N(h) ≥ c^h.

(b) Assume for induction that

    N(h) > N(h − 1) + N(h − 2) ≥ c^{h−1} + c^{h−2}

for some h > 2. If c > 0 satisfies c^2 − c − 1 = 0, show that c^h = c^{h−1} + c^{h−2} and hence that

    N(h) > \left( \frac{1 + \sqrt{5}}{2} \right)^h.
(c) Conclude that h < (1/lg φ) lg n, where φ = (1 + √5)/2 is the golden ratio and n counts the number of internal vertices of an AVL tree of height h.
4.38. The Fibonacci sequence Fn is defined as follows. We have initial values F0 = 0 and F1 = 1. For n > 1, the n-th term in the sequence can be obtained via the recurrence relation Fn = Fn−1 + Fn−2. Show that

    F_n = \frac{\varphi^n - (-1/\varphi)^n}{\sqrt{5}}    (4.5)

where φ is the golden ratio. The closed form solution (4.5) to the Fibonacci sequence is known as Binet's formula, named after Jacques Philippe Marie Binet, even though Abraham de Moivre knew about this formula long before Binet did.
Chapter 5
Distance and Connectivity
5.1 Paths and distance

5.1.1

    d(v1, v2) = \min_{P} \sum_{i=1}^{m} W(e_i)    (5.1)

if there is a path from v1 to v2, and d(v1, v2) = ∞ otherwise, where the minimum is taken over all paths P from v1 to v2. By hypothesis, G has no negative weight cycles, so the minimum in (5.1) exists. It follows from the definition of the distance function that d(u, v) = ∞ if and only if there is no path between u and v.
How we interpret the distance function d depends on the meaning of the weight
function W . In practical applications, vertices can represent physical locations such as
cities, sea ports, or landmarks. An edge weight could be interpreted as the physical
distance in kilometers between two cities, the monetary cost of shipping goods from one
sea port to another, or the time required to travel from one landmark to another. Then
d(u, v) could mean the shortest route in kilometers between two cities, the lowest cost
incurred in transporting goods from one sea port to another, or the least time required
to travel from one landmark to another.
The distance function d is not in general a metric, i.e. the triangle inequality does not in general hold for d. When the distance function is a metric, however, G is called a metric graph. The theory of metric graphs, due to their close connection with
tropical curves, is an active area of research. For more information on metric graphs, see
Baker and Faber [10].
5.1.2
A new hospital is to be built in a large city. Construction has not yet started and a
number of urban planners are discussing the future location of the new hospital. What
is a possible location for the new hospital and how are we to determine this location?
This is an example of a class of problems known as facility location problems. Suppose
our objective in selecting a location for the hospital is to minimize the maximum response
time between the new hospital and the site of an emergency. To help with our decision
making, we could use the notion of the center of a graph.
The center of a graph G = (V, E) is defined in terms of the eccentricity of the graph under consideration. The eccentricity ε : V → R is defined as follows. For any vertex v, the eccentricity ε(v) is the greatest distance between v and any other vertex in G. In symbols, the eccentricity is expressible as

    ε(v) = \max_{u ∈ V} d(u, v).
For example, in a tree T with root r the eccentricity of r is the height of T. In the graph of Figure 5.1, the eccentricity of 2 is 5 and the shortest paths that yield ε(2) are
P1 : 2, 3, 4, 14, 15, 16
P2 : 2, 3, 4, 14, 15, 17.
The eccentricity of a vertex v can be thought of as an upper bound on the distance from v to any other vertex in G. Furthermore, there is at least one vertex in G whose distance from v is exactly ε(v).
v      1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17
ε(v)   6  5  4  4  5  6  7  7  5  6   7   7   6   5   6   7   7

Table 5.1: Eccentricities of the vertices of the graph in Figure 5.1.
Figure 5.2: Eccentricity distribution of the graph in Figure 5.1. The horizontal axis
represents the vertex name, while the vertical axis is the corresponding eccentricity.
is shown in Table 5.1. Among the eccentricities in the latter table, the minimum eccentricity is ε(3) = ε(4) = 4. An intuitive interpretation is that, among all vertices of G, the vertices 3 and 4 have the smallest greatest distance to any other vertex. We can invoke an analogy with plane geometry as follows. If a circle has radius r, then the distance from the center of the circle to any point within the circle is at most r. The minimum eccentricity in graph theory plays a role similar to the radius of a circle. If an object is strategically positioned, e.g. at a vertex with minimum eccentricity or at the center of a circle, then its greatest distance to any other object is guaranteed to be minimum. With the above analogy in mind, we define the radius of a graph G = (V, E), written rad(G), to be the minimum eccentricity among the eccentricity distribution of G. In symbols,

    rad(G) = \min_{v ∈ V} ε(v).
The center of G, written C(G), is the set of vertices with minimum eccentricity. Thus
the graph in Figure 5.1 has radius 4 and center {3, 4}. As should be clear from the latter
example, the radius is a number whereas the center is a set. Refer to the beginning of
the section where we mentioned the problem of selecting a location for a new hospital.
We could use a graph to represent the geography of the city wherein the hospital is to
be situated and select a location that is in the center of the graph.
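These notions can be checked quickly in Sage. The following sketch uses the Petersen graph, which is vertex-transitive, so every vertex has the same eccentricity and the center is the whole vertex set:

sage: G = graphs.PetersenGraph()
sage: G.eccentricity(0)
2
sage: G.radius(), G.diameter()
(2, 2)
sage: len(G.center())
10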
Consider now the maximum eccentricity of a graph. In (2.5) we defined the diameter of a graph G = (V, E) by

    diam(G) = \max_{u,v ∈ V, u ≠ v} d(u, v).

The diameter of G can also be defined as the maximum eccentricity of any vertex in G:

    diam(G) = \max_{v ∈ V} ε(v).
5.1.3 Center of trees
Given a tree T of order ≥ 3, we want to derive a bound on the number of vertices that comprise the center of T. A graph in general can have one, two, or more vertices in its center. Indeed, for any integer n > 0 we can construct a graph whose center has cardinality n. The cases for n = 1, 2, 3 are illustrated in Figure 5.3. But can we do the same for trees? That is, given any positive integer n, does there exist a tree whose center has n vertices? It turns out that the center of a tree cannot have more than two vertices, a result first discovered [111] by Camille Jordan in 1869.
Figure 5.3: Graphs with (a) |C(G)| = 1, (b) |C(G)| = 2, and (c) |C(G)| = 3.
5.1.4 Distance matrix
In sections 1.3.4 and 2.3, the distance matrix D of a graph G was defined to be D = [dij], where dij = d(vi, vj) and the vertices of G are indexed by V = {v0, v1, . . . , vk}. The matrix D is square and we set dij = 0 for entries along the main diagonal. If there is no path from vi to vj, then we set dij = ∞. If G is undirected, then D is symmetric and is equal to its transpose, i.e. D^T = D. To compute the distance matrix D, apply the Floyd-Roy-Warshall algorithm to determine the distances between all pairs of vertices. Refer to Figure 5.4 for examples of distance matrices of directed and undirected graphs. In the remainder of this section, "graph" refers to an undirected graph unless otherwise specified.
Instead of one distance matrix, we can define several distance matrices on G. Consider an edge-weighted graph G = (V, E) without negative weight cycles and let

    d : V × V → R ∪ {∞}

be a distance function of G. Let δ = diam(G) be the diameter of G and index the vertices of G in some arbitrary but fixed manner, say V = {v0, v1, . . . , vn}. The sequence
Figure 5.4: Distance matrices of (a) a directed graph and (b) an undirected graph.
5.2 Vertex and edge connectivity
vertex connectivity κv(G) is also written as κ(G). The vertex connectivity of the graph in Figure 5.5 is κv(G) = 1 because we only need to remove vertex 0 in order to disconnect the graph. The vertex connectivity of a connected graph G is thus the cardinality of a minimum vertex-cut of G, and G is said to be k-connected if κv(G) ≥ k. From the latter definition, it immediately follows that if G has at least 3 vertices and is k-connected, then any vertex-cut of G has cardinality at least k. For instance, the graph in Figure 5.5 is 1-connected. In other words, G is k-connected if the graph remains connected even after removing any k − 1 or fewer vertices from G.
sage: G = graphs.PetersenGraph()
sage: len(G.vertices())
10
sage: G.vertex_connectivity()
3
sage: G.delete_vertex(0)
sage: len(G.vertices())
9
sage: G.vertex_connectivity()
2
The notions of edge-cut and cut-edge are similarly defined. Let G = (V, E) be a graph and D ⊆ E an edge set such that the edge deletion subgraph G − D has more components than G. Then D is called an edge-cut. An edge-cut D is said to be minimal
sage: G = graphs.PetersenGraph()
sage: len(G.vertices())
10
sage: E = G.edges(); len(E)
15
sage: G.edge_connectivity()
3
sage: G.delete_edge(E[0])
sage: len(G.edges())
14
sage: G.edge_connectivity()
2
Vertex and edge connectivity are intimately related to the reliability and survivability of computer networks. If a computer network G (which is a connected graph) is k-connected, then it would remain connected despite the failure of at most k − 1 network nodes. Similarly, G is k-edge-connected if the network remains connected after the failure of at most k − 1 network links. In practical terms, a network with redundant nodes and/or links can afford to endure the failure of a number of nodes and/or links and still be connected, whereas a network with very few redundant nodes and/or links (e.g. something close to a spanning tree) is more prone to be disconnected. A k-connected or k-edge-connected network can withstand more node and/or link failures, and is hence more robust, than a j-connected or j-edge-connected network with j < k.
Proposition 5.5. If δ(G) is the minimum degree of an undirected connected graph G = (V, E), then the edge connectivity of G satisfies λ(G) ≤ δ(G).
Proof. Choose a vertex v ∈ V whose degree is deg(v) = δ(G). Deleting the δ(G) edges incident on v suffices to disconnect G, as v is then an isolated vertex. It is possible that G has an edge-cut whose cardinality is smaller than δ(G). Hence the result follows.
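A quick Sage sketch illustrating the bound of Proposition 5.5 on the Petersen graph, for which the bound is tight since λ(G) = δ(G) = 3:

sage: G = graphs.PetersenGraph()
sage: G.edge_connectivity()
3
sage: min(G.degree())
3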
Let G = (V, E) be a graph and suppose X1 and X2 comprise a partition of V. A partition-cut of G, denoted ⟨X1, X2⟩, is the set of all edges of G with one endpoint in X1 and the other endpoint in X2. If G is a bipartite graph with bipartition X1 and X2, then ⟨X1, X2⟩ is a partition-cut of G. It follows that a partition-cut is also an edge-cut.
Note that d(w, wk−1) < k and apply the induction hypothesis to see that we have two internally disjoint w-wk−1 paths in G; call these paths P and Q. As G is 2-connected, we have a w-x path R in G − wk−1, and hence R is also a w-x path in G. Let z be the vertex on R that immediately precedes x, and assume without loss of generality that z is on P. We claim that G has two internally disjoint w-x paths. One of these paths is the concatenation of the subpath of P from w to z with the subpath of R from z to x. If x is not on Q, then construct a second w-x path, internally disjoint from the first one, as follows: concatenate the path Q with the edge wk−1x. In case x is on Q, take the subpath of Q from w to x as the required second path.
From Theorem 5.11, an undirected connected graph G is 2-connected if and only if any
two distinct vertices of G are connected by two internally disjoint paths. In particular,
let u and v be any two distinct vertices of G and let P and Q be two internally disjoint
u-v paths as guaranteed by Theorem 5.11. Starting from u, travel along the path P
to arrive at v. Then start from v and travel along the path Q to arrive at u. The
concatenation of the internally disjoint paths P and Q is hence a cycle passing through
u and v. We have proved the following corollary to Theorem 5.11.
Corollary 5.12. Let G be an undirected connected graph having at least 3 vertices. Then
G is 2-connected if and only if any two distinct vertices of G lie on a common cycle.
The following theorem provides further characterizations of 2-connected graphs, in addition to Whitney's characterization.
Theorem 5.13. Characterizations of 2-connected graphs. Let G = (V, E) be an undirected connected graph having at least 3 vertices. Then the following are equivalent.
1. G is 2-connected.
2. If u, v ∈ V are distinct vertices of G, then u and v lie on a common cycle.
3. If v ∈ V and e ∈ E, then v and e lie on a common cycle.
4. If e1, e2 ∈ E are distinct edges of G, then e1 and e2 lie on a common cycle.
5. If u, v ∈ V are distinct vertices and e ∈ E, then they lie on a common path.
6. If u, v, w ∈ V are distinct vertices, then they lie on a common path.
7. If u, v, w ∈ V are distinct vertices, then there is a path containing any two of these vertices but excluding the third.
5.3 Menger's theorem
deletion subgraph G − S. The vertices u and v are positioned such that after removing the vertices in S from G and the corresponding edges, u and v are no longer connected nor strongly connected to each other. It is clear by definition that u, v ∉ S. We also say that S separates u and v, or S is a vertex separating set. Similarly an edge set T ⊆ E is u-v separating (or separates u and v) if u and v lie in different components of the edge deletion subgraph G − T. But unlike the case of vertex separating sets, it is possible for u and v to be endpoints of edges in T because the removal of edges does not result in deleting the corresponding endpoints. The set T is also called an edge separating set. In other words, S is a vertex cut and T is an edge cut. When it is clear from context, we simply refer to a separating set. See Figure 5.7 for illustrations of separating sets.
Figure 5.7: Vertex and edge separating sets. Blue-colored vertices are those we want
to separate. The red-colored vertices form a vertex separating set or vertex cut; the
red-colored edges constitute an edge separating set or edge cut.
    |Puv| ≤ |Suv|    (5.2)
Proof. Each u-v path in Puv must include at least one vertex from Suv because Suv is
a vertex cut of G. Any two distinct paths in Puv cannot contain the same vertex from
Suv . Thus the number of internally disjoint u-v paths is at most |Suv |.
The bound (5.2) holds for any u-v separating set Suv of vertices in G. In particular, we can choose Suv to be of minimum cardinality among all u-v separating sets of vertices in G. Thus we have the following corollary. Menger's Theorem 5.18 provides a much stronger statement than Corollary 5.15, saying in effect that the two quantities max(|Puv|) and min(|Suv|) are equal.
Corollary 5.15. Consider any two distinct, non-adjacent vertices u, v in a connected graph G. Let max(|Puv|) be the maximum number of internally disjoint u-v paths in G and denote by min(|Suv|) the minimum cardinality of a u-v separating set of vertices in G. Then max(|Puv|) ≤ min(|Suv|).
so that k ≤ ℓ. Let e ∈ E and let G/e be the contraction graph having edges E − {e} and vertices the same as those of G, except that the endpoints of e have been identified. Suppose that k < ℓ and G does not have ℓ independent u-v paths. Then the contraction graph G/e does not have ℓ independent u-v paths either (where now, if e contains u or v, then we must appropriately redefine u or v, if needed). However, by the induction hypothesis G/e does have the property that the maximum number of internally disjoint u-v paths equals the minimum number of vertices needed to separate u and v. Therefore,

    #{independent u-v paths in G/e} < #{minimum number of vertices needed to separate u and v in G}.

By induction,

    #{independent u-v paths in G/e} = #{minimum number of vertices needed to separate u and v in G/e}.

Now, we claim we can pick e such that e does contain u or v and in such a way that

    #{minimum number of vertices needed to separate u and v in G} ≤ #{minimum number of vertices needed to separate u and v in G/e}.

Proof: Indeed, since n > 3, any separating set realizing the minimum number of vertices needed to separate u and v in G cannot contain both a vertex in G adjacent to u and a vertex in G adjacent to v. Therefore, we may pick e accordingly. (Q.E.D. claim)
The result follows from the claim and the above inequalities.
The following statement is the undirected, edge-connectivity version of Menger's theorem.
Theorem 5.19. Menger's theorem (edge-connectivity form). Let G be an undirected graph, and let s and t be vertices in G. Then the maximum number of edge-disjoint (s, t)-paths in G equals the minimum number of edges from E(G) whose deletion separates s and t.
This is proven the same way as the previous version, but using the generalized min-cut/max-flow theorem (see Remark 9.6).
Theorem 5.20. Dirac's theorem. Let G = (V, E) be an undirected k-connected graph with |V| ≥ k + 1 vertices, for k ≥ 3. If S ⊆ V is any set of k vertices, then G has a cycle containing the vertices of S.
Proof.
5.4 Whitney's Theorem
Solution. ...
Solution. ...
Theorem 5.23. Whitney's Theorem. Let G = (V, E) be a connected graph such that |V| ≥ 3. Then G is 2-connected if and only if any pair of vertices u, v ∈ V has two internally disjoint paths between them.
5.5 Centrality of a vertex
Louis, I think this is the beginning of a beautiful friendship.
Rick from the 1942 film Casablanca
degree centrality
betweenness centrality
closeness centrality
eigenvector centrality
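Several of these measures are available directly in Sage; the short session below is a sketch on the Petersen graph. Because that graph is vertex-transitive, every vertex gets the same value: degree centrality deg(v)/(n − 1) = 1/3, and normalized betweenness 1/12.

sage: G = graphs.PetersenGraph()
sage: G.centrality_degree()[0]
1/3
sage: G.centrality_betweenness()[0]
0.08333333333333333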
if n = 1 then
    return C3
G ← null graph
N ← 2n + 1
for i ← 0, 1, . . . , N − 3 do
    if i is odd then
        add edges (i, i + 1) and (i, N − 1) to G
    else
        add edge (i, N − 1) to G
add edges (N − 2, 0) and (N − 2, N − 1) to G
return G
5.6 Network reliability
Whitney synthesis
Tutte's synthesis of 3-connected graphs
Harary graphs
constructing an optimal k-connected n-vertex graph
5.7 Problems
When you don't share your problems, you resent hearing the problems of other people.
Chuck Palahniuk, Invisible Monsters, 1999
5.1. Let G = (V, E) be an undirected, unweighted simple graph. Show that V and the
distance function on G form a metric space if and only if G is connected.
5.2. Let u and v be two distinct vertices in the same connected component of G. If P is a u-v path such that d(u, v) = ε(u), we say that P is an eccentricity path for u.
(a) If r is the root of a tree, show that the end-vertex of an eccentricity path for r is a leaf.
(b) If v is a vertex of a tree distinct from the root r, show that any eccentricity path for v must contain r, or provide an example to the contrary.
(c) A vertex w is said to be an eccentric vertex of v if d(v, w) = ε(v). Intuitively, an eccentric vertex of v can be considered as being as far away from v as possible. If w is an eccentric vertex of v and vice versa, then v and w are said to be mutually eccentric. See Buckley and Lau [41] for detailed discussions of mutual eccentricity. If w is an eccentric vertex of v, explain why v is also an eccentric vertex of w, or show that this does not in general hold.
5.3. If u and v are vertices of a connected graph G such that d(u, v) = diam(G), show
that u and v are mutually eccentric.
5.4. If uv is an edge of a tree T and w is a vertex of T distinct from u and v, show that |d(u, w) − d(w, v)| = W(uv), with W(uv) being the weight of uv.
5.5. If u and v are vertices of a tree T such that d(u, v) = diam(T), show that u and v are leaves.
5.6. Let v1, v2, . . . , vk be the leaves of a tree T. Show that per(T) = {v1, v2, . . . , vk}.
5.7. Show that all the eccentric vertices of a tree are leaves.
5.8. If G is a connected graph, show that rad(G) ≤ diam(G) ≤ 2 rad(G).
5.9. Let T be a tree of order ≥ 3. If the center of T has one vertex, show that diam(T) = 2 rad(T). If the center of T has two vertices, show that diam(T) = 2 rad(T) − 1.
5.10. Let G = (V, E) be a simple undirected, connected graph. Define the distance of a vertex v ∈ V by

    d(v) = \sum_{x ∈ V} d(v, x)

and the distance of G by

    d(G) = \frac{1}{2} \sum_{v ∈ V} d(v).

For any vertex v ∈ V, show that d(G) ≤ d(v) + d(G − v), with G − v being a vertex deletion subgraph of G. This result appeared in Entringer et al. [67, p. 284].
5.11. Determine the sequence of distance matrices for the graphs in Figure 5.4.
5.12. If G = (V, E) is an undirected connected graph and v ∈ V, prove the following vertex connectivity inequality:

    κ(G) − 1 ≤ κ(G − v) ≤ κ(G).

5.13. If G = (V, E) is an undirected connected graph and e ∈ E, prove the following edge connectivity inequality:

    λ(G) − 1 ≤ λ(G − e) ≤ λ(G).
code  name                     code  name                    code  name
0     Alicante Bouschet        1     Aramon                  2     Bequignol
3     Cabernet Franc           4     Cabernet Sauvignon      5     Carignan
6     Chardonnay               7     Chenin Blanc            8     Colombard
9     Donzillinho              10    Ehrenfelser             11    Fer Servadou
12    Flora                    13    Gamay                   14    Gelber Ortlieber
15    Grüner Veltliner         16    Kemer                   17    Merlot
18    Meslier-Saint-Francois   19    Müller-Thurgau          20    Muscat Blanc
21    Muscat Hamburg           22    Muscat of Alexandria    23    Optima
24    Ortega                   25    Osteiner                26    Peagudo
27    Perle                    28    Perle de Csaba          29    Perlriesling
30    Petit Manseng            31    Petite Bouschet         32    Pinot Noir
33    Reichensteiner           34    Riesling                35    Rotberger
36    Roter Veltliner          37    Rotgipfler              38    Royalty
39    Ruby Cabernet            40    Sauvignon Blanc         41    Schonburger
42    Semillon                 43    Siegerrebe              44    Sylvaner
45    Taminga                  46    Teinturier du Cher      47    Tinta Madeira
48    Traminer                 49    Trincadeiro             50    Trollinger
51    Trousseau                52    Verdelho                53    Wittberger

Table 5.2: Numeric code and actual name of common grape cultivars.
5.14. Figure 5.8 depicts how common grape cultivars are related to one another; the
graph is adapted from Myles et al. [147]. The numeric code of each vertex can
be interpreted according to Table 5.2. Compute various distance and connectivity
measures for the graph in Figure 5.8.
5.15. Prove the characterizations of 2-connected graphs as stated in Theorem 5.13.
Figure 5.8: How common grape cultivars are related to one another; adapted from Myles et al. [147].
5.16. Let G = (V, E) be an undirected connected graph of order n and suppose that deg(v) ≥ (n + k − 2)/2 for all v ∈ V and some fixed positive integer k. Show that G is k-connected.
5.17. A vertex (or edge) separating set S of a connected graph G is minimum if S has
the smallest cardinality among all vertex (respectively edge) separating sets in G.
Similarly S is said to be maximum if it has the greatest cardinality among all
vertex (respectively edge) separating sets in G. For the graph in Figure 5.7(a),
determine the following:
(a) A minimum vertex separating set.
(b) A minimum edge separating set.
(c) A maximum vertex separating set.
(d) A maximum edge separating set.
(e) The number of minimum vertex separating sets.
(f) The number of minimum edge separating sets.
Chapter 6
Optimal Graph Traversals
6.1 Eulerian graphs
6.2 Hamiltonian graphs
Theorem 6.1. Ore 1960. Let G be a simple graph with n ≥ 3 vertices. If deg(u) + deg(v) ≥ n for each pair of non-adjacent vertices u, v ∈ V (G), then G is Hamiltonian.
Corollary 6.2. Dirac 1952. Let G be a simple graph with n ≥ 3 vertices. If deg(v) ≥ n/2 for all v ∈ V (G), then G is Hamiltonian.
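As a sketch, Dirac's degree condition can be checked directly in Sage and compared against Sage's built-in Hamiltonicity test:

sage: G = graphs.CompleteGraph(6)
sage: n = G.order()
sage: all(G.degree(v) >= n/2 for v in G)
True
sage: G.is_hamiltonian()
True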
6.3
6.4
Chapter 7
Planar Graphs
A planar graph is a graph that can be drawn on a sheet of paper without any overlapping between its edges. It is a property of many natural graphs drawn on the earth's surface, like for instance the graph of roads, or the graph of internet fibers. It is also a necessary property of graphs we want to build, like VLSI layouts.
Of course, the property of being planar does not prevent one from finding a drawing with many overlappings between edges, as this property only asserts that there exists a drawing (or embedding) of the graph avoiding them. Planarity can be characterized in many different ways, one of the most satisfying being Kuratowski's theorem.
See chapter 9 of Gross and Yellen [88].
7.1 Planarity and Euler's Formula
7.2 Kuratowski's Theorem
Kuratowski graphs
It can easily be seen that if a graph G is planar, any of its subgraphs is also planar. Besides, planarity is preserved under edge contraction. Together, these two facts mean that any minor of a planar graph is itself a planar graph, which makes planarity a minor-closed property. If we let P denote the poset of all non-planar graphs, ordered with the minor partial order, we can now consider the set Pmin of its minimal elements which, by the Graph Minor Theorem, is a finite set.
Actually, Kuratowski's theorem asserts that Pmin = {K5, K3,3}.
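This characterization can be explored in Sage: calling is_planar(kuratowski=True) on a non-planar graph returns a subgraph witnessing non-planarity. A small sketch:

sage: graphs.CompleteGraph(4).is_planar()
True
sage: K5 = graphs.CompleteGraph(5)
sage: K5.is_planar()
False
sage: ok, cert = K5.is_planar(kuratowski=True)
sage: cert.is_isomorphic(K5)
True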
7.3
Planarity algorithms
Chapter 8
Graph Coloring
8.1 Vertex coloring
Vertex coloring is a widespread center of interest in graph theory, which has many variants. Formally speaking, a coloring of the vertex set of a graph G is any function f : V (G) → {1, . . . , k} giving to each vertex a color among a set of cardinality k. Things get much more difficult when we add the constraint under which a coloring becomes a proper coloring: a coloring with k colors of a graph G is said to be proper if there are no edges between any two vertices colored with the same color. This can be rephrased in many different ways:
- For all i ∈ {1, . . . , k}, the set f^{-1}(i) induces a stable set in G.
- For all distinct u, v ∈ V (G), if f(u) = f(v) then uv ∉ E(G).
- A proper coloring of G with k colors is a partition of V (G) into k independent sets.
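As a quick illustration using Sage's built-in coloring routines (a sketch on the Petersen graph, whose chromatic number is 3): G.coloring() returns a proper coloring as a partition of the vertex set into independent sets.

sage: G = graphs.PetersenGraph()
sage: G.chromatic_number()
3
sage: len(G.coloring())
3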
Brooks' Theorem
heuristics for vertex coloring
8.2 Edge coloring

Edge coloring is the direct application of vertex coloring to the line graph of a graph G, written L(G), which is the graph whose vertices are the edges of G, two vertices being adjacent if and only if their corresponding edges share an endpoint. We write χ(L(G)) = χ'(G) for the chromatic index of G. In this special case, however, the optimization problem defined above, though still NP-complete, is much better understood through Vizing's theorem.
Theorem 8.1 (Vizing). The edges of a graph G can be properly colored using at least Δ(G) colors and at most Δ(G) + 1.
Notice that the lower bound can be easily proved: if a vertex v has degree d(v), then at least d(v) colors are required to color G, as all the edges incident to v must receive different colors. Besides, the upper bound of Δ(G) + 1 can not be deduced from the greedy algorithm given in the previous section, as the maximal degree of L(G) is not equal to Δ(G) but to max_{uv ∈ E(G)} (d(u) + d(v) − 2), which can reach 2Δ(G) − 2 in regular graphs.
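Since χ'(G) = χ(L(G)), the chromatic index can be computed by vertex-coloring the line graph. The sketch below does this for the Petersen graph, whose chromatic index is Δ + 1 = 4:

sage: G = graphs.PetersenGraph()
sage: max(G.degree())
3
sage: G.line_graph().chromatic_number()
4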
8.3 Applications of graph coloring
assignment problems
Chapter 9
Network Flows
See Jungnickel [112], and chapter 12 of Gross and Yellen [88].
9.1 Flows and cuts
9.2 Ford-Fulkerson theorem

    \sum_{u ∈ V, (u,v) ∈ E} f(u, v) = \sum_{u ∈ V, (v,u) ∈ E} f(v, u)

for each vertex v other than the source and the sink, and the value of the flow is

    |f| = \sum_{v ∈ V} f(s, v),

where s is the source. It represents the amount of flow passing from the source to the sink. The maximum flow problem is to maximize |f|, that is, to route as much flow as possible from s to t.
Example 9.1. Consider the digraph whose weighted adjacency matrix is the matrix B constructed in the following Sage session:

sage: B = matrix([[0,1,1,0,0,0],[0,0,0,1,0,1],[0,1,0,0,1,0],[0,0,0,0,0,1],[0,0,0,0,0,1],[0,0,
sage: H = DiGraph(B, format="adjacency_matrix", weighted=True)
Type H.show(edge_labels=True) if you want to see the graph with the capacities labeling the edges.
Given a capacitated digraph with capacity c and flow f, we define the residual digraph Gf = (V, E) to be the digraph with capacity cf(u, v) = c(u, v) − f(u, v) and no flow. In other words, Gf is the same graph but it has a different capacity cf and flow 0. This is also called a residual network.
Define an s-t cut in our capacitated digraph G to be a partition C = (S, T) of V such that s ∈ S and t ∈ T. Recall the cut-set of C is the set

    {(u, v) ∈ E | u ∈ S, v ∈ T}.
Lemma 9.2. Let G = (V, E) be a capacitated digraph with capacity c : E → R, and let s and t denote the source and the sink of G, respectively. If C is an s-t cut and if the edges in the cut-set of C are removed, then |f| = 0.
Exercise 9.3. Prove Lemma 9.2.
The capacity of an s-t cut C = (S, T) is defined by

    c(S, T) = \sum_{u ∈ S, v ∈ T, (u,v) ∈ E} c(u, v).

for each v ∈ V − {s, t}. Define an s-t cut to be the set of vertices and edges such that for any path from s to t, the path contains a member of the cut. In this case, the capacity of the cut is the sum of the capacities of the edges and vertices in it. In this new definition, the generalized max-flow min-cut theorem states that the maximum value of an s-t flow is equal to the minimum capacity of an s-t cut.
The idea behind the Ford-Fulkerson algorithm is very simple: as long as there is a path from the source to the sink, with available capacity on all edges in the path, we send as much flow as we can along each of these paths. This is done inductively, one path at a time.
Algorithm 9.1: Ford-Fulkerson algorithm.
Input: Graph G = (V, E) with flow capacity c, source s, and sink t.
Output: A flow f from s to t which is a maximum for all edges in E.
1  f(u, v) ← 0 for each edge (u, v) ∈ E
2  while there is an s-t path P in the residual network Gf with cf(e) > 0 for each edge e of P do
3      cf(P) ← min{cf(u, v) | (u, v) ∈ P}
4      for each edge (u, v) ∈ P do
5          f(u, v) ← f(u, v) + cf(P)
6          f(v, u) ← f(v, u) − cf(P)
Here is some Python code which implements this. The class FlowNetwork is basically a Sage Graph class with edge weights and an extra data structure representing the flow on the graph.
class Edge:
    def __init__(self, U, V, w):
        self.source = U
        self.to = V
        self.capacity = w

    def __repr__(self):
        return str(self.source) + "->" + str(self.to) + " : " + str(self.capacity)

class FlowNetwork(object):
    """
    This is a graph structure with edge capacities.

    EXAMPLES:
        g = FlowNetwork()
        map(g.add_vertex, ['s', 'o', 'p', 'q', 'r', 't'])
        g.add_edge('s', 'o', 3)
        g.add_edge('s', 'p', 3)
        g.add_edge('o', 'p', 2)
        g.add_edge('o', 'q', 3)
        g.add_edge('p', 'r', 2)
        g.add_edge('r', 't', 3)
        g.add_edge('q', 'r', 4)
        g.add_edge('q', 't', 2)
        print g.max_flow('s', 't')
    """
    def __init__(self):
        self.adj, self.flow = {}, {}

    def add_vertex(self, vertex):
        self.adj[vertex] = []

    def get_edges(self, v):
        return self.adj[v]

    def add_edge(self, u, v, w=0):
        assert u != v
        edge = Edge(u, v, w)
        redge = Edge(v, u, 0)   # reverse edge with zero capacity, for the residual network
        edge.redge = redge
        redge.redge = edge
        self.adj[u].append(edge)
        self.adj[v].append(redge)
        self.flow[edge] = self.flow[redge] = 0

    def find_path(self, source, sink, path):
        # depth-first search for an augmenting path with positive residual capacity
        if source == sink:
            return path
        for edge in self.get_edges(source):
            residual = edge.capacity - self.flow[edge]
            if residual > 0 and not (edge, residual) in path:
                result = self.find_path(edge.to, sink, path + [(edge, residual)])
                if result != None:
                    return result

    def max_flow(self, source, sink):
        # repeatedly augment along source-sink paths until none remains
        path = self.find_path(source, sink, [])
        while path != None:
            flow = min(res for edge, res in path)
            for edge, res in path:
                self.flow[edge] += flow
                self.flow[edge.redge] -= flow
            path = self.find_path(source, sink, [])
        return sum(self.flow[edge] for edge in self.get_edges(source))
9.3 Edmonds and Karp's algorithm
The objective of this section is to present Edmonds and Karp's algorithm for the maximum flow-minimum cut problem and to prove that it has polynomial complexity.
9.4 Goldberg and Tarjan's algorithm
The objective of this section is to present Goldberg and Tarjan's algorithm for finding maximum flows and to prove that it has polynomial complexity.
Chapter 10
Random Graphs
A random graph can be thought of as being a member from a collection of graphs having
some common properties. Recall that Algorithm 3.5 allows for generating a random
binary tree having at least one vertex. Fix a positive integer n and let T be a collection
of all binary trees on n vertices. It can be infeasible to generate all members of T , so for
most purposes we are only interested in randomly generating a member of T . A binary
tree of order n generated in this manner is said to be a random graph.
This chapter is a digression into the world of random graphs and various models for generating different types of random graphs. Unlike other chapters in this book, our approach here is rather informal and not as rigorous. We will discuss some common models of random graphs and a number of their properties without getting bogged down in details of proofs. Along the way, we will demonstrate that random graphs can be used to model diverse real-world networks such as social, biological, technological, and information networks. Bollobás [25] and Kolchin [121] provide standard references on the theory of random graphs with rigorous proofs. For comprehensive surveys of random graphs and networks that do not go into too much technical detail, see Barabási [12], Easley and Kleinberg [64], and Watts [190, 191]. On the other hand, surveys that cover diverse applications of random graphs and networks and are geared toward the technical aspects of the subject include Albert and Barabási [5], Barrat et al. [16], Ben-Naim et al. [22], Bollobás et al. [26], Bornholdt and Schuster [30], Caldarelli and Vespignani [42], Cohen and Havlin [53], Csermely [55], Dehmer and Emmert-Streib [57], Dorogovtsev and Mendes [60, 61], Ganguly et al. [82], Gross and Sayama [89], Newman et al. [148], and Newman [152, 153].
10.1 Network statistics
Numerous real-world networks are large, having from thousands up to millions of vertices
and edges. Network statistics provide a way to describe properties of networks without
concerning ourselves with individual vertices and edges. A network statistic should
describe essential properties of the network under consideration, provide a means to
differentiate between different classes of networks, and be useful in network algorithms
and applications [35]. In this section, we discuss various common network statistics that
can be used to describe graphs underlying large networks.
10.1.1 Degree distribution

    Pr[k] = \frac{|\{v ∈ V : \deg(v) = k\}|}{n}    (10.1)
As indicated by the notation, we can think of (10.1) as the probability that a vertex v V
chosen uniformly at random has degree k. The degree distribution of G is consequently a
histogram of the degrees of vertices in G. Figure 10.1 illustrates the degree distribution
of the Zachary [201] karate club network. The degree distributions of many real-world
networks have the same general curve as depicted in Figure 10.1(b), i.e. a peak at low
degrees followed by a tail at higher degrees. See for example the degree distribution of
the neural network in Figure 10.2, that of a power grid network in Figure 10.3, and the
degree distribution of a scientific coauthorship network in Figure 10.4.
Figure 10.1: The friendship network within a 34-person karate club. This is more commonly known as the Zachary [201] karate club network. The network is an undirected,
connected, unweighted graph having 34 vertices and 78 edges. The horizontal axis represents degree; the vertical axis represents the probability that a vertex from the network
has the corresponding degree.
Figure 10.2: Degree distribution of the neural network of Caenorhabditis elegans. The network is a directed, not strongly connected, weighted graph with 297 vertices and 2,359 edges. The horizontal axis represents degree; the vertical axis represents the probability that a vertex from the network has the corresponding degree. The degree distribution is derived from the dataset by Watts and Strogatz [192] and White et al. [193].
Figure 10.3: Degree distribution of the Western States Power Grid of the United States.
The network is an undirected, connected, unweighted graph with 4,941 vertices and 6,594
edges. The horizontal axis represents degree; the vertical axis represents the probability
that a vertex from the network has the corresponding degree. The degree distribution is derived from the dataset by Watts and Strogatz [192].
Figure 10.4: Degree distribution of the network of coauthorships between scientists posting preprints on the condensed matter eprint archive at http://arxiv.org/archive/
cond-mat. The network is a weighted, disconnected, undirected graph having 40,421
vertices and 175,693 edges. The horizontal axis represents degree; the vertical axis
represents the probability that a vertex from the coauthorship network has the corresponding degree. The degree distribution is derived from the 2005 update of the dataset
by Newman [150].
10.1.2 Distance statistics

In chapter 5 we discussed various distance metrics such as radius, diameter, and eccentricity. To that collection of distance statistics we add the average or characteristic distance d̄, defined as the arithmetic mean of all distances in a graph. Let G = (V, E) be a simple graph with n = |V| and m = |E|, where G can be either directed or undirected. Then G has size at most n(n − 1) because for any distinct vertex pair u, v ∈ V we count the edge from u to v and the edge from v to u. The characteristic distance of G is defined by

    d̄(G) = \frac{1}{n(n-1)} \sum_{u ≠ v ∈ V} d(u, v)

where the distance function d is given by

    d(u, v) = ∞ if there is no path from u to v; d(u, v) = 0 if u = v; and d(u, v) = k otherwise,

where k is the length of a shortest u-v path.
If G is strongly connected (respectively, connected for the undirected case) then our distance function is of the form d : V × V → Z+ ∪ {0}, where the codomain is the set of nonnegative integers. The case where G is not strongly connected (respectively, disconnected for the undirected version) requires special care. One way is to compute the characteristic distance for each component and then find the average of all such characteristic distances. Call the resulting characteristic distance d̄c, where c means "component". Another way is to assign a large number as the distance of non-existing shortest paths. If there is no u-v path, we let d(u, v) = n because n = |V| is larger than the length of any shortest path between connected vertices. The resulting characteristic distance is denoted d̄b, where b means "big number". Furthermore denote by d∞ the number of pairs (u, v) such that v is not reachable from u. For example, the Zachary [201] karate club network has d̄ = 2.4082 and d∞ = 0; the C. elegans neural network [192, 193] has d̄b = 71.544533, d̄c = 3.991884, and d∞ = 20,268; the Western States Power Grid network [192] has d̄ = 18.989185 and d∞ = 0; and the condensed matter coauthorship network [150] has d̄b = 7541.74656, d̄c = 5.499329, and d∞ = 152,328,281.
We can also define the concept of distance distribution similar to how the degree distribution was defined in section 10.1.1. If ℓ is a positive integer, with u and v being connected vertices in a graph G = (V, E), denote by

    p_ℓ = Pr[d(u, v) = ℓ]    (10.2)
Algorithm 10.1: Generate a random graph in G(n, p).
G ← K̄n
V ← {0, 1, . . . , n − 1}
E ← {2-combinations of V}
for each e ∈ E do
    r ← draw uniformly at random from interval (0, 1)
    if r < p then
        add edge e to G
return G
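A minimal Python sketch of Algorithm 10.1, using itertools to enumerate the 2-combinations of the vertex set and returning the resulting edge list:

import itertools
import random

def gnp_naive(n, p):
    # keep each of the binom(n, 2) candidate edges independently with probability p
    return [e for e in itertools.combinations(range(n), 2)
            if random.random() < p]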
10.2
Fix a positive integer n, a probability p, and a vertex set V = {0, 1, . . . , n − 1}. The binomial (or Bernoulli) random graph model, denoted G(n, p) and introduced by Gilbert [84], is formally a probability space over the set of undirected simple graphs on n vertices. If G is any element of the probability space G(n, p) and ij is any edge for distinct i, j ∈ V, then ij occurs as an edge of G independently with probability p. In symbols, for any distinct pair i, j ∈ V we have

    Pr[ij ∈ E(G)] = p

where all such events are mutually independent. Equivalently, the model G(n, p) considers the collection of all undirected simple graphs on n vertices; each such graph has at most \binom{n}{2} edges and, if it has m actual edges, an associated probability

    p^m (1 - p)^{\binom{n}{2} - m}.    (10.3)
(d) Condensed matter coauthorship network [150].
Figure 10.5: Distance distributions for various real-world networks. The horizontal axis
represents distance and the vertical axis represents the probability that a uniformly
chosen pair of distinct vertices from the network has the corresponding distance between
them.
To generate a random graph in G(n, p), start with G being a graph on n vertices but no edges. That is, initially G is K̄n, the complement of the complete graph on n vertices. Consider each of the \binom{n}{2} possible edges in some order and add it independently to G with probability p. See Algorithm 10.1 for pseudocode of the procedure. The runtime of Algorithm 10.1 depends on an efficient algorithm for generating all 2-combinations of a set of n objects. We could adapt Algorithm 4.22 to our needs or search for a more efficient algorithm; see problem 10.3 for discussion of an algorithm to generate a graph in G(n, p) in quadratic time. Figure 10.6 illustrates some random graphs from G(25, p) with p = i/6 for i = 0, 1, . . . , 5. See Figure 10.7 for results for graphs in G(2 × 10^4, p).
The expected number of edges of any G ∈ G(n, p) is

    E[|E|] = p \binom{n}{2} = \frac{pn(n-1)}{2}

and the expected total degree is

    E[\sum \deg] = 2p \binom{n}{2} = pn(n-1).

Then the expected degree of each vertex is p(n − 1). From problem 1.7 we know that the number of undirected simple graphs on n vertices is given by 2^{n(n-1)/2}, where (10.3) is the probability of any of these graphs being the output of the above procedure. Let C(n, m) be the number of graphs from G(n, p) that are connected and have size m, and let Pr[Gc] denote the probability that G ∈ G(n, p) is connected. Apply expression (10.3) to see that

    Pr[Gc] = \sum_{i=n-1}^{\binom{n}{2}} C(n, i) p^i (1 - p)^{\binom{n}{2} - i}

where n − 1 is the least number of edges of any undirected connected graph on n vertices, i.e. the size of any spanning tree of a connected graph in G(n, p). Similarly define Pr[ij] to be the probability that two distinct vertices i, j of G ∈ G(n, p) are connected. Gilbert [84] showed that as n → ∞, the probabilities Pr[Gc] and Pr[ij] approach

    Pr[Gc] → 1 - n(1-p)^{n-1}    and    Pr[ij] → 1 - 2(1-p)^{n-1}.
Example 10.1. Consider a digraph D = (V, E) without self-loops or multiple edges. Then D is said to be oriented if for any distinct pair u, v ∈ V at most one of uv, vu is an edge of D. Provide specific examples of oriented graphs.
Solution. If u, v ∈ V is any pair of distinct vertices of an oriented graph D = (V, E), we have various possibilities:
1. uv ∉ E and vu ∉ E.
Figure 10.7: Comparison of expected and experimental values of the number of edges and total degree of random simple undirected graphs in G(n, p). The horizontal axis represents probability points; the vertical axis represents the size and total degree (expected or experimental). Fix n = 20,000 and consider r = 50 probability points chosen as follows. Let pmin = 0.000001, pmax = 0.999999, and F = (pmax/pmin)^{1/(r-1)}. For i = 1, 2, . . . , r = 50 the i-th probability point pi is defined by pi = pmin F^{i-1}. Each experiment consists in generating M = 500 random graphs from G(n, pi). For each Gi ∈ G(n, pi), where i = 1, 2, . . . , 500, compute its actual size and actual total degree, and then take the mean of the sizes and the mean of the total degrees.
2. uv ∈ E and vu ∉ E.
3. uv ∉ E and vu ∈ E.
Let n > 0 be the number of vertices in D and let 0 < p < 1. Generate a random oriented graph as follows. First we generate a binomial random graph G ∈ G(n, p), where G is simple and undirected. Then we consider the digraph version of G and proceed to randomly prune either uv or vu from G, for each distinct pair of adjacent vertices u, v. Refer to Algorithm 10.2 for pseudocode of our discussion. A Sage implementation follows:
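The listing below is a minimal sketch of such an implementation; the function name random_oriented_graph and its cutoff parameter q (defaulting to 1/2) are illustrative assumptions, not fixed by the text.

sage: def random_oriented_graph(n, p, q=0.5):
....:     G = graphs.RandomGNP(n, p)   # binomial random graph in G(n, p)
....:     D = DiGraph(G)               # both orientations of every edge of G
....:     for u, v, _ in G.edges():
....:         # keep exactly one of the two orientations of the edge uv,
....:         # chosen according to the cutoff probability q
....:         if random() < q:
....:             D.delete_edge(u, v)
....:         else:
....:             D.delete_edge(v, u)
....:     return D
sage: D = random_oriented_graph(20, 0.1)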
Figure 10.8: A random oriented graph generated using a graph in G(20, 0.1) and cutoff
probability 0.5.
0 < p < 1 that an edge will be in the resulting random sparse graph G. If e is an edge of G, we can consider the events leading up to the choice of e as

    e1, e2, . . . , ek

where in the i-th trial the event ei is a failure, for 1 ≤ i < k, but the event ek is the first success after k − 1 successive failures. In probabilistic terms, we perform a series of independent trials each having success probability p and stop when the first success occurs. Letting X be the number of trials required until the first success occurs, then X is a geometric random variable with parameter p and probability mass function

    Pr[X = k] = p(1-p)^{k-1}    (10.4)

for integers k ≥ 1, where \sum_{k=1}^{∞} p(1-p)^{k-1} = 1. The probability that the first success occurs within the first ℓ trials is

    \sum_{k=1}^{ℓ} p(1-p)^{k-1} = 1 - (1-p)^ℓ,    ℓ = 1, 2, 3, . . .

Hence if r is chosen uniformly at random from the interval (0, 1), the smallest k for which 1 − (1 − p)^k ≥ r is

    k = min{ k | k > ln(1-r)/ln(1-p) } = 1 + ⌊ln(1-r)/ln(1-p)⌋.

That is, we can choose k to be

    k = 1 + ⌊ln(1-r)/ln(1-p)⌋
which is used as a basis of Algorithm 10.3. In the latter algorithm, note that the vertex
set is V = {0, 1, . . . , n 1} and candidate edges are generated in lexicographic order.
The Batagelj-Brandes Algorithm 10.3 has worst-case runtime O(n + m), where n and m
are the order and size, respectively, of the resulting graph.
Algorithm 10.3: Batagelj-Brandes generation of a random graph in G(n, p).
G ← K̄n
u ← 1
v ← −1
while u < n do
    r ← draw uniformly at random from interval (0, 1)
    v ← v + 1 + ⌊ln(1 − r)/ln(1 − p)⌋
    while v ≥ u and u < n do
        v ← v − u
        u ← u + 1
    if u < n then
        add edge uv to G
return G
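The same skip-ahead idea can be written compactly in Python; the following sketch mirrors Algorithm 10.3 and returns the edge list of a graph in G(n, p):

import math
import random

def gnp_geometric(n, p):
    # skip ahead by geometrically distributed gaps instead of
    # testing every candidate edge individually
    edges = []
    u, v = 1, -1
    while u < n:
        r = random.random()
        v += 1 + int(math.log(1 - r) / math.log(1 - p))
        while v >= u and u < n:
            v -= u
            u += 1
        if u < n:
            edges.append((u, v))
    return edges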
Degree distribution

Consider a random graph G ∈ G(n, p) and let v be a vertex of G. With probability p, the vertex v is incident with each of the remaining n − 1 vertices in G. Then the probability that v has degree k is given by the binomial distribution

    Pr[deg(v) = k] = \binom{n-1}{k} p^k (1-p)^{n-1-k}    (10.5)

and the expected degree of v is E[deg(v)] = p(n − 1). Setting z = p(n − 1), we can express (10.5) as

    Pr[deg(v) = k] = \binom{n-1}{k} \left( \frac{z}{n-1} \right)^k \left( 1 - \frac{z}{n-1} \right)^{n-1-k}

and thus

    Pr[deg(v) = k] → \frac{z^k}{k!} \exp(-z)

as n → ∞. In the limit of large n, the probability that vertex v has degree k approaches the Poisson distribution. That is, as n gets larger and larger, any random graph in G(n, p) has a Poisson degree distribution.
10.3 Erdős-Rényi model

Let N be a fixed nonnegative integer. The Erdős-Rényi [69, 70] (or uniform) random graph model, denoted G(n, N), is a probability space over the set of undirected simple graphs on n vertices and exactly N edges. Hence G(n, N) can be considered as a collection of \binom{\binom{n}{2}}{N} undirected simple graphs on exactly N edges, each such graph being selected with equal probability. A note of caution is in order here. Numerous papers on random graphs refer to G(n, p) as the Erdős-Rényi random graph model, where in fact this binomial random graph model should be called the Gilbert model in honor of E. N. Gilbert who
of being the graph resulting from the above procedure. Furthermore each of the \binom{n}{2} possible edges has a probability 1/\binom{n}{2} of being chosen in any one trial of the procedure below.

Algorithm 10.4: Generation of a random graph in G(n, N).
1  G ← K̄n
2  E ← e0, e1, . . . , e_{\binom{n}{2}-1}
3  for i ← 0, 1, . . . , N − 1 do
4      r ← draw uniformly at random from {0, 1, . . . , \binom{n}{2} − 1}
5      while er is an edge of G do
6          r ← draw uniformly at random from {0, 1, . . . , \binom{n}{2} − 1}
7      add edge er to G
8  return G
The runtime of Algorithm 10.4 is probabilistic and can be analyzed via the geometric distribution. If i is the number of edges chosen so far, then the probability of choosing a new edge in the next step is

    \frac{\binom{n}{2} - i}{\binom{n}{2}}.

We repeatedly choose an edge uniformly at random from the collection of all possible edges, until we come across the first edge that is not already in the graph. The number of trials required until the first new edge is chosen can be modeled using the geometric distribution with probability mass function (10.4). Given a geometric random variable X, we have the expectation

    E[X] = \sum_{n=1}^{∞} n p(1-p)^{n-1} = \frac{1}{p}.
Summing the expected numbers of trials over all N edge choices and comparing the sum with an integral, the expected total number of trials is

    \sum_{i=1}^{N} \frac{\binom{n}{2}}{\binom{n}{2} - i} ≈ \binom{n}{2} \ln \frac{\binom{n}{2}}{\binom{n}{2} - N}.

The denominator in the latter fraction becomes zero when \binom{n}{2} = N, which can be prevented by adding one to the denominator. Then we have the expected total runtime

    \binom{n}{2} \ln \frac{\binom{n}{2}}{\binom{n}{2} - N + 1}

which is O(N) when N ≤ \binom{n}{2}/2, and O(N ln N) when N = \binom{n}{2}. In other words, Algorithm 10.4 has expected linear runtime when the number N of required edges satisfies N ≤ \binom{n}{2}/2. But for N > \binom{n}{2}/2, we obtain expected linear runtime by generating the complete graph Kn and randomly deleting \binom{n}{2} − N edges from the latter graph. Our discussion is summarized in Algorithm 10.5.

Algorithm 10.5: Generation of random graph in G(n, N) in expected linear time.
Input: Positive integer n and integer N with 0 ≤ N ≤ \binom{n}{2}.
Output: A random graph from G(n, N).
1  if N ≤ \binom{n}{2}/2 then
2      return result of Algorithm 10.4
3  G ← Kn
4  for i ← 1, 2, . . . , \binom{n}{2} − N do
5      e ← draw uniformly at random from E(G)
6      remove edge e from G
7  return G
10.4 Small-world networks
Many real-world networks exhibit the small-world effect: most pairs of distinct vertices in the network are connected by relatively short paths. The small-world effect was empirically demonstrated [144] in a famous 1960s experiment by Stanley Milgram, who distributed a number of letters to a random selection of people. Recipients were instructed to deliver the letters to the addressees on the condition that letters must be passed to people whom the recipients knew on a first-name basis. Milgram found that on average six steps were required for a letter to reach its target recipient, a number now immortalized in the phrase "six degrees of separation" [91]. Figure 10.9 plots results of an experimental study of the small-world problem as reported in [180]. The small-world effect has been studied and verified for many real-world networks including
- social: collaboration network of actors in feature films [7, 192], scientific publication authorship [44, 90, 149, 150];
- information: citation network [161], Roget's Thesaurus [118], word co-occurrence [59, 73];
- technological: internet [48, 72], power grid [192], train routes [167], software [151, 183];
- biological: metabolic network [108], protein interactions [107], food web [104, 138], neural network [192, 193].
Figure 10.9: Frequency distribution of the number of intermediaries required for letters
to reach their intended addressees. The distribution has a mean of 5.3, interpreted as the
average number of intermediaries required for a letter to reach its intended destination.
The plot is derived from data reported in Travers and Milgram [180].
Watts and Strogatz [189, 190, 192] proposed a network model that produces graphs exhibiting the small-world effect. Let n and k be positive integers such that n ≫ k ≫ ln n ≫ 1 (in particular, 0 < k < n/2), with k being even, and consider a probability 0 < p < 1. Starting from an undirected k-circulant graph G = (V, E) on n vertices, the Watts-Strogatz model proceeds to rewire each edge with probability p. The rewiring procedure works as follows. For each v ∈ V, let e ∈ E be an edge having v as an endpoint. Choose another u ∈ V different from v, uniformly at random. With probability p, delete the edge e and add the edge vu. The rewiring must produce a simple graph with the same order and size as G. As p → 1, the graph G goes from k-circulant to exhibiting properties of G(n, p). Small-world networks are intermediate between k-circulant and binomial random graphs (see Figure 10.10). The Watts-Strogatz model is said to provide a procedure for interpolating between the latter two types of graphs.
The last paragraph contains an algorithm for rewiring edges of a graph. While the algorithm is simple, in practice it potentially skips over a number of vertices to be considered for rewiring. If G = (V, E) is a k-circulant graph on n vertices and p is the rewiring probability, the candidate vertices to be rewired follow a geometric distribution with parameter p. This geometric trick, essentially the same speed-up technique used by the Batagelj-Brandes Algorithm 10.3, can be used to speed up the rewiring algorithm. To elaborate, suppose G has vertex set V = {0, 1, . . . , n − 1}. If r is chosen uniformly at random from the interval (0, 1), the index of the vertex to be rewired can be obtained
244
(a) p = 0, k-circulant
(c) p = 1, random
Figure 10.10: With increasing randomness, k-circulant graphs evolve to exhibit properties of random graphs in G(n, p). Small-world networks are intermediate between
k-circulant graphs and random graphs in G(n, p).
from
ln(1 r)
1+
.
ln(1 p)
The above geometric method is incorporated into Algorithm 10.6 to generate a Watts-Strogatz network in worst-case runtime O(nk + m), where n and k are as per the input of the algorithm and m is the size of the k-circulant graph on n vertices. Note that lines 7 to 12 are where we avoid self-loops and multiple edges.
Algorithm 10.6: Watts-Strogatz network model.
Input: Positive integer n denoting the number of vertices. Positive even integer k for the degree of each vertex, where n ≫ k ≫ ln n ≫ 1. In particular, k should satisfy 0 < k < n/2. Rewiring probability 0 < p ≤ 1.
Output: A Watts-Strogatz network on n vertices.

1   M ← nk    /* sum of all vertex degrees = twice number of edges */
2   r ← draw uniformly at random from interval (0, 1)
3   v ← 1 + ⌊ln(1 − r)/ln(1 − p)⌋
4   E ← contiguous edge list of k-circulant graph on n vertices
5   while v ≤ M do
6       u ← draw uniformly at random from [0, 1, ..., n − 1]
7       if v − 1 is even then
8           while E[v] = u or (u, E[v]) ∈ E do
9               u ← draw uniformly at random from [0, 1, ..., n − 1]
10      else
11          while E[v − 2] = u or (E[v − 2], u) ∈ E do
12              u ← draw uniformly at random from [0, 1, ..., n − 1]
13      E[v − 1] ← u
14      r ← draw uniformly at random from interval (0, 1)
15      v ← v + 1 + ⌊ln(1 − r)/ln(1 − p)⌋
16  G ← K̄_n, the graph on n vertices with no edges
17  add edges in E to G
18  return G
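The following pure-Python sketch implements the same model with adjacency sets instead of the contiguous edge list of Algorithm 10.6, so its bookkeeping differs from the pseudocode; it also flips a coin per edge rather than using the geometric skip, trading the speed-up for readability.

import random

def watts_strogatz(n, k, p):
    """Sketch of the Watts-Strogatz model: build the k-circulant graph
    on n vertices, then rewire one endpoint of each edge with
    probability p, avoiding self-loops and multiple edges."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):       # k/2 neighbors on each side
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):
        for j in range(1, k // 2 + 1):
            if random.random() < p:
                w = (v + j) % n              # rewire the far endpoint of vw
                u = random.randrange(n)
                while u == v or u in adj[v]:  # no self-loops, no multi-edges
                    u = random.randrange(n)
                adj[v].discard(w); adj[w].discard(v)
                adj[v].add(u); adj[u].add(v)
    return adj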
The characteristic path length ℓ(G) of a graph G is defined in terms of its distances d_{ij}, where d_{ij} = 0 if i = j and otherwise d_{ij} is the distance from vertex i to vertex j:

    \ell(G) = \frac{1}{n(n-1)/2} \cdot \frac{1}{2} \sum_{i \neq j} d_{ij} = \frac{1}{n(n-1)} \sum_{i \neq j} d_{ij}    (10.6)

which is averaged over all possible pairs of distinct vertices, i.e. the number of edges in the complete graph K_n.
It is inefficient to compute the characteristic path length via equation (10.6) because we would effectively sum n(n − 1) distance values. As G is undirected, note that

    \frac{1}{2} \sum_{i \neq j} d_{ij} = \sum_{i < j} d_{ij} = \sum_{i > j} d_{ij}.
The latter equation holds for the following reason. Let D = [dij ] be a matrix of distances
for G, where i is the row index, j is the column index, and dij is the distance from i to j.
The required sum of distances can be obtained by summing all entries above (or below)
the main diagonal of D. Therefore the characteristic path length can be expressed as
    \ell(G) = \frac{2}{n(n-1)} \sum_{i < j} d_{ij} = \frac{2}{n(n-1)} \sum_{i > j} d_{ij}.
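In code, the upper-triangle observation amounts to the following minimal sketch, which assumes D is an n × n distance matrix, e.g. the output of an all-pairs shortest paths computation.

def characteristic_path_length(D):
    """Compute the characteristic path length from a distance matrix D
    by summing only the entries above the main diagonal."""
    n = len(D)
    total = sum(D[i][j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))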
However, as p → 1, we have

    \ell \approx \frac{\ln n}{\ln k}.
Clustering coefficient
The clustering coefficient of a simple graph G = (V, E) quantifies the "cliquishness" of vertices in G. This quantity is thus said to be a local property of G. Watts and Strogatz [192] defined the clustering coefficient as follows. Suppose n = |V| > 0 and let n_i count the number of neighbors of vertex i ∈ V, a quantity that is equivalent to the degree of i, i.e. deg(i) = n_i. The complete graph K_{n_i} on the n_i neighbors of i has n_i(n_i − 1)/2 edges. The neighbor graph N_i of i is a subgraph of G, consisting of all vertices (≠ i) that are adjacent to i and preserving the adjacency relation among those vertices as found in the supergraph G. For example, given the graph in Figure 10.11(a), the neighbor graph of vertex 10 is shown in Figure 10.11(b). The local clustering coefficient C_i of i is the ratio

    C_i = \frac{|E(N_i)|}{n_i(n_i - 1)/2}

where |E(N_i)| counts the number of edges in N_i. In case i has degree deg(i) < 2, we set the local clustering coefficient of i to be zero. Then the clustering coefficient of G is defined by

    C(G) = \frac{1}{n} \sum_{i \in V} C_i = \frac{1}{n} \sum_{i \in V} \frac{|E(N_i)|}{n_i(n_i - 1)/2}.
Figure 10.11: (a) A graph; (b) the neighbor graph N_10 of vertex 10. (Drawings omitted.)
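The definition translates directly into code. The sketch below assumes the graph is given as a dictionary adj mapping each vertex to the set of its neighbors.

def clustering_coefficient(adj):
    """Compute the Watts-Strogatz clustering coefficient C(G)."""
    n = len(adj)
    total = 0.0
    for i, nbrs in adj.items():
        n_i = len(nbrs)
        if n_i < 2:
            continue                 # C_i = 0 by convention when deg(i) < 2
        # each edge of the neighbor graph N_i is counted twice below
        links = sum(1 for u in nbrs for v in adj[u] if v in nbrs) / 2
        total += links / (n_i * (n_i - 1) / 2)
    return total / n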
For a Watts-Strogatz network G' with rewiring probability p, the clustering coefficient can be approximated by

    C(G') \approx \frac{3(k' - 1)}{2(2k' - 1)} (1 - p)^3.
Degree distribution
For a Watts-Strogatz network without rewiring, each vertex has the same degree k. It easily follows that for each vertex v, we have the degree distribution

    \Pr[\deg(v) = i] =
    \begin{cases}
      1, & \text{if } i = k, \\
      0, & \text{otherwise.}
    \end{cases}
A rewiring probability p > 0 introduces disorder in the network and broadens the degree distribution, while the expected degree remains k. A k-circulant graph on n vertices has nk/2 edges. With rewiring probability p > 0, a total of pnk/2 edges would be rewired. Note, however, that only one endpoint of an edge is rewired, so after the rewiring process the degree of any vertex v satisfies deg(v) ≥ k/2. Therefore with k > 2, a Watts-Strogatz network has no isolated vertices.
For p > 0, Barrat and Weigt [17] showed that the degree of a vertex v can be written as deg(v) = k/2 + n_i with n_i ≥ 0, where n_i can be divided into two parts α and β as follows. First, α ≤ k/2 edges are left intact after the rewiring process; the probability of this occurring is 1 − p for each edge. Second, β = n_i − α edges have been rewired towards i, each with probability 1/n. The probability distribution of α is

    P_1(\alpha) = \binom{k/2}{\alpha} (1 - p)^{\alpha} p^{k/2 - \alpha}

and the probability distribution of β is

    P_2(\beta) \approx \frac{(pk/2)^{\beta}}{\beta!} \exp(-pk/2)

for large n. Combine the above two factors to obtain the degree distribution

    \Pr[\deg(v) = \gamma] = \sum_{i=0}^{\min\{\gamma - k/2,\, k/2\}} \binom{k/2}{i} (1-p)^i p^{k/2 - i} \cdot \frac{(pk/2)^{\gamma - k/2 - i}}{(\gamma - k/2 - i)!} \exp(-pk/2)

for γ ≥ k/2.
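For a numerical check, the distribution above can be evaluated directly; here is a small sketch using only the Python standard library.

import math

def ws_degree_pmf(gamma, k, p):
    """Evaluate the approximate Watts-Strogatz degree distribution
    Pr[deg(v) = gamma] for even k and rewiring probability p."""
    half = k // 2
    if gamma < half:
        return 0.0
    total = 0.0
    for i in range(min(gamma - half, half) + 1):
        # probability that exactly i incident edges survive rewiring
        intact = math.comb(half, i) * (1 - p) ** i * p ** (half - i)
        # Poisson factor for the remaining gamma - k/2 - i rewired edges
        rewired = (p * half) ** (gamma - half - i) / math.factorial(gamma - half - i)
        total += intact * rewired * math.exp(-p * half)
    return total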
10.5 Scale-free networks
The networks covered so far (the Gilbert G(n, p) model, the Erdős-Rényi G(n, N) model, and the Watts-Strogatz small-world model) are static. Once a network is generated from any of these models, the corresponding model does not specify any means for the network to evolve over time. Barabási and Albert [13] proposed a network model based on two ingredients:
1. Growth: at each time step, a new vertex is added to the network and connected to a pre-determined number of existing vertices.

2. Preferential attachment: the newly added vertex is connected to an existing vertex in proportion to the latter's existing degree.
Preferential attachment also goes by the colloquial name of the "rich-get-richer" effect due to the work of Herbert Simon [172]. In sociology, preferential attachment is known as the Matthew effect due to the following verse from the Book of Matthew, chapter 25, verse 29, in the Bible: "For to every one that hath shall be given; but from him that hath not, that also which he seemeth to have shall be taken away." Barabási and Albert observed that many real-world networks exhibit statistical properties of their proposed model. One particularly significant property is that of power-law scaling, hence the Barabási-Albert model is also called a model of scale-free networks. Note that it is only the degree distributions of scale-free networks that are scale-free. In their empirical study of the World Wide Web (WWW) and other real-world networks, Barabási and Albert noted that the probability that a web page increases in popularity is directly proportional to the page's current popularity. Thinking of a web page as a vertex and the degree of a page as the number of other pages that the current page links to, the degree distribution of the WWW follows a power-law function. Power-law scaling has been confirmed for many real-world networks:

- actor collaboration network [13]
- citation [56, 161, 166] and coauthorship networks [150]
- human sexual contacts network [110, 134]
- the Internet [48, 72, 184] and the WWW [6, 15, 36]
- metabolic networks [107, 108]
- telephone call graphs [3, 4]

Figure 10.12 illustrates the degree distributions of various real-world networks, plotted on log-log scales. Corresponding distributions for various simulated Barabási-Albert networks are illustrated in Figure 10.13.
But how do we generate a scale-free graph as per the description in Barabási and Albert [13]? The original description of the Barabási-Albert model as contained in [13] is rather ambiguous with respect to certain details. First, the whole process is supposed to begin with a small number of vertices. But as the degree of each of these vertices is zero, it is unclear how the network is to grow via preferential attachment from the initial pool of vertices. Second, Barabási and Albert neglected to clearly specify how to select the neighbors for the newly added vertex. The above ambiguities are resolved in Bollobás et al. [28], wherein is given a precise statement of a random graph process that realizes the Barabási-Albert model. Fix a sequence of vertices v_1, v_2, ... and consider the case where each newly added vertex is to be connected to m = 1 vertex already in the graph. Inductively define a random graph process (G_1^t)_{t ≥ 0} as follows, where G_1^t is a digraph on {v_i | 1 ≤ i ≤ t}. Start with the null graph G_1^0 or the graph G_1^1 with one vertex and one self-loop. Denote by deg_G(v) the total (in and out) degree of vertex v in
the graph G. For t > 1, construct G_1^t from G_1^{t−1} by adding the vertex v_t together with a single edge directed from v_t to v_s, where s is chosen at random with probability

    \Pr[s = i] =
    \begin{cases}
      \deg_{G_1^{t-1}}(v_i)/(2t - 1), & \text{if } 1 \leq i \leq t - 1, \\
      1/(2t - 1),                     & \text{if } i = t.
    \end{cases}

Figure 10.12: Degree distributions of various real-world networks on log-log scales. The horizontal axis represents degree and the vertical axis is the corresponding probability of a vertex having that degree. The US patent citation network [132] is a directed graph on 3,774,768 vertices and 16,518,948 edges. It covers all citations made by patents granted between 1975 and 1999. The Google web graph [133] is a digraph having 875,713 vertices and 5,105,039 edges. This dataset was released in 2002 by Google as part of the Google Programming Contest. The LiveJournal friendship network [9, 133] is a directed graph on 4,847,571 vertices and 68,993,773 edges. The actor collaboration network [13], based on the Internet Movie Database (IMDb) at http://www.imdb.com, is an undirected graph on 383,640 vertices and 16,557,920 edges. Two actors are connected to each other if they have starred in the same movie. In all of the above degree distributions, self-loops are not taken into account and, where a graph is directed, we only consider the in-degree distribution. (Plots omitted.)

Figure 10.13: Degree distributions of various simulated Barabási-Albert networks on log-log scales. (Plots omitted.)
The latter process generates a forest. For m > 1 the graph evolves as per the case m = 1; i.e. we add m edges from v_t one at a time. This process can result in self-loops and multiple edges. We write \mathcal{G}_m^n for the collection of all graphs on n vertices and minimal degree m in the Barabási-Albert model; a random graph from \mathcal{G}_m^n is denoted G_m^n \in \mathcal{G}_m^n.
Now consider the problem of translating the above procedure into pseudocode. Fix a positive integer n > 1 for the number of vertices in the scale-free graph to be generated via preferential attachment. Let m ≥ 1 be the number of vertices that each newly added vertex is to be connected to; this is equivalent to the minimum degree that any new vertex will end up possessing. At any time step, let M be the contiguous edge list of all edges created thus far in the above random graph process. It is clear that the frequency (or number of occurrences) of a vertex in M is equivalent to the vertex's degree. We can thus use M as a pool to sample in constant time from the degree-skewed distribution. Batagelj and Brandes [18] used the latter observation to construct an algorithm for generating scale-free networks via preferential attachment; pseudocode is presented in Algorithm 10.7. Note that the algorithm has linear runtime O(n + m), where n is the order and m the size of the graph generated by the algorithm.
Algorithm 10.7: Scale-free network via preferential attachment.
Input: Positive integer n > 1 and minimum degree d ≥ 1.
Output: Scale-free network on n vertices.

1  G ← K̄_n    /* vertex set is {0, 1, ..., n − 1} */
2  M ← list of length 2nd
3  for v ← 0, 1, ..., n − 1 do
4      for i ← 0, 1, ..., d − 1 do
5          M[2(vd + i)] ← v
6          r ← draw uniformly at random from {0, 1, ..., 2(vd + i)}
7          M[2(vd + i) + 1] ← M[r]
8  add edges (M[2i], M[2i + 1]) to G for i = 0, 1, ..., nd − 1
9  return G
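In Python, the edge-list pool of the Batagelj-Brandes technique can be realized as follows; this sketch returns a plain list of edges (possibly with self-loops and multi-edges, as the random graph process allows).

import random

def preferential_attachment(n, d):
    """Sketch of preferential attachment via an endpoint pool.  The
    pool M holds both endpoints of every edge created so far, so each
    vertex occurs in M once per unit of its degree; sampling a uniform
    index of M therefore samples a vertex proportionally to degree."""
    M = [0] * (2 * n * d)
    for v in range(n):
        for i in range(d):
            M[2 * (v * d + i)] = v
            r = random.randrange(2 * (v * d + i) + 1)  # from {0,...,2(vd+i)}
            M[2 * (v * d + i) + 1] = M[r]
    return [(M[2 * i], M[2 * i + 1]) for i in range(n * d)]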
From the defining probabilities of the process one can track the expected degree E[deg_{G_1^t}(v_s)] of each vertex over time; carrying out this analysis, Bollobás et al. [28] showed that the expected number of vertices of degree k is asymptotically

    \frac{2m(m+1)n}{(k+m)(k+m+1)(k+m+2)}

uniformly in k.
As regards the diameter, with n as per Algorithm 10.7, computer simulation by Barabási, Albert, and Jeong [6, 15] and heuristic arguments by Newman et al. [154] suggest that a graph generated by the Barabási-Albert model has diameter approximately ln n. As noted by Bollobás and Riordan [27], the approximation diam(G_m^n) ≈ ln n holds for the case m = 1, but for m ≥ 2 they showed that as n → ∞ we have diam(G_m^n) ≈ ln n / ln ln n.
10.6 Problems
"Where should I start? Start from the statement of the problem. What can I do? Visualize the problem as a whole as clearly and as vividly as you can."
G. Pólya, from page 33 of [158]
10.1. Algorithm 10.8 presents a procedure to construct a random graph that is simple and undirected; the procedure is adapted from pages 4-7 of Lau [129]. Analyze the time complexity of Algorithm 10.8. Compare and contrast your results with those for Algorithm 10.5.
10.2. Modify Algorithm 10.8 to generate the following random graphs.
(a) Simple weighted, undirected graph.
(b) Simple digraph.
(c) Simple weighted digraph.
10.3. Algorithm 10.1 can be considered as a template for generating random graphs in
G(n, p). The procedure does not specify how to generate all the 2-combinations of
a set of n > 1 objects. Here we discuss how to construct all such 2-combinations
and derive a quadratic time algorithm for generating random graphs in G(n, p).
(a) Consider a vertex set V = {0, 1, ..., n − 1} with at least two elements and let E be the set of all 2-combinations of V, where each 2-combination is written ij. Show that ij ∈ E if and only if i < j.
Algorithm 10.8: Random simple undirected graph, adapted from Lau [129].
Input: Positive integers n and m, for the order and size of the output graph.
Output: A simple undirected graph on n vertices.

if n = 1 then
    return K_1
max ← n(n − 1)/2
if m > max then
    return K_n
G ← null graph
A ← n × n adjacency matrix with entries a_{ij}
a_{ij} ← False for 0 ≤ i, j < n
i ← 0
while i < m do
    u ← draw uniformly at random from {0, 1, ..., n − 1}
    v ← draw uniformly at random from {0, 1, ..., n − 1}
    if u = v then
        continue with next iteration of loop
    if u > v then
        swap values of u and v
    if a_{uv} = False then
        add edge uv to G
        a_{uv} ← True
        i ← i + 1
return G

Algorithm 10.9: Quadratic-time generation of a random graph in G(n, p).
Input: Positive integer n > 1 and probability 0 < p < 1.
Output: A random graph in G(n, p).

G ← K̄_n    /* the graph on n vertices with no edges */
V ← {0, 1, ..., n − 1}
for i ← 0, 1, ..., n − 2 do
    for j ← i + 1, i + 2, ..., n − 1 do
        r ← draw uniformly at random from interval (0, 1)
        if r < p then
            add edge ij to G
return G
(b) From the previous exercise, we know that if 0 ≤ i < n − 1 then there are n − (i + 1) pairs jk where either i = j or i = k. Show that

    \sum_{i=0}^{n-2} (n - i - 1) = \frac{n^2 - n}{2}

and conclude that Algorithm 10.9 has worst-case runtime O((n^2 − n)/2).
10.4. Modify the Batagelj-Brandes Algorithm 10.3 to generate the following types of
graphs.
(a) Directed simple graphs.
(b) Directed acyclic graphs.
(c) Bipartite graphs.
10.5. Repeat the previous problem for Algorithm 10.5.
10.6. In 2006, Keith M. Briggs provided [34] an algorithm that generates a random graph in G(n, N), inspired by Knuth's Algorithm S (selection sampling technique) as found on page 142 of Knuth [119]. Pseudocode of Briggs' procedure is presented in Algorithm 10.10. Provide a runtime analysis of Algorithm 10.10 and compare your results with those presented in section 10.3. Under which conditions would Briggs' algorithm be more efficient than Algorithm 10.5?
10.7. Briggs' Algorithm 10.10 follows the general template of an algorithm that samples without replacement n items from a pool of N candidates. Here 0 < n ≤ N and the size N of the candidate pool is known in advance. However, there are situations where the value of N is not known beforehand, and we wish to sample without replacement n items from the candidate pool. What we know is that the candidate pool has enough members to allow us to select n items. Vitter's algorithm R [185], called reservoir sampling, is suitable for this situation and runs in O(n(1 + ln(N/n))) expected time. Describe and provide pseudocode of Vitter's algorithm, prove its correctness, and provide a runtime analysis.
10.8. Repeat Example 10.1 but using each of Algorithms 10.1 and 10.5.
10.9. Diego Garlaschelli introduced [83] in 2009 a weighted version of the G(n, p) model, called the weighted random graph model. Denote by G_W(n, p) the weighted random graph model. Provide a description and pseudocode of a procedure to generate a graph in G_W(n, p) and analyze the runtime complexity of the algorithm. Describe various statistical physics properties of G_W(n, p).
10.10. Latora and Marchiori [128] extended the Watts-Strogatz model to take into account weighted edges. A crucial idea in the Latora-Marchiori model is the concept of network efficiency. Describe the Latora-Marchiori model and provide pseudocode of an algorithm to construct Latora-Marchiori networks. Explain the concepts of local and global efficiency and how these relate to the clustering coefficient and characteristic path length. Compare and contrast the Watts-Strogatz and Latora-Marchiori models.
10.11. The following model for growing graphs is known as the CHKNS model [43], named for its original proponents. Start with the trivial graph G at time step t = 1. For each subsequent time step t > 1, add a new vertex to G. Furthermore choose two vertices uniformly at random and, with probability δ, join them by an undirected edge. The newly added edge does not necessarily have the newly added vertex as an endpoint. Denote by d_k(t) the expected number of vertices with degree k at time t. Assuming that no self-loops are allowed, show that

    d_0(t+1) = d_0(t) + 1 - 2\delta \frac{d_0(t)}{t}

and

    d_k(t+1) = d_k(t) + 2\delta \frac{d_{k-1}(t)}{t} - 2\delta \frac{d_k(t)}{t}.

As t → ∞, show that the probability that a vertex be chosen twice decreases as t^{-2}. If v is a vertex chosen uniformly at random, show that

    \Pr[\deg(v) = k] = \frac{(2\delta)^k}{(1 + 2\delta)^{k+1}}

and conclude that the CHKNS model has an exponential degree distribution. The size of a component counts the number of vertices in the component itself. Let N_k(t) be the expected number of components of size k at time t. Show that

    N_1(t+1) = N_1(t) + 1 - 2\delta \frac{N_1(t)}{t}

and

    N_k(t+1) = N_k(t) + \delta \sum_{i=1}^{k-1} \frac{i N_i(t)}{t} \cdot \frac{(k-i) N_{k-i}(t)}{t} - 2\delta \frac{k N_k(t)}{t}.
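For experimentation, the growth rule itself is easy to simulate; the sketch below only illustrates the process (it does not, of course, establish the recurrences above).

import random

def chkns(T, delta):
    """Simulate the CHKNS model for T time steps.  At each step t > 1
    a vertex is added; then, with probability delta, two distinct
    vertices chosen uniformly at random are joined by an edge."""
    edges = []
    for t in range(2, T + 1):        # vertices so far are 0, 1, ..., t - 1
        if random.random() < delta:
            u, v = random.sample(range(t), 2)   # distinct, so no self-loops
            edges.append((u, v))
    return edges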
10.12. Algorithm 10.7 can easily be modified to generate other types of scale-free networks. Based upon the latter algorithm, Batagelj and Brandes [18] presented a procedure for generating bipartite scale-free networks; see Algorithm 10.11 for pseudocode. Analyze the runtime efficiency of Algorithm 10.11. Fix positive integer values for n and d, say n = 10,000 and d = 4. Use Algorithm 10.11 to generate a bipartite graph with your chosen values for n and d. Plot the degree distribution of the resulting graph using a log-log scale and confirm that the generated graph is scale-free.
10.13. Find the degree and distance distributions, average path lengths, and clustering
coefficients of the following network datasets:
(a) actor collaboration [13]
(b) coauthorship of condensed matter preprints [150]
(c) Google web graph [133]
(d) LiveJournal friendship [9, 133]
Algorithm 10.11: Bipartite scale-free network via preferential attachment.
Input: Positive integer n > 1 and minimum degree d ≥ 1.
Output: Bipartite scale-free network on 2n vertices.

G ← K̄_2n    /* vertex set is {0, 1, ..., 2n − 1} */
M1 ← list of length 2nd
M2 ← list of length 2nd
for v = 0, 1, ..., n − 1 do
    for i = 0, 1, ..., d − 1 do
        M1[2(vd + i)] ← v
        M2[2(vd + i)] ← n + v
        r ← draw uniformly at random from {0, 1, ..., 2(vd + i)}
        if r is even then
            M1[2(vd + i) + 1] ← M2[r]
        else
            M1[2(vd + i) + 1] ← M1[r]
        r ← draw uniformly at random from {0, 1, ..., 2(vd + i)}
        if r is even then
            M2[2(vd + i) + 1] ← M1[r]
        else
            M2[2(vd + i) + 1] ← M2[r]
add edges (M1[2i], M1[2i + 1]) and (M2[2i], M2[2i + 1]) to G for i = 0, 1, ..., nd − 1
return G
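A direct Python transcription of Algorithm 10.11 may be useful when attempting this problem. It is a sketch that returns the edges of the bipartite multigraph, the two sides of the bipartition being {0, ..., n−1} and {n, ..., 2n−1}.

import random

def bipartite_preferential_attachment(n, d):
    """Sketch of Algorithm 10.11.  Even slots of M1 hold left vertices
    and even slots of M2 hold right vertices, so picking an endpoint
    at an even index crosses to the other side of the bipartition."""
    M1 = [0] * (2 * n * d)
    M2 = [0] * (2 * n * d)
    for v in range(n):
        for i in range(d):
            idx = 2 * (v * d + i)
            M1[idx] = v
            M2[idx] = n + v
            r = random.randrange(idx + 1)            # from {0, ..., idx}
            M1[idx + 1] = M2[r] if r % 2 == 0 else M1[r]
            r = random.randrange(idx + 1)
            M2[idx + 1] = M1[r] if r % 2 == 0 else M2[r]
    return ([(M1[2 * i], M1[2 * i + 1]) for i in range(n * d)] +
            [(M2[2 * i], M2[2 * i + 1]) for i in range(n * d)])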
Chapter 11

Graph Problems and Their LP Formulations

This chapter is meant as an explanation of several graph-theoretical functions defined in Sage's Graph Library (http://www.sagemath.org/), which use Linear Programming to solve optimization or existence problems.
11.1 Maximum average degree
Even though such a formulation does not show it, this quantity can be computed in
polynomial time through Linear Programming. Indeed, we can think of this as a simple
flow problem defined on a bipartite graph. Let D be a directed graph whose vertex set
we first define as the disjoint union of E(G) and V(G). We add in D an edge between (e, v) ∈ E(G) × V(G) if and only if v is one of e's endpoints. Each edge will then have a flow of 2 (through the addition in D of a source and the necessary edges) to distribute among its two endpoints. We then write in our linear program the constraint that each vertex can absorb a flow of at most z (add to D the necessary sink and the edges with capacity z).

Clearly, if H ⊆ G is the densest subgraph in G, its |E(H)| edges will send a flow of 2|E(H)| to their |V(H)| vertices, such a flow being feasible only if z ≥ 2|E(H)|/|V(H)|. An elementary application of the max-flow/min-cut theorem, or of Hall's bipartite matching theorem, shows that such a value for z is also sufficient. This LP can thus let us compute the Maximum Average Degree of the graph.
Sage method: Graph.maximum_average_degree()

LP Formulation:

Minimize: z

Such that:
- each edge sends a flow of 2 to its endpoints: ∀uv ∈ E(G), x_{uv,u} + x_{uv,v} = 2
- each vertex can absorb a flow of at most z: ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} x_{e,v} ≤ z

Variables:
- x_{e,v} real positive variable (flow sent by the edge e to its endpoint v)
REMARK: In many if not all of the other LP formulations, this Linear Program is used as a constraint. In those problems, we are always at some point looking for a subgraph H of G such that H does not contain any cycle. The edges of G are in this case variables, whose value can be equal to 0 or 1 depending on whether they belong to such a graph H. Based on the observation that the Maximum Average Degree of a tree on n vertices is exactly its average degree (= 2 − 2/n < 2), and that any cycle in a graph ensures its average degree is at least 2, we can then set the constraint that

    z ≤ 2 − 2/|V(G)|.

This is a handy way to write in LP the constraint that the set of edges belonging to H is acyclic. For this to work, though, we need to ensure that the variables corresponding to our edges are binary variables.
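As a sketch of how this formulation looks in practice, here is the Maximum Average Degree LP written with the same MixedIntegerLinearProgram interface used by the H-minor listing at the end of this chapter; the variable names and the edge-sorting helper S are conveniences of this sketch, not part of Sage's own implementation, and the edges_incident call is assumed to behave as in the listings below.

sage: g = graphs.PetersenGraph()
sage: p = MixedIntegerLinearProgram(maximization = False)
sage: S = lambda (x, y): (x, y) if x < y else (y, x)   # sorts an edge
sage: x = p.new_variable(dim = 2)                      # x[e][v]: flow from e to v
sage: z = p.new_variable()
sage: for u, v in g.edges(labels = None):
...       # each edge distributes a flow of exactly 2 among its endpoints
...       p.add_constraint(x[S((u, v))][u] + x[S((u, v))][v] == 2)
sage: for v in g:
...       # each vertex absorbs a flow of at most z
...       p.add_constraint(sum([x[S((u1, u2))][v]
...           for u1, u2 in g.edges_incident(v, labels = None)]) - z[0], max = 0)
sage: p.set_objective(z[0])
sage: mad = p.solve()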
11.2 Traveling Salesman Problem
LP Formulation:

Minimize: Σ_{e ∈ E(G)} w(e) b_e

Such that:
- each vertex is of degree 2 in the tour: ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} b_e = 2
- no cycle avoiding a fixed vertex v*: ∀v ∈ V(G − v*), Σ_{e ∈ E(G), e ∼ v} x_{e,v} ≤ 2 − 2/|V(G)|

Variables:
- b_e binary (is the edge e in the tour?)
- x_{e,v} real positive variable
f = p.get_values(f)
tsp = Graph()
for e in g.edges(labels = False):
    if f[R(e[0], e[1])] == 1:
        tsp.add_edge(e)
11.3 Edge-disjoint spanning trees
This problem is polynomial by a result from Edmonds. Obviously, nothing ensures that the following formulation is a polynomial algorithm, as it contains many integer variables, but it is still a short practical way to solve it.

This problem amounts to finding, given a graph G and an integer k, edge-disjoint spanning trees T_1, ..., T_k which are subgraphs of G. In this case, we will choose to define a spanning tree as an acyclic set of |V(G)| − 1 edges.
Sage method: Graph.edge_disjoint_spanning_trees()

LP Formulation:

Maximize: nothing

Such that:
- an edge can belong to at most one set: ∀e ∈ E(G), Σ_{i ∈ [1,...,k]} b_{e,i} ≤ 1
- each set contains |V(G)| − 1 edges: ∀i ∈ [1,...,k], Σ_{e ∈ E(G)} b_{e,i} = |V(G)| − 1
- no cycles; in each set, each edge sends a flow of 2 if it is taken: ∀i ∈ [1,...,k], ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} x_{e,i,v} ≤ 2 − 2/|V(G)|

Variables:
- b_{e,i} binary (is edge e in set i?)
- x_{e,i,v} real positive variable
11.4 Steiner tree
See Trietsch [181] for a relationship between Steiner trees and Euler's problem of polygon division. Finding a spanning tree in a graph G can be done in linear time, whereas computing a Steiner tree is NP-hard. The goal is in this case, given a graph, a weight function w : E(G) → R and a set S of vertices, to find the tree of minimum cost connecting them all together. Equivalently, we will be looking for an acyclic subgraph H of G containing |V(H)| vertices and |E(H)| = |V(H)| − 1 edges, which contains each vertex from S.
LP Formulation:

Minimize: Σ_{e ∈ E(G)} w(e) b_e

Such that:
- each vertex from S is in the tree: ∀v ∈ S, Σ_{e ∈ E(G), e ∼ v} b_e ≥ 1
- no cycles; each edge sends a flow of 2 if it is taken: ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} x_{e,v} ≤ 2 − 2/|V(G)|

Variables:
- b_e binary (is edge e in the tree?)
- x_{e,v} real positive variable
11.5 Linear arboricity

The linear arboricity of a graph G is the least number k such that the edges of G can be partitioned into k classes, each of them being a forest of paths (the disjoint union of paths, i.e. trees of maximal degree 2). The corresponding LP is very similar to the one giving edge-disjoint spanning trees.
LP Formulation:

Maximize: nothing

Such that:
- an edge belongs to exactly one set: ∀e ∈ E(G), Σ_{i ∈ [1,...,k]} b_{e,i} = 1
- each vertex has at most two incident edges in each set: ∀i ∈ [1,...,k], ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} b_{e,i} ≤ 2
- no cycles; in each set, each edge sends a flow of 2 if it is taken: ∀i ∈ [1,...,k], ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} x_{e,i,v} ≤ 2 − 2/|V(G)|

Variables:
- b_{e,i} binary (is edge e in set i?)
- x_{e,i,v} real positive variable
gg = g.copy()
gg.delete_edges(g.edges())
answer = [gg.copy() for i in range(k)]
add = lambda (u, v), i: answer[i].add_edge((u, v))
11.6 H-minor
LP Formulation:

Maximize: nothing

Such that:
- an edge e can only belong to the tree of h if both its endpoints represent h: ∀e = g_1g_2 ∈ E(G), t_{e,h} ≤ rs_{h,g_1} and t_{e,h} ≤ rs_{h,g_2}
- in each representative set, the number of vertices is one more than the number of edges in the corresponding tree: ∀h ∈ V(H), Σ_{g ∈ V(G)} rs_{h,g} − Σ_{e ∈ E(G)} t_{e,h} = 1
- no cycles in the trees: ∀h ∈ V(H), ∀v ∈ V(G), Σ_{e ∈ E(G), e ∼ v} x_{e,h,v} ≤ 2 − 2/|V(G)|
- arc_{(g_1,g_2),(h_1,h_2)} can only be equal to 1 if g_1g_2 is leaving the representative set of h_1 to enter the one of h_2 (note that this constraint has to be written both for g_1, g_2 and then for g_2, g_1): ∀g_1, g_2 ∈ V(G), g_1 ≠ g_2, ∀h_1h_2 ∈ E(H), arc_{(g_1,g_2),(h_1,h_2)} ≤ rs_{h_1,g_1} and arc_{(g_1,g_2),(h_1,h_2)} ≤ rs_{h_2,g_2}
- we have the necessary edges between the representative sets: ∀h_1h_2 ∈ E(H), Σ_{g_1,g_2 ∈ V(G)} arc_{(g_1,g_2),(h_1,h_2)} ≥ 1

Variables:
- rs_{h,g} binary (does vertex g represent vertex h?)
- t_{e,h} binary (does edge e belong to the tree of h?)
- x_{e,h,v} real positive variable
- arc_{(g_1,g_2),(h_1,h_2)} binary (is edge g_1g_2 leaving the representative set of h_1 to enter the one of h_2?)
Here is the corresponding Sage code:

sage: g = graphs.PetersenGraph()
sage: H = graphs.CompleteGraph(4)
sage: p = MixedIntegerLinearProgram()
sage: # sorts an edge
sage: S = lambda (x, y): (x, y) if x < y else (y, x)
sage: # rs[h][v] == 1 if and only if vertex v represents h
sage: rs = p.new_variable(dim = 2)
sage: for v in g:
...       p.add_constraint(sum([rs[h][v] for h in H]), max = 1)
sage: # We ensure that the set of representatives of a
sage: # vertex h contains a tree, and thus is connected
sage: # edges represents the edges of the tree
sage: edges = p.new_variable(dim = 2)
sage: # there can be an edge for h between two vertices
sage: # only if those vertices represent h
sage: for u, v in g.edges(labels = None):
...       for h in H:
...           p.add_constraint(edges[h][S((u, v))] - rs[h][u], max = 0)
...           p.add_constraint(edges[h][S((u, v))] - rs[h][v], max = 0)
sage: # The number of edges of the tree in h is exactly the cardinality
sage: # of its representative set minus 1
sage: for h in H:
...       p.add_constraint(
...           sum([edges[h][S(e)] for e in g.edges(labels = None)])
...           - sum([rs[h][v] for v in g])
...           == -1)
sage: # a tree has no cycle
sage: epsilon = 1/(5*Integer(g.order()))
sage: r_edges = p.new_variable(dim = 2)
sage: for h in H:
...       for u, v in g.edges(labels = None):
...           p.add_constraint(
...               r_edges[h][(u, v)] + r_edges[h][(v, u)] >= edges[h][S((u, v))])
...       for v in g:
...           p.add_constraint(
...               sum([r_edges[h][(u, v)] for u in g.neighbors(v)]) <= 1 - epsilon)
sage: # Once the representative sets are described, we must ensure
sage: # there are arcs corresponding to those of H between them
sage: h_edges = p.new_variable(dim = 2)
sage: for h1, h2 in H.edges(labels = None):
...       for v1, v2 in g.edges(labels = None):
...           p.add_constraint(h_edges[(h1, h2)][S((v1, v2))] - rs[h2][v2], max = 0)
...           p.add_constraint(h_edges[(h1, h2)][S((v1, v2))] - rs[h1][v1], max = 0)
...           p.add_constraint(h_edges[(h2, h1)][S((v1, v2))] - rs[h1][v2], max = 0)
...           p.add_constraint(h_edges[(h2, h1)][S((v1, v2))] - rs[h2][v1], max = 0)
sage: p.set_binary(rs)
sage: p.set_binary(edges)
sage: p.set_objective(None)
sage: p.solve()
0.0
Appendix A

Asymptotic Growth

Name           Standard notation   Equivalent definition                 lim_{n→∞} f(n)/g(n)
theta          f(n) = Θ(g(n))      f(n) = O(g(n)) and f(n) = Ω(g(n))     a constant
big oh         f(n) = O(g(n))      f(n) ≤ c g(n) for some c > 0
omega          f(n) = Ω(g(n))      f(n) ≥ c g(n) for some c > 0
little oh      f(n) = o(g(n))      f(n)/g(n) → 0                         zero
little omega   f(n) = ω(g(n))      f(n)/g(n) → ∞                         ∞
tilde          f(n) ∼ c g(n)       f(n)/g(n) → c                         the constant c
Appendix B
GNU Free Documentation License
Version 1.3, 3 November 2008
Copyright 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
http://www.fsf.org
Everyone is permitted to copy and distribute verbatim copies of this license document,
but changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional
and useful document free in the sense of freedom: to assure everyone the effective
freedom to copy and redistribute it, with or without modifying it, either commercially
or noncommercially. Secondarily, this License preserves for the author and publisher a
way to get credit for their work, while not being considered responsible for modifications
made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals
providing the same freedoms that the software does. But this License is not limited to
software manuals; it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License principally for
works whose purpose is instruction or reference.
A Modified Version of the Document means any work containing the Document
or a portion of it, either copied verbatim, or with modifications and/or translated into
another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part
a textbook of mathematics, a Secondary Section may not explain any mathematics.) The
relationship could be a matter of historical connection with the subject or with related
matters, or of legal, commercial, philosophical, ethical or political position regarding
them.
The Invariant Sections are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is
released under this License. If a section does not fit the above definition of Secondary
then it is not allowed to be designated as Invariant. The Document may contain zero
Invariant Sections. If the Document does not identify any Invariant Sections then there
are none.
The Cover Texts are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under
this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may
be at most 25 words.
A Transparent copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable
for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing
editor, and that is suitable for input to text formatters or for automatic translation to
a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to
thwart or discourage subsequent modification by readers is not Transparent. An image
format is not Transparent if used for any substantial amount of text. A copy that is not
Transparent is called Opaque.
Examples of suitable formats for Transparent copies include plain ASCII without
markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly
available DTD, and standard-conforming simple HTML, PostScript or PDF designed
for human modification. Examples of transparent image formats include PNG, XCF
and JPG. Opaque formats include proprietary formats that can be read and edited only
by proprietary word processors, SGML or XML for which the DTD and/or processing
tools are not generally available, and the machine-generated HTML, PostScript or PDF
produced by some word processors for output purposes only.
The Title Page means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the
title page. For works in formats which do not have any title page as such, Title Page
means the text near the most prominent appearance of the works title, preceding the
beginning of the body of the text.
The publisher means any person or entity that distributes copies of the Document
to the public.
A section Entitled XYZ means a named subunit of the Document whose title
either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ
in another language. (Here XYZ stands for a specific section name mentioned below,
such as Acknowledgements, Dedications, Endorsements, or History.)
To Preserve the Title of such a section when you modify the Document means that
it remains a section Entitled XYZ according to this definition.
The Document may include Warranty Disclaimers next to the notice which states
that this License applies to the Document. These Warranty Disclaimers are considered
to be included by reference in this License, but only as regards disclaiming warranties:
any other implication that these Warranty Disclaimers may have is void and has no effect
on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially
or noncommercially, provided that this License, the copyright notices, and the license
notice saying this License applies to the Document are reproduced in all copies, and
that you add no other conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further copying of the copies
you make or distribute. However, you may accept compensation in exchange for copies.
If you distribute a large enough number of copies you must also follow the conditions in
section 3.
You may also lend copies, under the same conditions stated above, and you may
publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers)
of the Document, numbering more than 100, and the Document's license notice requires
Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all
these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the
back cover. Both covers must also clearly and legibly identify you as the publisher of
these copies. The front cover must present the full title with all words of the title equally
prominent and visible. You may add other material on the covers in addition. Copying
with changes limited to the covers, as long as they preserve the title of the Document
and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put
the first ones listed (as many as fit reasonably) on the actual cover, and continue the
rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100,
you must either include a machine-readable Transparent copy along with each Opaque
copy, or state in or with each Opaque copy a computer-network location from which
the general network-using public has access to download using public-standard network
protocols a complete Transparent copy of the Document, free of added material. If
you use the latter option, you must take reasonably prudent steps, when you begin
distribution of Opaque copies in quantity, to ensure that this Transparent copy will
remain thus accessible at the stated location until at least one year after the last time
you distribute an Opaque copy (directly or through your agents or retailers) of that
edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you
with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under
precisely this License, with the Modified Version filling the role of the Document, thus
licensing distribution and modification of the Modified Version to whoever possesses a
copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any,
be listed in the History section of the Document). You may use the same title as
a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five
of the principal authors of the Document (all of its principal authors, if it has fewer
than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled History, Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version as
given on the Title Page. If there is no section Entitled History in the Document,
create one stating the title, year, authors, and publisher of the Document as given
on its Title Page, then add an item describing the Modified Version as stated in
the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to
a Transparent copy of the Document, and likewise the network locations given in
the Document for previous versions it was based on. These may be placed in the
History section. You may omit a network location for a work that was published
at least four years before the Document itself, or if the original publisher of the
version it refers to gives permission.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License,
under the terms defined in section 4 above for modified versions, provided that you
include in the combination all of the Invariant Sections of all of the original documents,
unmodified, and list them all as Invariant Sections of your combined work in its license
notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant
Sections with the same name but different contents, make the title of each such section
unique by adding at the end of it, in parentheses, the name of the original author or
publisher of that section if known, or else a unique number. Make the same adjustment
to the section titles in the list of Invariant Sections in the license notice of the combined
work.
In the combination, you must combine any sections Entitled History in the various
original documents, forming one section Entitled History; likewise combine any sections Entitled Acknowledgements, and any sections Entitled Dedications. You must
delete all sections Entitled Endorsements.
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the
rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted
document, and follow this License in all other respects regarding verbatim copying of
that document.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations
of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the original versions of these
Invariant Sections. You may include a translation of this License, and all the license
notices in the Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions of those notices and
disclaimers. In case of a disagreement between the translation and the original version
of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled Acknowledgements, Dedications, or
History, the requirement (section 4) to Preserve its Title (section 1) will typically
require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense, or
distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder
explicitly and finally terminates your license, and (b) permanently, if the copyright holder
fails to notify you of the violation by some reasonable means prior to 60 days after the
cessation.
Moreover, your license from a particular copyright holder is reinstated permanently
if the copyright holder notifies you of the violation by some reasonable means, this is the
first time you have received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after your receipt of the
notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have been
terminated and not permanently reinstated, receipt of a copy of some or all of the same
material does not give you any rights to use it.
11. RELICENSING
Massive Multiauthor Collaboration Site (or MMC Site) means any World Wide
Web server that publishes copyrightable works and also provides prominent facilities for
anybody to edit those works. A public wiki that anybody can edit is an example of
such a server. A Massive Multiauthor Collaboration (or MMC) contained in the
site means any set of copyrightable works thus published on the MMC site.
CC-BY-SA means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal
place of business in San Francisco, California, as well as future copyleft versions of that
license published by that same organization.
Incorporate means to publish or republish a Document, in whole or in part, as part
of another Document.
An MMC is eligible for relicensing if it is licensed under this License, and if all
works that were first published under this License somewhere other than this MMC, and
subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or
invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under
CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is
eligible for relicensing.
Bibliography
[1] G. M. Adelson-Velskii and E. M. Landis. An algorithm for the organization of information. Soviet Mathematics Doklady, 3:1259-1263, 1962.
[2] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer
Algorithms. Addison-Wesley Publishing Company, 1974.
[3] W. Aiello, F. Chung, and L. Lu. A random graph model for massive graphs. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, pages 171180.
Association for Computing Machinery, 2000.
[4] W. Aiello, F. Chung, and L. Lu. Handbook of Massive Data Sets, volume 4 of Massive Computing, chapter Random evolution of massive graphs, pages 97122. Kluwer
Academic Publishers, 2002.
[5] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47-97, 2002.
[6] R. Albert, H. Jeong, and A.-L. Barabási. Diameter of the World-Wide Web. Nature, 401(6749):130-131, 1999.
[7] L. A. N. Amaral, A. Scala, M. Barthelemy, and H. E. Stanley. Classes of small-world
networks. Proceedings of the National Academy of Sciences USA, 97(21):1114911152,
2000.
[8] V. Arlazarov, E. Dinic, M. Kronrod, and I. Faradzev. On economical construction of
the transitive closure of a directed graph. Soviet Mathematics Doklady, 11(5):12091210,
1970.
[9] L. Backstrom, D. Huttenlocher, J. Kleinberg, and X. Lan. Group formation in large
social networks: Membership, growth, and evolution. In T. Eliassi-Rad, L. H. Ungar,
M. Craven, and D. Gunopulos, editors, Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 4454, Philadelphia, PA, USA, 2006. Association for Computing Machinery.
[10] M. Baker and X. Faber. Quantum Graphs and Their Applications, volume 415 of Contemporary Mathematics, chapter Metrized Graphs, Laplacian Operators, and Electrical
Networks, pages 1533. American Mathematical Society, 2006.
[11] W. W. R. Ball and H. S. M. Coxeter. Mathematical Recreations and Essays. Dover
Publications, 13th edition, 1987.
[12] A.-L. Barabási. Linked: The New Science of Networks. Basic Books, 2002.
[13] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.
[14] A.-L. Barabási, R. Albert, and H. Jeong. Mean-field theory for scale-free random networks. Physica A, 272(1-2):173-187, 1999.
[15] A.-L. Barabási, R. Albert, and H. Jeong. Scale-free characteristics of random networks: The topology of the world wide web. Physica A, 281(1-4):69-77, 2000.
[16] A. Barrat, M. Barthelemy, and A. Vespignani. Dynamical Processes on Complex Networks. Cambridge University Press, 2008.
[17] A. Barrat and M. Weigt. On the properties of small-world network models. The European
Physical Journal B, 13(3):547-560, 2000.
[18] V. Batagelj and U. Brandes. Efficient generation of large random networks. Physical
Review E, 71(3):036113, 2005.
[19] R. A. Beezer. A First Course in Linear Algebra. Robert A. Beezer, University of Puget
Sound, USA, 2009. http://linear.ups.edu.
[20] J. Bell and B. Stevens. A survey of known results and research areas for n-queens.
Discrete Mathematics, 309(1):131, 2009.
[21] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[22] E. Ben-Naim, H. Frauenfelder, and Z. Toroczkai, editors. Complex Networks. Springer,
2004.
[23] A. T. Benjamin and C. R. Yerger. Combinatorial interpretations of spanning tree identities. Bulletin of the Institute for Combinatorics and its Applications, 47(May):3742,
2006.
[24] N. L. Biggs. Codes: An Introduction to Information, Communication, and Cryptography.
Springer, 2009.
[25] B. Bollobás. Random Graphs. Cambridge University Press, 2nd edition, 2001.
[26] B. Bollobás, R. Kozma, and D. Miklós, editors. Handbook of Large-Scale Random Networks. János Bolyai Mathematical Society and Springer, 2008.
[27] B. Bollobás and O. Riordan. The diameter of a scale-free random graph. Combinatorica, 24(1):5-34, 2004.
[28] B. Bollobás, O. Riordan, J. Spencer, and G. E. Tusnády. The degree sequence of a scale-free random graph process. Random Structures & Algorithms, 18(3):279-290, 2001.
[29] S. P. Borgatti. Centrality and network flow. Social Networks, 27(1):5571, 2005.
[30] S. Bornholdt and H. G. Schuster, editors. Handbook of Graphs and Networks: From the
Genome to the Internet. Wiley-VCH, 2003.
[31] O. Borůvka. O jistém problému minimálním (About a certain minimal problem). Práce mor. přírodověd. spol. v Brně III, 3:37-58, 1926.
[32] O. Borůvka. Příspěvek k řešení otázky ekonomické stavby elektrovodních sítí (Contribution to the solution of a problem of economical construction of electrical networks). Elektronický Obzor, 15:153-154, 1926.
[33] J. M. Boyer and W. J. Myrvold. On the cutting edge: Simplified O(n) planarity by edge
addition. Journal of Graph Algorithms and Applications, 8(2):241273, 2004.
[34] K. M. Briggs. The verywnauty graph library (version 1.1), accessed 28th January 2011.
http://keithbriggs.info/very_nauty.html.
[35] M. Brinkmeier and T. Schank. Network Analysis: Methodological Foundations, volume
3418 of Lecture Notes in Computer Science, chapter Network Statistics, pages 293317.
Springer, 2005.
[36] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins,
and J. Wiener. Graph structure in the web. Computer Networks, 33(1-6):309320, 2000.
[37] M. R. Brown. The Analysis of a Practical and Nearly Optimal Priority Queue. PhD thesis, Computer Science Department, Stanford University, 1977. Technical Report STANCS-77-600.
[38] M. R. Brown. Implementation and analysis of binomial queue algorithms. SIAM Journal
on Computing, 7(3):298319, 1978.
[39] J. Buchmann, E. Dahmen, and M. Schneider. Merkle tree traversal revisited. In J. Buchmann and J. Ding, editors, Post-Quantum Cryptography, Second International Workshop, PQCrypto 2008, volume 5299 of Lecture Notes in Computer Science, pages 6378.
Springer, 2008.
[40] F. Buckley and F. Harary. Distance in Graphs. Perseus Books, 1990.
[41] F. Buckley and W. Y. Lau. Mutually eccentric vertices in graphs. Ars Combinatoria,
67(April), 2003.
[42] G. Caldarelli and A. Vespignani, editors. Large Scale Structure and Dynamics of Complex Networks: From Information Technology to Finance and Natural Science. World
Scientific, 2007.
[43] D. S. Callaway, J. E. Hopcroft, J. M. Kleinberg, M. E. J. Newman, and S. H. Strogatz.
Are randomly grown graphs really random? Physical Review E, 64(4):041902, 2001.
[44] R. D. Castro and J. W. Grossman. Famous trails to Paul Erdős. Mathematical Intelligencer, 21(3):51-53, 1999.
[45] J.-L. Chabert, editor. A History of Algorithms: From the Pebble to the Microchip.
Springer, 1999.
[46] B. Chazelle. A minimum spanning tree algorithm with inverse-Ackermann type complexity. Journal of the ACM, 47(6):10281047, 2000.
[47] B. Chazelle. The soft heap: An approximate priority queue with optimal error rate.
Journal of the ACM, 47(6):10121027, 2000.
[48] Q. Chen, H. Chang, R. Govindan, S. Jamin, S. Shenker, and W. Willinger. The origin
of power-laws in internet topologies revisited. In Proceedings of the 21st Annual Joint
Conference of the IEEE Computer and Communications Societies, pages 608617. IEEE
Computer Society, 2002.
[49] A. G. Chetwynd and A. J. W. Hilton. Star multigraphs with three vertices of maximum
degree. Math. Proc. Camb. Phil. Soc., 100:303317, 1986.
[92] I. Gutman, Y.-N. Yeh, S.-L. Lee, and Y.-L. Luo. Some recent results in the theory of
the Wiener number. Indian Journal of Chemistry, 32A(8):651661, 1993.
[93] S. L. Hakimi. On realizability of a set of integers as degrees of the vertices of a linear
graph I. SIAM Journal of Applied Mathematics, 10(3):496506, 1962.
[94] S. L. Hakimi. On realizability of a set of integers as degrees of the vertices of a linear
graph II: Uniqueness. SIAM Journal of Applied Mathematics, 11(1):135147, 1963.
[95] V. Havel. Poznámka o existenci konečných grafů (in Czech, a remark on the existence
[139] B. McKay. Description of graph6 and sparse6 encodings, accessed 05th April 2010.
http://cs.anu.edu.au/~bdm/data/formats.txt.
[140] B. D. McKay. Knights tours of an 8 8 chessboard. Technical Report TR-CS-97-03,
Department of Computer Science, Australian National University, Australia, February
1997.
[141] B. McMillan. Two inequalities implied by unique decipherability. IRE Transactions on
Information Theory, 2(4):115116, 1956.
[142] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.
[143] R. C. Merkle. A digital signature based on a conventional encryption function. In
C. Pomerance, editor, Advances in Cryptology CRYPTO 87, A Conference on the
Theory and Applications of Cryptographic Techniques, volume 293 of Lecture Notes in
Computer Science, pages 369378. Springer, 1987.
[144] S. Milgram. The small world problem. Psychology Today, 1(1):6067, 1967.
[145] B. Mohar, D. Babic, and N. Trinajstic. A novel definition of the Wiener index for trees.
Journal of Chemical Information and Computer Sciences, 33(1):153154, 1993.
[146] E. F. Moore. The shortest path through a maze. In Proceedings of the International
Symposium on the Theory of Switching, pages 285292, 1959.
[147] S. Myles, A. R. Boyko, C. L. Owens, P. J. Brown, F. Grassi, M. K. Aradhya, B. Prins,
A. Reynolds, J.-M. Chia, D. Ware, C. D. Bustamante, and E. S. Buckler. Genetic
structure and domestication history of the grape. Proceedings of the National Academy
of Sciences USA, 2010.
[148] M. Newman, A.-L. Barabási, and D. J. Watts. The Structure and Dynamics of Networks.
Princeton University Press, 2006.
[149] M. E. J. Newman. Scientific collaboration networks: I. Network construction and fundamental results. Physical Review E, 64(1):016131, 2001.
[150] M. E. J. Newman. The structure of scientific collaboration networks. Proceedings of the
National Academy of Sciences USA, 98(2):404409, 2001.
[151] M. E. J. Newman. Mixing patterns in networks. Physical Review E, 67(2):026126, 2003.
[152] M. E. J. Newman. The structure and function of complex networks. SIAM Review,
45(2):167256, 2003.
[153] M. E. J. Newman. Networks: An Introduction. Oxford University Press, 2010.
[154] M. E. J. Newman, S. H. Strogatz, and D. J. Watts. Random graphs with arbitrary degree
distribution and their applications. Physical Review E, 64(2):026118, 2001.
[155] E. Nuutila. Efficient Transitive Closure Computation in Large Digraphs, volume 74 of
Mathematics and Computing in Engineering Series. Finnish Academy of Technology,
1995. http://www.cs.hut.fi/~enu/thesis.html.
[156] J. Oxley. What is a matroid? Cubo Matemática Educacional, 5(3):179-218, 2003.
[157] J. Petersen. Sur le théorème de Tait. L'Intermédiaire des Mathématiciens, 5:225-227, 1898.
[158] G. Pólya. How To Solve It: A New Aspect of Mathematical Method. Princeton University
Press, 2nd edition, 1957.
[159] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36:13891401, 1957.
[160] R. Rasmussen. Algorithmic Approaches for Playing and Solving Shannon Games. PhD
thesis, Faculty of Information Technology, Queensland University of Technology, Australia, 2007. http://eprints.qut.edu.au/18616/.
[161] S. Redner. How popular is your paper? An empirical study of the citation distribution.
The European Physical Journal B, 4(2):131134, 1998.
[162] K. H. Rosen. Elementary Number Theory and Its Applications. Addison Wesley Longman, 4th edition, 2000.
[163] B. Roy. Transitivité et connexité. Comptes Rendus des Séances de l'Académie des Sciences, 249:216-218, 1959.
[164] V. Runde. A Taste of Topology. Springer, 2005.
[165] R. Sedgewick. Algorithms in C. Addison-Wesley Publishing Company, 1990.
[166] P. O. Seglen. The skewness of science. Journal of the American Society for Information
Science, 43(9):628638, 1992.
[167] P. Sen, S. Dasgupta, A. Chatterjee, P. A. Sreeram, G. Mukherjee, and S. S. Manna.
Small-world properties of the Indian railway network. Physical Review E, 67(3):036106,
2003.
[168] A. Shimbel. Structure in communications nets. In Proceedings of the Symposium on
Information Networks, pages 199203, 1955.
[169] S. Shirali and H. L. Vasudeva. Metric Spaces. Springer, 2006.
[170] V. Shoup. A Computational Introduction to Number Theory and Algebra. Cambridge
University Press, 2nd edition, 2008. http://www.shoup.net/ntb.
[171] G. Sierksma and H. Hoogeveen. Seven criteria for integer sequences being graphic.
Journal of Graph Theory, 15(2):223–231, 1991.
[172] H. A. Simon. On a class of skew distribution functions. Biometrika, 42(3–4):425–440,
1955.
[173] D. R. Stinson. Cryptography: Theory and Practice. Chapman & Hall/CRC, 2nd edition,
2002.
[174] M. Szydlo. Merkle tree traversal in log space and time. In C. Cachin and J. Camenisch,
editors, Advances in Cryptology - EUROCRYPT 2004, International Conference on the
Theory and Applications of Cryptographic Techniques, volume 3027 of Lecture Notes in
Computer Science, pages 541–554. Springer, 2004.
[175] T. Takaoka. O(1) time algorithms for combinatorial generation by tree traversal. The
Computer Journal, 42(5):400–408, 1999.
[176] T. Takaoka. Theory of 2-3 heaps. In T. Asano, H. Imai, D. T. Lee, S.-I. Nakano,
and T. Tokuyama, editors, COCOON '99: Proceedings of the 5th Annual International
Conference on Computing and Combinatorics, volume 1627 of Lecture Notes in Computer
Science. Springer, 1999.
[177] R. E. Tarjan. Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2):146–160, 1972.
[178] G. Tarry. Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14(3):187–190,
1895.
[179] W. Trappe and L. C. Washington. Introduction to Cryptography with Coding Theory.
Pearson Education, 2nd edition, 2006.
[180] J. Travers and S. Milgram. An experimental study of the small world problem. Sociometry, 32(4):425–443, 1969.
[181] D. Trietsch. Euler's problem of polygon division and full Steiner topologies – a duality.
Technical Report 625, Center for Mathematical Studies in Economics and Management
Science, Northwestern University, USA, October 1984. http://econpapers.repec.org/
paper/nwucmsems/625.htm.
[182] A. Tripathi and S. Vijay. A note on a theorem of Erdős & Gallai. Discrete Mathematics,
265(1–3):417–420, 2003.
[183] S. Valverde, R. F. Cancho, and R. V. Solé. Scale-free networks from optimal design.
Europhysics Letters, 60(4):512–517, 2002.
[184] A. Vázquez, R. Pastor-Satorras, and A. Vespignani. Large-scale topological and dynamical properties of the Internet. Physical Review E, 65(6):066130, 2002.
[185] J. S. Vitter. Random sampling with a reservoir. ACM Transactions on Mathematical
Software, 11(1):37–57, 1985.
[186] J. Vuillemin. A data structure for manipulating priority queues. Communications of the
ACM, 21(4):309–315, 1978.
[187] H. Walther. Ten Applications of Graph Theory. Kluwer Academic Publishers, 1984.
[188] S. Warshall. A theorem on Boolean matrices. Journal of the ACM, 9(1):11–12, 1962.
[189] D. J. Watts. Networks, dynamics, and the small-world phenomenon. The American
Journal of Sociology, 105(2):493–527, 1999.
[190] D. J. Watts. Small Worlds. Princeton University Press, 1999.
[191] D. J. Watts. Six Degrees: The Science of a Connected Age. W. W. Norton & Company,
2004.
[192] D. J. Watts and S. H. Strogatz. Collective dynamics of 'small-world' networks. Nature,
393(6684):440–442, 1998.
[193] J. G. White, E. Southgate, J. N. Thompson, and S. Brenner. The structure of the nervous
system of the nematode Caenorhabditis elegans. Philosophical Transactions of the Royal
Society B: Biological Sciences, 314(1165):1–340, 1986.
[194] H. Whitney. Congruent graphs and the connectivity of graphs. American Journal of
Mathematics, 54(1):150–168, 1932.
[195] H. Wiener. Structural determination of paraffin boiling points. Journal of the American
Chemical Society, 69(1):17–20, 1947.
[196] J. W. J. Williams. Algorithm 232: Heapsort. Communications of the ACM, 7(6):347–348,
1964.
[197] T. Yamada, S. Kataoka, and K. Watanabe. Listing all the minimum spanning trees in an
undirected graph. International Journal of Computer Mathematics, 87(14):3175–3185,
2010.
[198] T. Yamada and H. Kinoshita. Finding all the negative cycles in a directed graph. Discrete
Applied Mathematics, 118(3):279–291, 2002.
[199] V. Yegnanarayanan. Graph theory to pure mathematics: Some illustrative examples.
Resonance, 10(1):50–59, 2005.
[200] Y.-N. Yeh and I. Gutman. On the sum of all distances in composite graphs. Discrete
Mathematics, 135(1–3):359–365, 1994.
[201] W. W. Zachary. An information flow model for conflict and fission in small groups.
Journal of Anthropological Research, 33(4):452–473, 1977.
Index
Cn, 16
En, 48
Gc, 33
Kn, 15
Km,n, 16
Ln, 34
Pn, 16, 34
Qn, 34
Wn, 28
, 28
∆(G), 9
adj, 4
L, 21, 41
Ni, 246
≅, 22
deg, 5, 9
deg+, 6
deg−, 6
δ(G), 9
depth(v), 105
diam(G), 199
dir, 91
, 197
height(T ), 105
iadj, 5
id, 5
κ(G), 202
κe(G), 203
κv(G), 201
λ(G), 203
lg, 157
oadj, 5
od, 5
, 13
G, 33
per(G), 199
rad(G), 199
, 34
td, 90
, 134, 137
ϕ(n), 114
f -augmenting, 224
f -saturated, 224
f -unsaturated, 224
f -zero, 224
k-connected, 202
k-edge-connected, 203
n-queens problem, 100
n-space, 131
graph6, 53, 55, 57, 58
sparse6, 55, 57
Łukaszewicz, J., 122
acyclic, 104, 116, 117, 145
Adelson-Velskii, G. M., 179
adjacency matrix, 18
reduced, 19
algorithm
greedy, 75, 76, 116, 119
optimization, 110
random, 50, 103, 145–147, 234, 235, 239, 241, 242, 244, 251, 253, 254
recursive, 150
alphabet, 39, 40, 129, 133
binary, 134, 135
English, 129, 147
weighted, 129, 136
Altito, Noelie, 84
arcs, 3
Argentina, 93, 94
ASCII, 55, 57, 129
augmenting path, 224
Australia, 93, 94
Australian National University, 55
automata theory, 38, 84
AVL tree, 179, 195
height-balance property, 179
backtrack, 63
algorithm, 101
Baker, Matthew, 197
balanced bracket problem, 97, 98
Bangkok, 93, 94
Barabási-Albert model, 248
Batagelj, Vladimir, 237, 256
Batagelj-Brandes algorithm, 237, 239
Baudot, E., 130
Beijing, 93, 94
Bell, Jordan, 101
Bellman, Richard E., 76
Bellman-Ford algorithm, 72, 76–79, 84, 85, 92
Benjamin, Arthur T., 151
Berlin, 93, 94
Bernoulli family, 106
BFS, 59–63, 65, 69
big-endian, 57, 58
Biggs, Norman, 136
binary heap, 152, 154, 179
heap-structure property, 179
maximum, 191
minimum, 191
order property, 155, 193
sift-down, 160
sift-up, 158
structure property, 155
binary search, 88, 90, 174
binary search tree, 152, 172, 174, 179, 181,
182
left subtree property, 172
property, 172, 182, 193
recursion property, 172
right subtree property, 172
binary tree, 107, 109, 126–128, 147, 152
complete, 126, 179
nearly complete, 155, 179
random, 128, 147
Binet
formula, 195
Jacques Philippe Marie, 195
binomial
coefficient, 165, 191
distribution, 234
random graph, 236
tree, 165
binomial heap, 152, 164, 165, 167, 193
maximum, 193
minimum, 193
order property, 167, 168, 193
properties, 167
root-degree property, 167, 168
biology, 196
bipartite graph, 16, 17, 50, 256, 257
complete, 16, 17
bit, 57, 129, 134
least significant, 57
most significant, 57
parity, 57
bit vector, 55, 57, 58
length, 57
bond, 31, 110
Borůvka
algorithm, 116, 122, 124, 144, 145, 150
Otakar, 116, 122
bowtie graph, 12
braille, 129
branch cut, 107, 109
Brandes, Ulrik, 237, 256
Brasilia, 93, 94
Brazil, 93, 94
breadth-first search, 59–63, 68, 69, 71, 72,
88, 91, 97, 109, 139, 140
tree, 59, 62
bridge, 31, 105, 111, 122, 203
bridgeless, 203
Briggs
algorithm, 255
Keith M., 255
BST, 172
bubble sort, 95, 96
Buenos Aires, 93, 94
butterfly graph, 12
Caenorhabditis elegans, 231
Canada, 93, 94
canonical label, 23, 24
Cantor-Schröder-Bernstein theorem, 50
capacity, 223
cut, 225
card, 65
cardinality, 9
Carroll, Lewis, 2
Cartesian product, 34, 35
Catalan
number, 48, 127
recursion, 127
Chazelle, Bernard, 193
check matrix, 19
chemistry, 81, 196
chess, 63, 100, 129
chessboard, 63
knight, 63
knight piece, 63
knight's tour, 63–65, 100
queen, 100, 101
child
left, 126, 139, 144
right, 126, 139, 144
China, 93, 94
Chinese ring puzzle, 130, 131
CHKNS model, 255
Choquet, G., 122
Chu Shi-Chieh, 191
Chvátal graph, 145, 146
circuit, 12
board, 59
electronic, 115
classification tree, 105, 106, 110
claw graph, 202
closed form, 127
code, 129, 134
r-ary, 150
binary, 129, 134, 135
block, 129
economy, 129
error-correcting, 19, 129
linear, 131
optimal, 136
prefix, 129
prefix-free, 129, 135, 147
radix, 150
reliability, 129
security, 129
tree representation, 134
uniquely decodable, 135
variable-length, 129
codeword, 129, 134
length, 136
coding function, 129
Cohen, Danny, 57
Collatz
conjecture, 148
graph, 148, 149
length, 148
sequence, 148
tree, 148, 149
color code, 129
coloring
edge, 37
vertex, 37, 38
combinatorial generation, 191
combinatorial graphs, 2
combinatorics, 131
communications network, 205
complement, 33
complete graph, 15, 146, 147, 234, 239, 241,
244, 245, 253, 254
component, 13, 28, 111
connected, 116
computer science, 38, 139
condensed matter, 232
connected graph, 13, 110
connectivity, 97
cost, 70
Coward, Noël, 10
cryptosystem, 129
cut
set, 31, 112
cut-edge, 202, 203
cut-point, 201
cut-vertex, 201, 202
cycle, 12, 71, 72, 86, 104, 105, 111, 113
fundamental, 113, 147
negative, 71, 72, 77, 79, 84–86, 103
cycle double cover conjecture, 203
cycle graph, 16, 50
D'Angelo, Anthony J., 144
Dörrie, Heinrich, 48
data structure, 52, 152
de Moivre, Abraham, 99, 195
de Montmort, Pierre Rémond, 99
decode, 129
degree, 5, 9
matrix, 21
maximum, 9, 109
minimum, 9, 114
sequence, 24, 114
weighted, 7
degree distribution, 230–232, 239, 249, 250
depth-first search, 59, 63, 65–69, 72, 88, 91,
97, 109, 140
tree, 65, 68
DFA, 39, 40
DFS, 63, 65–69
diameter, 62, 63
Digital Signature Algorithm, 150
digraph, 5, 105
weighted, 70
Dijkstra
algorithm, 14, 72–77, 84, 85, 92, 152
E. W., 72, 119
Dirac's theorem, 209
directedness, 91
disconnected graph, 13
disconnecting set, 31
distance, 52, 62, 69–71, 73, 75, 77, 79, 105
characteristic, 232
function, 70, 71, 196, 197, 211
matrix, 21, 71
minimum, 73
total, 90
distance distribution, 233
distribution
binomial, 239
geometric, 238, 240
Poisson, 240
uniform, 239
divide and conquer, 103
Dryden, John, 58
dynamic programming, 79
eccentricity, 197, 198
mutual, 211
path, 211
vertex, 211
edge, 3
capacity, 223
contraction, 32
cut, 31, 110, 207
deletion, 31
deletion subgraph, 31
directed, 4
endpoint, 116
head, 6
incident, 3
multigraph, 6
multiple, 3
tagging game, 110
tail, 6
weight, 5, 6
edge-cut, 202
Edmonds, Jack, 110
eigenvalue, 150
element
random, 127
Elkies, Noam D., 65
encode, 129
endianness, 57
England, 93, 94, 101
entropy
encoding, 129
function, 129
Erdős, Paul, 24, 25
error rate, 129
Euclidean algorithm, 87
Euler
Leonhard, 1, 9, 47, 48, 114
phi function, 114, 148
phi sequence, 114, 115
polygon division problem, 47, 48
subgraph, 12
Eulerian trail, 1
Faber, Xander, 197
family tree, 14, 105, 106
fault-tolerant, 205
Fermat's little theorem, 50
Fibonacci
number, 179
sequence, 195
tree, 179, 181–183, 194
FIFO, 60, 65
filesystem, 105
hierarchy, 105
finite automaton, 38, 39, 50
deterministic, 39, 40
nondeterministic, 40, 41
first in, first out, 60
flag semaphore, 129
Florek, K., 122
Florentine families, 37
flow, 223
value, 224
flow chart, 52
Floyd, Robert, 79
Floyd-Roy-Warshall algorithm, 77, 79–81, 83, 84, 92, 200, 201
football, 129
forbidden minor, 36
Ford, Lester Randolph, Jr., 76
forest, 104, 105
Foulds, L. R., 37
Franklin graph, 22
Frederickson, Greg N., 193
FreeBSD, 52
frequency distribution, 243
friendship graph, 210
FRW, 77, 79
function plot, 2
Gallai, Tibor, 24, 25
Garlaschelli, Diego, 255
genetic code, 129
Germany, 93, 94
Gilbert, E. N., 240
girth, 12
Goldbach, Christian, 47
Goldberg, R., 77
golden ratio, 151, 195
Graham, Ronald L., 116, 201
graph, 3
applications, 36
connected, 13, 69, 70
dense, 54, 79
directed, 5
disconnected, 13
intersection, 28
join, 28
nonisomorphic, 48
simple, 7
sparse, 17, 54, 79, 84, 85
traversal, 58, 59
trivial, 114, 122
undirected, 3
union, 27, 28
unweighted, 3
weighted, 5, 69, 70, 116, 119
graph isomorphism, 22, 24
graph minor, 36
graphical sequence, 24, 26
Gray code, 130, 131
m-ary, 130
binary, 130, 131
reflected, 131–133
Gray, Frank, 130
Gribkovskaia, Irina, 1
grid, 35
graph, 102, 103, 105, 107, 118, 119, 145,
147
Gros, L., 130
group theory
computational, 131
Gulliver's Travels, 57
Hakimi, S. L., 25
Halskau Sr., Øyvind, 1
Hamming distance, 34
Hampton Court Palace, 101
handshaking lemma, 9
Havel, Václav, 25
Havel-Hakimi
test, 26
theorem, 25
heap
2-heap, 119
k-ary, 76
binary, 76
binary minimum, 137
Fibonacci, 76, 84, 119
heapsort, 154
Heinrich, Katherine, 50
Hell, Pavol, 116
hierarchical structure, 14, 104, 105
Hoare, C. A. R., 97
Hopcroft, John E., 63
Hopkins, Brian, 1
Horák, Peter, 50
Horner
method, 87
W. G., 87
house graph, 3
Huffman
David, 136
tree, 152
Huffman code, 135–138, 140, 147
binary, 136
encoding, 139
tree construction, 136
tree representation, 137, 138, 140
Humpty Dumpty, 2
hypercube graph, 34, 35, 131
in-neighbor, 5
incidence
function, 6
matrix, 20
incidence matrix
oriented, 21
unoriented, 20
indegree, 5
unweighted, 6, 7
India, 93, 94
induction, 111, 114, 126, 135, 136, 145
structural, 145
infix notation, 98
information channel, 129
insertion sort, 96
Internet, 248
topology, 249
interpolation search, 90
invariant, 23, 26
isomorphism, 114
Japan, 93, 94
Jarník, V., 119
Johnson
algorithm, 72, 84, 85, 92
Donald B., 84
join, 113
Jordan, Camille, 200
Königsberg, 1
graph, 2, 5
seven bridges puzzle, 1, 9
Kaliningrad, 1
Kaplan, Haim, 193
Kataoka, Seiji, 144
Kinoshita, Harunobu, 103
Kleene
algorithm, 84
Stephen, 84
Klein, Felix, 2
Kneser graph, 53–55
Knuth
Algorithm S, 255
Donald E., 49, 90, 95, 96, 130, 255
Kraft
inequality, 150, 151
Leon Gordon, 151
theorem, 151
Kruskal
algorithm, 116–119, 144, 145, 150
Joseph B., 116
ladder graph, 34
Lagarias, Jeffrey C., 148
Landis, E. M., 179
language, 41
regular, 41
Laplacian matrix, 21, 150
Laporte, Gilbert, 1
last in, first out, 65
Latora, V., 255
Latora-Marchiori model, 255
lattice, 35
Lee, C. Y., 59
Lehman, A., 110
Lehmer, D. H., 49
level
binary tree, 155
tree, 194
LIFO, 65
Lima, 93, 94
linear search, 88
Linux, 105
list, 53, 59, 60, 62, 65, 76, 137
adjacency, 53, 54, 62
contiguous edge, 55, 244
edge, 55
element, 53
empty, 53
length, 53
little-endian, 57
Loberman, H., 116
Loebbing, Martin, 65
London, 93, 94
Lucas
M. Édouard, 63, 151
number, 151
Madrid, 93, 94
Marchiori, M., 255
marriage ties, 37
matrix, 17
adjacency, 18, 53, 54, 58
bi-adjacency, 19
distance, 201
main diagonal, 58
transpose, 47
upper triangle, 58
Matthew effect, 248
max-flow min-cut theorem, 225
generalized, 226
maximum flow problem, 224
maze, 59, 63, 101
McKay, Brendan D., 55, 65
McMillan
Brockway, 150
theorem, 150
Menezes, Alfred J., 95
Menger's theorem, 206–208
merge sort, 169
Merkle, Ralph C., 150
Merris-McKay theorem, 150
mesh, 35, 36
message, 133
metabolic network, 248
metric, 71, 197
function, 70
metric graph, 197
metric space, 71
finite, 71
Milgram, Stanley, 242
minimum cut problem, 225
minimum spanning tree problem, 116
molecular graph, 36, 37, 81
Montmort-Moivre strategy, 100
Moore, Edward F., 59, 76
Morse code, 130, 135, 147
Moscow, 93, 94
MST, 116
multi-undirected graph, 5
multidigraph, 5, 39
multigraph, 5
adjacency, 7
in-neighbor, 7
out-neighbor, 7
Munroe, Randall, 52, 63, 72, 76, 104, 152,
215, 216
musical score, 129
neighbor graph, 246
network, 38, 223
biological, 231, 242
citation, 249
collaboration, 249
communication, 110
flow, 38
information, 242
social, 232, 242, 249
technological, 231, 242
Zachary karate club, 230
New Delhi, 93, 94
NFA, 40, 41
node, 3
noisy channel, 129
null graph, 4, 248
Nuutila, Esko, 84
operations research, 38
order, 3
organism, 106, 110
orientation, 6, 21
probability, 91
oriented graph, 235
random, 235, 238
Ottawa, 93, 94
out-neighbor, 5, 60, 68, 72, 73
outdegree, 5
unweighted, 6, 7
overfull graph, 47
Oxley, James, 110
parallel forest-merging, 122
parallelization, 122
partition, 50
Pascal
formula, 167, 191
path, 11, 12, 104, 105
closed, 12
distance, 70
even, 12
geodesic, 13
graph, 34
Hamiltonian, 131
internally disjoint, 205
length, 70, 71, 105
odd, 12
shortest, 52, 70–73, 75, 77, 79, 84
tree, 112, 113
weighted, 84
path graph, 16
pendant, 9, 113
perfect square, 106
Perkal, J., 122
permutation
equivalent, 23
random, 147
Peru, 93, 94
Petersen
graph, 37, 38, 68, 69, 202, 203
Julius, 68
planar graph, 37, 48
plane, 102
Pollak, O., 201
postfix notation, 98
power grid, 231
preferential attachment, 247, 248, 251
prefix-free condition, 129
Pregel River, 1
Pretoria, 93, 94
Prim
algorithm, 116, 119–121, 123, 144, 145, 150, 152
R. C., 116, 119
priority queue, 152, 153
probability, 136
expectation, 136
sample space, 127
space, 234
pseudorandom number, 49, 238
Python, 17
queue, 59, 62, 65, 69, 73, 140
dequeue, 60, 62, 140, 142
end, 60
enqueue, 60, 62, 140, 142
front, 60
length, 60
minimum-priority, 84, 119, 137
priority, 137
rear, 60
start, 60
quicksort, 97
random graph, 229
Bernoulli, 234
binomial, 234, 242
Erdős-Rényi, 240
uniform, 240
weighted, 255
random variable
geometric, 238
Rasmussen, Rune, 110
recurrence relation, 194, 195
recursion, 79, 110, 111, 122, 124, 136, 142, 144, 145
regular expression, 41
regular graph, 9, 50
k-circulant, 50, 51, 242, 244
r-regular, 9, 51
relative complement, 33
remainder, 57
Renaissance, 37
reservoir sampling, 255
residual digraph, 225
residual network, 225
reverse Polish notation, 98
rich-get-richer effect, 248
river crossing problem, 97
Robertson, Neil, 36
Robertson-Seymour theorem, 36
Roget's Thesaurus, 242
root directory, 105
root list, 168
Roy, Bernard, 79
RSA, 150
Runde, Volker, 71
Russia, 1, 93, 94
saturated edge, 224
scale-free network, 251, 256, 257
scatterplot, 2, 115, 133
Schulz, Charles M., 189
scientific collaboration, 232
Sedgewick, Robert, 90, 95, 96, 101
selection sort, 95, 96
self-complementary graph, 33
self-loop, 4
separating set, 31, 207
set, 53
n-set, 2
totally ordered, 153
Seymour
Paul, 36, 203
Shannon
Claude E., 7, 110
multigraphs, 7, 8
switching game, 110, 111
shellsort, 96
Shimbel, A., 76
Shirali, Satish, 71
shortest path, 13
Simon, Herbert, 248
simple graph, 7, 147, 242
random, 234, 239, 241, 242, 253, 254
single-source shortest path, 72, 76
six degrees of separation, 242
size, 3
component, 256
tree, 111–113
small-world, 51, 63, 244
algorithm, 244
characteristic path length, 244
clustering coefficient, 244, 246
effect, 241
experimental results, 243
network, 244
social network, 196
social network analysis, 37
South Africa, 93, 94
Spain, 93, 94
spanning forest, 122
spanning subgraph, 14
spanning tree, 37, 105, 107, 110, 115–117,
145, 147, 151
maximum, 144
minimum, 115–117, 119–124, 145
randomized construction, 145, 146, 150
sparse graph, 237
stack, 65, 68, 69, 140
length, 65
pop, 65, 68, 140, 142, 144
push, 65, 68, 140, 142, 144
Stanley, Richard P., 65
star graph, 17
state, 39, 40
accepting, 39, 40
diagram, 39, 40
final, 39, 40, 97
initial, 39, 40, 97
Steinhaus, H., 122
Stevens, Brett, 101
Stinson, Douglas R., 95
string, 39, 129, 133
accepted, 41
empty, 137
Strogatz, Steven H., 242
subgraph, 10, 14
edge-deletion, 115, 116, 150
subtree, 110, 142
left, 126, 144
right, 126, 144
supergraph, 14
Swift, Jonathan, 57
Sydney, 93, 94
symbol, 133
symbolic computation, 97
symmetric difference, 28, 29
Szekeres, G., 203
Takaoka, Tadao, 191, 193
Tanner graph, 19, 20
Tarjan, Robert Endre, 63
Tarry, Gaston, 63
telegraph, 130
Thailand, 93, 94
The Brain puzzle, 131
Thoreau, Henry David, 7
threshold
probability, 91
Through the Looking Glass, 2
Tokyo, 93, 94
topology, 71
total order, 153
Tower of Hanoi puzzle, 131
trail, 11, 12
closed, 12
transition
function, 39, 40
table, 39
transitive closure, 83, 84
trapdoor function, 129
Trappe, Wade, 95
traveling salesman problem, 37, 52
traversal
bottom-up, 142, 143, 150
in-order, 142–144, 150, 172
level-order, 139–142, 150
post-order, 140, 142, 150
pre-order, 140, 141, 150
treasure map, 1
tree, 14, 59, 104, 105, 113
2-ary, 126
n-ary, 105
binary, 105, 135, 137, 142
complete, 105, 110
depth, 105
directed, 105
expression, 105, 106
height, 105
nonisomorphic, 105, 107, 108
ordered, 105, 139
recursive definition, 110, 111, 145
rooted, 14, 59, 68, 105, 109, 126
subtree, 111
traversal, 139, 141
triangle inequality, 70, 71, 86, 197
Tripathi, Amitabha, 24
trivial graph, 15, 105, 255
tuple, 2
union
digraph, 114, 115
union-find, 145
Unix, 105
unweighted degree, 7
USA, 93, 94, 231, 249
value of flow, 224
van Oorschot, Paul C., 95
Vandermonde
Alexandre-Théophile, 191
convolution, 191
Vanstone, Scott A., 95
Vasudeva, Harkrishan L., 71
vending machine, 38–40
vertex, 3
adjacent, 3
child, 105, 109
cut, 31, 207
degree, 9
deletion, 30, 142
deletion subgraph, 30, 201
endpoint, 104, 105
head, 4
internal, 105, 205
isolated, 9, 53, 110
leaf, 104, 105, 134, 142
multigraph, 6
parent, 105
root, 14, 105, 107, 109, 134
set, 3
source, 73, 86
tail, 4
union, 28
vertex connectivity, 201
vertex-cut, 201
Vijay, Sujith, 24
Vitter
algorithm, 255
Jeffrey Scott, 255
Vuillemin, Jean, 167
Wagner
conjecture, 36
Klaus, 36
walk, 11, 12
closed, 12
length, 11
trivial, 11
Walther, Hansjoachim, 37
Warshall, Stephen, 79, 84
Washington DC, 93, 94
Washington, Lawrence C., 95
Watanabe, Kohtaro, 144
Watts, Duncan J., 242
Watts-Strogatz model, 242, 244, 255
Wegener, Ingo, 65
weight, 69, 71, 79, 116
correcting, 71
function, 71, 85, 116
graph, 5
minimum, 75, 116, 119, 122
multigraph, 6
negative, 71, 72, 76, 77
nonnegative, 70–73, 75, 84
path, 196
positive, 71
reweight, 84, 85
setting, 71
unit, 70, 71
Weinberger, A., 116
wheel graph, 28, 37, 151
Whitney
Hassler, 205
inequality, 204
theorem, 209
Wiener
Harold, 81, 148
number, 81, 150
Williams, J. W. J., 154
Wilson, Robin, 1
wine, 212, 213
word, 39, 133
World Wide Web, 248
Yamada, Takeo, 103, 144
Yegnanarayanan, V., 50
Yerger, Carl R., 151
Zachary, Wayne W., 230
zero padding, 57
Zubrzycki, S., 122
Zwick, Uri, 193