Chen, Computational Geometry Methods and Applications
Chapter 1
Introduction
Geometric objects such as points, lines, and polygons are the basis of a
broad variety of important applications and give rise to an interesting set
of problems and algorithms. The name geometry reminds us of its earliest
use: for the measurement of land and materials. Today, computers are being
used more and more to solve larger-scale geometric problems. Over the past
two decades, a set of tools and techniques has been developed that takes
advantage of the structure provided by geometry. This discipline is known
as Computational Geometry.
The discipline was named and largely started around 1975 by Shamos,
whose Ph.D. thesis attracted considerable attention. After a decade of devel-
opment the field came into its own in 1985, when three components of any
healthy discipline were realized: a textbook, a conference, and a journal.
Preparata and Shamos's book Computational Geometry: An Introduction
[23], the first textbook solely devoted to the topic, was published at about
the same time as the first ACM Symposium on Computational Geometry was
held, and just prior to the start of a new Springer-Verlag journal Discrete and
Computational Geometry. The field is currently thriving. Since 1985, sev-
eral texts, collections, and monographs have appeared [1, 10, 18, 20, 25, 26].
The annual symposium has attracted 100 papers and 200 attendees steadily.
There is evidence that the field is broadening to touch geometric modeling
and geometric theorem proving. Perhaps most importantly, the first students
who obtained their Ph.D.s in computer science with theses in computational
geometry have graduated, obtained positions, and are now training the next
generation of researchers.
Computational geometry is of practical importance because Euclidean
space of two and three dimensions forms the arena in which real physical
objects are arranged. A large number of applications areas such as pattern
recognition [28], computer graphics [19], image processing [22], operations
research, statistics [4, 27], computer-aided design, robotics [25, 26], etc., have
been the incubation bed of the discipline since they provide inherently geo-
metric problems for which efficient algorithms have to be developed. A large
number of manufacturing problems involve wire layout, facilities location,
cutting-stock and related geometric optimization problems. Solving these
efficiently on a high-speed computer requires the development of new geo-
metrical tools, as well as the application of fast-algorithm techniques, and
is not simply a matter of translating well-known theorems into computer
programs. From a theoretical standpoint, the complexity of geometric algo-
rithms is of interest because it sheds new light on the intrinsic difficulty of
computation.
In this book, we concentrate on four major directions in computational
geometry: the construction of convex hulls, proximity problems, searching
problems and intersection problems.
Chapter 2
Algorithmic Foundations
For the past twenty years the analysis and design of computer algorithms
has been one of the most thriving endeavors in computer science. The funda-
mental works of Knuth [14] and Aho-Hopcroft-Ullman [2] have brought order
and systematization to a rich collection of isolated results, conceptualized
the basic paradigms, and established a methodology that has become the
standard of the eld. It is beyond the scope of this book to review in detail
the material of those excellent texts, with which the reader is assumed to
be reasonably familiar. It is appropriate, however, at least from the point of
view of terminology, to briefly review the basic components of the language
in which computational geometry will be described. These components are
algorithms and data structures. Algorithms are programs to be executed on
a suitable abstraction of actual "von Neumann" computers; data structures
are ways to organize information, which, in conjunction with algorithms,
permit the efficient and elegant solution of computational problems.
Complexity of problems
While the time complexity of an algorithm is fixed, this is not so for problems.
For example, Sorting can be implemented by algorithms of different time
complexities. The time complexity of a known algorithm for a problem gives
an upper bound on the amount of time needed to solve the problem. We
would also like to know the minimum amount of time needed to solve the
problem, that is, a lower bound.
2.3.1 Member
The algorithm for deciding the membership of an element in a 2-3 tree is
given as follows, where T is a 2-3 tree, t is the root of T , and u is the element
to be searched in the tree.
Algorithm MEMBER(T, u)
BEGIN
IF T is a leaf node then report properly
ELSE IF L(t) >= u then MEMBER(child1(T), u)
ELSE IF M(t) >= u then MEMBER(child2(T), u)
ELSE IF t has a third child
THEN MEMBER(child3(T), u)
ELSE report failure.
END
Since the height of the tree is O(log n), and the algorithm simply follows a
path in the tree from the root to a leaf, the time complexity of the algorithm
MEMBER is O(log n).
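As a concrete illustration, here is a small Python sketch of the membership test. The dictionary-based node layout (fields leaf, key, L, M, children) is only an assumed representation for this example, not the book's data structure; L and M play the roles of L(t) and M(t) above.

def member(t, u):
    # a leaf stores a single element
    if t['leaf']:
        return t['key'] == u
    # L and M are the largest keys in the first and second subtrees
    if u <= t['L']:
        return member(t['children'][0], u)
    if u <= t['M']:
        return member(t['children'][1], u)
    if len(t['children']) == 3:
        return member(t['children'][2], u)
    return False

# a 2-3 tree storing {2, 5, 8}: one internal node with three leaves
leaf = lambda k: {'leaf': True, 'key': k}
root = {'leaf': False, 'L': 2, 'M': 5,
        'children': [leaf(2), leaf(5), leaf(8)]}
assert member(root, 5) and not member(root, 7)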
2.3.2 Insert
To insert a new element x into a 2-3 tree, we proceed at first as if we were
testing membership of x in the set. However, at the level just above the
leaves, we shall be at a node v that should be the parent of x. If v has only
two children, we simply make x the third child of v , placing the children in
the proper order. We then adjust the information contained in the node v
to reflect the new situation.
Suppose, however, that x is the fourth child of the node v . We cannot
have a node with four children in a 2-3 tree, so we split the node v into two
nodes, which we call v and v'. The two smallest elements among the four
children of v stay with v, while the two larger elements become children of
node v'. Now, we must insert v' among the children of p, the parent of v.
The problem now is solved recursively.
One special case occurs when we wind up splitting the root. In that case
we create a new root, whose two children are the two nodes into which the
old root was split. This is how the number of levels in a 2-3 tree increases.
The above discussion is implemented as the following algorithms, where
T is a 2-3 tree and x is the element to be inserted.
Algorithm INSERT(T, x)
BEGIN
1. Find the proper node v in the tree T such that
v is going to be the parent of x;
2. Create a leaf node d for the element x;
3. ADDSON(v, d)
END
Algorithm ADDSON(v, d)
BEGIN
1. IF v is the root of the tree, add the node d properly.
Otherwise, do the following.
2. IF v has two children, add d directly
3. ELSE
3.1. Suppose v has three children c1, c2, and c3. Partition c1,
c2, c3 and d properly into two groups (g1, g2) and (g3, g4).
Let v be the parent of (g1, g2) and create a new node v' and
let v' be the parent of (g3, g4).
3.2. Recursively call ADDSON(father(v), v').
END
Analysis: The algorithm INSERT can find the proper place in the tree for
the element x in O(log n) time since all it needs to do is to follow a path
from the root to a leaf. Step 2 in the algorithm INSERT can be done in
constant time. The call to the procedure ADDSON in Step 3 can result in
at most O(log n) recursive calls to the procedure ADDSON since each call
will jump at least one level up in the 2-3 tree, and each recursive call takes
constant time to perform Steps 1, 2, and 3.1 in the algorithm ADDSON. So
Step 3 in the algorithm INSERT also takes O(log n) time. Therefore, the
overall time complexity of the algorithm INSERT is O(log n).
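To make the splitting step concrete, here is a hedged Python sketch of the insertion on the same assumed node representation as before (a leaf is {'leaf': True, 'key': k}; an internal node stores its children and the largest keys L and M of its first two subtrees). For brevity the sketch recomputes L and M from scratch and assumes distinct keys, so it is only an illustration of the split idea, not the algorithm above.

def maxkey(t):
    # the largest key stored in the subtree t
    while not t['leaf']:
        t = t['children'][-1]
    return t['key']

def node(children):
    return {'leaf': False, 'children': children,
            'L': maxkey(children[0]), 'M': maxkey(children[1])}

def _insert(t, x):
    # returns (t, sibling): sibling is a new node to the right of t if t was split
    if t['children'][0]['leaf']:
        cs = sorted(t['children'] + [{'leaf': True, 'key': x}],
                    key=lambda c: c['key'])
    else:
        if x <= t['L']:
            i = 0
        elif x <= t['M'] or len(t['children']) == 2:
            i = 1
        else:
            i = 2
        child, sib = _insert(t['children'][i], x)
        cs = (t['children'][:i] + [child]
              + ([sib] if sib else []) + t['children'][i + 1:])
    if len(cs) <= 3:
        return node(cs), None
    return node(cs[:2]), node(cs[2:])      # a node with four children is split

def insert(root, x):
    if root['leaf']:                       # a tree holding a single element
        a, b = sorted([root, {'leaf': True, 'key': x}], key=lambda c: c['key'])
        return node([a, b])
    t, sib = _insert(root, x)
    return node([t, sib]) if sib else t    # splitting the root adds a level

def keys(t):
    return [t['key']] if t['leaf'] else sum((keys(c) for c in t['children']), [])

t = {'leaf': True, 'key': 4}
for k in [7, 1, 9, 3]:
    t = insert(t, k)
assert keys(t) == [1, 3, 4, 7, 9]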
2.3.3 Minimum
Given a 2-3 tree T we want to find the minimum element stored in
the tree. Recall that in a 2-3 tree the numbers are stored in leaf nodes
in ascending order from left to right. Therefore the problem is reduced to
going down the tree, always selecting the leftmost link, until a leaf node is
reached. This leaf node should contain the minimum element stored in the
tree. Evidently, the time complexity of this algorithm is O(log n) for a 2-3
tree with n leaves.
Algorithm MINIMUM(T, min)
BEGIN
IF T is a leaf THEN
min := T;
ELSE call MINIMUM(child1(T), min);
END
2.3.4 Delete
When we delete a leaf from a 2-3 tree, we may leave its parent v with only
one child. If v is the root, delete v and let its lone child be the new root.
Otherwise, let p be the parent of v . If p has another child, adjacent to v on
either the right or the left, and that child of p has three children, we can
transfer the proper one of those three to v . Then v has two children, and we
are done.
If the children of p adjacent to v have only two children, transfer the lone
child of v to an adjacent sibling of v , and delete v . Should p now have only
one child, repeat all the above, recursively, with p in place of v .
Summarizing these discussions, we get the algorithm DELETE, as shown
below, where the procedure DELETE() is merely a driver for the subprocedure
DEL() in which the actual work is done. The variables done and 1son in
DEL() are boolean flags used to indicate successful deletion and to detect the
case when a node in the tree has only one child, respectively.
In the worst case we need to traverse a path in the tree from root to a
leaf to locate the node to be deleted, then from that leaf node to the root,
in the case where every non-leaf node on the path has only two children in the
original 2-3 tree T. Thus the time complexity of the algorithm DELETE for a
tree with n nodes is O(log n).
Algorithm DELETE(T, x)
BEGIN
Call DEL(T, x, done, 1son);
IF done is true THEN
IF 1son is true THEN T := child1(T)
ELSE x was not found in T, handle properly
END
Algorithm DEL(T, x, done, 1son)
BEGIN
1. IF children of T are leaves THEN process properly, i.e., if
x is found, delete it; update the variables done and 1son;
2. ELSE IN CASE OF
x <= L(T): son := child1(T);
L(T) < x <= M(T): son := child2(T);
M(T) < x <= H(T): son := child3(T);
3. Call DEL(son, x, done, 1son1);
4. IF 1son1 is true THEN
4.1. IF the node T has another child b that has three children,
THEN reorganize the grandchildren among the nodes son and
b to make both have two children, and set 1son := false;
4.2. ELSE make the only child of the node son a child of a
sibling of it, and delete the node son from T. If T has
only one child then set 1son := true.
END
2.3.5 Splice
Splicing two trees into one big tree is a special case of the more general
operation of merging two trees. Splice assumes that all the keys in one of the
trees are larger than all those in the other tree. This assumption effectively
reduces the problem of merging the trees to "pasting" the smaller tree into
a proper position in the larger tree. "Pasting" the smaller tree is actually
no more than performing an ADDSON operation to a proper node in the
larger tree.
To be more specific, let T1 and T2 be 2-3 trees which we wish to splice into
the 2-3 tree T , where all keys in T1 are smaller than those in T2. Furthermore,
assume that the height of T1 is less than or equal to that of T2 so that T1 is
"pasted" to T2 as a left child of a leftmost node at the proper level in T2. In
the case where the heights are equal, both T1 and T2 are made children of
the common root T ; otherwise the proper level in T2 is given by
height(T2) - height(T1) - 1
It is clear that the algorithm SPLICE runs in time O(log n). In fact,
the running time is proportional to the height difference height(T2) -
height(T1) - 1.
The implementation of the algorithm SPLICE is given below.
Algorithm SPLICE(T, T1, T2)
{ Suppose that all elements in T1 are less than any elements in T2,
and that the height of T1 is at most that of T2. Other cases can
be dealt with similarly.}
BEGIN
IF height(T1) = height(T2)
THEN make T a parent of T1 and T2.
ELSE
WHILE height(T2)-1 > height(T1) DO
T2 := child1(T2)
Call ADDSON(T2, T1).
END
2.3.6 Split
By splitting a given 2-3 tree T into two 2-3 trees, T1 and T2, at a given
element x, we mean to split the tree T in such a way that all elements in T
that are less than or equal to x go to T1 while the remaining elements in T
go to T2 .
The idea is as follows: as the tree is searched for x, we store the subtrees
to the left and right of the traversed path (split path). For this purpose two
stacks are used, one for each side of the split path. As we go deeper into
T , subtrees are pushed into the proper stack. Finally, the subtrees in each
stack are spliced together to form the desired trees T1 and T2, respectively.
The algorithm is given as follows.
BEGIN
1. WHILE T is not leaf DO
IF x <= L(T) THEN
S2 <-- child3(T), child2(T);
T := child1(T);
IF L(T) < x <= M(T) THEN
S1 <-- child1(T); S2 <-- child3(T);
T := child2(T);
IF M(T) < x <= H(T) THEN
S1 <-- child1(T), child2(T);
T := child3(T);
{Reconstruct T1}
2. T1 <-- S1;
3. WHILE S1 is not empty DO
t <-- S1;
Call SPLICE(T1, t, T1);
{Reconstruct T2}
4. T2 <-- S2;
5. WHILE S2 is not empty DO
t <-- S2;
Call SPLICE(T2, T2, t);
END
It is easy to see that the WHILE loop in Step 1 takes time O(log n). The
analysis for the rest of the algorithm is a bit more complicated. Note that
the use of the stacks S1 and S2 to store the subtrees guarantees that the
height of a subtree closer to a stack top is less than or equal to the height of
the subtree immediately deeper in the stack. A crucial observation is that
since we splice shorter trees first (which are on the top part of the stacks),
the difference between the heights of two trees to be spliced is always very
small. In fact, the total time spent on splicing all these subtrees is bounded
by O(log n). We give a formal proof as follows.
Assume that the subtrees stored in stack S1 are

    t1, t2, ..., tr                                  (2.1)

in the order from the stack top to the stack bottom. Let h(t) be the height of
the 2-3 tree t. According to the algorithm SPLIT, we have

    h(t1) <= h(t2) <= ... <= h(tr)

and no three consecutive subtrees in the stack have the same height. Thus,
we can partition sequence (2.1) into "segments", each of which contains the
subtrees of the same height in the sequence:

    s1, s2, ..., sq

Each si is either a single subtree or consists of two consecutive subtrees of
the same height in sequence (2.1). Moreover, q = O(log n). Let h(si) be the
height of the subtrees contained in the segment si. We have

    h(s1) < h(s2) < ... < h(sq)
The WHILE loop in Step 3 first splices the subtrees in segment s1 into
a single 2-3 tree T1^{(1)}, then recursively splices the 2-3 tree T1^{(i-1)} and the
subtrees in segment si into a 2-3 tree T1^{(i)}, for i = 2, ..., q. We have the
following lemma.
Lemma 2.3.2 For all i = 2, ..., q, we have

    h(s_{i-1}) <= h(T1^{(i-1)}) <= h(s_i) < h(s_{i+1}) < ... < h(s_q)
(Figure: a planar imbedding I of the complete graph K4, with vertices v1, v2, v3, v4, edges e1, ..., e6, and regions f1, ..., f4.)
Algorithm TRACE-REGION(i)
BEGIN
1. a := HF[i];
2. a0 := a;
3. IF (DCEL[a][F1] = i) THEN
a := DCEL[a][P1];
ELSE a := DCEL[a][P2];
4. WHILE (a <> a0) DO
IF (DCEL[a][F1] = i) THEN
a := DCEL[a][P1]
ELSE a := DCEL[a][P2];
END.
For example, if we start with HF[3] = 4, and use the DCEL for the
planar imbedding I of the complete graph K4, then we will get the region
f3 as e4, e5, and e6.
Note that if the rotation of edges incident on each vertex of the PSLG G
is given in counterclockwise order in a DCEL, then the regions are traversed
clockwise by the above algorithm. On the other hand, if the rotation of edges
incident on each vertex of the PSLG G is given in clockwise order in a DCEL,
then the regions are traversed counterclockwise by the above algorithm. Given
a PSLG G, it is easy to see that a DCEL for G in which the rotation of
edges incident on each vertex of G is given in counterclockwise order can be
transformed in linear time into a DCEL for G in which the rotation of edges
incident on each vertex of G is given in clockwise order, and vice versa. The
detailed implementation of this transformation is straightforward and left to
the reader as an exercise.
Chapter 3
Geometric Preliminaries
According to the nature of the geometric objects involved, we can identify
basically five categories into which the entire collection of geometric problems
can be conveniently classified: convexity, proximity, geometric searching,
intersection, and optimization.
In this chapter, we will give precise definitions of these problems
and an "intuitive" discussion of their mathematical background.
Some of our statements and proofs are informal. This is because some
geometric theorems are "intuitively obvious", yet no easy proofs are known,
though many great mathematicians have tried. An example is
the following famous "Jordan Curve Theorem", which will actually serve as
a fundamental basis for all of our discussions.
where |A|, |B|, and |C| denote the lengths of the edges A, B, and C, respec-
tively.
Suppose that Δ is a triangle in the plane E^2 with the vertices p1 =
(x1, y1), p2 = (x2, y2), and p3 = (x3, y3). Then the signed area of Δ is half of
the determinant

                    | x1  y1  1 |
    D(p1, p2, p3) = | x2  y2  1 |                            (3.2)
                    | x3  y3  1 |

where the sign is positive if (p1, p2, p3) form a counterclockwise cycle, and
negative if (p1, p2, p3) form a clockwise cycle. We say that the path from point
p1 through the line segment p1p2 to point p2, then through the line segment
p2p3 to point p3, makes a left turn if D(p1, p2, p3) is positive; otherwise, we say
the path makes a right turn.
With formulas (3.1) and (3.2), given three points p1, p2, and p3 in
the plane E^2, we can completely determine the value of the angle from the
line segment p1p2 to the line segment p1p3 (denote this angle by ∠p2p1p3).
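In code the determinant test is a one-liner; the following small Python sketch (the function names are illustrative only) evaluates D(p1, p2, p3) and the left-turn predicate.

def D(p1, p2, p3):
    # the 3x3 determinant of formula (3.2); it equals twice the signed area
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)

def left_turn(p1, p2, p3):
    return D(p1, p2, p3) > 0

assert left_turn((0, 0), (1, 0), (1, 1))        # counterclockwise: a left turn
assert not left_turn((0, 0), (1, 0), (1, -1))   # clockwise: a right turn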
A line L on the plane can be represented by a linear equation:
Ax + By + C = 0
such that a point p = (x, y) is on the line if and only if the coordinates of p
satisfy the equation. A half plane defined by the line L can be represented
by either

    Ax + By + C >= 0

or

    Ax + By + C <= 0
(Figure 3.1: the half-plane H(pi, pj) of points closer to pi than to pj, bounded by the perpendicular bisector of the segment pipj.)
MAXIMUM-EMPTY-CIRCLE
Find a largest circle containing no points of the set S yet whose center
is interior to the convex hull of S .
The problems posed above are related in the sense that they all deal with
the respective distances among points in the plane. In the following, we will
introduce a single geometric structure, called the Voronoi diagram, which
contains all of the relevant proximity information in only linear space.
Let us get some motivation from the CLOSEST-PAIR problem. Let S
be a set of n points in the plane. For any two points pi and pj in S , the set of
points closer to pi than to pj is just the half-plane containing pi that is defined
by the perpendicular bisector of the segment pipj. See Figure 3.1. Denote
this half-plane by H(pi, pj) (note that H(pi, pj) ≠ H(pj, pi)). Therefore, the
set Vi of points in the plane that are closer to the point pi than to any other
points in the set S is the intersection of the sets H(pi, pj) for all pj in S - {pi}:
    Vi = ∩_{j ≠ i} H(pi, pj)
Each H(pi, pj) is a half-plane, so it is convex. By Theorem 3.1.1, the set
Vi, which is the intersection of these convex sets H(pi, pj), is also convex. It
is also easy to see that the set Vi is in fact a convex polygonal region. Observe
that every point in the plane must belong to some region Vi . Moreover, no
set Vi can be empty since all points in a small enough disc centered at the
point pi must be in Vi.
Thus these n convex polygonal regions V1, V2, ..., Vn partition the plane
into a convex net. Motivated by this discussion, we introduce the following
denition.
Definition A Voronoi diagram of a set S = {p1, ..., pn} of n planar points
is a partition of the plane into n regions V1, V2, ..., Vn such that any point
in the region Vi is closer to the point pi than to any other point in the set
S.
The convex polygonal region Vi is called the Voronoi polygon of the point
pi in S . The vertices of the diagram are called Voronoi vertices and the line
segments of the diagram are called Voronoi edges. The Voronoi diagram of
a set S is denoted by Vor(S). Note that Voronoi vertices are in general not
the points in the set S .
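The definition translates directly into a brute-force point-location test: a query point q lies in the Voronoi polygon of its nearest site. A small Python sketch (names are illustrative; O(n) per query, in contrast to the structure developed later):

import math

def voronoi_region(sites, q):
    # index of the site whose Voronoi polygon contains q (ties broken arbitrarily)
    return min(range(len(sites)), key=lambda i: math.dist(sites[i], q))

sites = [(0, 0), (4, 0), (2, 3)]
assert voronoi_region(sites, (0.5, 0.1)) == 0
assert voronoi_region(sites, (2.0, 2.9)) == 2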
3.3 Intersections
Intersection problems and their variations arise in many disciplines, such as
architectural design, computer graphics, pattern recognition, etc. An archi-
tectural design cannot place two impenetrable objects so that they share a common
region. When displaying objects on a 2-dimensional display device, obscured
portions (or intersecting portions) should be eliminated to enhance realism,
a long standing problem known as hidden line/surface elimination problem
[19]. In integrated circuit design two distinct components must be separated
by a certain distance, and the detection of whether or not the separation
rule is obeyed can be cast as an instance of intersection problems; since the
task may involve thousands of objects, fast algorithms for detecting or re-
porting intersecting or overlapping objects are needed. Another motivation
for studying the complexity of intersection algorithms is that light may be
shed on the inherent complexity of fundamental geometric problems. For
example, how difficult is it to decide if a given polygon with n vertices is
simple or how much time is needed to determine if any two of n given objects
in the plane, such as polygons, line segments, etc., intersect?
We list a few typical geometric intersection problems.
SEGMENT INTERSECTION
Given n line segments in the plane, find all intersections.
HALF-PLANE INTERSECTION
Given n half-planes in the plane, compute their common intersection.
POLYGON INTERSECTION
Given two polygons P and Q with m and n vertices, respectively, com-
pute their intersection.
Chapter 4
Geometric Sweeping
The geometric sweeping technique is a generalization of a technique called plane
sweeping, which is primarily used for 2-dimensional problems. In most cases,
we will illustrate the technique for 2-dimensional cases. The generalization
to higher dimensions is straightforward. This technique is also known as the
scan-line method in computer graphics, and is used for a variety of applica-
tions, such as shading, polygon filling, among others.
The technique is intuitively simple. Suppose that we have a line in the
plane. To collect the geometric information we are interested in, we slide
the line in some way so that the whole plane will be \scanned" by the line.
While the line is sweeping the plane, we stop at some points and update our
recording. We continue this process until all interesting objects are collected.
There are two basic structures associated with this technique. One is for
the sweeping line status, which is an appropriate description of the relevant
information of the geometric objects at the sweeping line, and the other is
for the event points, which are the places we should stop and update our
recording. Note that the structures may be implemented in different data
structures under various situations. In general, the data structures should
support efficient operations that are necessary for updating the structures
while the line is sweeping the plane.
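The sweep below repeatedly asks whether two segments intersect; for completeness, here is the classical orientation-based test as a small Python sketch. It is not part of the algorithm's event-queue machinery, and the function names are illustrative.

def orient(a, b, c):
    # > 0 if a, b, c make a left turn, < 0 for a right turn, 0 if collinear
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def on_segment(a, b, c):
    # assumes c is collinear with a and b: is c within the bounding box of ab?
    return min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and \
           min(a[1], b[1]) <= c[1] <= max(a[1], b[1])

def segments_intersect(s1, s2):
    (a, b), (c, d) = s1, s2
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True                                  # proper crossing
    return (d1 == 0 and on_segment(c, d, a)) or \
           (d2 == 0 and on_segment(c, d, b)) or \
           (d3 == 0 and on_segment(a, b, c)) or \
           (d4 == 0 and on_segment(a, b, d))         # an endpoint lies on the other segment

assert segments_intersect(((0, 0), (2, 2)), ((0, 2), (2, 0)))
assert not segments_intersect(((0, 0), (1, 0)), ((0, 1), (1, 1)))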
Algorithm SEGMENT-INTERSECTION
BEGIN
1. Sort the endpoints of the segments and put them in EVENT;
2. STATUS = {};
3. WHILE EVENT is not empty DO BEGIN
p = MINIMUM(EVENT);
DELETE p from EVENT;
IF p is a right-end of some segment S
Let Si and Sj be the two segments adjacent to S in STATUS;
IF p is an intersection point of S with Si or Sj
REPORT(p);
DELETE S from STATUS;
IF Si and Sj intersect at p1 and x(p1) >= x(p)
INSERT p1 into EVENT
ELSE IF p is a left-end of some segment S
INSERT S into STATUS;
Let Si and Sj be the adjacent segments of S in STATUS;
IF p is an intersection point of S with Si or Sj
REPORT(p);
IF S intersects Si at p1, INSERT p1 into EVENT;
IF S intersects Sj at p2, INSERT p2 into EVENT
ELSE IF p is an intersection point of segments Si and Sj
such that Si is on the left of Sj in STATUS
REPORT(p);
swap the positions of Si and Sj in STATUS;
Let Sk be the segment left to Sj and let Sh be the segment
right to Si in STATUS;
IF Sk and Sj intersect at p1 and x(p1) > x(p)
INSERT p1 into EVENT;
IF Sh and Si intersect at p2 and x(p2) > x(p)
INSERT p2 into EVENT;
END; {WHILE}
END.
BEGIN
Let p(1) be the point in the set S that has the smallest
y-coordinate;
Let p(2) be the point in the set S such that the slope of
the line segment p(1)-p(2) is the smallest, with respect
to the x-axis;
PRINT(p(1), p(2));
i := 2 ;
WHILE p(i) <> p(1) DO
Let p(i+1) be the point in the set S such that the angle
<p(i-1)p(i)p(i+1) is the largest;
i := i + 1 ;
PRINT(p(i));
END.
{St is a stack}
BEGIN
1. Let p(0) be the point in S that has the smallest y-coordinate.
{ Without loss of generality, we can suppose that p(0) is the
origin, otherwise, we make a coordinate transformation }
2. Sort the points in the set S - p(0) by their polar angles.
Let the sorted list of the points be
L' = { p(1), p(2), ..., p(n-1) }
{in increasing polar angle ordering.}
3. Let
L = { p(1), p(2), ..., p(n-1), p(n) }
where p(n) = p(0);
q(1) = p(0); q(2) = p(1); PUSH(St, q(1));
PUSH(St, q(2)); i = 2; j = 2;
4. WHILE i <= n DO
IF q(j-1)q(j)p(i) is a left turn
THEN q(j+1) = p(i);
PUSH(St, q(j+1));
i++;
j++
ELSE POP(St);
j--;
END.
In Graham Scan, the sweeping line rotates around a fixed point p0. All
points in the set S are event points. Since the event points are presorted in
Step 2, it takes only constant time to nd the next event point in the sorted
list L. This makes Graham Scan very efficient.
Let us consider the time complexity of the algorithm in detail. Step 1 can
be done by comparing the y -coordinates of all points in the set S , thus it takes
time O(n); Step 2 can be done by any O(n log n) time sorting algorithm, for
example, MergeSort; Step 3 obviously takes constant time. To discuss the
time complexity of the loop in Step 4, observe that each point of the set
S can be pushed into the stack St and then popped out of the stack at
most once. Whenever a point is popped out from the stack, it will never
be considered any more. Therefore, there are at most 2n stack pushes and
pops. Now each execution of the loop in Step 4 either pushes a point into the
stack (Step 4.2) or pops a point out of the stack (Step 4.3). Thus the loop is
executed at most 2n times. Since each execution of the loop obviously takes
constant time, we conclude that the total time taken by Step 4 is bounded
by O(n).
Therefore, the time complexity of Graham Scan is O(n log n).
We remark that most of the time in Graham Scan algorithm is spent on
Step 2's sorting. Besides sorting, Graham Scan runs in linear time.
Step 2 in Graham Scan sorts the points in the given set S by their
polar angles. This involves trigonometric operations. Although we have
assumed that our RAMs can perform trigonometric operations in constant
time, trigonometric operations can be very time consuming on a real com-
puter. We present a modified version of Graham Scan which avoids using
trigonometric operations.
The idea is as follows. Suppose we are given a set S of n points in the
plane. We add a new point p0 to the set S such that p0 's y -coordinate is
smaller than that of any point in the set S . Then we perform Graham Scan
on this new set. Draw a line segment p0p for each point p in the set S . It
can be easily seen that if the point p0 moves toward the negative direction
of the y-axis, these line segments become more and more nearly parallel to each
other. Imagine that eventually p0 reaches the point at infinity along the
negative direction of the y-axis; then all these line segments become vertical
rays originating from the points of the set S . Now the ordering of the polar
angles of the points of S around p0 is identical to the ordering of the
x-coordinates of these points. (In fact, p0 does not have to be the point at
infinity; when p0 is far enough from the set S, the above statement already
holds.) Therefore, the convex hull of the new set can be constructed by
first sorting the points in S by their x-coordinates instead of their polar
angles. It is also easy to see that the convex hull of the new set consists of
two vertical rays, originating from the two points pmin and pmax in the set
S with smallest and largest x-coordinates, respectively, and the part UH of
the convex hull of the original set S . This part UH of the convex hull CH(S)
is in fact the upper hull of CH(S) in the sense that all points of the set S lie
between the vertical lines x = xmin and x = xmax and below the part UH .
Similarly, the lower hull of the convex hull CH(S) can be constructed by the
idea of adding a point at infinity in the positive direction of the y-axis. The
convex hull CH(S) is simply the circular catenation of the upper hull and
the lower hull.
Now we give the formal algorithm as follows.
BEGIN
Sort the points of the set S in decreasing x-coordinate
ordering;
Let pmax and pmin be the points of S that have the
largest and smallest x-coordinates, respectively.
Suppose pmax = (x, y), let p(0) = (x, y-1),
and p(1) = pmax;
Perform Graham Scan on the sorted list until the point
pmin is included as a hull vertex;
The ordered list of hull vertices found in this process
minus the point p(0) is the upper hull;
Construct the lower hull similarly;
Catenate the upper hull and lower hull to form the convex
hull CH(S).
END
The Modified Graham Scan obviously also takes time O(n log n).
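A compact Python sketch in the spirit of the Modified Graham Scan: sort by x-coordinate, build the lower and the upper hull with a stack, and catenate them. This is a hedged illustration, not the book's pseudocode; collinear points are dropped, and all names are illustrative.

def cross(o, a, b):
    # > 0 iff o, a, b make a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def half_hull(points):
    hull = []
    for p in points:
        # pop while the last two stack vertices and p do not make a strict left turn
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = half_hull(pts)                    # left to right: the lower hull
    upper = half_hull(list(reversed(pts)))    # right to left: the upper hull
    return lower[:-1] + upper[:-1]            # circular catenation, counterclockwise

assert convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]) == \
       [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]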
4.3 The farthest pair problem
The problem we shall discuss in this section is formally dened as follows:
FARTHEST-PAIR
Find a pair of points in a given set which are farthest apart.
A brute force algorithm is to examine every pair of points to find the
maximum distance thus determined. The brute force algorithm obviously
runs in time O(n^2).
To get a more efficient algorithm, let us first investigate what kind of
properties a farthest pair of points in a set has. Let us suppose that S is a
set of n points in the plane, and call a segment linking two farthest points
in the set S a diameter of the set S .
Lemma 4.3.1 Let uv be a diameter of the set S . Let lu and lv be two
straight lines that are perpendicular to the segment uv such that lu passes
through u and lv passes through v . Then all points of S are contained in the
slab between lu and lv .
proof. Without loss of generality, suppose that the segment uv is horizon-
tal and the point u is on the left of the point v . Draw a circle C centered at
u of radius |uv|, then the line lv is tangent to C because lv is perpendicular
to uv. Thus the circle C is entirely on the left of the line lv . Since v is the
farthest point in the set S from the point u, all points of S are contained
in the circle C . Consequently, all points of S are on the left of the line lv .
Similarly, we can prove that all points of S are on the right of the line lu .
Therefore, all points of the set S are between the lines lu and lv .
Corollary 4.3.2 Let uv be a diameter of the set S , then the points u and
v are hull vertices of CH(S).
proof.    As we discussed in Chapter 2, a point p in S is a hull vertex of
CH(S) if and only if there is a line passing through p such that all points of
S are on one side of the line. By Lemma 4.3.1, the line lu is such a line for
the point u, and the line lv is such a line for the point v. Thus u and v are
hull vertices of CH(S).
Let u and v be two hull vertices of CH(S). The vertices u and v are called
an antipodal pair if we can draw two parallel supporting lines lu and lv of
CH(S) such that lu passes through u and lv passes through v , and the convex
hull CH(S) is entirely contained in the slab between the lines lu and lv .
Corollary 4.3.3 Let uv be a diameter of the set S , then u and v are an
antipodal pair.
proof. By Corollary 4.3.2, u and v are hull vertices of CH(S). By
Lemma 4.3.1, we can draw two parallel lines lu and lv such that lu passes
through u, that lv passes through v , and that all points of S are contained
in the slab between lu and lv . The slab between lu and lv is clearly a convex
set. Since the convex hull CH(S) of S is the smallest convex set containing
all points of S , i.e., the convex hull CH(S) is contained in all convex sets
containing all points of S , so the convex hull CH(S) is contained in the slab
between the lines lu and lv .
According to Lemma 4.3.1 and its corollaries, to find a farthest pair of
a set S of n points in the plane, we only need to find a farthest pair of the
hull vertices of the convex hull CH(S). Moreover, we only need to consider
the antipodal pairs on the convex hull CH(S). This greatly simplifies our
problem. We now consider the following problem: given a vertex u of a
convex polygon P , what vertices of P can constitute an antipodal pair with
the vertex u? To answer this question, we suppose that the vertices of the
convex polygon P are given in counterclockwise ordering: {u_1, u_2, ..., u_m}.
For simplicity, we say that a vertex u_i of P is the farthest from an edge
u_{k-1}u_k of P if u_i is the farthest vertex in P from the straight line on which
u_{k-1}u_k lies.
Lemma 4.3.4 Let u_{k-1}u_k be an edge of P. We scan the vertices of P
in counterclockwise order, starting with the vertex u_k. Let u_i be the first
farthest vertex from the edge u_{k-1}u_k. Then no vertex between u_k and u_i can
constitute an antipodal pair with u_k.
proof.    Without loss of generality, suppose that the edge u_{k-1}u_k is hori-
zontal and the vertex u_k is on the right of the vertex u_{k-1}. First note that
for any vertex u_i of P, the angle between the edge u_iu_{i+1} and the x-axis
is between 0 and 2π. Let α be the angle between the edge u_ku_{k+1} and the
x-axis. Suppose that α1 (α2) is the angle between the edge u_{i-1}u_i (u_iu_{i+1})
and the x-axis. Since P is convex, α1 <= α2. See Figure 4.1 for an illustration.
It is easy to see that the vertex u_i constitutes an antipodal pair with the
vertex u_k if and only if the angle region [α1, α2] contains an angle between
π and π + α. Let u_j be a vertex between u_k and u_i (u_j ≠ u_k, u_i). Then u_j is
not farthest from the edge u_{k-1}u_k. Thus the angle between the edge u_ju_{j+1}
and the x-axis, and the angle between the edge u_{j-1}u_j and the x-axis, are both
strictly less than π. That is, the vertex u_j does not constitute an antipodal
pair with u_k.

(Figure 4.1: the edge u_{k-1}u_k, the angle α of the edge u_ku_{k+1}, and the angles α1 and α2 of the edges u_{i-1}u_i and u_iu_{i+1} at the vertex u_i.)
Lemma 4.3.5 Let u_{k-1}u_k be an edge of P. We scan the vertices of P in
counterclockwise order, starting with the vertex u_k. Let u_r be the last far-
thest vertex from the edge u_{k-1}u_k. Then no vertex between u_r and u_{k-1} (in
counterclockwise ordering on the boundary of P) can constitute an antipodal
pair with u_{k-1}.
proof. Completely similar to the proof of Lemma 4.3.4.
Now it is clear how we find all antipodal pairs on the convex polygon P:
starting with an edge u_{k-1}u_k, we scan the vertices of P counterclockwise until
we hit the first farthest vertex u_i from the edge u_{k-1}u_k. By Lemma 4.3.4,
u_i is the first vertex of P that constitutes an antipodal pair with the vertex
u_k. Now we continue scanning the vertices until we hit a vertex u_r that
is the last farthest vertex from the edge u_ku_{k+1}. By Lemma 4.3.5, u_r is the
last vertex that constitutes an antipodal pair with the vertex u_k. Now a
vertex constitutes an antipodal pair with u_k if and only if it is between u_i
and u_r. Moreover, since we suppose that no three vertices of P are collinear,
there are at most two farthest vertices from an edge of P. The algorithm for
finding all antipodal pairs of a convex polygon P is given in detail as follows.
Algorithm ANTIPODAL-PAIRS
BEGIN
1. Starting with the edge {u(0), u(1)}, where we let
u(0) be the vertex u(m). Set k = 1 and i = 2.
2. WHILE u(i) is not a farthest vertex from the edge
{u(k-1), u(k)}
i = i + 1;
3. { At this point u(i) is a farthest vertex from the
edge {u(k-1), u(k)}. }
WHILE u(i) is not a farthest vertex from the edge
{u(k), u(k+1)}
OUTPUT [u(k), u(i)] as an antipodal pair;
i = i + 1;
4. { At this point u(i) is the first farthest vertex
from the edge {u(k), u(k+1)}. We check if u(i)
is the last farthest vertex from the edge
{u(k), u(k+1)}. }
IF u(i+1) is also a farthest vertex from the edge
{u(k), u(k+1)}
OUTPUT [u(k), u(i)], [u(k+1), u(i)] as
antipodal pairs;
i = i + 1;
5. { Now u(i) must be the last vertex that can consti-
tute an antipodal pair with u(k). }
OUTPUT [u(k), u(i)] as an antipodal pair;
6. IF k < m, THEN
k = k + 1;
GOTO Step 3;
END.
Algorithm FARTHEST-PAIR
BEGIN
1. Construct the convex hull CH(S) of S;
2. Call ANTIPODAL-PAIRS on CH(S);
3. Scan the result of Step 2 and select the pair
with the longest distance.
END.
By the discussions given in this section, the above algorithm finds the
farthest pair for a given set S correctly. Moreover, the algorithm runs in
time O(n log n) since it is dominated by the first step.
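The antipodal-pair scan can also be written compactly once the hull is available. The following Python sketch is a hedged illustration only: it assumes the hull vertices are given in counterclockwise order with no three collinear (as above), advances the farthest vertex while the edge advances, and keeps the farthest pair seen.

def diameter(hull):
    # hull: vertices of a convex polygon in counterclockwise order
    def area2(a, b, c):                      # twice the signed triangle area
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def d2(p, q):                            # squared distance
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    n = len(hull)
    if n <= 2:
        return (hull[0], hull[-1])
    best, j = (hull[0], hull[1]), 1
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        # advance j while the next vertex is farther from the edge ab
        while area2(a, b, hull[(j + 1) % n]) > area2(a, b, hull[j]):
            j = (j + 1) % n
        for pair in ((a, hull[j]), (b, hull[j])):   # antipodal candidates
            if d2(*pair) > d2(*best):
                best = pair
    return best

rect = [(0, 0), (3, 0), (3, 1), (0, 1)]
u, v = diameter(rect)
assert (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2 == 10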
4.4 Triangulations
TRIANGULATING a set S of n points in the plane is to join the points
in the set S by non-intersecting straight line segments so that every region
interior to the convex hull of S is a triangle. In this section we shall discuss a
more general version of TRIANGULATION: given a set S of n points in the
plane and a set E of non-intersecting straight line segments whose endpoints
are the points in S, construct a triangulation T(S) of S such that all the
segments in the set E appear in the triangulation T(S).
Recall that a planar straight line graph (PSLG) G = (S, E) is a finite
set S of points in the plane plus a set E of non-intersecting straight line
segments whose endpoints are the points in the set S . We always suppose
that a PSLG G is represented by a doubly-connected edge list (DCEL).
The problem we shall discuss is called Constrained Triangulation.
CONSTRAINED TRIANGULATION
Given a PSLG G = (S, E), construct a triangulation T(S) of S such that
all segments of E are edges of T(S).
Algorithm TRIANGULATING-MONOTONE-POLYGON
BEGIN
1. Sort the vertices of P in decreasing y-coordinate,
Let the sorted list be
L = { v(1), v(2), ...., v(n) }
2. Push the vertices v(1) and v(2) into the stack
STACK. Let i = 3.
3. Suppose that the vertices in the STACK are
STACK = { u(1), u(2), ...., u(s) }
where u(s) is the top and u(1) is the bottom.
4. IF v(i) is adjacent to u(1) but not to u(s)
{ we will prove later that in this case, stack
vertices u(2), u(3), ...., u(s) are all visible
from v(i). } THEN
add edges {v(i), u(2)}, {v(i), u(3)}, ....,
{v(i), u(s)}, pop all STACK vertices, then
push u(s) and v(i) into the STACK;
i++;
GOTO Step 7;
5. IF v(i) is adjacent to u(s) but not to u(1)
{ in this case, u(s) is not visible from v(i), we
check if any other STACK vertices are visible
from v(i). } THEN
WHILE the second top vertex of the STACK
(call it u') is visible from v(i) DO
add an edge {v(i), u'};
pop the top vertex from STACK;
PUSH v(i) into STACK;
i++;
GOTO Step 7;
6. IF v(i) is adjacent to both u(s) and u(1)
{ in this case, v(i) is the last vertex in the
list L, and all STACK vertices except u(s) and
u(1) are visible from v(i). } THEN
add edges {v(i), u(2)}, {v(i), u(3)}, ......,
{v(i), u(s-1)};
POP all STACK vertices and STOP.
7. IF i <= n, go back to Step 3.
END.
Algorithm ADD-UPPER-EDGES
BEGIN
1. Sort the vertices of G in increasing y-coordinates,
let { v(1), ...., v(n) } be the sorted vertex list;
2. Create an empty 2-3 tree T; insert the upper edges
of v(1) into T if they exist , otherwise hang v(1);
3. FOR i = 2 up to n DO
3a. Using the x-coordinate of the vertex v(i) to
find two edges e(1) and e(r) in T that are the
nearest left and the nearest right edges of
v(i) in T. All the edges e(2), ...., e(r-1)
that are between e(1) and e(r) in the tree T
are lower edges of v(i).
3b. For j = 1 to r-1
IF there is a hung vertex v(h) between
e(j) and e(j+1) THEN
add a new edge {v(h), v(i)};
unhang v(h);
3c. Delete the lower edges e(2), ...., e(r-1) of
v(i) from T if they exist;
3d. IF v(i) has upper edges THEN
insert the upper edges of v(i) into T
ELSE
hang v(i) between the nearest left and
right edges e(1) and e(r) if i <> n.
END.
We give the analysis of the algorithm. Step 1 can be done in time O(n log n)
by any optimal sorting algorithm. Since each leaf of T corresponds to an edge
in G and G is a planar graph, T contains at most O(n) leaves. Consequently,
the depth of the tree T is at most O(log n). Thus Step 3a can be done in
time O(log n) for each vertex of G. Each vertex of G can be hung and
unhung at most once, so the total time used to hang and unhang vertices
of G is bounded by O(n). Finally, each edge of G is inserted exactly once
(at its lower endpoint) then deleted exactly once (at its upper endpoint) in
the tree T , thus the time spent on inserting and deleting a single edge of
G is bounded by O(log n). Summarizing all these discussions, we conclude
that the algorithm ADD-UPPER-EDGES has time complexity O(n log n).
Chapter 5
Divide and Conquer
BEGIN
0. IF n = 1 THEN
Solve the problem P directly and STOP;
1. Divide the problem P into k subproblems of size n/k;
2. Recursively solve each subproblem;
3. Combine the solutions to the subproblems to obtain
a solution to the problem P;
END.
Algorithm MERGEHULL
BEGIN
1. Sort S by x-coordinates;
2. Call MHULL(S)
END.
Algorithm MHULL(S)
BEGIN
1. IF S contains less than four points, construct the
convex hull CH(S) directly. Otherwise, do the
following.
2. Split S into two subsets S_1 and S_2 of roughly
equal size, such that the x-coordinate of any point
in S_1 is less than the x-coordinate of any point
in S_2;
3. Recursively call MHULL(S_1) and MHULL(S_2) to
construct the convex hulls CH(S_1) and CH(S_2);
4. MERGE(CH(S_1), CH(S_2)) to obtain CH(S).
END.
All that is left to specify is how to perform the subroutine
MERGE(CH(S1), CH(S2)). For this, we must find two lines: one that is
tangent to the top of both CH(S1) and CH(S2) (the upper bridge) and one
that is tangent to the bottom of both hulls (the lower bridge). Let u(S1)
and l(S1) be the vertices in set S1 that are on the upper and lower bridges,
respectively (similarly define u(S2) and l(S2)). Then all vertices in CH(S1)
proceeding clockwise from u(S1) to l(S1) can be discarded. Similarly, all
vertices in CH(S2) proceeding counterclockwise from u(S2) to l(S2) can be
discarded. All the remaining vertices form the convex hull CH(S).
Now we find the upper bridge (finding the lower bridge is a symmetric operation).
Let us assume that the convex hulls CH(S1) and CH(S2) are each stored
as a doubly-linked list. In constant time, we can add a point, delete a point,
or find the clockwise or counterclockwise neighbor of a point. Suppose we
had a guess for the endpoints of the upper bridge. How can we verify the
guess? Suppose we guess that some line l through p ∈ CH(S1) is tangent
to the hull CH(S1) at point p. Let p' and p'' be the two neighbors of the
point p in the hull CH(S1). The line l is tangent to the top of CH(S1) at
the point p if and only if both points p' and p'' are on or below the line l.
Therefore, to construct the upper bridge, we can pick any hull vertex
p from CH(S1) and any hull vertex q from CH(S2) and let l be the line
through p and q. Now we try to "lift" the line l as much as possible under the
condition that l intersects both hulls CH(S1) and CH(S2). Once we cannot
lift the line l anymore, the line l must be tangent to the top of both CH(S1)
and CH(S2), i.e., l is the upper bridge of CH(S1) and CH(S2). Note that if
the two neighbors p' and p'' of the point p are on the two sides of the line l,
we can always use the "signed triangle area" to decide which neighbor is above
the line l.
We give the detailed algorithm as follows.
BEGIN
1. Let p be the point in CH(S_1) with the smallest
x-coordinate, and let q be the point in CH(S_2)
with the largest x-coordinate. Let L be the
line through p and q;
2. WHILE L is not the upper bridge DO
2.1. WHILE there is a neighbor p' of p in CH(S_1)
above the line L, replace the point p by the
point p' and construct the new line L;
2.2. WHILE there is a neighbor q' of q in CH(S_2)
above the line L, replace the point q by the
point q' and construct the new line L;
END.
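The test used in the WHILE conditions above — whether a neighbor of p lies above the line L — is exactly the signed-triangle-area test mentioned earlier. A small Python sketch of it, with illustrative names (the line is assumed non-vertical and is directed from its left endpoint to its right endpoint):

def area2(a, b, c):
    # twice the signed area of the triangle abc; > 0 iff c is to the left of a->b
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def above_line(p, q, r):
    # is r strictly above the (non-vertical) line through p and q?
    a, b = sorted([p, q])            # direct the line from left to right
    return area2(a, b, r) > 0

def upper_tangent_at(p, neighbors, q):
    # the line through p and q is tangent to the top of p's hull at p
    # iff both hull neighbors of p are on or below it
    return all(not above_line(p, q, n) for n in neighbors)

assert upper_tangent_at((1, 2), [(0, 0), (2, 1)], (4, 3))
assert not upper_tangent_at((1, 2), [(0, 0), (2, 3)], (4, 3))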
Algorithm QUICKHULL(S)
BEGIN
1. Find the points p_min and p_max in S, with
the smallest and largest x-coordinates,
respectively;
2. Let S' be the subset of points in S that are
above the line L through p_min and p_max,
and let S'' be the set of points in S that
are below the line L;
3. Call UpperHULL(S', p_min, p_max) and
LowerHULL(S'', p_min, p_max);
4. Catenate the upper and lower hulls.
END.
Algorithm UpperHULL(S, l, r)
BEGIN
1. Find a point p in S that is farthest from
the line through l and r;
2. Let S_1 be the subset of S that contains all
the points above the line through l and p,
and let S_2 be the subset of S that contains
all the points above the line through p and r;
3. Recursively call UpperHULL(S_1, l, p) and
UpperHULL(S_2, p, r);
4. Catenate the two parts obtained in Step 3;
END.
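A short Python sketch of the UpperHULL recursion above (hedged: the point names and the driver are illustrative, and points lying on the dividing line are discarded):

def area2(a, b, c):
    # > 0 iff c lies to the left of the directed line a -> b
    # (i.e., above it when a is to the left of b)
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def upper_hull(points, l, r):
    # hull vertices strictly between l and r, listed from left to right
    above = [p for p in points if area2(l, r, p) > 0]
    if not above:
        return []
    p = max(above, key=lambda q: area2(l, r, q))   # farthest from the line l-r
    return upper_hull(above, l, p) + [p] + upper_hull(above, p, r)

def quickhull_upper(points):
    # endpoints with the smallest and largest x-coordinates (ties broken by y)
    l, r = min(points), max(points)
    return [l] + upper_hull(points, l, r) + [r]

pts = [(0, 0), (4, 0), (1, 3), (3, 2), (2, 1), (2, 4)]
assert quickhull_upper(pts) == [(0, 0), (1, 3), (2, 4), (4, 0)]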
(Figures: a Voronoi vertex v with incident edges e1, e2, ..., ek and Voronoi polygons V1, V2, ..., Vk; and points pi, pj with an edge e and a circle C.)
Therefore, the number of vertices, the number of edges, and the number
of regions of a Voronoi diagram are all of order O(n).
We can use the Doubly-Connected Edge List (DCEL), as introduced in
Section 1.4, to represent a Voronoi diagram of a set of points in the plane
in a computer. For this we need a slight generalization. For each unbounded
Voronoi polygon V in a Voronoi diagram, we call the semi-infinite ray r of
V the first ray of V if, when we travel from infinity along the ray r toward
the Voronoi vertex from which r originates, the region V is on our right.
The other semi-infinite ray of V is called the last ray of V. Now given a
semi-infinite ray r of a Voronoi diagram, suppose that r is the last ray of
a Voronoi polygon Vi. Then in the edge node corresponding to the ray r,
the pointer P2 will point to the semi-infinite ray that is the first ray of the
Voronoi polygon Vi. Moreover, each region V, which is a Voronoi polygon
of the Voronoi diagram, can be named by its corresponding point in the set
S.
BEGIN
1. Presort the points in the set S by x-coordinate;
2. Call the subroutine Voronoi(S)
END.
Algorithm Voronoi(S)
BEGIN
1. Split the set S into two approximately equal
size subsets S_L and S_R by a vertical line
L such that all points in S_L are on the left
side of L and all points in S_R are on the
right side of L;
2. Recursively call Voronoi(S_L) and Voronoi(S_R);
3. Merge Vor(S_L) and Vor(S_R) to construct Vor(S).
END.
Step 1 in the algorithm Voronoi(S) can be done in linear time, since the
given set S is sorted by x-coordinate. If the merge part (Step 3) in the
algorithm Voronoi(S) can also be done in linear time, then by the standard
technique in Algorithm Analysis, the algorithm Voronoi(S) runs in time
O(n log n). Consequently, the algorithm VORONOI DIAGRAM runs in time
O(n log n).
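(Concretely, if the merge step takes at most cn time for some constant c, the running time satisfies the recurrence T(n) <= 2T(n/2) + cn, which unfolds over the O(log n) levels of the recursion to T(n) = O(n log n).)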
Therefore, the problem of constructing the Voronoi diagram of the set
S in time O(n log n) is reduced to the problem of merging in linear time
the two Voronoi diagrams Vor(SL) and Vor(SR) into the Voronoi diagram
Vor(S), where SL and SR are two sets separated by a vertical line l and
SL ∪ SR = S.
Consider the Voronoi diagrams Vor(S), Vor(SL), and Vor(SR). We first
discuss which parts of Vor(S) can be missing from Vor(SL) and Vor(SR). Let e be
a Voronoi edge of Vor(S) defined by two points pi and pj of S, that is, e is
a Voronoi edge on the boundary between the Voronoi polygons Vi and Vj
of the points pi and pj, respectively. By the definition of Voronoi polygons,
the points pi and pj are the closest points in the set S to the points on the
edge e. If both pi and pj are in the set SL, then the points pi and pj must
be the closest points in the set SL to the points on the edge e since the set
SL is a subset of the set S . Therefore, the edge e must be also present in the
Voronoi diagram Vor(SL ), either as a Voronoi edge or as part of a Voronoi
edge of Vor(SL). Similarly, if both pi and pj are in the set SR, then the edge
e must be also present in the Voronoi diagram Vor(SR), either as a Voronoi
edge or as part of a Voronoi edge of Vor(SR). Therefore, a Voronoi edge e
of Vor(S) that is missing in both Vor(SL) and Vor(SR) must be dened by
two points such that one is in the set SL and the other is in the set SR.
Let σ be the subgraph of Vor(S) that consists of the Voronoi edges of
Vor(S) that are defined by the pairs (pi, pj) of points in S such that pi ∈ SL
and pj ∈ SR. We do not presume that σ is a connected graph. We first
discuss what σ looks like.
Lemma 5.3.1 Each vertex of σ has degree exactly 2.
proof.    Since each vertex v of σ is also a Voronoi vertex of Vor(S), by
Lemma 5.2.1, the degree of v is at most 3 in σ. Suppose that e1, e2, and
e3 are the three Voronoi edges incident at v in the Voronoi diagram Vor(S),
and that V1, V2, and V3 are the Voronoi polygons incident at v such that e1
is between V1 and V2, e2 is between V2 and V3, and e3 is between V3 and V1.
Let p1, p2, and p3 be the three points in the set S that correspond to the
Voronoi polygons V1, V2, and V3, respectively.
If the vertex v has degree 3 in σ, then all the Voronoi edges e1, e2, and e3
are in σ. Since e1 is in σ, by the definition of σ, without loss of generality,
we can suppose that the point p1 is in the set SL and the point p2 is in the
set SR. Then because e2 is between V2 and V3 and e2 is in σ, the point p3
must be in the set SL. Finally, because e3 is between V3 and V1 and e3 is
in σ, we must also have that p1 is in SR. This gives us a contradiction: the
point p1 is in both sets SL and SR. Therefore, the vertex v cannot have
degree 3 in σ.
Suppose now that the vertex v has degree 1 in σ, and suppose that the
unique Voronoi edge that is incident on v and in σ is e1. Thus we can
suppose, without loss of generality, that the point p1 is in the set SL and the
point p2 is in the set SR. However, now if the point p3 is in the set SL then
the edge e2 should be in σ, while if the point p3 is in the set SR, then the
edge e3 should be in σ; either case contradicts the assumption that the
vertex v has degree 1 in σ.
This proves that each vertex of σ has degree 2 in σ.
Therefore, each connected component of σ is either a simple closed cycle
or a simple chain both of whose ends are semi-infinite rays.
l1 , while p2 is on the left side of l2 and p3 is on the right side of l2. But
this contradicts the assumption that the sets SL and SR are separated by a
vertical line.
This proves that the connected component C of σ must be monotone.
BEGIN
1. Construct the separating chain SIGMA;
2. Delete all edges and partial edges of Vor(S_L) that are
entirely on the right side of SIGMA;
3. Delete all edges and partial edges of Vor(S_R) that are
entirely on the left side of SIGMA;
END.
None of these steps can obviously be done in linear time. In the remainder
of this section, we will discuss how to construct the separating chain σ. In
the meantime, we find all intersections of σ with Vor(SL) and Vor(SR), and
delete the proper edges and partial edges from Vor(SL) and Vor(SR).
First we consider how to construct the two semi-infinite rays of the chain
σ. Let the two semi-infinite rays of the chain σ be l1 and l2. Suppose that
l1 is the Voronoi edge of Vor(S) that is shared by two unbounded Voronoi
polygons V1 and V2 of two points p1 and p2 in the set S , respectively. By
Lemma 5.2.4, the points p1 and p2 are two consecutive hull vertices of the
convex hull CH(S), and the ray l1 is on the perpendicular bisector of the
segment p1p2. Since l1 is in σ, we can suppose that the point p1 is in the
set SL and the point p2 is in the set SR . Therefore, the segment p1p2 is in
fact a supporting bridge of the two convex hulls CH(SL) and CH(SR) (see
Section 4.1 and note that the two sets SL and SR are separated by a vertical
line). Similarly, the ray l2 is on the perpendicular bisector of the other
supporting bridge of the two convex hulls CH(SL) and CH(SR). Therefore,
if the two convex hulls CH(SL) and CH(SR) are known, then we can find the
two bridges of CH(SL) and CH(SR) in linear time (see Section 4.1). With
these two bridges, the two semi-infinite rays of σ can be found in constant
time. Note that in the meantime, we have also constructed in linear time the
convex hull CH(S) of the set S as a by-product, which can be used in the
later induction steps. Therefore, the algorithm for constructing the chain σ
looks as follows.
Algorithm CONSTRUCTING-SIGMA
BEGIN
1. Find the upper bridge b_u and the lower bridge
b_l of the two convex hulls CH(S_L), CH(S_R);
2. Construct the perpendicular bisectors l_u and
l_l of the bridges b_u and b_l, respectively;
3. With the bridges b_u and b_l, construct the
convex hull CH(S);
4. traverse the chain SIGMA in the direction of
decreasing y, starting from the infinite end
of the upper ray l_u of SIGMA, construct SIGMA
edge by edge, until the lower ray l_l is
reached;
END.
Step 1 and Step 3 can be done in linear time, by the discussion of Sec-
tion 4.1. Step 2 can easily be done in constant time. We must discuss how
Step 4 is done in linear time. In the meantime, we also have to discuss how we
find the intersections of σ with the Voronoi diagrams Vor(SL) and Vor(SR),
delete the proper edges and partial edges from Vor(SL) and Vor(SR), and
construct the Voronoi diagram Vor(S).
Remember that we can use Doubly-Connected-Edge-List (DCEL) to rep-
resent a Voronoi diagram. We suppose that the Voronoi diagrams Vor(SL )
and Vor(SR) are represented by two DCELs. Moreover, we suppose that
the rotation of edges incident on each vertex of Vor(SL) is given in coun-
terclockwise order in the corresponding DCEL, while the rotation of edges
incident on each vertex of Vor(SR) is given in clockwise order. Therefore,
the regions of Vor(SL) will be traced clockwise, while the regions of Vor(SR)
will be traced counterclockwise, by the algorithm TRACE-REGION given
in Section 1.4.
Now suppose inductively that we are traversing the chain σ in the di-
rection of decreasing y, and we are in the intersection area of the Voronoi
polygon VL of Vor(SL) of some point pL ∈ SL and the Voronoi polygon VR
of Vor(SR) of some point pR ∈ SR. Since in this area the closest point of
SL is pL and the closest point of SR is pR, we must follow the perpendicular
bisector of the segment pLpR, in the direction of decreasing y. Suppose that along
this direction we are traversing an edge e0 of σ. We keep going along this
direction until we hit a Voronoi edge e of Vor(SL) or of Vor(SR). Without
loss of generality, suppose that e is a Voronoi edge of Vor(SR). The edge e
is on the boundary of the Voronoi polygon VR, so e must be defined by the
point pR and another point pR' ∈ SR. Let the Voronoi polygon of the point
pR' in Vor(SR) be VR'. If we keep going in the same direction, we will cross the
edge e and enter the Voronoi polygon VR' of Vor(SR). Now the closest point
in the set SR is the point pR'. The closest point in the set SL is still the point
pL. Therefore, to continue traversing the chain σ, we should go along the
perpendicular bisector of the segment pLpR', in the direction of decreasing y.
To make this change, at the intersection of the chain σ and the edge e, we
simply switch our direction from the perpendicular bisector of pLpR to the
perpendicular bisector of pLpR', both in the direction of decreasing y. Now
we are on the next edge of the chain σ. We inductively work in this way to
find the next edge of the chain σ, and so on, until we hit the lower ray l_l of σ.
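Each step of this traversal needs the perpendicular bisector of the current pair, oriented toward decreasing y. A tiny Python sketch of that computation (a hedged illustration; it assumes, as here, that the two points have different x-coordinates because they are separated by the vertical line):

def bisector_downward(p, q):
    # midpoint and a direction vector of the perpendicular bisector of pq,
    # oriented so that the y-coordinate is non-increasing
    mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    dx, dy = q[0] - p[0], q[1] - p[1]
    bx, by = -dy, dx                 # a direction perpendicular to pq
    if by > 0:                       # flip if it points toward increasing y
        bx, by = -bx, -by
    return (mx, my), (bx, by)

mid, direction = bisector_downward((0, 0), (2, 2))
assert mid == (1.0, 1.0) and direction == (2, -2)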
Note that we have no difficulty initializing this process. We can start
at a point p on the upper ray l_u that is "far enough" from the upper bridge
b_u = (pL, pR), where pL ∈ SL and pR ∈ SR. Then we must be in the
intersection area of the Voronoi polygon of pL in Vor(SL) and the Voronoi
polygon of pR in Vor(SR).
Summarizing this discussion, we get the following algorithm.
Algorithm CONSTRUCTING-SIGMA
BEGIN
1. Let p_0 be a point on the upper ray l_u that
is far enough from the upper bridge
b_u = (p_L, p_R), where p_L is in S_L, and
p_R is in S_R. Let l_0 be the semi-infinite
ray originating from the point p_0 that has
the opposite direction of the ray l_u, and let
V_L and V_R be the Voronoi polygons of the
points p_L and p_R in the Voronoi diagrams
Vor(S_L) and Vor(S_R), respectively;
2. IF l_0 is not identical with the lower ray l_l
THEN
2.1. Compute the point q_L that is the intersection
of l_0 with the boundary of V_L, and compute
the point q_R that is the intersection of l_0
with the boundary of V_R;
2.2 IF p_0 is closer to q_L than to q_R, THEN
suppose that the point q_L is on a Voronoi edge
e_L of Vor(S_L) that is defined by the point
p_L and another point p_L' in S_L, then let
p_0 = q_L, and let l_0 be the semi-infinite ray
originating from q_L that is on the perpendicular
bisector of the segment {p_L', p_R} in the direction
of decreasing y. Finally, let the current Voronoi
polygon V_L of Vor(S_L) be the Voronoi polygon of
the point p_L';
2.3. IF p_0 is closer to q_R than to q_L THEN
update the parameters p_0, l_0, and V_R similarly;
3. Go back to Step 2.
END.
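To make the bisector bookkeeping concrete, the following small Python sketch (illustrative only; the names Point, bisector_downward, and step_across_edge are not from the text) shows the single operation performed at the intersection with the Voronoi edge e: the chain keeps the same point pL of SL but switches from the bisector of pL pR to the bisector of pL p'R, always oriented toward decreasing y.

# A minimal sketch of the bisector switch used when tracing the chain sigma.
# We assume, as in the text, that every point of S_L has smaller x-coordinate
# than every point of S_R, so the bisectors below are never horizontal.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

def bisector_downward(pl: Point, pr: Point):
    """Return (anchor, direction) of the perpendicular bisector of pl-pr,
    with the direction chosen so that it points toward decreasing y."""
    mid = Point((pl.x + pr.x) / 2.0, (pl.y + pr.y) / 2.0)
    dx, dy = pr.x - pl.x, pr.y - pl.y
    dirx, diry = dy, -dx                  # a vector perpendicular to pl -> pr
    if diry > 0:                          # flip so that we head downward
        dirx, diry = -dirx, -diry
    return mid, (dirx, diry)

def step_across_edge(pl: Point, pr: Point, pr_new: Point):
    """Crossing a Voronoi edge of Vor(S_R) that separates the regions of pr and
    pr_new: the next edge of the chain lies on the bisector of pl and pr_new."""
    return bisector_downward(pl, pr_new)

if __name__ == "__main__":
    pL, pR, pR2 = Point(0, 0), Point(4, 0), Point(4, -3)
    print(bisector_downward(pL, pR))      # bisector followed before the crossing
    print(step_across_edge(pL, pR, pR2))  # bisector followed after the crossing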
[Figure 5.5: the ray l0 inside the Voronoi polygon VL; labels in the figure: e0, p0, VL, qL, qR, eL, l0, new l0.]
Suppose that the boundary edge eL of VL intersects the ray l0 at the point qL. (Note that there is only one such boundary edge of VL.) Similarly find the point qR. If the point p0 is closer to the point qR than to the point qL, then the chain σ makes a turn at the point qR. Since the point qR is in the interior of VL, by Lemma 5.3.5, the turn of σ at qR must be a right turn. We modify the parameters p0, l0, and VR properly. Now we have to find the intersection of the new l0 with VL again. However, since the turn of the chain σ at the point qR is a right turn, the new l0 cannot intersect any of the edges between the edges e0 and eL that we have already traced. See Figure 5.5. Therefore, to find the intersection of VL and the new l0, we can trace the region VL starting from the edge eL. If the chain σ eventually exits VL, then we must come to an exit edge eE of VL for σ before we trace back to the edge e0. Therefore, to traverse the partial chain of σ in the Voronoi polygon VL from the entering edge e0 to the exit edge eE, we only have to trace the boundary edges of VL between the edge e0 and the edge eE clockwise.
This is still not the end, however. Although traversing a continuous partial chain of σ in the Voronoi polygon VL can be done efficiently, there may be more than one continuous partial chain of σ contained in the Voronoi polygon VL. We must prove that traversing all these continuous partial chains of σ in VL can also be done efficiently. Let P1 and P2 be two continuous partial chains of σ such that P1 enters VL at an edge e0 and exits VL at an edge eE, while P2 enters VL at an edge e'0 and exits VL at an edge e'E. As we discussed above, to traverse P1 we need to trace the boundary edges of VL between the edge e0 and eE clockwise. As we explained in the proof
of Lemma 5.3.5, the partial chains P1 and P2 are both on the boundary of the Voronoi polygon V of the point pL in the Voronoi diagram Vor(S). Since all turns on P1 are right turns, the area in VL between P1 and the partial boundary of VL that we have traced is excluded from the Voronoi polygon V of the point pL in the Voronoi diagram Vor(S). Now the partial chain P2 is also on the boundary of the Voronoi polygon V, so P2 cannot enter or exit VL through an edge that is between e0 and eE. Therefore, the edges e'0 and e'E must be among the untraced boundary edges of VL (including the edges e0 and eE). In other words, the sequence of the boundary edges of VL we trace for P1 and the sequence of the boundary edges of VL we trace for P2 are internally disjoint. This conclusion is easily generalized to more than two continuous partial chains of σ in the Voronoi polygon VL.
Therefore, a boundary edge of VL at which no partial chain of σ enters or exits is traced by our algorithm at most once. On the other hand, for a boundary edge of VL at which some partial chains of σ enter and/or exit, each visit of the edge produces a new edge on the chain σ. Therefore, the total time for traversing the chain σ is bounded by O(nσ + mL), where nσ is the number of edges on the chain σ and mL is the sum of the region sizes over all regions of Vor(SL). Since nσ is bounded by n, the number of points in the set S, and mL equals two times the number of edges of Vor(SL), which is bounded by 3n by Lemma 5.2.6, the total time to construct the chain σ is bounded by O(n).
The traversal of the chain σ in a Voronoi polygon VR of the Voronoi diagram Vor(SR) can be done symmetrically. Here, since the rotation of edges incident on each vertex of Vor(SR) is clockwise in the DCEL, the regions of Vor(SR) are traced counterclockwise. Completely similarly to Lemma 5.3.5, we can prove that if σ makes a turn at an interior point of VR, then the turn must be a left turn. Therefore, the chain σ can also be traversed efficiently in the Voronoi polygons of Vor(SR), and the total time is also bounded by O(n).
Finally, we explain how to delete the edges and partial edges of Vor(SL) that are on the right side of σ and the edges and partial edges of Vor(SR) that are on the left side of σ. Note that when we traverse the chain σ in the direction of decreasing y, we find all intersections of σ with the Voronoi diagrams Vor(SL) and Vor(SR). Therefore, it is easy to decide which parts of the Voronoi diagrams should be thrown away.
Therefore, we conclude that the running time of the algorithm MERGE(
Vor(SL), Vor(SR)) is O(n). Consequently, the running time of the algorithm
VORONOI DIAGRAM is O(n log n).
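The O(n log n) bound for the algorithm VORONOI DIAGRAM can also be read off from the usual divide-and-conquer recurrence, using the O(n) merge just established (a brief sketch):

    T(n) = 2 T(n/2) + O(n),  which gives  T(n) = O(n log n).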
Theorem 5.3.6 Given a set S of n points in the plane, the Voronoi diagram
of S can be constructed in time O(n log n).
Chapter 6
Prune and Search
BEGIN
0. IF the size n of P is small
Solve P directly and STOP;
1. `Prune' the problem P into k smaller problems
P1, P2, ..., Pk, of size (c_1)n, (c_2)n, ...,
(c_k)n, respectively, such that
(c_1) + (c_2) + ... + (c_k) <= c < 1
where c is a fixed constant;
2. Recursively solve the problems P1, P2, ..., Pk;
3. Use the results of Step 2 to derive a solution
for the problem P;
END.
Algorithm UpperBridge(S, l)
{ l is the vertical line separating S_L and S_R }
BEGIN
1. Arbitrarily pair up the points of S:
   (p_1, q_1), (p_2, q_2), ..., (p_{n/2}, q_{n/2}),
   where x(p_i) <= x(q_i) for each pair;
2. Let the slope of the segment [p_i, q_i] be s_i,
   i = 1, ..., n/2. Use the Median Finding
   algorithm to find a pair (p_l, q_l) whose
   slope s_l is the median of
   s_1, s_2, ..., s_{n/2};
3. Construct an upper supporting line L with the
slope s_l. To do this, draw a line with the
slope s_l through each point in S. Then take
the line that has the highest intersection with
the y-axis;
4. If L passes through points in both S_L and S_R,
then L is the upper bridge we want, so we stop
and return; Otherwise, we do the following steps;
5. If L passes through only points in S_L, then scan
the list of pairs (p_i, q_i) we made in Step 1.
If the slope of a segment [p_i, q_i] is not less
than the slope of the supporting line L, then
throw away the point p_i;
6. If L passes through only points in S_R, then scan
the list of pairs (p_i, q_i) we made in Step 1.
If the slope of a segment [p_i, q_i] is not larger
than the slope of the supporting line L, then
throw away the point q_i;
7. Let S' be the set of the remaining points of S,
recursively call UpperBridge(S', l).
END.
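The pairing, median, and pruning steps can be sketched in a few lines of Python (an illustrative sketch only: the helper prune_round is not from the text, pairs with equal x-coordinates and slope ties are ignored, and the test for the points touched by the supporting line uses a small tolerance).

from statistics import median

def prune_round(points, a):
    """points: list of (x, y); a: x-coordinate of the vertical dividing line.
    Performs Steps 1-6 once: returns either the points touched by the
    supporting line (Step 4) or the pruned candidate set (Steps 5 and 6)."""
    pts = sorted(points)                                  # pair by x: p_i left of q_i
    pairs = [(pts[i], pts[i + 1]) for i in range(0, len(pts) - 1, 2)]
    leftover = [pts[-1]] if len(pts) % 2 else []

    slope = lambda p, q: (q[1] - p[1]) / (q[0] - p[0])
    K = median(slope(p, q) for p, q in pairs)             # Step 2

    c = max(y - K * x for x, y in pts)                    # Step 3: supporting line
    touched = [(x, y) for x, y in pts if abs(y - K * x - c) < 1e-9]

    if any(x <= a for x, y in touched) and any(x > a for x, y in touched):
        return touched                                    # Step 4: bridge endpoints found

    survivors = list(leftover)
    if all(x <= a for x, y in touched):                   # Step 5
        for p, q in pairs:
            survivors.extend([q] if slope(p, q) >= K else [p, q])
    else:                                                 # Step 6
        for p, q in pairs:
            survivors.extend([p] if slope(p, q) <= K else [p, q])
    return survivors                                      # Step 7 recurses on these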
The correctness of the algorithm UpperBridge can be proved using the
discussion preceding the algorithm: we never delete the points on the upper
bridge. Now let us consider the time complexity of the algorithm. Step 1,
Step 3, and Step 4 can obviously be done in time O(n). Step 2 can be done
in linear time using the Median Finding algorithm described before. Now let
us consider how many points are left for the recursive call of the algorithm
in Step 7. Since the slope s_l of L is the median of the slopes of the segments p_i q_i, for i = 1, ..., n/2, if Step 5 is executed, then at least half of the segments p_i q_i have a slope not less than s_l, so the corresponding points p_i are thrown away. Therefore, at least one fourth of the points in S are thrown away. Similarly, if Step 6 is executed, at least one fourth of the points in S are thrown away. Therefore, at most three fourths of the points in S are left for the recursive call in Step 7. Let T(n) be the time complexity of the algorithm UpperBridge; then we have the following recurrence relation:
    T(n) = O(n) + T(3n/4)
It is easy to obtain that T(n) = O(n). Therefore, the algorithm UpperBridge runs in linear time.
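Unrolling the recurrence makes the linear bound explicit: if T(n) ≤ cn + T(3n/4) for some constant c, then

    T(n) ≤ cn (1 + 3/4 + (3/4)^2 + ···) = 4cn = O(n).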
With this preparation, we are now able to present the Kirkpatrick-Seidel algorithm as follows.
Algorithm KIRKPATRICK-SEIDEL(S)
BEGIN
1. Let p_min and p_max be the points in S with
the smallest and the largest x-coordinates,
respectively, let the line through p_min and
p_max be L;
2. Split the set S into two subsets S' and S'',
such that S' is the set of points of S above
the line L, and S'' is the set of points of
S below the line L;
3. Call UpperHull(S', p_min, p_max);
4. Call LowerHull(S'', p_min, p_max);
END.
Step 1 and Step 2 of the algorithm KIRKPATRICK-SEIDEL can be done
in linear time. The subroutines UpperHull and LowerHull are similar. We
only discuss the subroutine UpperHull as follows.
Algorithm UpperHull(S, p_min, p_max)
BEGIN
1. Use the Median Finding algorithm to find a
   vertical line L_d that divides the set S
   into two equal-size subsets S_L and S_R;
2. Call UpperBridge(S, L_d) to construct the
upper bridge [p_l p_r] of S_L and S_R, where
p_l is in S_L and p_r is in S_R;
3. Let S' be the set of points in S that are
above the line through p_min and p_l, and let
S'' be the set of points in S that are above
the line through p_r and p_max;
4. Recursively call UpperHull(S', p_min, p_l) and
UpperHull(S'', p_r, p_max);
5. Merge the results of Step 4 with the upper
bridge [p_l p_r] properly;
END.
(log n).
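For the running time of UpperHull, note that the median computation and the call to UpperBridge take O(n) time, and each of the two recursive calls receives at most half of the points; a rough bound is therefore

    T(n) ≤ 2 T(n/2) + O(n) = O(n log n).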
Let us consider a simple example. Suppose that the PSLG G is a convex polygon P of n vertices, and that the vertices of P are given in, say, counterclockwise order {v1, v2, ..., vn}. We first organize P by the following algorithm:
Algorithm PREPROCESSING (P)
Given: a convex polygon P
Output: an organized structure L for P
BEGIN
1. Find an internal point p_0 of P;
2. For each edge {v_i, v_(i+1)} of P, i = 1, ..., n,
(where we let v_(n+1) = v_1) construct the wedge
W_i formed by the ray started at the point p_0
and passing through v_i (call it the starting ray
of the wedge W_i) and the ray started at p_0 and
passing through v_(i+1) (call it the ending ray
of the wedge W_i);
3. Sort the wedges { W_i | 1 <= i <= n } by the polar
   angles of their starting rays. Let the sorted list be L;
4. Attach the edge {v_i, v_(i+1)} to the element of L
corresponding to the wedge W_i, for i = 1, ..., n;
END.
With the list L, we can locate each query point by the following algo-
rithm.
BEGIN
1. Compute the polar angle of the ray starting at
   p_0 and passing through q;
2. Use binary search on the list L to locate the
   point q in a wedge W_i;
3. The point q is inside the convex polygon P if and
only if the point q is inside the triangle formed
by the wedge W_i and the edge {v_i, v_(i+1)};
END.
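A compact Python sketch of the whole scheme follows (illustrative only; preprocess and inside are not the book's names, the centroid of the vertices is used as the interior point p_0, and the wedges are searched by polar angle).

from bisect import bisect_right
from math import atan2, pi

def preprocess(polygon):
    """polygon: list of (x, y) vertices of a convex polygon in ccw order."""
    x0 = sum(x for x, _ in polygon) / len(polygon)    # an interior point p_0:
    y0 = sum(y for _, y in polygon) / len(polygon)    # the centroid of the vertices
    ang = lambda p: atan2(p[1] - y0, p[0] - x0) % (2 * pi)
    order = sorted(range(len(polygon)), key=lambda i: ang(polygon[i]))
    verts = [polygon[i] for i in order]
    return (x0, y0), verts, [ang(v) for v in verts]

def inside(prep, q):
    (x0, y0), verts, angles = prep
    a = atan2(q[1] - y0, q[0] - x0) % (2 * pi)
    i = bisect_right(angles, a) - 1                   # binary search for the wedge W_i
    v1, v2 = verts[i], verts[(i + 1) % len(verts)]
    # Inside the wedge, q is in the polygon iff it lies on the inner side of the
    # single edge v1 v2 attached to the wedge (non-negative cross product).
    cross = (v2[0] - v1[0]) * (q[1] - v1[1]) - (v2[1] - v1[1]) * (q[0] - v1[0])
    return cross >= 0

prep = preprocess([(0, 0), (4, 0), (4, 3), (0, 3)])
print(inside(prep, (1, 1)), inside(prep, (5, 1)))     # True False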
BEGIN
1. Using the y-coordinate y_0 of the point p_0,
we perform binary search in the list L to find
a slab L[i] that contains the point p_0;
2. Using the x-coordinate x_0 of the point p_0,
we perform binary search in the list l_i to
find a pair of edge segments e_1 and e_2 such
that the point p_0 is between these two edge
segments;
END.
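The two binary searches can be written down directly; the following Python fragment is an illustrative sketch (the representation of the slabs and of the per-slab edge lists is an assumption, not the book's data structure).

from bisect import bisect_right

def x_at(edge, y):
    """x-coordinate of a (non-horizontal) edge ((x1, y1), (x2, y2)) at height y."""
    (x1, y1), (x2, y2) = edge
    return x1 + (x2 - x1) * (y - y1) / (y2 - y1)

def locate(ys, slab_edges, p):
    """ys: sorted vertex y-coordinates; slab_edges[i]: the edges crossing slab i,
    sorted from left to right (slab i lies between ys[i-1] and ys[i])."""
    px, py = p
    i = bisect_right(ys, py)              # first binary search: the slab of p
    edges = slab_edges[i]
    lo, hi = 0, len(edges)                # second binary search: the two edges
    while lo < hi:                        # enclosing p within the slab
        mid = (lo + hi) // 2
        if x_at(edges[mid], py) <= px:
            lo = mid + 1
        else:
            hi = mid
    left = edges[lo - 1] if lo > 0 else None
    right = edges[lo] if lo < len(edges) else None
    return i, (left, right)               # this pair identifies the region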
then represents exactly the list of the edge segments, ordered from left to
right, of the next slab. We print the leaves of each 2-3 tree, from left to right,
and obtain the lists l_i for i = 1, ..., n + 1. The following is the algorithm
of the preprocessing of the slab method. For simplicity, we assume that no
two vertices of the PSLG G have the same y-coordinate. If this condition is not satisfied, we either rotate the coordinate system slightly, or make a straightforward modification of the algorithm.
BEGIN
1. Sort the vertices of G by increasing y-coordinate.
Let the sorted list of the vertices of G be
{ v_1, v_2, ..., v_n }
Then construct the list L;
(each slab L[i] of L, i = 1, ..., n+1, is
associated with two vertices v_{i-1} and v_i
of G, one is on the lower boundary and the other
is on the upper boundary of the slab, where v_0
has a very large negative y-coordinate while v_(n+1)
has a very large positive y-coordinate.)
2. For slab L[1], construct an empty 2-3 tree T_1.
The list l_1 for the slab L[1] is also empty;
Set k = 2;
3. Look at the vertex v_(k-1), delete all lower edges
of the vertex v_(k-1) from the tree T_(k-1) and
insert all upper edges of the vertex v_(k-1) into
the tree T_(k-1). The resulting tree T_k is the
2-3 tree for the slab L[k].
4. Read the leaves of the 2-3 tree T_k, from left
to right, and produce the list l_k;
5. If k <= n then k = k + 1 and go back to Step 3;
END.
[Figure: a 5 × 5 grid of vertices v_(i,j) with x-coordinates x1, ..., x5 and y-coordinates y1, ..., y5; the center vertex is v_(3,3).]
Algorithm CONSTRUCTING-TREE(R_m)
BEGIN
1. If R_m is a 2 by 2 rectangle, then R_m is a
single region. Create a tree node for R_m
and attach the name of the region to the
node; STOP.
2. { R_m is not a single region. }
Create a node N_m for R_m, attach the center
vertex v_(m_0,m_0) of R_m to N_m. Draw a
horizontal line and a vertical line passing
through the center vertex v_(m_0,m_0) that divide
the rectangle R_m into four (m/2) by (m/2)
subrectangles;
3. Recursively call the algorithm CONSTRUCTING-TREE
on the four (m/2) by (m/2) subrectangles. Let
the resulting four trees be T_1, T_2, T_3, and
T_4;
4. Let T_1, T_2, T_3 and T_4 be the children of the
node N_m;
END.
Algorithm LOCATING(p_0)
BEGIN
1. First use the four corner vertices v_(1,1),
v_(m,1), v_(m,m) and v_(1,m) to determine
if p_0 is contained in the rectangle R_m.
If p_0 is outside R_m, report so and STOP.
2. { p_0 is inside R_m. }
Starting at the root N_0 of the tree T_m,
compare p_0 with the center point of R_m
to find a child of N_0 that corresponds to
an (m/2) by (m/2) rectangle R_(m/2) containing
the point p_0;
3. Recursively search p_0 in the rectangle R_(m/2);
END.
It is clear that the algorithm LOCATING runs in time O(log n) for each
query point p0 .
Therefore, the point location problem on rectangles can be solved by
O(n) preprocessing time, O(n) storage, and O(log n) query time.
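The descent itself is only a few lines; here is an illustrative Python sketch (the function locate_cell is not from the text and assumes a square grid with integer vertex coordinates whose side is a power of two).

def locate_cell(p, x_lo, x_hi, y_lo, y_hi):
    """Return the unit cell (i, j) of the grid [x_lo, x_hi] x [y_lo, y_hi]
    that contains the point p."""
    if x_hi - x_lo == 1 and y_hi - y_lo == 1:
        return (x_lo, y_lo)                           # a single region
    xm, ym = (x_lo + x_hi) // 2, (y_lo + y_hi) // 2   # the center vertex
    nx_lo, nx_hi = (x_lo, xm) if p[0] < xm else (xm, x_hi)
    ny_lo, ny_hi = (y_lo, ym) if p[1] < ym else (ym, y_hi)
    return locate_cell(p, nx_lo, nx_hi, ny_lo, ny_hi)   # one level down

print(locate_cell((2.3, 5.7), 0, 8, 0, 8))            # -> (2, 5)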
Let us summarize the above idea: we first locate the query point in a large m × m rectangle R_m, then we refine the rectangle R_m into four smaller (m/2) × (m/2) rectangles by dividing R_m with a horizontal line and a vertical line passing through the center vertex of R_m, and then we recursively locate the point p_0 in one of these smaller rectangles.
Two properties have been used heavily in this method:
- A father and its children have the same geometric shape (here, rectangles), so the recursive call is effective.
- Each father has only constantly many children, so in constant time we can move one level down in the search tree T_m.
6.2.4 Refinement method II: on general PSLGs
Now we try to extend the idea in the last section to solve the point location
problem on general PSLGs. The algorithm discussed in this section is due
to Kirkpatrick [12].
All the geometric objects in the refinement method on rectangles are simple rectangles. Moreover, it is easy to refine a rectangle into four smaller rectangles by a horizontal line and a vertical line. However, in a general PSLG, a region can be an arbitrary simple polygon, and it is not guaranteed that a simple polygon can be refined into smaller polygons of the same shape. Therefore, we must first fix the geometric shape we are going to use. It is natural to consider the simplest geometric shape, the triangle. However, not every PSLG can be obtained by refining a triangle. Extra care should be taken to make our idea work.
A PSLG G is completely triangulated if G is connected and the boundary of every region of G (including the unbounded region) is a triangle. We first discuss how to convert a general PSLG into a completely triangulated PSLG.
Given a general PSLG G that is not completely triangulated, we first add a big triangle △ that encloses the whole of G. This can be done by first scanning the vertices of G to find the minimum x0 of the x-coordinates of the vertices of G, the minimum y0 of the y-coordinates of the vertices of G, and the maximum z0 of the values x + y, where (x, y) is a vertex of G. Now the triangle formed by the horizontal line l_h: y = y0 − 1, the vertical line l_v: x = x0 − 1, and the line l: x + y = z0 + 1 encloses the whole PSLG G. Let the PSLG consisting of G and △ be G0. Triangulating G0 now gives us a completely triangulated PSLG G0.
Delete an internal vertex v from G0 and let the resulting PSLG be G''. If the vertex v has degree k in the PSLG G0, then all regions of G'' are triangles except one region, which is a k-gon P_k. To make G'' have the same geometric property as G0, we retriangulate the k-gon P_k of G''. Of course, we can perform the above operation on other vertices of G0 as well, provided that the vertices we delete are not adjacent to each other in G0. Let G1 be the new completely triangulated PSLG obtained by this kind of deleting-vertex-then-retriangulating operation on a set of non-adjacent vertices of G0. All regions of G0 are regions of G1 except those that disappear when we delete the vertices of G0 (call these regions old triangles). All regions of G1 are regions of G0 except those that are created when we retriangulate the non-triangle regions resulting from deleting vertices in G0 (call these regions new triangles). We set a pointer from a new triangle to an old triangle if their intersection is not empty. Note that the new PSLG G1 has fewer vertices than the old PSLG G0. The old PSLG G0 thus can be regarded as a refinement of the new PSLG G1.
This solves our first problem: the inverse of the deleting-vertex-then-retriangulating operation refines a completely triangulated PSLG G1 into a larger completely triangulated PSLG G0 (here "larger" means containing more vertices and more regions; in this sense, the regions of G0 are "smaller" than those of G1).
The query algorithm now goes as follows: suppose that we have located a query point p0 in a new triangle △; then we look at all old triangles that intersect the new triangle △ and determine which old triangle contains the query point p0.
However, how many old triangles intersect the new triangle △? And
how many completely triangulated PSLGs should we go through in order
to locate the query point p0 in a triangle of the original PSLG? In order to
achieve an O(log n) query time, we must move from one completely trian-
gulated PSLG to another completely triangulated PSLG in constant time,
and go through at most O(log n) completely triangulated PSLGs to reach
the original completely triangulated PSLG. For this purpose, we require that
the vertices to be deleted from one completely triangulated PSLG in order
to construct the next PSLG satisfy the following conditions:
1. All these vertices should be internal vertices, that is, they are not the
three hull vertices of the completely triangulated PSLG.
2. No two of these vertices are adjacent.
3. The degree of these vertices is small.
4. There are enough vertices of the current completely triangulated PSLG
to be deleted.
The first condition keeps all our PSLGs completely triangulated. The second condition ensures that the relationship between new triangles and old triangles is simple, that is, an old triangle incident to a deleted vertex v can only intersect those new triangles that are obtained by retriangulating the simple polygon resulting from deleting the vertex v from G0. The second and the third conditions together ensure that each old triangle intersects very few new triangles, and each new triangle intersects very few old triangles. Finally, the fourth condition ensures that the sizes of the completely triangulated PSLGs shrink quickly, so that a query point goes through only a few completely triangulated PSLGs to reach the original PSLG.
The existence of a set of vertices of a completely triangulated PSLG that satisfies all the conditions above is proved by a purely combinatorial counting technique.
Let G be a completely triangulated PSLG. Suppose that the set of vertices, the set of edges, and the set of regions of G are V, E, and F, respectively. Since G is a planar imbedding, by Euler's formula
    |V| − |E| + |F| = 2
Since G is a completely triangulated PSLG, each region of G has exactly 3 boundary edges. On the other hand, each edge of G is a boundary edge for exactly two regions. This gives us
    3|F| = 2|E|
Replacing |F| in Euler's formula by (2/3)|E|, we obtain
    |E| = 3|V| − 6 < 3|V|
Let deg(v) be the degree of the vertex v; then each vertex v of G is incident to exactly deg(v) edge-ends. On the other hand, each edge has exactly two edge-ends, so we have
    Σ_{v ∈ V} deg(v) = 2|E| < 6|V|
Therefore, at least half of the vertices of G have degree less than 12. If we exclude the three hull vertices of G, then there are at least |V|/2 − 3 vertices of G that have degree less than 12. Each vertex of degree less than 12 has at most 11 adjacent vertices, so there are at least (|V|/2 − 3)/12 vertices of degree less than 12 in G such that no two of them are adjacent. When |V| ≥ 48, we have (|V|/2 − 3)/12 ≥ |V|/48. Therefore, for an arbitrary completely triangulated PSLG G with n vertices, where n ≥ 48, we can find at least n/48 internal non-adjacent vertices of G of degree less than 12.
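The counting argument translates directly into a greedy procedure; the following Python sketch is illustrative (the adjacency-list representation and the function name are assumptions, not the book's code).

def independent_low_degree(adj, hull_vertices, max_degree=12):
    """adj: dict mapping each vertex to the set of its neighbors;
    hull_vertices: the three vertices of the enclosing triangle.
    Greedily picks internal, pairwise non-adjacent vertices of degree < 12."""
    blocked = set(hull_vertices)
    picked = []
    for v, nbrs in adj.items():
        if v in blocked or len(nbrs) >= max_degree:
            continue
        picked.append(v)
        blocked.add(v)
        blocked.update(nbrs)      # neighbors of a picked vertex may not be picked
    return picked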
This analysis gives us the following algorithm to construct a searching
hierarchy T_G.
Algorithm CONSTRUCT-HIERARCHY(G)
BEGIN
1. Add an enclosing triangle that contains the whole
G, then triangulate the resulting PSLG. Let the
completely triangulated PSLG be G_0;
2. Use the TRACE-REGION algorithm in Section 1.4 to
find all triangles of G_0. For each triangle of
G_0, create a node in level 0 in the hierarchy T_G;
3. Set k = 0;
4. Suppose that the PSLG G_k contains n_k vertices.
Find at least (n_k)/48 internal non-adjacent
vertices of G_k that have degree less than 12;
5. For each vertex v found in Step 4, delete v from
G_k, and retriangulate the simple polygon resulting
from this deletion. For each new triangle obtained
from this retriangulation, create a node in level
k+1 of the hierarchy T_G and set a pointer from
this node in the hierarchy T_G to a node corres-
ponding to an old triangle incident to the vertex
v in G_k if the intersection of the old triangle
and the new triangle is not empty.
6. Let the resulting completely triangulated PSLG be
G_(k+1), then set k = k + 1. If the PSLG has
more than 48 vertices, go back to Step 4.
END.
BEGIN
1. In the highest level of the hierarchy T_G, locate
the point p_0 into one of the triangles;
2. Suppose p_0 is in a node N_0 of the hierarchy T_G.
Check each triangle whose node in the hierarchy
T_G is pointed to by a pointer from N_0, to find a
node N' whose triangle contains the point p_0;
3. IF N' is at level 0, then we have located the
point p_0 into a triangle in the original PSLG.
ELSE let N_0 = N' and go back to Step 2;
END.
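Since at least n_k/48 vertices are deleted in passing from G_k to G_{k+1}, the sizes of the PSLGs shrink geometrically (a brief sketch):

    n_{k+1} ≤ (47/48) n_k,

so the hierarchy T_G has O(log n) levels; and since each deleted vertex has degree less than 12, a node of the hierarchy points to only a constant number of old triangles, so each level is passed in constant time. This gives the O(log n) query time.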
6.3 Exercises
1. Based on the idea described in the text, design a linear time algorithm that finds the median of a given set of numbers.
2. Design an algorithm to solve the following problem: given a set S of N points in the plane, with preprocessing, decide for a query point whether the point is in a triangle whose three vertices are points of S. If it is, output the three vertices of the triangle (if there is more than one such triangle, pick any one of them). Analyze your algorithm for query time, preprocessing time, and space.
3. Solve the Point Location Problem for the set of PSLGs whose faces are
of size at most 5. What are the query time, preprocessing time and
space of your algorithm?
4. Given a PSLG G such that the number of intersection points of any
vertical line and G is bounded by 50. Moreover, a sorted list of the
vertices of the PSLG G is also given. Discuss the preprocessing time,
space, and query time of the point location problem on G.
5. A k-monotone polygon with respect to a line l is a simple polygon that can be decomposed into k chains monotone with respect to the line l. Let k be a fixed constant. Design an algorithm to solve the Point Location Problem for k-monotone polygons, i.e., given a k-monotone polygon P, with preprocessing, determine whether a query point is internal to P. Analyze your algorithm for query time, preprocessing time, and space.
6. Given two sets of points S_p = {p1, ..., pn} and S_q = {q1, ..., qm}. For each point in S_q, find the closest point in S_p. Solve this problem for the case
(1) m is much larger than n, say m = 2^n;
(2) m is much smaller than n, say m = log log n.
Do you use the same algorithm to solve the problem for both cases, or do you use different algorithms for the two cases? Give a detailed analysis of your algorithm(s).
7. A point p is said to be dominated by a point q if both the x- and y-coordinates of p are no greater than those of q, respectively. Solve the following problem: given a set S of n points in the plane, with preprocessing allowed, for each query point q, find the number of points in S dominated by q. What are the preprocessing time, storage, and query time of your algorithm?
8. Suppose that we can construct the kth order Voronoi diagram in time O(k^2 N log N). Analyze the query time, preprocessing time, and the storage for the k-Nearest Points Problem.
9. Let p1 = (x1, y1) and p2 = (x2, y2) be two points in the plane. We say that point p1 dominates point p2 if x1 ≥ x2 and y1 ≥ y2.
Let S be a set of points in the plane. A point p ∈ S is a maximal element if p is not dominated by any other point in S.
Solve the following problem:
Given a set of n points in the plane, let k denote the number of maximal elements in this set. Design a divide-and-conquer algorithm of time O(n log k) for finding these maximal elements. Prove the correctness of your algorithm.
Chapter 7
Reductions
Let P and P' be two problems. We say that the problem P can be reduced to the problem P' in time O(t(n)), expressed as
    P ∝_{t(n)} P'
if there is an algorithm T solving the problem P in the following way.
1. For any input x of size n to the problem P, convert x in time O(t(n)) into an input x' to the problem P';
2. Call a subroutine to solve the problem P' on input x';
3. Convert in time O(t(n)) the solution to the problem P' on input x' into a solution to the problem P on input x.
Note that the subroutine in Step 2 that solves the problem P' is unspecified. If the problem P' can be solved efficiently, then the problem P can also be solved efficiently, as the following lemma explains.
Lemma 7.0.1 Suppose that a problem P is reduced to a problem P' in time O(t(n)),
    P ∝_{t(n)} P'
and that the problem P' can be solved in time O(T(n)). Then the problem P can be solved in time O(t(n) + T(O(t(n)))).
proof. Suppose that the algorithm T gives an O(t(n))-time reduction from the problem P to the problem P', and suppose that an algorithm A'
solves the problem P' in time O(T(n)). The problem P can be solved by the algorithm T, in whose Step 2, calling a subroutine to solve the problem P' on input x', we use the algorithm A'.
To analyze the algorithm T, note that Step 1 and Step 3 of the algorithm T take time O(t(n)), as we have assumed. Since Step 1 takes time O(t(n)), the size of x' is also bounded by O(t(n)). Therefore, in Step 2 of the algorithm T, the algorithm A', of time complexity O(T(n)) on inputs of size n, takes time O(T(O(t(n)))) on the input x', which is of size O(t(n)). This shows that the running time of the algorithm T is bounded by
    O(t(n)) + O(T(O(t(n)))) = O(t(n) + T(O(t(n))))
Corollary 7.0.2 Suppose that a problem P is reduced to a problem P' in linear time, P ∝_n P', and that the problem P' can be solved in time O(T(n)), where T(n) = Ω(n) and T(O(n)) = O(T(n)). Then the problem P can also be solved in time O(T(n)).
proof. As shown in Lemma 7.0.1, the problem P can be solved by the algorithm T in time O(n + T(O(n))). By our assumption, T(O(n)) = O(T(n)). Moreover, T(n) = Ω(n). Therefore, the time complexity of the algorithm T in this special case is bounded by
    O(n + O(T(n))) = O(T(n))
Notice that most of the complexity functions T(n) we use in this book, such as n, n log n, n^k, and n^k log^h n, satisfy the conditions T(n) = Ω(n) and T(O(n)) = O(T(n)).
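As a concrete illustration of how these conditions are used (a worked example, not from the text): take t(n) = n and T(n) = n log n. Then

    O(t(n) + T(O(t(n)))) = O(n + cn log(cn)) = O(n log n),

so a linear time reduction to a problem solvable in time O(n log n) yields an O(n log n) time algorithm.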
7.1 Convex hull and sorting
Consider the algorithm of Graham Scan for constructing convex hulls of
points in the plane. If a given set S of n points in the plane is sorted by
x-coordinates, then the Graham Scan algorithm needs only linear time to
construct the convex hull for S . In fact, it is not hard to see that
CONVEX HULL ∝_n SORTING
by the following argument. Given an instance of CONVEX HULL, which
is a set S of n points in the plane, we can simply regard S as an instance
of SORTING if we let the x-coordinate of a point p in S be the "key" of
the point p. Therefore, we can simply translate instances of the problem
CONVEX HULL to instances of the problem SORTING. Now the solution
of SORTING on input S is a list of the points in S which is sorted by the
x-coordinates. The generalized Graham Scan algorithm shows that with this
solution to SORTING, the convex hull CH(S) of the set S, which is the
solution of CONVEX HULL on the input S , can be constructed in time
O(n).
It is interesting that we can prove that the problem SORTING can also
be reduced to the problem CONVEX HULL in linear time.
Theorem 7.1.1
SORTING ∝_n CONVEX HULL
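One standard way to realize such a reduction (sketched here for illustration; it may differ in presentation from the proof of the theorem) is to lift each number x to the point (x, x^2) on a parabola: every lifted point is an extreme point, and reading the hull counterclockwise from the leftmost vertex recovers the sorted order.

def sort_by_convex_hull(numbers, convex_hull):
    """numbers: distinct reals; convex_hull: any subroutine returning the hull
    vertices in counterclockwise order (the hypothetical black box being used)."""
    pts = [(x, x * x) for x in numbers]   # lift onto the parabola y = x^2
    hull = convex_hull(pts)               # every lifted point is a hull vertex
    k = hull.index(min(hull))             # rotate so the leftmost point comes first
    ordered = hull[k:] + hull[:k]         # ccw from the leftmost point walks the
    return [x for x, _ in ordered]        # lower hull, i.e., increasing x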
Ω(t(n)) and T(O(n)) = O(T(n)), then by Corollary 7.0.2, the other can also be solved in time O(T(n)).
By the above discussions, we have already shown
    SORTING ∝_n CONVEX HULL and CONVEX HULL ∝_n SORTING
In fact, the construction of convex hulls for sets of points in the plane is a generalization of sorting. In sorting n numbers, we are asked to find the ordering of a set of points on the real line, while in constructing a convex hull, we are asked to find the ordering of the polar angles, relative to an interior point of the convex hull, of the "extreme points". The difference is that in sorting, every given number will appear in the final sorted list, while in constructing a convex hull, we also have to decide whether a given point is a non-extreme point, and if so, exclude it from the final output list. On the other hand, as we have discussed in this section, sorting is not at all easier than constructing convex hulls for points in the plane.
7.3 Triangulation
Given a Voronoi diagram Vor(S) for a set S of n points in the plane, we draw a segment p_i p_j for each pair of points p_i and p_j that define a Voronoi edge in Vor(S). Let D(S) be the collection of these segments, which is called the straight-line dual of the Voronoi diagram Vor(S).
We prove that the straight-line dual D(S) of the Voronoi diagram Vor(S) is a triangulation of the set S. For this, we must show that the straight-line dual D(S) partitions the convex hull CH(S) of the set S into triangles such that 1) no two triangles overlap in the interior, and 2) every point in the convex hull CH(S) (more precisely, every point in the area bounded by the convex hull CH(S)) is contained in at least one such triangle.
Each Voronoi vertex v is incident to exactly three Voronoi edges e1, e2, and e3, and to exactly three Voronoi polygons V1, V2, and V3 of three points p1, p2, and p3 in the set S. Each of the edges e1, e2, and e3 is defined by a pair of the points p1, p2, and p3. Therefore, the segments p1p2, p2p3, and p3p1 are all in the straight-line dual D(S) of Vor(S). Thus, each Voronoi vertex v corresponds to a triangle △p1p2p3 in the straight-line dual D(S). Denote by △(v) the triangle △p1p2p3. On the other hand, since a Voronoi
[Figure 7.3: labels in the figures: C(v), C(v'), v, v', q, q', q'', p1, p2, p3, p4, e, V1.]
be the Voronoi polygon of the point p1 in the Voronoi diagram Vor(S), and suppose that the segment p1p2 intersects the Voronoi polygon V1 at a point q that is on the Voronoi edge e of V1 in Vor(S). (The point p2 cannot be contained in V1, including the boundary of V1, since V1 is the locus of points closer to p1 than to any other point in S.) Suppose that the Voronoi edge e is defined by the point p1 and another point p3 in S. See Figure 7.3. By definition, the points p1 and p3 are the closest points in S to the points on the edge e. Therefore,
    |p1p2| = |p1q| + |qp2| > |p1q| + |qp3| ≥ |p1p3|
Moreover, since we have ∠qp3p1 = ∠p3p1q, and the point q is an internal point of the segment p1p2, we must have
    ∠p2p3p1 > ∠qp3p1 = ∠p3p1q = ∠p3p1p2
Therefore, we have
    |p1p2| > |p2p3|
Now we obtain a contradiction, since both segments p2p3 and p1p3 are shorter than the segment p1p2. Now if p3 ∈ S1 we pick p2p3, and if p3 ∈ S2 we pick p1p3. No matter which set the point p3 is in, we are always able to find a segment with one end in S1 and the other end in S2 such that the segment is
shorter than p1 p2. This contradicts the assumption that p1 p2 is the shortest
such segment.
This contradiction proves that the segment p1 p2 must be an edge in the
Delaunay Triangulation D(S ) of the set S .
Lemma 7.4.2 Let p1 and p2 be two points in the set S . The segment p1p2
is an edge of some Euclidean minimum spanning tree if and only if there is
a partition of the set S into two non-empty sets S1 and S2 such that p1 p2 is
the shortest segment with one end in S1 and the other end in S2.
proof. Suppose that p1p2 is an edge of a Euclidean minimum spanning tree T. Then deleting the edge p1p2 from T results in two disjoint subtrees T1 and T2. Let S1 and S2 be the sets of points in S that are the vertices of the trees T1 and T2, respectively. S1 and S2 obviously form a partition of the set S, and each of the sets S1 and S2 contains exactly one of the points p1 and p2. We claim that the segment p1p2 is the shortest segment with one end in S1 and the other end in S2. In fact, if pp' is a shorter segment with one end in S1 and the other end in S2, then in the tree T, replacing the segment p1p2 by the segment pp' would give us a Euclidean spanning tree T' of S such that the sum of the edge lengths of T' is less than the sum of the edge lengths of T. This contradicts the fact that T is a Euclidean minimum spanning tree.
Conversely, suppose that there is a partition of S into two non-empty subsets S1 and S2 such that p1p2 is the shortest segment with one end in S1 and the other end in S2. Let T be a Euclidean minimum spanning tree of S. If T contains p1p2, then we are done. Otherwise, adding the segment p1p2 to T results in a unique simple cycle C. Since the segment p1p2 is on the cycle C and p1 and p2 are in different sets of S1 and S2, there must be another segment pp' on the cycle such that the points p and p' are in different sets of S1 and S2. Since p1p2 is the shortest segment with two ends in different sets of S1 and S2, the segment pp' is at least as long as the segment p1p2. Replacing the segment pp' in T by the segment p1p2 gives us a new Euclidean spanning tree T' of S such that the sum of the edge lengths of T' is not larger than the sum of the edge lengths of T. Since T is a Euclidean minimum spanning tree of S, the sum of the edge lengths of T' must be the same as that of T. Therefore, T' is also a Euclidean minimum spanning tree, and T' contains the segment p1p2.
Corollary 7.4.3 If a segment p1p2 is an edge of some Euclidean minimum
spanning tree of the set S , then p1 p2 is an edge in the Delaunay Triangulation
D(S ) of the set S .
proof. The proof follows from Lemma 7.4.1 and Lemma 7.4.2 directly.
BEGIN
1. Construct the Delaunay triangulation D(S);
2. Construct a weighted graph G_D(S) that is
isomorphic to D(S) such that the weight of an
edge {p_i, p_j} in G_D(S) is the length of the
corresponding edge in D(S);
3. Apply Kruskal's algorithm to find a minimum
weight spanning tree T for G_D(S). This tree
T is a Euclidean minimum spanning tree for S;
END.
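Steps 2 and 3 can be sketched in Python as follows (illustrative only: the edge representation and the use of a simple union-find are assumptions, and Kruskal's algorithm itself is taken as known).

from math import dist

def emst_from_delaunay(points, delaunay_edges):
    """points: list of (x, y) tuples; delaunay_edges: list of point pairs
    ((x1, y1), (x2, y2)) produced by Step 1."""
    index = {p: i for i, p in enumerate(points)}
    parent = list(range(len(points)))

    def find(i):                          # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree = []
    for p, q in sorted(delaunay_edges, key=lambda e: dist(*e)):
        ri, rj = find(index[p]), find(index[q])
        if ri != rj:                      # the edge joins two different components
            parent[ri] = rj
            tree.append((p, q))
    return tree                           # a Euclidean minimum spanning tree of S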
Algorithm FIND-ALL-INTERSECTIONS
BEGIN
1. Find an intersecting point p_0;
2. Let p = p_0;
3. Travel a Voronoi polygon clockwise in the direction
of leaving the convex hull CH(S), starting from the
point p to find the successor p' of p;
4. If p' <> p_0 then replace p by p' and go back to
Step 3;
END.
Algorithm MAXIMUM-EMPTY-CIRCLE
BEGIN
1. Construct the Voronoi diagram Vor(S) and the
convex hull CH(S);
2. Call the subroutine FIND-ALL-INTERSECTIONS to
find all intersecting points of Vor(S) and CH(S),
and mark all Voronoi vertices that are outside
the convex hull CH(S);
3. For each q of such intersecting points, compute
the largest empty circle centered at q;
4. For each unmarked Voronoi vertex v, compute the
largest empty circle centered at v;
5. The largest among the largest empty circles
constructed in Step 3 and Step 4 is the maximum
empty circle of S;
END.
Step 1 takes time O(n log n), by Theorem 5.3.6 and by, say, the Graham
Scan algorithm. Step 2 takes linear time, as we have discussed above. The
other steps in the algorithm trivially take only linear time, by Lemma 5.2.6
and Lemma 7.5.2. Therefore, we obtain the following theorem.
Theorem 7.5.5 The problem MAXIMUM-EMPTY-CIRCLE can be solved
in time O(n log n).
[Figure: a convex polygon with vertices v1, ..., v8; some matrix entries are marked −∞.]
D(v, u) = (d, x, y), where d is the Euclidean distance between v and u, while x and y are the x- and y-coordinates of the vertex u, respectively. The distance D(v, u) is ordered lexicographically. With this assumption, each vertex of P has a unique farthest neighbor.
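In code, the lexicographic comparison is just tuple comparison; for example (an illustrative fragment with hypothetical names):

from math import dist

def D(v, u):
    # (distance, x-coordinate of u, y-coordinate of u), compared lexicographically
    return (dist(v, u), u[0], u[1])

def farthest_neighbor(v, vertices):
    return max((u for u in vertices if u != v), key=lambda u: D(v, u))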
[Figure: matrix entries a, b, c, d in rows i1, i2 and columns j1, j2.]
Algorithm SQUARE(L_row, L_col)
BEGIN
1. Let L_c = L_col, let j and k be the first elements
in the list L_c and L_row, respectively;
2. WHILE the matrix is not square DO
2.1 CASE 1: k is the first element in the list L_row
IF a_{k, j} < a_{k, next(j)} THEN
let j = next(j) and delete the first element in
the list L_c
ELSE (* so a_{k, j} > a_{k, next(j)} *)
let j = next(j), and let k be the second element
in the list L_row,
2.2 CASE 2: k is neither the first nor the last in L_row
IF a_{k, j} < a_{k, next(j)} THEN
let j = last(j) and delete the old j from the
list L_c, and let k = last(k)
ELSE (* so a_{k, j} > a_{k, next(j)} *)
let k = next(k) and j = next(j);
2.3 CASE 3: k is the last element in the list L_row
IF a_{k, j} < a_{k, next(j)} THEN
let j = last(j) and delete the old j from the
list L_c, and let k = last(k);
ELSE (* so a_{k, j} > a_{k, next(j)} *)
let j = next(j), and delete the old j from the
list L_c
END of WHILE;
3. Output the list L_c;
END.
Algorithm ALL-FARTHEST-VERTEX(P)
Given: a convex polygon P
Output: for each vertex of P, find the farthest vertex
BEGIN
1. Construct a doubly-linked list L_row containing
the indices 1, 2, ..., n, and a doubly-linked
list L_col containing the indices 1, 2, ...,
2n-1;
2. Call the subroutine SQUARE(L_row, L_col) to
obtain a list L_c of column indices of M_P
such that these columns constitute a square
submatrix of M_P that contains the maximal
element for each row of M_P;
3. Call the subroutine ROW-MAXIMAL(L_row, L_c);
4. Suppose that the subroutine ROW-MAXIMAL(L_row, L_c)
returns a list L, then for 1 <= i <= n,
if the ith element of L is k_i, then the vertex
v_{k_i'} is the farthest vertex from the vertex
v_i in the convex polygon P, where
k_i' = (k_i - 1)mod(n) + 1;
END.
Algorithm ROW-MAXIMAL(L_r, L_c)
BEGIN
1. IF L_c contains one element, return L_c directly;
2. Delete every other element from the list L_r. Let
the resulting list be L_r';
{ This is equivalent to deleting all rows with even
index from the matrix M. Let the resulting matrix
be M_1. M_1 consists of the rows of M that have
odd index. The matrix M_1 is an r/2 by r matrix. }
3. Call the subroutine SQUARE(L_r', L_c);
{ The algorithm SQUARE returns a list L_c' of size
r/2, which corresponds to a list of column indices
such that these columns constitute an r/2 by r/2
square matrix that contains all maximal elements
in the odd rows of the matrix M. }
4. Recursively call the subroutine ROW-MAXIMAL(L_r', L_c');
{ This recursive call will return a list L that contains
the column indices with which the maximal elements in
the odd rows are located. }
5. With the help of the list L, determine the column indices
for the maximal elements in the even rows of M. For this,
suppose in the (2 i - 1)st row of M, the maximal element
is in the j_1th column, and in the (2 i + 1)st row of M,
the maximal element is in the j_2th column, then scan the
elements in the (2i)th row only from column j_1 to column
j_2, the maximal element among these elements must be the
maximal element of the (2i)th row;
END.
7.7 Exercises
1. Give examples to show that a problem P' may have very high complexity (e.g., NP-complete) even if a linear time solvable problem P is linear time reducible to P'.
2. A star-shaped polygon P = {p1, ..., pn} is a simple polygon containing at least one point q such that the segment qp_i lies entirely within P for all 1 ≤ i ≤ n. The problem STAR-POLYGON is to find a star-shaped polygon whose vertex set is the given set of points in the plane. Show that the problem CONVEX HULL is linear time reducible to the problem STAR-POLYGON.
3. Given a star-shaped polygon P, find two vertices of P that are the farthest apart.
4. Give a detailed proof that the problem CONVEX HULL is linear time reducible to the problem VORONOI-DIAGRAM.
5. Consider the following problem in Robotics: Let S be a set of obstacles
in the plane. These obstacles are discs of the same radius. You have a mobile "Robot" R which has the shape of a disc of radius 1. We want an algorithm such that for any obstacle set S, and for any two points p and q, the algorithm will find a path for the robot R from position p to
position q , avoiding the obstacles. If no such path exists, the algorithm
reports accordingly. Design and analyze an algorithm for this problem.
(Hint: construct the Voronoi diagram for the centers of the obstacles).
6. Given a set of n points in the plane, prove that the Delaunay triangulation contains at most 2n − 5 triangles and at most 3n − 6 edges.
7. A monotone polygon is a simple polygon whose boundary can be decomposed into two monotone chains (a chain is monotone if every vertical line intersects it in at most one point). The problem MONOTONE-POLYGON is to find a monotone polygon whose vertex set is the given set of points in the plane. Show that the problem CONVEX HULL is linear time reducible to the problem MONOTONE-POLYGON.
8. Show that the problem CONVEX HULL is linear time reducible to the
following problem.
INTERSECTION-OF-HALF-PLANE
given a system of N linear inequalities of the form
    a_i x + b_i y + c_i ≥ 0,   i = 1, 2, ..., N,
find the region of solutions of the system.
9. Show that the problem CONVEX HULL is linear time reducible to the
problem of constructing the convex hull of points in 3-dimensional space
even if the points are given sorted with respect to the x-coordinates.
(Recall that the convex hull computation requires the reporting of
vertices, edges, and faces that lie on the convex hull and their adjacency
relations with respect to one another.)
10. Suppose that a problem P is reducible to a problem P' in O(n log n) time and that the problem P' is solvable in time O(n log n). Is the
problem P necessarily solvable in time O(n log n)? Justify your answer.
11. Given two sets A and B , with m and n planar points, respectively.
Find two points, one from each set, that are closest. (Hint: You
should consider the following three different cases: (1) m is much
larger than n; (2) n is much larger than m; (3) m and n are of the
same order.)
12. The problem All Nearest Neighbors is stated as follows: given a set S
of n points in the plane, find a nearest neighbor of each. Show that
this problem can be reduced in linear time to the problem VORONOI-DIAGRAM.
13. It has been recently shown that triangulating a simple polygon can
be done in linear time. Use this result to show that triangulating a
connected PSLG in which each face is a simple polygon can be done
in linear time.
14. Consider the following problem of SECOND CLOSEST PAIR: Given
a set S of n points in the plane, nd a pair of points p1 and p2 in S
such that the distance between p1 and p2 is the second shortest among
all pairs of points of S . (Of course, if there are two distinct closest
pairs, then either of them can be regarded as the second closest pair).
Show that the problem SECOND CLOSEST PAIR can be reduced to
the problem VORONOI DIAGRAM in linear time. Thus, it can be
solved in O(n log n) time.
15. Design an efficient algorithm that computes the area of an n-vertex simple, but not necessarily convex, polygon.
16. Design an efficient algorithm that finds the second farthest pair from among n points in the plane.
17. Design a linear time algorithm for the following problem: given Vor(S), where S is a set of n points in the plane, find a σ-chain (i.e., a path in Vor(S) with both ends extended to infinity) such that each side of the σ-chain contains half of the points in S.
18. The Euclidean Traveling Salesman problem (ETS) is to find a shortest closed path through n given points in the plane. Show that an approximate ETS tour whose length is less than twice the length of a shortest tour can be constructed in time O(n log n). (Hint: reduce the problem to the Euclidean Minimum Spanning Tree problem.)
Chapter 8
Lower Bounds
8.1 Preliminaries
Let us first have a brief review of geometry. Let S be a subset of the n-dimensional Euclidean space E^n. S is connected if for any pair of points p and q of S, there is a curve C joining them such that C is entirely contained in S. By definition, a convex set in E^n is connected. Now suppose that W is a subset of E^n that is not necessarily connected; then a connected component of W is a maximal connected subset of W. We will use #W to denote the number of connected components of the set W.
A function f(x1, ..., xn) is a polynomial if f is a sum of terms of the form c x1^{i1} x2^{i2} ··· xn^{in}, where c is a constant and all the i_j's are non-negative integers. The degree of the term c x1^{i1} x2^{i2} ··· xn^{in} is defined to be the number i1 + i2 + ··· + in. The degree of a polynomial is the maximum of the degrees of its terms. The function f is a linear polynomial if each term of the above form has degree at most 1. An equation f(x1, ..., xn) = 0 with f a linear polynomial defines a hyperplane in the n-dimensional Euclidean space E^n. An open inequality f(x1, ..., xn) > 0 (or f(x1, ..., xn) < 0) defines an open halfspace in E^n, with the hyperplane f(x1, ..., xn) = 0 being its boundary. Similarly, a closed inequality f(x1, ..., xn) ≥ 0 (or f(x1, ..., xn) ≤ 0) defines a closed halfspace in E^n, with the hyperplane f(x1, ..., xn) = 0 being its boundary. It is easy to see that hyperplanes, open halfspaces, and closed halfspaces are all convex sets in E^n.
Let S be the set of points (x1, ..., xn) satisfying a sequence of relations:
    f_i(x1, ..., xn) = 0,    i = 1, ..., m1
    g_j(x1, ..., xn) > 0,    j = 1, ..., m2
    h_k(x1, ..., xn) ≥ 0,    k = 1, ..., m3
where all the functions f_i, g_j, and h_k, for i = 1, ..., m1, j = 1, ..., m2, and k = 1, ..., m3, are linear polynomials. Then S is the intersection of the hyperplanes f_i = 0, 1 ≤ i ≤ m1, the open halfspaces g_j > 0, 1 ≤ j ≤ m2, and the closed halfspaces h_k ≥ 0, 1 ≤ k ≤ m3. Since all hyperplanes, open halfspaces, and closed halfspaces are convex, by Theorem 3.1.1, the set S is also convex.
A problem is a decision problem if it has only two possible solutions,
either the answer YES or the answer NO. Abstractly, a decision problem
consists simply of a set of instances that contains a subset called the set of
YES-instances. As we have studied in Algorithm Analysis, decision prob-
lems play a very important role in the analysis of NP-completeness. In
practice, many general problems can be reduced to decision problems such
that a general problem and the corresponding decision problem have the
same complexity.
There are certain problems where it is realistic to consider the number
of branching instructions executed as the primary measure of complexity.
In the case of sorting, for example, the outputs are identical to the inputs
except for order. It thus becomes reasonable to consider a model in which
all steps are two-way branches based on a "decision" that we should make
when computation reaches that point.
The usual representation for a program of branches is a binary tree called
a decision tree. Each non-leaf vertex represents a decision. The test repre-
sented by the root is made first, and "control" then passes to one of its sons,
depending on the outcome of the decision. In general, control continues to
pass from a vertex to one of its sons, the choice in each case depending on
the outcome of the decision at the vertex, until a leaf is reached. The desired
output is available at the leaf reached. If the decision at each non-leaf vertex
of a decision tree is a comparison of a polynomial of the input variables with
the number 0, then the decision tree is called an algebraic decision tree.
It should be pointed out that although the algebraic decision tree model
seems much weaker than a real computer, in fact this intuitive feeling is not
very correct. First of all, given a computer program, we can always represent
it by a decision tree by "unwinding" loops in the program. Secondly, the
operations a real computer can perform are essentially additions and branch-
ings. All other operations are in fact done by microprograms that consist of these elementary operations. For example, the value of sin(x) for a number x is actually obtained by an approximation of the Taylor expansion of the function sin(x). Finally, we simply ignore the computation instructions and concentrate only on branching instructions because we are working on lower bounds of algorithms. If we can prove that for some problem, at least N
branchings should be made, then of course, the number of total instructions,
including computation instructions and branching instructions, is at least
N.
Let us now give a more formal definition. We will concentrate on decision tree models for decision problems.
Definition An algebraic decision tree on a set of n variables (x1, ..., xn) is a binary tree such that each vertex of it is labeled with a statement satisfying the following conditions.
So we have
    log(#W) ≥ log(n!) ≥ log((n/2)^{n/2}) = (n/2) log(n/2) = Ω(n log n)
Theorem 8.3.2 Any bounded order algebraic decision tree that solves the problem UNIFORM ε-GAP runs in time at least Ω(n log n).
proof. The proof is quite similar to the proof of Theorem 8.3.1. Consider the following set in the n-dimensional Euclidean space:
    W = { (x1, ..., xn) | (x1, ..., xn) is a YES-instance of UNIFORM ε-GAP }
Thus a point (x1, ..., xn) in the n-dimensional Euclidean space is in the set W if and only if there is a permutation π of (1, ..., n) such that x_{π(i)} + ε = x_{π(i+1)}, for 1 ≤ i ≤ n − 1.
Fix a point (x1, ..., xn) in the n-dimensional Euclidean space such that xi + ε = x_{i+1} for all 1 ≤ i ≤ n − 1. Consider the n! points in the n-dimensional Euclidean space obtained by permuting (x1, ..., xn):
    P_π = (x_{π(1)}, ..., x_{π(n)}),  where π is a permutation of (1, ..., n)
Clearly, all these n! points are in the set W. We claim that no two of these n! points share the same connected component of W. In fact, suppose that π and π' are two different permutations of (1, ..., n) and that the points P_π and P_π' are in the same connected component of W; then there is a continuous curve C in W connecting P_π and P_π'. That is, we can find n continuous functions f_i(x), 1 ≤ i ≤ n, such that
    f_i(0) = x_{π(i)} and f_i(1) = x_{π'(i)}, for 1 ≤ i ≤ n
Exactly as in the proof of Theorem 8.3.1, we can find two indices k and h such that
    f_k(0) < f_h(0) and f_k(1) > f_h(1)
So there exists a real number r in the interval (0, 1) such that f_k(r) = f_h(r). But then the point (f_1(r), f_2(r), ..., f_n(r)) on the curve C cannot be in the set W, since the distance between the numbers f_k(r) and f_h(r) is less than ε. This contradiction proves that the set W has at least n! connected components. By Ben-Or's theorem (Theorem 8.2.5), any bounded order algebraic decision tree that solves the problem UNIFORM ε-GAP runs in time at least
    Ω(log(#W) − n) = Ω(n log n)
Ω(m log m) = Ω(n log n).
Algorithm REDUCTION I
{ Reduce the problem EXTREME-POINTS to the problem
CONVEX-HULL. }
BEGIN
1. Given an input S of the problem EXTREME-POINTS,
where S is a set of n points in the plane, pass
the set S directly to the problem CONVEX-HULL;
2. The solution of CONVEX-HULL to the set S is the
convex hull CH(S) of the set S. Pass CH(S)
back to the problem EXTREME-POINTS;
3. If the convex hull CH(S) has n hull vertices,
   and no hull vertex lies on the straight line
   segment connecting its two neighbors, then
   the answer of the problem EXTREME-POINTS
   on the input S is YES;
   Otherwise, the answer is NO.
END.
Since both Step 1 and Step 3 take at most time O(n), the above algo-
rithm is a linear time reduction of the problem EXTREME-POINTS to the
problem CONVEX-HULL.
Thus constructing convex hulls of sets of points in the plane takes time at least Ω(n log n). This result implies that many of the algorithms we discussed before for constructing convex hulls, including Graham Scan, MergeHull, and the Kirkpatrick-Seidel algorithm, are optimal.
As we have discussed in the last chapter, the problem CONVEX-HULL
can be reduced to the problem SORTING in time O(n). By Theorem 8.4.1
and Theorem 8.4.2, we also obtain
Theorem 8.4.3 Any bounded order algebraic decision tree that sorts n real numbers runs in time at least Ω(n log n).
This theorem is stronger than the one we got in Algorithm Analysis. In Algorithm Analysis, it is proved that a linear decision tree model that sorts runs in time Ω(n log n). On the other hand, Theorem 8.4.3 claims that even if the computation model is allowed to do multiplication, it still needs time at least Ω(n log n).
This lemma, together with Theorem 8.4.3 and Theorem 8.4.1, gives us the following theorem.
Theorem 8.4.6 Any bounded order algebraic decision tree that constructs the Euclidean minimum spanning tree for a set of n points in the plane runs in time at least Ω(n log n).
Therefore, the algorithm presented in Section 6.4 that constructs the
Euclidean minimum spanning tree for sets of points in the plane is optimal.
Now we consider the problem TRIANGULATION.
Lemma 8.4.7 SORTING can be reduced to TRIANGULATION in linear
time.
proof. The proof is very similar to the proof of Lemma 8.4.5. Given a set S of n real numbers x1, x2, ..., xn, we construct a set S' of n + 1 points in the plane:
    q = (x1, 2), p1 = (x1, 0), p2 = (x2, 0), ..., pn = (xn, 0)
It is easy to see that the set S' has a unique triangulation, which consists of the n segments qp_i, for 1 ≤ i ≤ n, and the n − 1 segments p_i p_j, where the number x_j is the smallest number in S that is larger than x_i.
Now, using an argument similar to the one we used in the proof of Lemma 8.4.5, we conclude that we can construct the sorted list of S from the triangulation of S' in linear time.
Theorem 8.4.8 Any bounded order algebraic decision tree that constructs the triangulation for a set of n points in the plane runs in time at least Ω(n log n).
Thus the problem TRIANGULATION also has an optimal algorithm,
which was presented in Section 6.3.
A simple generalization of the problem TRIANGULATION is the prob-
lem CONSTRAINED-TRIANGULATION, as introduced in Section 3.4. A
lower bound for the CONSTRAINED-TRIANGULATION can be easily ob-
tained from the lower bound of TRIANGULATION.
Theorem 8.4.9 Any bounded order algebraic decision tree solving the problem CONSTRAINED TRIANGULATION runs in time at least Ω(n log n).
proof. It is easy to prove that
    TRIANGULATION ∝_n CONSTRAINED TRIANGULATION
In fact, every instance of the problem TRIANGULATION, which is a set S of n points in the plane, is an instance G = (S, ∅) of the problem CONSTRAINED TRIANGULATION in which the set of segments is empty.
Since the problem TRIANGULATION has a lower bound Ω(n log n), by Theorem 8.4.1, the problem CONSTRAINED TRIANGULATION also has a lower bound Ω(n log n).
To derive lower bounds for the problems CLOSEST-PAIR and ALL-
NEAREST-NEIGHBORS, we use the lower bound for the problem
ELEMENT-UNIQUENESS, derived in the last section.
Theorem 8.4.10 Any bounded order algebraic decision tree finding the closest pair for a set of n points in the plane runs in time at least Ω(n log n).
proof. We prove that
    ELEMENT-UNIQUENESS ∝_n CLOSEST-PAIR
Given a set S of n real numbers x1, ..., xn, we construct an instance of CLOSEST-PAIR:
    (x1, 0), (x2, 0), ..., (xn, 0)
which is a set S' of n points in the plane. Clearly, all elements of S are distinct if and only if the closest pair in S' does not consist of two identical points. So the problem ELEMENT-UNIQUENESS is reducible to the problem CLOSEST-PAIR in linear time. Now the theorem follows from Theorem 8.3.1 and Theorem 8.4.1.
Since it is straightforward that
    CLOSEST-PAIR ∝_n ALL-NEAREST-NEIGHBORS
by Theorem 8.4.10 and Theorem 8.4.1 we also obtain the following theorem.
Theorem 8.4.11 Any bounded order algebraic decision tree finding the nearest neighbor for each point of a set of n points in the plane runs in time at least Ω(n log n).
Thus the algorithms we derived in Section 6.2 for the problems
CLOSEST-PAIR and ALL-NEAREST-NEIGHBORS are also optimal.
To discuss the lower bound on the time complexity of the problem MAXIMUM-EMPTY-CIRCLE, we use the Ω(n log n) lower bound for the problem UNIFORM ε-GAP derived in the last section.
Theorem 8.4.12 Any bounded order algebraic decision tree that constructs a maximum empty circle for a set of n planar points runs in time at least Ω(n log n).
proof. We prove
    SET-DISJOINTNESS ∝_n FARTHEST-PAIR
Given an instance I = (X, Y) of the problem SET-DISJOINTNESS, we
transform I into an instance of FARTHEST-PAIR as follows. Without loss
of generality, suppose that all numbers in X and Y are positive. (Otherwise,
we scan the sets X and Y to find the smallest number z in X ∪ Y, then
add the number 1 − z to each number in X and in Y.) Now find the largest
number zmax in X ∪ Y. Convert each number xi in the set X into a point
on the unit circle in the plane which has polar angle πxi/zmax, and convert
each number yj in the set Y into a point on the unit circle in the plane which
has polar angle πyj/zmax + π. Intuitively, we transform all numbers in the
set X into points in the first and second quadrants of the unit circle in the
plane, and all numbers in the set Y into points in the third and fourth
quadrants of the unit circle. Such a transformation gives us a set S
of 2n planar points. It is easy to see that the diameter of S is 2 if and only
if the intersection of X and Y is not empty (two points of S are at distance 2
precisely when they are antipodal on the unit circle, which happens precisely
when xi = yj for some i and j). This proves that the problem
SET-DISJOINTNESS can be reduced to the problem FARTHEST-PAIR in
linear time.
By Theorem 8.3.3, the problem SET-DISJOINTNESS has a lower bound
Ω(n log n). Now by Theorem 8.4.1, the problem FARTHEST-PAIR also has
a lower bound Ω(n log n) on its time complexity.
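A sketch of this transformation (ours) follows. The brute-force farthest-pair
routine is only a stand-in, and the final comparison against 2 is made with a
small tolerance because floating-point points on the unit circle are not exact;
the reduction itself is stated in the real-number model.

    from math import cos, sin, pi, dist

    def farthest_pair_distance(points):
        # stand-in; any FARTHEST-PAIR algorithm can be substituted here
        return max(dist(p, q) for i, p in enumerate(points)
                   for q in points[i + 1:])

    def sets_disjoint(X, Y):
        z = min(min(X), min(Y))
        if z <= 0:                                 # make all numbers positive
            X = [x + 1 - z for x in X]
            Y = [y + 1 - z for y in Y]
        zmax = max(max(X), max(Y))
        pts  = [(cos(pi * x / zmax), sin(pi * x / zmax)) for x in X]
        pts += [(cos(pi * y / zmax + pi), sin(pi * y / zmax + pi)) for y in Y]
        # X and Y intersect exactly when two of the points are antipodal
        return farthest_pair_distance(pts) < 2.0 - 1e-9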
(Note that some numbers above may not appear in the list L if the corre-
sponding bucket is empty.) The list L can easily be constructed in linear
time from the n − 1 buckets.
Since there are n − 1 buckets and only n − 2 numbers in S − {xmin, xmax},
at least one bucket is empty. Therefore, the maximum distance between a
pair of consecutive numbers in S is at least the length of a bucket. This
implies that no two consecutive numbers contained in the same bucket can
realize the maximum distance. Thus the maximum distance must be realized
by a pair of numbers (xi, xj) such that either xi is the largest number in some
bucket Bk and xj is the smallest number in some bucket Bh (where all buckets
Bk+1, ..., Bh−1 are empty), or xi = xmin and xj is the smallest number in
some bucket Bk (where all buckets B1, ..., Bk−1 are empty), or xi is the
largest number in some bucket Bk and xj = xmax (where all buckets
Bk+1, ..., Bn−1 are empty). Moreover, all these pairs can be found in linear
time by scanning the list L. Therefore, the maximum distance between pairs
of consecutive numbers in S can be computed in linear time.
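Assuming the floor function is available, the bucket computation just
described can be sketched as follows; the code and its names are ours, and
the list L is replaced by per-bucket minima and maxima.

    def maximum_gap(S):
        # maximum distance between consecutive numbers of S, in linear time
        n = len(S)
        if n < 2:
            return 0.0
        x_min, x_max = min(S), max(S)
        if x_min == x_max:
            return 0.0
        width = (x_max - x_min) / (n - 1)          # length of one bucket
        lo = [None] * (n - 1)                      # smallest number in each bucket
        hi = [None] * (n - 1)                      # largest number in each bucket
        for x in S:
            if x == x_min or x == x_max:           # x_min and x_max are kept aside
                continue
            k = int((x - x_min) / width)           # floor gives the bucket index
            k = min(k, n - 2)                      # guard against rounding at the top
            lo[k] = x if lo[k] is None else min(lo[k], x)
            hi[k] = x if hi[k] is None else max(hi[k], x)
        gap, prev = 0.0, x_min
        for k in range(n - 1):
            if lo[k] is not None:                  # skip the empty buckets
                gap = max(gap, lo[k] - prev)
                prev = hi[k]
        return max(gap, x_max - prev)              # the gap ending at x_max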
In the following, we give an even simpler linear time algorithm to solve
the problem UNIFORM-GAP.² In this algorithm, we do not even require the
floor function. The only non-algebraic operation we need is a test of whether
a given real number is an integer. Note that with the floor function, the test
"Is r an integer?" can easily be done in constant time.
² The author was informed of this algorithm by Roger B. Dubbs III.
Algorithm MAGIC
BEGIN
1. Find the minimum number x_min and the maximum number
   x_max in S, and initialize the array A[1..n] to be empty;
2. Let epsilon = (x_max - x_min)/(n-1);
3. For i = 1 to n do BEGIN
   3.1 Let k = (x_i - x_min)/epsilon + 1;
   3.2 IF k is not an integer OR A[k] is not empty THEN
       STOP with an answer NO
   3.3 ELSE
       put x_i in the array element A[k];
   END;
4. STOP with an answer YES;
END.
The above algorithm obviously runs in linear time. To see the correctness,
suppose that the algorithm stops at Step 4. Then if a number x is in A[k],
the value of x must be xmin + (k − 1)ε. Moreover, no array element
of A holds more than one number. Consequently, every array element of A
holds exactly one number from the set S, and these numbers are xmin + iε,
for i = 0, 1, ..., n − 1. Therefore, the set S must be a YES-instance of the
problem UNIFORM-GAP.
On the other hand, if the algorithm stops at Step 3.2, then either S is
not uniformly distributed (otherwise all values (x − xmin)/ε + 1 would be
integral) or the set S contains two identical numbers. In either case, the
set S cannot be a YES-instance of the problem UNIFORM-GAP.
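For concreteness, here is a direct transcription of Algorithm MAGIC into
code (ours). Floating-point numbers cannot support the exact "is this real
number an integer" test assumed by the algorithm, so the test is emulated
with a small tolerance; the degenerate case handled at the top is likewise our
addition.

    def magic_uniform_gap(S, tol=1e-9):
        n = len(S)
        x_min, x_max = min(S), max(S)
        if x_min == x_max:
            return n == 1                      # all numbers equal: YES only if n = 1
        epsilon = (x_max - x_min) / (n - 1)
        A = [None] * (n + 1)                   # A[1..n], initially empty
        for x in S:
            k = (x - x_min) / epsilon + 1
            r = round(k)
            if abs(k - r) > tol or not (1 <= r <= n) or A[r] is not None:
                return False                   # Step 3.2: answer NO
            A[r] = x                           # Step 3.3: put x into A[k]
        return True                            # Step 4: answer YES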
These examples bring up an interesting point: there are certain very
common operations, not included in the algebraic decision tree model, that
allow us to do things that are impossible within that model. The floor
function and integer testing are examples of such operations. Note that
these examples imply that the floor operation and integer testing cannot be
performed in constant time in the algebraic decision tree model.
8.6 Exercises
1. Let P be an arbitrary non-trivial problem (i.e., it has YES-instances
as well as NO-instances). Show that the problem MAX-ELEMENT
(given a set of numbers, find the maximum) is linear time reducible to
P.
2. Use Ben-Or's technique directly to prove that the following problem
has a lower bound Ω(n log n) on its time complexity.
SET-DISJOINTNESS
Given two sets X = {x1, ..., xn} and Y = {y1, ..., yn} of real numbers,
are they disjoint, i.e., is X ∩ Y = ∅?
3. Prove that the problems STAR-POLYGON, INTERSECTION-OF-
HALF-PLANE, and MONOTON-POLYGON take Ω(n log n) time in
the algebraic decision tree model.
4. Prove that the problem VORONOI-DIAGRAM takes Ω(n log n) time
in the algebraic decision tree model.
5. Design an optimal algorithm that constructs convex hulls for sets of
points in 3-dimensional Euclidean space.
6. Show that the problem SECOND CLOSEST PAIR takes Ω(n log n)
time in the algebraic decision tree model.
7. Given two sets A and B of points in the plane, each containing N
elements, find the two closest points, one in A and the other in B.
Show that this problem requires Ω(N log N) operations. (Hint: what
problem can we reduce to this problem?)
8. Give an optimal algorithm that, given a set of 2N points, half with pos-
itive x-coordinates, half with negative x-coordinates, finds the closest
pair with one member of the pair in each half.
9. Prove that the following problem has an Ω(N log N) lower bound:
Given N points in the plane, construct a regular PSLG whose vertices
are these N points.
10. Given a PSLG G, design an algorithm regularizing G in time
O(n log n). Provide sufficient details for the implementation of your
algorithm. (This does not mean you should give a PASCAL or C program.
Instead, you should provide sufficient detail on the data structures you
use to support your operations.)
11. Prove that your algorithm for the last question is optimal.
12. Prove that the following problem has a lower bound Ω(n log n):
Given a PSLG G, add edges to G so that the resulting graph is a PSLG
G′ such that each region of G′ is a simple polygon.
(Hint: You may assume Chazelle's result.)
13. Prove that the following problem requires Ω(n log n) time in the
algebraic decision tree model: given n points and n lines in the plane, determine
whether any point lies on any line.
14. Given a set of n points in the plane, let h denote the number of vertices
that lie on its convex hull. Show that any algorithm for computing the
convex hull must require Ω(n log h) time in the algebraic decision tree
model.
15. Given a convex n-gon, show that determining whether a query point
lies inside or outside this n-gon takes Ω(log n) time in the algebraic
decision tree model.
16. Given a set S of n points in the plane, show that the problem of
finding the minimum area rectangle that contains these points requires
Ω(n log n) time in the algebraic decision tree model.
17. Can you construct another example that requires Ω(n log n) time in
the algebraic decision tree model but is solvable in linear time?
Chapter 9
Geometric Transformations
In this chapter, we will discuss an important technique in computational
geometry: geometric transformations. We will introduce the method by
showing how it is applied to solve geometric intersection problems,
such as half plane intersection and convex polygon intersection. We will also
apply the method to find the smallest area triangle. We will see that
geometric transformation techniques enable us to convert these geometric
problems into more familiar problems that we have discussed.
Geometric transformations have their roots in the mathematics of the
early nineteenth century [6]. Their applications to problems of computing
date back to the concept of primal and dual problems in the study of linear
programming (see, for example, [21]).
Brown [7] gives a systematic treatment of transformations and their ap-
plications to problems of computational geometry. Since his dissertation,
these methods have found vast application.
Typically, transformations change geometric objects into other geometric
objects (for example, take points into lines) while preserving relations
which held between the original objects (for example, order or whether they
intersected). A number of geometric problems are best solved through the
use of transformations. The standard scheme is to transform the objects
under consideration, solve a simpler problem on the transformed objects,
and then use that solution to solve the original problem. No single transfor-
mation applies in all cases; a number of different transformations have been
used effectively. Here, we describe two commonly used transformations and
demonstrate their applications.
9.1 Mathematical background
Let l be a straight line in the Euclidean plane. If l is a vertical line, then l
can be characterized by an equation
    x = a
if the line l intersects the x-axis at the point (a, 0). On the other hand, if
l is not a vertical line, let θ be the angle from the positive direction of the
x-axis to the line l,¹ then l can be characterized by the equation
    y = ax + b
where a = tan θ and the line l intersects the y-axis at the point (0, b). We will
call θ the direction of the line l, and call the value a = tan θ the slope of the
line l. The slope of a straight line l is denoted by slope(l).
The domain of all of our two-dimensional transformations will be the
projective plane, which is an enhanced version of the Euclidean plane in
which each pair of lines intersects. The projective plane contains all points
of the Euclidean plane (call them the proper points). We introduce a set of
improper points with one point Pa associated with every slope a in the plane.
Two parallel lines, then, intersect at that improper point indicated by the
slope of the parallel lines (this can be thought of as a point at infinity). All
improper points are considered to lie on the same line: the improper line,
or the line at infinity. Thus, any two lines in the projective plane intersect
at exactly one point: two nonparallel proper lines intersect at a proper
point (i.e., one of the Euclidean plane); two parallel proper lines intersect
at the improper point bearing the same slope; and a proper line intersects
the improper line at the improper point defining the slope of the proper
line. Likewise, through every two points there passes exactly one line: There is a
proper line passing through every pair of proper points; a proper line passes
through a given improper point and a given proper point; and the improper
line passes through any two improper points.
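The text does not fix a machine representation for the projective plane. One
common choice (ours here) is homogeneous coordinates, under which the
"exactly one intersection point" rule can be checked directly: a proper point
(x, y) becomes (x, y, 1), the improper point of slope a becomes (1, a, 0), a
line ax + by + c = 0 becomes the triple (a, b, c), and the cross product of two
line triples is their intersection point.

    def cross(u, v):
        # cross product of two homogeneous triples
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    l1 = (1.0, -1.0,  0.0)      # the line y = x
    l2 = (1.0, -1.0, -2.0)      # the parallel line y = x - 2
    l3 = (1.0,  0.0, -3.0)      # the vertical line x = 3

    print(cross(l1, l3))        # (3, 3, 1): the proper point (3, 3)
    print(cross(l1, l2))        # (2, 2, 0): the improper point of slope 1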
In general, the actual algorithms used to solve problems rely solely on
Euclidean geometry. Therefore, although all the transformations will
map the projective plane onto itself, we will wish to choose a transformation
which maps the objects under consideration to "proper" objects. Thus, the
¹ In this case, we always suppose that −π/2 < θ < π/2. That is, we always suppose
that the direction of the straight line l goes to infinity either in the first quadrant or
in the fourth quadrant.
[Figure: lines l1, l2, l3, l4. Figure 9.2: the point pk, the line li,j through
pi and pj, the vertical line lv through pk, and the angle θ between lv and li,j.]
The point pk in S − {pi, pj} has the smallest distance d(k, i, j) from the
line li,j if and only if pk has the smallest vertical distance dv(k, i, j) from
the line li,j.
proof. Let lv be the vertical line passing through the point pk, and let it
intersect the line li,j at a point q. Let θ be the angle between the lines li,j
and lv. By definition, the vertical distance dv(k, i, j) from pk to li,j is the
length of the line segment pk q. Moreover, it is easy to see that the distance
d(k, i, j) from pk to li,j is equal to |pk q| sin θ. See Figure 9.2. Thus, for a
fixed line li,j, the vertical distance from pk to li,j is proportional to the
distance from pk to li,j. The lemma follows immediately.
Therefore, to find the smallest area triangle, for each pair of points pi
and pj in S, we only need to consider the point pk in S − {pi, pj} for which
the vertical distance dv(k, i, j) is the shortest. But how does this observation
help us?
We first apply the transformation T1 to the set S of planar points. We
know that a point pk in S is mapped under T1 to a line T1(pk), while the line
li,j passing through two points pi and pj in S is mapped under T1 to the
point T1(li,j) at which the lines T1(pi) and T1(pj) intersect. A nice property
of the transformation is that the vertical distance is preserved under the
transformation, as shown by the following lemma.²
Lemma 9.3.2 The vertical distance dv(pk, li,j) from the point pk to the line
li,j is equal to the vertical distance dv(T1(li,j), T1(pk)) from the point T1(li,j)
to the line T1(pk):
    dv(pk, li,j) = dv(T1(li,j), T1(pk))
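The transformation T1 is defined earlier in the chapter, on pages not
reproduced here; a common choice, which the sketch below assumes, is the
duality that maps a point (a, b) to the line y = ax − b and the line y = cx + d
to the point (c, −d). Under that assumption, the identity of Lemma 9.3.2 can
be checked numerically; all the function names are ours.

    def T1_point(p):                 # assumed duality: point (a, b) -> line y = a*x - b
        a, b = p
        return (a, -b)               # a line is stored as (slope, intercept)

    def T1_line(line):               # assumed duality: line y = c*x + d -> point (c, -d)
        c, d = line
        return (c, -d)

    def line_through(p, q):          # (slope, intercept) of the line through p and q
        (x1, y1), (x2, y2) = p, q
        c = (y2 - y1) / (x2 - x1)
        return (c, y1 - c * x1)

    def vertical_distance(point, line):
        (x, y), (c, d) = point, line
        return abs(y - (c * x + d))

    p_i, p_j, p_k = (0.0, 0.0), (4.0, 2.0), (1.0, 3.0)
    l_ij = line_through(p_i, p_j)
    print(vertical_distance(p_k, l_ij))                       # dv(pk, li,j)  = 2.5
    print(vertical_distance(T1_line(l_ij), T1_point(p_k)))    # the dual value = 2.5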
Final Remark:
If the area of the smallest area triangle is zero, then the three points
forming this triangle are collinear. Consequently, the above algorithm can
be used to check whether there exist three collinear points in a given set of
n planar points. The algorithm we presented in this section is not the best
algorithm. The best algorithm we know for the problem THE-SMALLEST-
TRIANGLE is due to Edelsbrunner, O'Rourke, and Seidel; it runs in
time O(n²) and space O(n) [11]. In contrast, the only known lower bound
is Ω(n log n). In fact, even for checking whether there exist three collinear
points, the only bounds we know are O(n²) and Ω(n log n). Improving
the upper or lower bounds for either of these problems remains an extremely
tantalizing open problem in computational geometry.
[Figure 9.3: (a) the line l and the points p and q on the ray r from the
origin O; (b) the point T2(l) and the lines T2(p) and T2(q).]
Lemma 9.4.1 The segment Oq intersects the line l if and only if the seg-
ment OT2(l) intersects the line T2(q ).
proof. Suppose that the segment Oq does not intersect the line l, as
shown in Figure 9.3(a). Then the point q is closer to the origin than the
point p. Thus the line T2(q) is farther from the origin than the line T2(p),
by Observation 3. Moreover, the lines T2(q) and T2(p) are parallel, by
Observation 4, and the point T2(l) is on the line T2(p). Consequently, the
segment OT2(l) does not intersect the line T2(q); see Figure 9.3(b). The
converse can be proved in a very similar way, so we omit it here.
Let the intersection of the convex polygons in S be I . Suppose that l is
a line on which an edge of some polygon in S lies. Then we know that T2(l)
is a point in the set S′. We say that the line l contributes a boundary edge
to the intersection I if part of l is on the boundary of the intersection I.
Lemma 9.4.2 The line l contributes a boundary edge to the intersection I
if and only if the point T2(l) is on the convex hull of the set S′.
proof. Suppose that the line l contributes an edge to the intersection
I but T2(l) is not a hull vertex of S′. Let r be the ray starting from the
[Figure 9.4: (a) the lines l, l1, l2 and their intersection point p; (b) the points
T2(l), T2(l1), T2(l2), the line T2(p), and the rays r, r1, r2 from the origin O.]
origin and passing through the point T2(l). Then we must be able to find
two points T2(l1) and T2(l2) in the set S′ such that, if we let r1 and r2 be
the rays starting from the origin and passing through the points T2(l1) and
T2(l2), respectively, then the ray r is between the two rays r1 and r2, and
the segment OT2(l) does not intersect the line T2(p) passing through the
points T2(l1) and T2(l2), where p is the intersection point of the lines l1 and
l2; see Figure 9.4(b). By Lemma 9.4.1, the segment Op does not intersect the
line l. Moreover, since the ray r is between the two rays r1 and r2, slope(l)
is between slope(l1) and slope(l2). Therefore, if we let H, H1, and H2 be
the half planes defined by the lines l, l1, and l2, respectively, then the region
H1 ∩ H2 is entirely contained in the half plane H; see Figure 9.4(a). But the
intersection I is entirely contained in H1 ∩ H2, and thus is entirely contained
in the half plane H. But this contradicts the assumption that the line l
contributes an edge to I. This contradiction shows that the point T2(l) must
be a hull vertex of the set S′.
The converse, that if T2(l) is a hull vertex of S′ then the line l contributes
an edge to the intersection I, can be proved similarly and is left as an exercise
to the reader.
Lemma 9.4.2 immediately suggests the following algorithm to solve the
problem CONVEX-POLYGON-INTERSECTION.
Algorithm CONVEX-POLYGON-INTERSECTION (S)
{ Given a set S of convex polygons that contain the origin, compute their
intersection. }
begin
1. For each convex polygon Pi and for each edge e of the polygon Pi, if
the edge e lies on a line l, construct the point T2(l).
2. Let S′ be the set of points produced in Step 1, and construct the convex
hull CH(S′) of S′.
3. Let S′′ be the set of lines that are preimages of the hull vertices in
CH(S′). Sort S′′ by slope, and let the sorted list be
    l1, l2, ..., lr
4. For i = 1, ..., r compute the intersection point pi of li and li+1 (here
lr+1 = l1); then the sequence
    p1, p2, ..., pr
gives the vertices of a convex polygon that is the intersection of the convex
polygons in S.
end
The algorithm correctly finds the intersection of the polygons in the set
S , as we have discussed above. Moreover, if the sum of the number of edges
of the polygons in S is N , then the above algorithm trivially runs in time
O(N log N ).
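As an illustration, here is a sketch of the whole pipeline. It assumes (this is
our choice, since T2 is defined on earlier pages not reproduced here) that T2
is the polar duality sending a line ax + by = 1 not through the origin to the
point (a, b) and a point (a, b) ≠ O to the line ax + by = 1, and it orders the
contributing lines by the circular order of their dual points around the origin,
which is what the hull traversal gives, rather than literally by slope.

    def t2_of_edge(p, q):
        # supporting line through p and q, written as a*x + b*y = 1, dualized to (a, b)
        (x1, y1), (x2, y2) = p, q
        det = x1 * y2 - x2 * y1          # nonzero: the edge does not pass through O
        return ((y2 - y1) / det, (x1 - x2) / det)

    def convex_hull(points):             # Andrew's monotone chain, counterclockwise
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def chain(seq):
            h = []
            for p in seq:
                while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) -
                                       (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                    h.pop()
                h.append(p)
            return h
        return chain(pts)[:-1] + chain(pts[::-1])[:-1]

    def intersect_convex_polygons(polygons):
        # polygons: counterclockwise vertex lists, each containing the origin O
        duals = [t2_of_edge(P[i], P[(i + 1) % len(P)])
                 for P in polygons for i in range(len(P))]
        hull = convex_hull(duals)         # hull vertices = lines that contribute edges
        verts = []
        for (a1, b1), (a2, b2) in zip(hull, hull[1:] + hull[:1]):
            det = a1 * b2 - a2 * b1       # intersect consecutive boundary lines
            verts.append(((b2 - b1) / det, (a1 - a2) / det))
        return verts                      # vertices of the intersection polygon

For example, intersecting the square with corners (±1, ±1) and the rectangle
with corners (±2, ±0.5) returns the four corners of the rectangle with corners
(±1, ±0.5), as expected.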
Chapter 10
Geometric Problems in
Higher Dimensions
In this chapter, we introduce techniques for solving geometric problems in
more than two dimensions. Section 1 introduces the preliminaries of higher
dimensional geometry and representation of geometric objects in higher di-
mensions in a computer. Section 2 describes a divide-and-conquer algorithm
for constructing the convex hull of a set of points in 3-dimensional Euclidean
space. Section 3 gives an optimal algorithm for constructing the intersection
of a set of half-spaces in 3-dimensional Euclidean space. Section 4 demon-
strates an interesting relationship between a convex hull of a set of points
in the n-dimensional Euclidean space and the Voronoi diagram of a set of
projected points in the (n + 1)-dimensional Euclidean space. Section 3 and
Section 4 actually gives an optimal algorithm for constructing the Voronoi
diagram for a set of points in the plane using reduction techniques.
10.1 Preliminaries
10.2 Convex hulls in three dimensions
From Preparata and Shamos.
Chapter 11
Dynamization Techniques
The techniques are developed for problems whose database is changing over
(discrete) time. The idea is to make use of good data structures for a static
(fixed) database and add to them certain dynamization mechanisms so that
insertions or deletions of elements in the database can be accommodated
efficiently.
[Figure 11.2: the six configurations (1)-(6) of the points vl, vm (and p, p′, p′′)
with respect to pi+1.]
3. The point vm is reflex with respect to pi+1, and vm is on the right of
pi+1 vl.
4. The point vm is reflex with respect to pi+1, and vm is on the left of
pi+1 vl.
5. The point vm is right supporting with respect to pi+1 .
6. The point vm is left supporting with respect to pi+1 .
Figure 11.2 illustrates all these six cases.
From Figure 11.2, it is easy to decide in which subtree the right support-
ing point q1 with respect to pi+1 is stored. We discuss this case by case.
Remember that the hull vertices of CH i are stored in the 2-3 tree from left
to right in counterclockwise order; therefore, the points stored in the
subtree MSON(v), together with the point vl, correspond to the chain on
the convex hull CH i that starts at the point vl, travels in counterclockwise
order, and ends at the point vm. Call this chain an MSON-chain.
CASE 1 In this case, vl is reflex and vm is concave. If we travel the MSON-
chain from vl to vm, the points on the chain change from reflex to concave
with respect to pi+1, and all points are on the right of pi+1 vl. Thus we
must pass the right supporting point q1. Therefore, in this case, the right
supporting point q1 is stored in the middle son MSON(v).
CASE 2 The analysis is similar to that of Case 1; the right supporting point
q1 is stored in the middle son MSON(v).
CASE 3 In this case, both vl and vm are reflex, and vm is on the right side
of pi+1 vl. Therefore, if we travel the MSON-chain from vl to vm, the right
supporting point q1 is never passed. Therefore, in this case the middle
son MSON(v) does not contain the right supporting point q1. The point q1
must be stored in the left subtree LSON(v).
CASE 4 Similar to Case 2, the right supporting point q1 is stored in the
middle son MSON(v).
CASE 5 This is the luckiest case, since the right supporting point
q1 = vm.
CASE 6 Similar to Case 2, the right supporting point q1 is stored in the
middle son MSON(v).
Therefore, for each of these six cases, we can decide in constant time
which subtree we should search further. We summarize these discussions in
the following algorithm.
Algorithm RIGHTPOINT(v)
{ Search for the right supporting point q1 in the subtree rooted at the non-
leaf vertex v. The points vl and vm are the rightmost points in the subtrees
LSON(v) and MSON(v), respectively. }
begin
1. if pi+1 is external to CH i
1.1 if q1 is stored in RSON(v), call RIGHTPOINT(RSON(v));
1.2 else
1.2.1 if vl is reflex
1.2.1.1 if vm is right supporting, then done;
1.2.1.2 else if vm is reflex and on the right of pi+1 vl
1.2.1.3 Call RIGHTPOINT(LSON(v));
1.2.1.4 else Call RIGHTPOINT(MSON(v));
1.2.2 else if vl is concave
1.2.3 else if vl is supporting
2. else if pi+1 is internal to CH i
2.1
end
We give a few remarks on the above algorithm.
1. To decide whether the point q1 is stored in RSON(v) in Step 1.1, we use
a method similar to that of Steps 1.2.1 - 1.2.3. The only difference
is that the information used comes from MSON(v) and RSON(v), instead of
LSON(v) and MSON(v).
2. We actually do not need Step 2 to check whether pi+1 is internal to CH i. In
fact, if pi+1 is internal to CH i, the recursive calls of Step 1 eventually
locate a single point q1 on the convex hull CH i, and this point q1 is
still concave with respect to pi+1. If the point pi+1 is external to
CH i, then the final point q1 must be the right supporting point; so if
we find that the final point q1 is still concave with respect to pi+1,
we conclude that the point pi+1 is internal to CH i.
3. The left supporting point q2 is found by a similar subroutine LEFT-
POINT(v).
4. With the above discussions and the similarities, the reader should have
no trouble filling in the omitted parts of the algorithm.
Therefore, to find the right and left supporting points q1 and q2 in the
convex hull CH i , which is represented by a 2-3 tree T rooted at v , we simply
call
RIGHTPOINT(v); LEFTPOINT(v)
By the discussions above, these two supporting points can be found in
time O(log n).
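The case analysis rests on a constant-time classification of a hull vertex with
respect to pi+1. The sketch below (ours) spells that test out for a plain array
holding CH i in counterclockwise order; "reflex" is read as "visible from
pi+1", "concave" as "not visible", and a vertex with exactly one visible
incident edge is a supporting point. Which of the two supporting points is
called "right" depends on the orientation convention fixed earlier in the
chapter, so the two labels below may need to be swapped to match it.

    def turn(o, a, b):
        # positive if o, a, b make a left (counterclockwise) turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def classify(hull, i, p):
        """Classify hull[i] with respect to a point p outside the hull."""
        u, v, w = hull[i - 1], hull[i], hull[(i + 1) % len(hull)]  # CCW neighbours
        in_visible  = turn(u, v, p) < 0     # p lies strictly right of edge u -> v
        out_visible = turn(v, w, p) < 0     # p lies strictly right of edge v -> w
        if in_visible and out_visible:
            return "reflex"                 # both incident edges face p
        if not in_visible and not out_visible:
            return "concave"                # v is hidden from p
        return "right supporting" if out_visible else "left supporting"

Scanning classify over all hull vertices finds the two supporting points in
linear time; the point of RIGHTPOINT and LEFTPOINT is that the 2-3 tree lets
the same classification drive a search that needs only O(log n) of these tests.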
The subroutines also tell us whether pi+1 is internal to CH i. If we are
told that pi+1 is internal to CH i, then CH i = CH i+1 and we are done.
Otherwise, the right and left supporting points q1 and q2 are returned. Let
C be the chain between q1 and q2 in the tree T . Pick any point q in the
chain C. If the point q is reflex, then all hull vertices in the chain C should
be deleted and all other hull vertices should be kept. On the other hand, if
the point q is concave, then all hull vertices in the chain C should be kept
and all other hull vertices should be deleted. Therefore, we first split the
tree T into three trees T1, T2, and T3 such that the leaves of the tree
T2 are those points that are in the chain C. In the case that q is reflex, we
splice the two trees T1 and T3 into a new tree T′, and in the case that q is
concave, we let T2 be the new tree T′. It is clear that the new tree T′
corresponds to the partial chain in CH i that should be kept in the convex
hull CH i+1. Moreover, since the data structure we are using is a 2-3 tree,
these split and splice operations can be done in time O(log n). Finally, we
insert in time O(log n) the point pi+1 into the tree T′ to form the 2-3 tree
representing the convex hull CH i+1.
Summarizing the above discussions, we conclude that constructing the con-
vex hull CH i+1 from the convex hull CH i can be done in time O(log n). This
consequently gives us the following theorem.
Theorem 11.1.1 The ON-LINE HULL problem can be solved by an optimal
algorithm.
Chapter 12
Randomized Methods
This chapter may contain the following materials: expected time for con-
structing convex hulls in 2-dimensional space (Preparata and Shamos; see
also Overmars' Lecture Notes), and expected time for constructing the
intersection of half-spaces in 3-dimensional space. The papers by Clarkson
should be read to find more examples.
Chapter 13
Parallel Constructions
Parallel random access machine (PRAM)
The computational model we use in this chapter is called the parallel
random access machine (PRAM). This kind of machine model is also known
as the Shared-Memory Single Instruction Multiple Data computer. Here,
many processors share a common (random access) memory that they use in
the same way a group of people may use a bulletin board. Each processor
also has its own local memory in which the processor can save its own inter-
mediate computational results. When two processors wish to communicate,
they do so through the shared memory. Say processor Pi wishes to pass
a number to processor Pj . This is done in two steps. First, processor Pi
writes the number in the shared memory at a given register which is known
to processor Pj . Then, processor Pj reads the number from that register.
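As a toy illustration only (ours, not a PRAM and not from the text), the
two-step exchange through a shared register can be mimicked with two
threads and a shared list:

    import threading

    shared = [None]                        # the register known to both processors
    written = threading.Event()

    def processor_i():                     # P_i writes the number into the register
        shared[0] = 42
        written.set()

    def processor_j(out):                  # P_j then reads the number from it
        written.wait()
        out.append(shared[0])

    out = []
    pj = threading.Thread(target=processor_j, args=(out,))
    pi = threading.Thread(target=processor_i)
    pj.start(); pi.start(); pi.join(); pj.join()
    print(out[0])                          # prints 42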
The number of processors of a PRAM, the size of the shared memory,
and the size of the local memory for each processor are all assumed to be
unbounded.
Depending on how simultaneous accesses to a register in the shared
memory are handled, the class of PRAMs can be further subdivided into four
subclasses: EREW PRAM, CREW PRAM, ERCW PRAM, and CRCW PRAM. We
are not going to discuss the details in this book.
LISTRANK
Given a linked list of n elements, compute the rank for each element.
That is, for the ith element in the list, we compute the number n − i.
ARRAY-COMPRESSION
Let A be an array containing m = n + n′ elements, n of them are red and
n′ of them are blue. Delete all blue elements and compress all red elements
into an array A′ of size n.