Bertsekas, A New Algorithm for the Assignment Problem, Mathematical Programming, 1981
Dimitri P. BERTSEKAS
Department of Electrical Engineering and Computer Science, Massachusetts Institute of
Technology, Cambridge, MA 02139, U.S.A.
We propose a new algorithm for the classical assignment problem. The algorithm resembles
in some ways the Hungarian method but differs substantially in other respects. The average
computational complexity of an efficient implementation of the algorithm seems to be
considerably better than that of the Hungarian method. In a large number of randomly
generated problems the algorithm has consistently outperformed an efficiently coded version
of the Hungarian method by a broad margin. The factor of improvement increases with
the problem dimension N and reaches an order of magnitude for N equal to several hundreds.
1. Introduction
(c) For each source i there are at least two links (i, j) in L.
Assumption (b) can be replaced by the more general assumption that the a_ij are all
integers (not necessarily nonnegative) and that for some integer R > 0 we have
To see this, note that if for any source i ∈ S we subtract the constant min_{j∈T} a_ij
from the weights a_ij, ∀j ∈ T, then the value of all possible assignments of
cardinality N changes by the same constant. Assumption (c) involves no loss of
generality, since if for some source i there is only one link (i, j) E L, then (i, j) is
certainly part of any optimal assignment and as a result nodes i and j can be
removed from the problem thereby reducing dimensionality. We use assumption
(c) for convenience in stating our algorithm.
The assignment problem can be embedded into the linear program

maximize    Σ_{(i,j)∈L} a_ij x_ij,

subject to  Σ_{i:(i,j)∈L} x_ij = 1,  ∀j = 1, ..., N,

The corresponding dual problem in the vectors m = (m_1, ..., m_N), p = (p_1, ..., p_N)
is

minimize    Σ_{i=1}^N m_i + Σ_{j=1}^N p_j,

subject to  m_i + p_j ≥ a_ij,  ∀(i, j) ∈ L.    (2)
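The weak-duality relation behind this primal-dual pair can be checked numerically on a small instance. The weights below are made up for illustration (full link set L), and the dual point used is just one feasible choice, not an optimal one:

```python
from itertools import permutations

# Hypothetical weights for a small 3x3 instance with every link present.
a = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
N = 3

# Primal: best assignment value, found by brute force over permutations.
best = max(sum(a[i][perm[i]] for i in range(N)) for perm in permutations(range(N)))

# A feasible dual point: p_j = 0 and m_i = max_j a_ij satisfies
# m_i + p_j >= a_ij on every link, so its cost bounds the primal from above.
p = [0] * N
m = [max(row) for row in a]
dual_cost = sum(m) + sum(p)

assert all(m[i] + p[j] >= a[i][j] for i in range(N) for j in range(N))
assert dual_cost >= best
print(best, dual_cost)
```

Here the gap between the two values reflects that this particular dual point is feasible but not optimal; at an optimal pair the two costs coincide.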
D.P. Bertsekas/ Assignment problem 155
The scalars p_j and (a_ij − p_j) will be referred to as prices and profit margins
respectively. From complementary slackness we have that if (i, j) is part of an
optimal assignment, then for any optimal solution (m, p) of the dual problem we
have

m_i + p_j = a_ij  if (i, j) ∈ X^k,

m_i + p_n ≥ a_in,  ∀(i, n) ∈ L,
we stop if X^k has cardinality N. Otherwise we use the following procedure to
generate (m^{k+1}, p^{k+1}, X^{k+1}) satisfying for all i ∈ S

m_i^{k+1} + p_j^{k+1} = a_ij  if (i, j) ∈ X^{k+1},

m_i^{k+1} + p_n^{k+1} ≥ a_in,  ∀(i, n) ∈ L.

Case 1: m̄ > m̂, or m̄ = m̂ and sink j̄ is unassigned under X^k.
Set

m_i^{k+1} = m_i^k  for i ≠ ī,
            m̂      for i = ī,                (7)

p_j^{k+1} = p_j^k             for j ≠ j̄,
            p_j̄^k + m̄ − m̂     for j = j̄.     (8)

If j̄ is unassigned under X^k, add (ī, j̄) to X^k, i.e.,

X^{k+1} = X^k ∪ {(ī, j̄)}.

If (ĩ, j̄) ∈ X^k for some ĩ ∈ S, obtain X^{k+1} from X^k by replacing (ĩ, j̄) by (ī, j̄), i.e.,
Step 1 (Labeling): Find a source i with an unscanned label and go to Step 1a, or
find a sink j ≠ j̄ with an unscanned label and π_j = 0 and go to Step 1b. If no such
source or sink can be found go to Step 3.
Step 1a: Scan the label of source i as follows. For each (i, j) ∈ L for which
m_i^k + p_j^k − a_ij < π_j give node j the label 'i' (replacing any existing label) and set
π_j ← m_i^k + p_j^k − a_ij. Return to Step 1.
Step 1b: Scan the label on the sink j ≠ j̄ with π_j = 0 as follows. If j is unassigned
under X^k go to Step 2. Otherwise identify the unique source i with (i, j) ∈ X^k and
give i the label 'j'. Return to Step 1.
(Recall here that m_ī^k was set equal to m̄ in the initialization of the labeling
procedure, cf. (9).) Set

p_j^{k+1} = p_j^k + δ  if π_j = 0,
            p_j^k      if π_j > 0.
and each source i is a person for whom each item j is worth a_ij. After the kth
iteration of the algorithm some of the items have been temporarily assigned to
persons that have bid up their prices to levels p_j^k. If Case 1 holds at the (k + 1)st
iteration, the (unassigned) person ī selects the item j̄ that offers maximum profit
margin and bids up its price by the amount (m̄ − m̂), the maximum amount for
which j̄ will still offer maximum profit margin. The item j̄ is then assigned to the
individual ī in place of any other person ĩ that may have bid earlier for j̄. In
these terms the algorithm may be viewed as a process whereby persons compete
for items by trying to outbid each other. One can interpret Case 2, Step 3
similarly except that this interpretation involves the (admittedly less intuitive)
idea of a cooperative bid whereby several persons simultaneously increase the
prices of corresponding items. During actual computation Case 1 occurs far
more frequently than Case 2, so, should someone wish to give a name to the
algorithm, we suggest calling it the auction algorithm.
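Under this interpretation the Case 1 update (7)-(8) is a single bid. A minimal sketch follows (the helper name is ours; it assumes, as in assumption (c), that the bidder has at least two links, so a second-best margin exists):

```python
def case1_bid(a_row, prices):
    """One Case 1 'bid' by an unassigned person: find the best and second-best
    profit margins a_ij - p_j, then raise the best item's price by their
    difference -- the largest raise that keeps that item most profitable."""
    margins = [a_row[j] - prices[j] for j in range(len(a_row))]
    j_best = max(range(len(margins)), key=lambda j: margins[j])
    m_bar = margins[j_best]                                       # best margin
    m_hat = max(m for j, m in enumerate(margins) if j != j_best)  # second best
    prices[j_best] += m_bar - m_hat               # bid up the price of j_best
    return j_best, m_hat                          # item won, new profit m_i

prices = [0, 0, 0, 0]
j, m = case1_bid([1, 3, 6, 1], prices)
print(j, m, prices)
```

The returned pair is the item won and the bidder's new profit, which per (7) is set to the second-best margin m̂.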
Unfortunately the description of the algorithm in Case 2 is quite complicated.
For this reason some explanatory remarks are in order. In Case 2 we basically
try to find an augmenting path not containing ĩ from source ī to an unassigned
sink. There are two possibilities. Either an augmenting path will be found
through Step 2 of the labeling procedure, or else a change in the dual variables
will be effected through Step 3. In the first case the link (ī, j̄) will be retained in
X^{k+1} and the sink at which the augmenting path terminates will be assigned
under X^{k+1} as shown in Fig. 1. In the second case the link (ĩ, j̄) will be replaced
by (ī, j̄) in X^{k+1} and no new sink will be assigned under X^{k+1} as shown in Fig. 2.
The dual variables will change, however, by the minimum amount necessary to
obtain m_i^{k+1} + p_j^{k+1} = a_ij for some labeled source i and labeled but unscanned sink
j. A similar (but not identical) labeling procedure is used in the Hungarian
method (see Section 3). In the Hungarian method after a change in dual variables
Fig. 1. [Diagram: the assignment X^k and X^{k+1} after an augmenting path from source ī is found in Step 2.]

Fig. 2. [Diagram: the assignment X^k and X^{k+1} after a change of dual variables in Step 3; o--o = unassigned link (i, j) with m_i + p_j > a_ij.]
p_j        0   0   0   0
j   m_i
    10  |  1   3   6   1
    10  |  2   4   7   3
    10  |  2   5   7   2
    10  |  1   3   5   1
The matrix of weights is shown in the lower right portion of the tableau. The row
above the matrix gives the initial prices, arbitrarily chosen to be zero. The
column to the left of the matrix shows the initial profit margins. We have chosen
m_i = 10 for all i, one of the many choices satisfying feasibility. The extreme left
column gives the sinks j, if any, to which sources are assigned. Here we are
starting with the empty assignment. We describe successive iterations of the
algorithm. The corresponding tableaus are given in Fig. 3.
After 1st iteration:             After 2nd iteration:
p_j      0   0   3   0           p_j      0   0   3   0
j   m_i                          j   m_i
3    3 | 1   3   6   1           3    3 | 1   3   6   1
    10 | 2   4   7   3           2    4 | 2   4   7   3
    10 | 2   5   7   2               10 | 2   5   7   2
    10 | 1   3   5   1               10 | 1   3   5   1

After 3rd iteration:             After 4th iteration:
p_j      0   1   3   0           p_j      0   2   4   0
j   m_i                          j   m_i
3    3 | 1   3   6   1           3    2 | 1   3   6   1
     4 | 2   4   7   3                4 | 2   4   7   3
2    4 | 2   5   7   2                4 | 2   5   7   2
    10 | 1   3   5   1           2    1 | 1   3   5   1

After 5th iteration:             After 6th iteration:
p_j      0   2   4   0           p_j      0   2   4   0
j   m_i                          j   m_i
3    2 | 1   3   6   1           3    2 | 1   3   6   1
4    3 | 2   4   7   3           4    3 | 2   4   7   3
     4 | 2   5   7   2           2    3 | 2   5   7   2
2    1 | 1   3   5   1           1    1 | 1   3   5   1

Fig. 3.
the degenerate augmenting path (2, 2) through Step 2 of the labeling procedure
and the end result would have been the same.)
3rd iteration: We choose the unassigned source 3. Here m̄ = 5, m̂ = 4, j̄ = 2.
We are thus again in Case 1. But now source 2 will be driven out of the
assignment and will be replaced by source 3.
4th iteration: We choose the unassigned source 4. Here m̄ = m̂ = 2. Suppose
j̄ = 2. We are now in Case 2 with ĩ = 3. Applying the labeling procedure we label
first source 4. A simple computation shows that sink 3 is labeled from source 4
and then source 1 is labeled from sink 3. No more labels can be scanned so we
are in Step 3 of Case 2. Source 4 will enter the assignment and source 3 will be
driven out. We have δ = 1 and the corresponding tableau is shown in Fig. 3.
5th iteration: We choose the unassigned source 2. Here m̄ = m̂ = 3. Suppose
j̄ = 4. We are in Case 1 and (2, 4) will be added to the assignment. (The result
would be the same if j̄ = 3, in which case the degenerate augmenting path (2, 4)
would be obtained via Step 2 of Case 2.)
6th iteration: We choose the unassigned source 3. Here m̄ = m̂ = 3. Suppose
j̄ = 3. We are in Case 2 with ĩ = 1. Applying the labeling procedure we label first
source 3. Sink 2 is labeled from source 3 and then source 4 is labeled from sink
2. Next sinks 1 and 4 are labeled from source 4. Sink 1 is unassigned and this
yields the augmenting path (3, 2), (4, 2), (4, 1) in Step 2 of Case 2. The algorithm
terminates.
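The outcome of these six iterations can be checked directly: with the final prices and profits read off the last tableau, dual feasibility together with complementary slackness certifies that the assignment found is optimal. A brute-force verification (indices shifted to start at 0):

```python
from itertools import permutations

# Weight matrix of the 4x4 example above.
a = [[1, 3, 6, 1],
     [2, 4, 7, 3],
     [2, 5, 7, 2],
     [1, 3, 5, 1]]
# Final assignment reached above: source 1->3, 2->4, 3->2, 4->1 (0-indexed
# here), with final prices p and profits m from the last tableau.
assign = [2, 3, 1, 0]
p = [0, 2, 4, 0]
m = [2, 3, 3, 1]

value = sum(a[i][assign[i]] for i in range(4))
best = max(sum(a[i][perm[i]] for i in range(4)) for perm in permutations(range(4)))

# Dual feasibility and complementary slackness certify optimality.
assert all(m[i] + p[j] >= a[i][j] for i in range(4) for j in range(4))
assert all(m[i] + p[assign[i]] == a[i][assign[i]] for i in range(4))
assert value == best
print(value)
```

The dual cost Σ m_i + Σ p_j = 9 + 6 = 15 equals the assignment value, as duality requires at optimality.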
m_i^k + p_j^k = a_ij  if (i, j) ∈ X^k,    (10)

m_i^k + p_n^k ≥ a_in,  ∀(i, n) ∈ L.    (11)
From (10) and (11) we see that dual feasibility and complementary slackness
are maintained throughout the algorithm. Thus if the algorithm terminates (by
necessity at an assignment of cardinality N), then the assignment and dual
variables obtained are optimal. The following proposition shows that termination
is guaranteed under the assumption made earlier that there exists at least one
assignment of cardinality N.
Proof. Assume that the algorithm does not terminate. Then after a finite number
of iterations the cardinality of the current assignment will remain constant and at
each subsequent iteration at least one variable m_i will decrease strictly and at
least one price will increase strictly (observation (c) above). Hence the sets
S_∞, T_∞ defined by

S_∞ = {i ∈ S | lim_{k→∞} m_i^k = −∞},

T_∞ = {j ∈ T | lim_{k→∞} p_j^k = ∞}

are nonempty. For all k sufficiently large the sinks in T_∞ are assigned under X^k
(observation (b)) and they must be assigned to a source in S_∞ (observation (d)).
Furthermore, since the algorithm does not terminate, some source in S_∞ must be
unassigned under X^k. It follows that the cardinality of S_∞ is strictly larger than
that of T_∞. From (11) it is clear that there cannot exist a link (i, j) ∈ L such that
i ∈ S_∞ and j ∉ T_∞. Thus we have

{j | (i, j) ∈ L, i ∈ S_∞} ⊂ T_∞

while S_∞ has larger cardinality than T_∞. This contradicts the assumption that
there exists an assignment of cardinality N.
p_j^k − p_n^k ≤ a_ij − a_in ≤ R  if (i, j) ∈ X^k.    (12)

Suppose that B_1 and B_2 are lower and upper bounds for all initial prices, i.e.
For each k there must be at least one unassigned sink, say j̄_k, and we must have
p_{j̄_k}^k = p_{j̄_k}^0 ≤ B_2. It follows from (12) and observation (a) that
It is easy to see that there is an integer γ such that the kth iteration of the
algorithm requires at most γ z_k N computer operations where

z_k = 1                          in Case 1,
z_k = number of labeled sources  in Case 2,
While the worst case complexity of the algorithm is inferior to that of the
Hungarian method for large R, we would like to emphasize that, as experience
with the simplex method has shown, worst case complexity and average complexity
of an algorithm can be quite different. Thus two algorithms with
comparable worst case complexity can differ substantially in their performance
in solving randomly generated problems or problems typically arising in practice.
In our computational experiments with random problems we found that the
algorithm often performed surprisingly better than the Hungarian method. The
following example illustrates what we believe is the mechanism responsible for
this superior performance.
N
N  N-1
N  N-1  N-2
.  .    .
N  N-1  N-2  ...  3  2  1
with all elements above the diagonal equal to zero. Let us trace the iterations of
our algorithm for this problem starting with p = 0 and the empty assignment. In
the first iteration source 1 is chosen, we have m̄ = N, m̂ = 0, j̄ = 1 and, under
Case 1, link (1, 1) is added to the assignment while price p_1 is increased to N. In
the second iteration source 2 is chosen, we have m̄ = N − 1, m̂ = 0, j̄ = 2 and,
under Case 1, link (2, 2) is added to the assignment while price p_2 is increased to
N − 1. Continuing in this manner we see that at the kth iteration link (k, k) will be
added to the assignment and the price p_k is increased to N − k + 1. Thus the algorithm
terminates in N iterations and its computation time for this problem is O(N²).
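The N-iteration run just described can be simulated directly. Since every iteration on this instance falls under Case 1, a few lines suffice; the sketch below assumes exactly that (and uses 0-indexed sinks, so p_k ends at N − k rather than N − k + 1):

```python
def auction_case1_only(a):
    """Run the algorithm taking unassigned sources in order; on this triangular
    instance every iteration falls under Case 1, so no labeling is needed."""
    n = len(a)
    p = [0] * n
    owner = [None] * n          # owner[j] = source currently assigned to sink j
    iterations = 0
    for i in range(n):          # each source is assigned on its first (only) bid
        margins = [a[i][j] - p[j] for j in range(n)]
        j = max(range(n), key=lambda k: margins[k])
        m_hat = max(mar for k, mar in enumerate(margins) if k != j)
        p[j] += margins[j] - m_hat
        owner[j] = i
        iterations += 1
    return p, owner, iterations

N = 6
# Lower triangular weights: a_ij = N - j + 1 for j <= i (1-indexed in the
# text), zero above the diagonal; 0-indexed here.
a = [[N - j if j <= i else 0 for j in range(N)] for i in range(N)]
p, owner, iterations = auction_case1_only(a)
print(iterations, p)
```

Each source i outprices sink i in one bid, so the loop performs exactly N iterations, in line with the O(N²) count above.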
If we apply the Hungarian method of the next section to the same problem
with the same initial conditions we find that at every iteration except the first all
sources will be scanned, leading to a computation time O(N³), essentially N
times slower than with our method. This type of example does not depend on the
initial prices as much as it may appear, since if the standard initialization
procedure of the Hungarian method were adopted (see next section), then by
adding a row of the form [N, N, ..., N] and a column consisting of zeros in all
positions except the last to the assignment matrix, the computation times of the
two methods remain essentially unchanged for large N.
In analyzing the success of our method in this example we find that it is due to
the fact that by contrast with the Hungarian method, it tends to increase prices
by as large increments as is allowed by the complementary slackness constraint.
Thus in the first iteration p_1 is increased by N. This has the effect of outpricing
sink 1 relative to the other sinks, in the sense that the price of sink 1 is increased
so much that, together with source 1, it plays no further role in the problem.
Thus in effect after the first iteration we are dealing with a problem of lower
dimension. By contrast the Hungarian method in the first iteration will add link
(1, 1) to the assignment but will not change its price from zero. As a result source
1 and sink 1 are labeled and scanned at every subsequent iteration. Outpricing
sink 1 has another important effect namely it allows a large price increase and
attendant outpricing for sink 2 at the second iteration. This in turn allows
outpricing sink 3 at the third iteration and so on. This illustrates that outpricing
has the character of a chain phenomenon whereby outpricing of some sinks
enhances subsequent outpricing of other sinks.
The preceding example is obviously extreme and presents our method in the
most favorable light. If, for example, the first row contained some non-zero
elements other than N, the change in the price p_1 at the first iteration would be
smaller than N. In this case the effect of outpricing, while still beneficial, would
not be as pronounced and it would drive source 1 and sink 1 out of the problem
only temporarily, until the prices of other sinks increase to comparable levels.
While we found that the algorithm of this section performs on the average
substantially better than the Hungarian method for randomly generated problems,
we often observed a pattern whereby the algorithm would very quickly
assign most sinks but would take a disproportionately large number of iterations
to assign the last few sinks. For example for N = 100 we observed some cases
where 75% of the iterations were spent assigning the last two or three sinks.
Predictably in view of the complexity estimate (15) this typically occurred for
large values of R (over 100). This points to the apparent fact that the beneficial
effect of outpricing is most pronounced in the initial and middle phases of the
algorithm but sometimes tends to be exhausted when there are only a few
unassigned sources. The remedy suggesting itself is to combine the algorithm
with some form of the Hungarian method, so that if the algorithm does not make
sufficiently fast progress a switch is made to the Hungarian method. One of the
m_i^0 + p_j^0 ≥ a_ij,  ∀(i, j) ∈ L.

For k = 0, 1, ..., N − 1, given (m^k, p^k, X^k) satisfying for all i ∈ S

m_i^k + p_j^k = a_ij  if (i, j) ∈ X^k,

m_i^k + p_n^k ≥ a_in,  ∀(i, n) ∈ L,¹

Step 0: Give the label '0' to all unassigned sources under X^k. Set π_j = ∞,
j = 1, ..., N.
Step 1 (Labeling): Find a source i with an unscanned label and go to Step 1a,
or find a sink j with an unscanned label and π_j = 0 and go to Step 1b. If no such
source or sink can be found, go to Step 3.
Step 1a: Scan the label of source i as follows. For each (i, j) ∈ L for which
m_i^k + p_j^k − a_ij < π_j give node j the label 'i' (replacing any existing label) and set
π_j ← m_i^k + p_j^k − a_ij. Return to Step 1.

¹ Actually throughout the algorithm the stronger condition m_i^k = max{a_in − p_n^k | (i, n) ∈ L} holds for all
k and i ∈ S. We state the algorithm in this form in order to emphasize that (m^k, p^k, X^k) need only satisfy
the same conditions as in the algorithm of the previous section, thereby simplifying the transition from
one algorithm to the other.
Step 1b: Scan the label on the sink j with π_j = 0 as follows. If j is unassigned
under X^k go to Step 2. Otherwise identify the unique source i with (i, j) ∈ X^k,
and give i the label 'j'. Return to Step 1.
Step 2 (Augmentation): An augmenting path has been found that alternates
between sources and sinks, originating at a source unassigned under X^k and
terminating at the sink j identified in Step 1b. The path is generated by
'backtracing' from label to label starting from the terminating sink j. Add to X^k
all links on the augmenting path that are not in X^k and remove from X^k all links
on the augmenting path that are in X^k. This gives the next assignment X^{k+1}. Set
m^{k+1} = m^k, p^{k+1} = p^k. (Note that m^k and p^k may have been changed through Step
3.) This completes the iteration of the algorithm.
Step 3 (Change of dual variables): Find

δ = min{π_j | j ∈ T, π_j > 0}.    (16)

Set

m_i ← m_i − δ  for all i ∈ S that are labeled,
p_j ← p_j + δ  for all j ∈ T with π_j = 0,
π_j ← π_j − δ  for all j ∈ T that are labeled and π_j > 0,

and go to Step 1.
Notice that the labeling procedure will terminate only upon finding an aug-
menting path at Step 2 and therefore at each iteration the cardinality of the
current assignment is increased by one. Thus X^N has cardinality N and is an
optimal assignment. It can be shown that the worst case computational complexity
of this algorithm is O(N³).
As already discussed, for R > 100 it appears advantageous to combine our
new algorithm with the Hungarian method. A switch from the new algorithm to
the Hungarian method is very simple in view of the similarities of the two
methods. We have used the following scheme in our experiments. There are
several other possibilities along similar lines.
We are making use of two lists of unassigned sources during execution of the
algorithm. Each unassigned source is contained in one and only one of the two
lists. We select at each iteration the unassigned source which is at the top of the
first list. If in that iteration a new source becomes unassigned (Case 1, j̄
assigned, or Case 2, Step 3) this source is placed at the bottom of the second list.
Initially the first list contains all sources and the second list is empty. As the
algorithm proceeds the size of the first list decreases while the size of the second
list increases. When the first list is emptied the contents of the second list are
transferred to the first and the second list becomes empty. We refer to the
portion of the algorithm between the time that the first list is full to the time it is
empty as a cycle. At the end of each cycle we compare the number of sources in
the second list with the number of sources contained in the first list at the
beginning of the cycle. If they are the same (implying that no augmentation
occurred during the cycle) a counter, initially set at zero, is incremented by one.
The counter is also incremented by one if during the cycle Case 2, Step 3 was
reached more than a fixed prespecified number of times (4 in our experiments)
with the number of labeled sources being more than a fixed prespecified number
(10 in our experiments).² At the point where the counter exceeds a prespecified
threshold value a switch is made to the Hungarian method of the previous
section. The threshold value was set at 0.1N in all of our experiments, but the
average performance of the algorithm seems fairly insensitive to this value
within broad limits. It is a straightforward but tedious exercise to show that the
complexity of this combined algorithm is bounded by O(N³). The proof essentially
consists of showing that at most O(N³) operations are necessary before a
switch to the Hungarian method takes place. In almost all the problems we
solved, the great majority (95-100%) of sinks were assigned by the new
algorithm and the remainder by the Hungarian method after a switch was made.
This was particularly true for small values of R when for most problems a
switch to the Hungarian method was not necessary.
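The two-list bookkeeping and the switching counter can be sketched schematically. In the sketch below, `iterate` is a stand-in for one iteration of the new algorithm that only reports which source, if any, was driven out of the assignment; the second counter condition (on Case 2, Step 3 occurrences) is omitted for brevity, and all names are ours:

```python
from collections import deque

def run_with_switch(sources, iterate, threshold):
    """Schematic of the two-list scheme: unassigned sources are served from
    list1; a source displaced during an iteration joins list2.  When list1
    empties, a cycle ends: if no augmentation occurred (list2 grew as long as
    list1 was), a counter is bumped; past the threshold we switch methods."""
    list1, list2 = deque(sources), deque()
    counter = 0
    while list1:
        cycle_start = len(list1)
        while list1:
            i = list1.popleft()
            displaced = iterate(i)          # source driven out, or None
            if displaced is not None:
                list2.append(displaced)
        if len(list2) == cycle_start:       # no augmentation in this cycle
            counter += 1
        if counter > threshold:
            return 'hungarian', counter     # switch to the Hungarian method
        list1, list2 = list2, deque()       # refill list1 for the next cycle
    return 'done', counter

# Toy dynamics: sources 0-3; sources 2 and 3 keep displacing each other, so
# after the first cycle no augmentation ever occurs and the counter climbs.
trades = {2: 3, 3: 2}
outcome, c = run_with_switch([0, 1, 2, 3], lambda i: trades.get(i), threshold=2)
print(outcome, c)
```

The toy dynamics force the stalling pattern described above: once only the trading pair remains, every cycle ends without an augmentation and the switch is eventually triggered.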
Finally, regarding initialization, we have in all cases chosen X^0 = empty
assignment, and p_j^0 = 0, m_i^0 = R for all i and j. However, at the end of the first
cycle (i.e. at the end of the Nth iteration) the prices of all unassigned sinks j are
changed from p_j^N = 0 to

p_j^N = max{a_ij − m_i^N | i assigned under X^N}.
The remaining prices and all values m_i^N are left unchanged. This is in effect an
initialization procedure quite similar to the one for the Hungarian method given
earlier. Its purpose is to reduce the prices of the unassigned sinks as much as
possible without violating the complementary slackness constraint. It has worked
quite well in our experiments.
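This re-initialization amounts to a one-line maximization per unassigned sink. A sketch with made-up data (the function name and the numbers are ours):

```python
def reinit_prices(a, m, assigned, p):
    """End-of-first-cycle re-initialization: for each sink j left unassigned,
    drop its price to the largest value still compatible with dual feasibility
    m_i + p_j >= a_ij over the sources i assigned under X^N."""
    taken = set(assigned.values())
    for j in range(len(p)):
        if j not in taken:
            p[j] = max(a[i][j] - m[i] for i in assigned)
    return p

# Hypothetical 2x3 data: sources 0 and 1 hold sinks 0 and 1; sink 2 is free.
a = [[5, 2, 4],
     [3, 6, 1]]
assigned = {0: 0, 1: 1}                  # source -> sink under X^N
p = reinit_prices(a, [5, 6], assigned, [0, 0, 0])
print(p)
```

Note that the new price of a free sink may well be negative; feasibility only requires m_i + p_j ≥ a_ij on every link.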
Tables 1 and 2 show the results of our computational experiments with
randomly generated full dense N × N problems. Each entry represents an
average over five problems, which were the same for all three methods and for
each N. The weights were chosen from a uniform distribution over [0, 1] and
subsequent multiplication by R (Table 1), or from a normal distribution N(0, 1)
and subsequent multiplication by R (Table 2). They were then truncated to the
nearest integer. The programs were written in Fortran and compiled with the
optimizing compiler in the OPT = 2 mode. The times given in the top entry of
each cell refer to the IBM 370/168 at M.I.T. We give in the bottom entry of each
cell the average number of sources scanned for each method (Case 1 in the new
algorithm corresponds to one source scanned). The average computation time
per source scanned does not differ much from one method to another, so the
² Actually this last device does not seem to play an important role for practical purposes. It was
introduced in order to make possible a proof of an O(N³) complexity bound for the combined
algorithm.
Table 1. [Printed rotated in the original; entries not recoverable in this extraction. Same format as Table 2, with weights chosen by uniform distribution over [0, 1] and subsequent multiplication by R.]
Table 2
Top entry in each cell = time in secs on IBM 370. Bottom entry = number of sources scanned.
Average over five N × N full dense problems with weights chosen by normal distribution
N(0, 1) and subsequent multiplication by R and truncation to the nearest integer.
100   0.419  0.455  0.487   0.091  0.091  0.094   0.103  0.118  0.113
      1317   1383   1447    285    288    283     326    388    395
150   1.25   1.40   1.53    0.260  0.267  0.292   0.342  0.425  0.420
      2868   3128   3265    570    599    601     728    929    1024
200   2.78   3.07   3.43    0.492  0.486  0.533   0.603  1.00   0.800
      4975   5395   5700    808    819    852     975    1607   1536
the Hungarian method. Computation times were not recorded as these are
meaningless in the absence of special data structure techniques exploiting
sparsity. However, when such techniques are implemented the comparison of
computation times should favor our algorithm even more, since Step 3 (Case 2)
of our algorithm (which is relatively time consuming for sparse problems) is
executed far less frequently than the corresponding Step 3 of the Hungarian
method.
As a final comparison with existing methodology it is worth observing that the
computation time of Table 1 for the combined method and 200 × 200 problems
with weights in the range [0, 100] is 0.526 seconds. There are five 200 ×
200 NETGEN benchmark assignment problems with weights in the range [0, 100]
that have been solved by a number of presently available codes. The best
solution times achieved range from 0.96 to 1.68 secs on a CDC 6600 [3, 7] and
0.38 to 0.90 secs on an IBM 370/168 [12]. Making an adjustment for the
advantage in speed of the IBM 370 over the CDC 6600 we conclude that our time
is comparable ([14] gives an advantage in speed of 5 to 6 for the IBM 370 over
the CDC 6600 for network problems, although there has been some question on
the accuracy of this figure). Yet the NETGEN problems are only 3 to 12% dense
while our time corresponds to 100% dense problems. Since existing codes are
constantly improved, these figures do not constitute definite proof that our
algorithm is superior to other algorithms based on simplex or primal-dual
methods. They do, however, suggest that our algorithm can provide, with the aid
of sophisticated programming techniques, the basis for improved codes for
assignment.
Acknowledgment
The assistance of Eli Gafni with the computational experiments, as well as helpful
discussions, is gratefully acknowledged.
References
[1] E. Lawler, Combinatorial optimization: networks and matroids (Holt, Rinehart and Winston,
1976).
[2] H.W. Kuhn, "The Hungarian method for the assignment problem", Naval Research Logistics
Quarterly 2 (1955) 83-97.
[3] R.S. Barr, F. Glover and D. Klingman, "The alternating basis algorithm for assignment
problems", Mathematical Programming 13 (1977) 1.
[4] G.H. Bradley, G.G. Brown and G.W. Graves, "Design and implementation of large scale primal
transshipment algorithms", Management Science 24 (1977) 1.
[5] R.V. Helgason and J.L. Kennington, "NETFLOW program documentation", Technical report
IEOR 76011, Department of Industrial Engineering and Operations Research, Southern
Methodist University (1976).
[6] R.S. Hatch, "Bench marks comparing transportation codes based on primal simplex and
primal-dual algorithms", Operations Research 23 (1975) 1167.
[7] L.F. McGinnis, "Implementation and testing of a primal-dual algorithm for the assignment
problem", Industrial and Systems Engineering report series No. J-78-31, Georgia Institute of
Technology (November 1978).
[8] A. Weintraub and F. Barahona, "A dual algorithm for the assignment problem", Departamento
de Industrias Report No. 2, Universidad de Chile-Sede Occidente (April 1979).
[9] F. Glover and D. Klingman, "Comment on a note by Hatch on network algorithms" Operations
Research 26 (1978) 370.
[10] J. Edmonds and R. Karp, "Theoretical improvements in algorithmic efficiency for network
flow problems", Journal of the Association for Computing Machinery 19 (1972) 248-264.
[11] D.P. Bertsekas, "An algorithm for the Hitchcock transportation problem", Proceedings of the 18th
Allerton conference on communication, control and computing, Allerton Park, Ill. (October 1979).
[12] M.D. Grigoriadis, Talk at Mathematical Programming Symposium, Montreal, August 1979 (also
private communication).