A First Course in Linear Optimization
— a dynamic book —
by
Jon Lee

ReEx PrEsS
2013–2018
http://creativecommons.org/licenses/by/3.0/
where you will see the summary information and can click through to the full license information.
Go Forward
This is a book on linear optimization, written in LaTeX. I started it, aiming it at the course IOE 510, a masters-level course at the University of Michigan. Use it as is, or adapt it to your course! It is an ongoing project. It is alive! It can be used, modified (the LaTeX source is available) and redistributed as anyone pleases, subject to the terms of the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). Please take special note that you can share (copy and redistribute in any medium or format) and adapt (remix, transform, and build upon for any purpose, even commercially) this material, but you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests that I endorse you or your use. If you are interested in endorsements, speak to my agent.

I started this material, but I don’t control so much what you do with it. Control is sometimes overrated — and I am a control freak, so I should know!

I hope that you find this material useful. If not, I am happy to refund what you paid to me.
Jon Lee
Preface
This book is a treatment of linear optimization meant for students who are reasonably comfortable with matrix algebra (or willing to get comfortable rapidly). It is not a goal of mine to teach anyone how to solve small problems by hand. My goals are to introduce: (i) the mathematics and algorithmics of the subject at a beginning mathematical level, (ii) algorithmically-aware modeling techniques, and (iii) high-level computational tools for studying and developing optimization algorithms (in particular, MATLAB and AMPL).
Proofs are given when they are important in understanding the algorithmics. I make free use of the inverse of a matrix. But it should be understood, for example, that B^{-1}b is meant as a mathematical expression for the solution of the square linear system of equations Bx = b . I am not in any way suggesting that an efficient way to calculate the solution of a large (often sparse) linear system is to calculate an inverse! Also, I avoid the dual simplex algorithm (e.g., even in describing branch-and-bound and cutting-plane algorithms), preferring to just think about the ordinary simplex algorithm applied to the dual problem. Again, my goal is not to describe the most efficient way to do matrix algebra!
Illustrations are woefully few. Though if Lagrange could not be bothered, who am I to aim higher? Still, I am gradually improving this aspect, and many of the algorithms are illustrated in the modern way, with computer code.
The material that I present was mostly well known by the 1960’s. As a student at Cornell in the late 70’s and early 80’s, I learned and got excited about linear optimization from Bob Bland, Les Trotter and Lou Billera, using [1] and [5]. The present book is a treatment of some of that material, with additional material on integer-linear optimization, most of which I originally learned from George Nemhauser and Les. But there is new material too; in particular, a “deconstructed post-modern” version of Gomory pure and mixed-integer cuts. There is nothing here on interior-point algorithms and the ellipsoid algorithm; don’t tell Mike Todd!
Jon Lee
Serious Acknowledgments
Throw me some serious funding for this project, and I will acknowledge you — seriously!

Many of the pictures in this book were found floating around on the web. I am making “fair use” of them as they float through this document. Of course, I gratefully acknowledge those who own them.

Hearty thanks to many students and to Prof. Siqian Shen for pointing out typos in an earlier version.
Dedication
For students (even Ohio students). Not for publishers — not this time. Maybe next time.
The Nitty Gritty
You can always get the released edition of this book (in .pdf format) from my web page or github, and the materials to produce them (LaTeX source, etc.) from me.

If you decide that you need to recompile the LaTeX to change something, then you should be aware of two things. First, there is no user’s manual nor help desk. Second, I use a lot of include files, for incorporating code listings and output produced by software like Matlab and AMPL. Make sure that you understand where everything is pulled from if you recompile the LaTeX. In particular, if you change anything in the directory that the LaTeX file is in, something can well change in the .pdf output. For example, Matlab and AMPL scripts are pulled into the book, and likewise for the output of those scripts. If you want to play, and do not want to change those parts of the book, play in another directory.
I make significant use of software. Everything seems to work with:
MATLAB R2016b
AMPL 20170711
CPLEX 12.7.1.0
Mathematica 11.1.0.0
WinEdt 10.1
MiKTeX 2.9
Use of older versions is inexcusable. Newer versions will surely break things. Nonetheless, if you can report success or failure on newer versions, please let me know.
I use lots of LaTeX packages (which, as you may know, makes things rather fragile). I could not possibly gather the version numbers of those — I do have a day job! (but I do endeavor to keep my packages up to date).
Contents
2 Modeling
  2.1 A Production Problem
  2.2 Norm Minimization
  2.3 Network Flow
  2.4 An Optimization Modeling Language
  2.5 Exercises

5 Duality
  5.1 The Strong Duality Theorem
  5.2 Complementary Slackness
  5.3 Duality for General Linear-Optimization Problems
  5.4 Theorems of the Alternative
  5.5 Exercises

6 Sensitivity Analysis
  6.1 Right-Hand Side Changes
      6.1.1 Local analysis
      6.1.2 Global analysis
      6.1.3 A brief detour: the column geometry for the Simplex Algorithm
  6.2 Objective Changes
      6.2.1 Local analysis
      6.2.2 Global analysis
      6.2.3 Local sensitivity analysis with a modeling language
  6.3 Exercises

Appendices
  A.1 LaTeX template
  A.2 MATLAB for deconstructing the Simplex Algorithm
  A.3 MATLAB for Gomory cuts
  A.4 AMPL for uncapacitated facility-location problem

Bibliography
• Review ideas from linear algebra that we will make use of later.
Definition 1.1
A solution of a linear-optimization problem is an assignment of real values to the variables. A solution is feasible if it satisfies the linear constraints. A solution is optimal if there is no feasible solution with better objective value. The set of feasible solutions (which is a polyhedron) is the feasible region.
min  c'x
     Ax = b ;
     x ≥ 0 ,
• Next, if we have an inequality Σ_{j=1}^n α_j x_j ≤ γ , we simply replace it with Σ_{j=1}^n α_j x_j + s = γ , where a real slack variable s is introduced which is constrained to be non-negative. Similarly, we can replace Σ_{j=1}^n α_j x_j ≥ γ with Σ_{j=1}^n α_j x_j − s = γ , where a real surplus variable s is introduced which is constrained to be non-negative.
max  y'b
     y'A ≤ c' .        (D)

It is worth emphasizing that (P) and (D) are both defined from the same data A , b and c . We have the following very simple but key result, relating the objective values of feasible solutions of the two linear-optimization problems.

Weak Duality Theorem. If x̂ is feasible for (P) and ŷ is feasible for (D), then c'x̂ ≥ ŷ'b .

Proof.
c'x̂ ≥ ŷ'Ax̂ ,
because ŷ'A ≤ c' (feasibility of ŷ in (D)) and x̂ ≥ 0 (feasibility of x̂ in (P)). Furthermore,
ŷ'Ax̂ = ŷ'b ,
because Ax̂ = b (feasibility of x̂ in (P)). Therefore c'x̂ ≥ ŷ'b .
1.3 Linear-Algebra Review

For a matrix A ∈ R^{m×n} , we denote the entry in row i and column j as a_{ij} . For a matrix A ∈ R^{m×n} , we denote the transpose of A by A' ∈ R^{n×m} . That is, the entry in row i and column j of A' is a_{ji} .

Except when we state clearly otherwise, vectors are “column vectors.” That is, we can view a vector x ∈ R^n as a matrix in R^{n×1} . Column j of A is denoted by A_{·j} ∈ R^m . Row i of A is denoted by A_{i·} , and we view its transpose as a vector in R^n . We will have far greater occasion to reference columns of matrices rather than rows, so we will often write A_j as a shorthand for A_{·j} , so as to keep notation less cluttered.
For matrices A ∈ R^{m×p} and B ∈ R^{p×n} , the (matrix) product AB ∈ R^{m×n} is defined to be the matrix having Σ_{k=1}^p a_{ik} b_{kj} as the entry in row i and column j . Note that for the product AB to make sense, the number of columns of A and the number of rows of B must be identical. It is important to emphasize that matrix multiplication is associative; that is, (AB)C = A(BC) , and so we can always unambiguously write the product of any number of matrices without the need for any parentheses. Also, note that the product and transpose behave nicely together. That is, (AB)' = B'A' .

The dot product or scalar product of vectors x, z ∈ R^n is the scalar ⟨x, z⟩ := Σ_{j=1}^n x_j z_j , which we can equivalently see as x'z or z'x , allowing ourselves to consider a 1 × 1 matrix to be viewed as a scalar. Thinking about matrix multiplication again, and freely viewing columns as vectors, the entry in row i and column j of the product AB is the dot product ⟨(A_{i·})', B_{·j}⟩ .
Matrix multiplication extends to “block matrices” in a straightforward manner. If
\[
A := \begin{bmatrix} A_{11} & \cdots & A_{1p}\\ A_{21} & \cdots & A_{2p}\\ \vdots & \ddots & \vdots\\ A_{m1} & \cdots & A_{mp} \end{bmatrix}
\quad\text{and}\quad
B := \begin{bmatrix} B_{11} & \cdots & B_{1n}\\ B_{21} & \cdots & B_{2n}\\ \vdots & \ddots & \vdots\\ B_{p1} & \cdots & B_{pn} \end{bmatrix},
\]
where each of the A_{ij} and B_{ij} are matrices, and we assume that for all i and j the number of columns of A_{ik} agrees with the number of rows of B_{kj} , then
\[
AB = \begin{bmatrix}
\sum_{k=1}^p A_{1k}B_{k1} & \cdots & \sum_{k=1}^p A_{1k}B_{kn}\\
\sum_{k=1}^p A_{2k}B_{k1} & \cdots & \sum_{k=1}^p A_{2k}B_{kn}\\
\vdots & \ddots & \vdots\\
\sum_{k=1}^p A_{mk}B_{k1} & \cdots & \sum_{k=1}^p A_{mk}B_{kn}
\end{bmatrix}.
\]
That is, block i, j of the product is Σ_{k=1}^p A_{ik} B_{kj} , and A_{ik} B_{kj} is understood as ordinary matrix multiplication.
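As a quick numerical illustration (a MATLAB sketch; the block sizes below are arbitrary choices, not from the text), multiplying block-by-block agrees with multiplying the assembled matrices:

% Block matrix multiplication agrees with ordinary multiplication.
% The block sizes below are arbitrary illustrative choices.
rng(0);
A11 = rand(2,3); A12 = rand(2,1);
A21 = rand(1,3); A22 = rand(1,1);
B11 = rand(3,2); B12 = rand(3,4);
B21 = rand(1,2); B22 = rand(1,4);
A = [A11 A12; A21 A22];          % assemble A from its blocks
B = [B11 B12; B21 B22];          % assemble B from its blocks
AB_blocks = [A11*B11 + A12*B21,  A11*B12 + A12*B22;
             A21*B11 + A22*B21,  A21*B12 + A22*B22];
norm(A*B - AB_blocks)            % should be zero (up to roundoff)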
For vectors x1, x2, . . . , xp ∈ R^n , and scalars λ_1, λ_2, . . . , λ_p , the vector Σ_{i=1}^p λ_i xi is a linear combination of x1, x2, . . . , xp . The linear combination is trivial if all λ_i = 0 . The vectors x1, x2, . . . , xp ∈ R^n are linearly independent if the only representation of the zero vector in R^n as a linear combination of x1, x2, . . . , xp is trivial. The set of all linear combinations of x1, x2, . . . , xp is the vector-space span of {x1, x2, . . . , xp} . The dimension of a vector space V , denoted dim(V) , is the maximum number of linearly-independent vectors in it. Equivalently, it is the minimum number of vectors needed to span the space.

A set of dim(V) linearly-independent vectors that spans a vector space V is a basis for V . If V is the vector-space span of {x1, x2, . . . , xp} , then there is a subset of {x1, x2, . . . , xp} that is a basis for V . It is not hard to prove the following very useful result.
The span of the rows of a matrix A ∈ R^{m×n} is the row space of A , denoted r.s.(A) := {y'A : y ∈ R^m} . Similarly, the span of the columns of a matrix A is the column space of A , denoted c.s.(A) := {Ax : x ∈ R^n} . It is a simple fact that, for a matrix A , the dimension of its row space and the dimension of its column space are identical, this common number being called the rank of A . The matrix A has full row rank if its number of rows is equal to its rank. That is, if its rows are linearly independent. Similarly, the matrix A has full column rank if its number of columns is equal to its rank. That is, if its columns are linearly independent.

Besides the row and column spaces of a matrix A ∈ R^{m×n} , there is another very important vector space associated with A . The null space of A is the set of vectors having 0 dot product with all rows of A , denoted n.s.(A) := {x ∈ R^n : Ax = 0} .

An important result is the following theorem relating the dimensions of the row and null spaces of a matrix.

dim(r.s.(A)) + dim(n.s.(A)) = n .
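A small MATLAB sketch illustrating this relationship on random data (the dimensions are arbitrary illustrative choices):

% Rank and null-space dimension of a random matrix illustrate
% dim(r.s.(A)) + dim(n.s.(A)) = n.
rng(4);
m = 3; n = 7;
A = rand(m,n);                  % almost surely has rank m
r = rank(A);                    % dimension of the row (and column) space
N = null(A);                    % orthonormal basis for the null space
[r, size(N,2), n]               % r + size(N,2) should equal n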
There are some simple operations on a matrix that preserve its row and null spaces.
The following operations are elementary row operations:
1. multiply a row by a non-zero scalar;
A = [A1 , A2 , . . . , An ] .
[Ir , M ] .
[B, Ir ]
\[
\left(B + uv'\right)^{-1} = B^{-1} - \frac{B^{-1}uv'B^{-1}}{1 + v'B^{-1}u}\,,
\]
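As a quick numerical sanity check of this rank-one update identity, here is a MATLAB sketch; the matrix and vectors are arbitrary illustrative choices, not data from the text:

% Numerical check of the rank-one update (Sherman-Morrison) formula.
% B, u, v are arbitrary illustrative data.
rng(1);
r = 4;
B = rand(r) + r*eye(r);        % a well-conditioned invertible matrix
u = rand(r,1);
v = rand(r,1);
lhs = inv(B + u*v');           % direct inverse of the updated matrix
rhs = inv(B) - (inv(B)*u*v'*inv(B)) / (1 + v'*inv(B)*u);
norm(lhs - rhs)                % should be tiny (roundoff only)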
choice of j (taken at each step of the recursion). Moreover, we have det(B') = det(B) , so we could as well choose any fixed row i of B , and we have
\[
\det(B) = \sum_{j=1}^{r} (-1)^{i+j}\, b_{ij} \det(B^{ij})\,,
\]
[B, I_r]
to obtain
[I_r, B^{-1}] .
As we carry out the elementary row operations, we sometimes multiply a row by a non-zero scalar. If we accumulate the product of all of these multipliers, the result is det(B^{-1}) ; equivalently, the reciprocal is det(B) .
Finally, for an invertible r × r matrix B and a vector b , we can express the unique solution x̄ of the system Bx = b via a formula involving determinants. Cramer’s rule is the following formula:
\[
\bar{x}_j = \frac{\det(B(j))}{\det(B)}\,, \quad\text{for } j = 1, 2, \ldots, r\,,
\]
where B(j) is defined to be the matrix B with its j-th column replaced by b . It is worth emphasizing that direct application of Cramer’s rule is not to be thought of as a useful algorithm for computing the solution of a system of equations. But it can be very useful to have in the proof toolbox.
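To see Cramer’s rule in action, here is a MATLAB sketch comparing it against backslash on a small random system (illustrative data only):

% Illustrating Cramer's rule on a small random system.
rng(2);
r = 4;
B = rand(r) + r*eye(r);          % an invertible r-by-r matrix
b = rand(r,1);
xbar = B\b;                      % the usual way to solve Bx = b
xcramer = zeros(r,1);
for j = 1:r
    Bj = B;
    Bj(:,j) = b;                 % replace column j of B by b
    xcramer(j) = det(Bj)/det(B); % Cramer's rule
end
norm(xbar - xcramer)             % should be tiny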
1.4 Exercises
Exercise 1.0 (Learn LaTeX)
Learn to use LaTeX for writing all of your homework solutions. Personally, I use MiKTeX, which is an implementation of LaTeX for Windows. Specifically, within MiKTeX, I am using pdfLaTeX (it only matters for certain things like including graphics and also pdf into a document). I find it convenient to use the editor WinEdt, which is very LaTeX friendly. A good book on LaTeX is
In A.1 there is a template to get started. Also, there are plenty of tutorials and beginner’s guides on the web.
max  c'x
     Ax ≤ b ;        (P′)
     x ≥ 0 .

HINT: Convert (P′) to a standard-form problem, and then apply the ordinary Weak Duality Theorem for standard-form problems.

min  c'x + f'w
     Ax + Bw ≤ b ;
     Dx = g ;        (P′)
     x ≥ 0 , w ≤ 0 .

HINT: Convert (P′) to a standard-form problem, and then apply the ordinary Weak Duality Theorem for standard-form problems.
n1 = 7;
n2 = 15;
m1 = 2;
m2 = 4;
rng('default');
rng(1); % set seed
A = rand(m1,n1);
B = rand(m1,n2);
D = rand(m2,n1);
options = optimoptions('linprog','Algorithm','dual-simplex');

if (exitflag < 1)
    disp('fail 1: LP did not have an optimal solution');
    return;
end;
Chapter 2

Modeling
at any non-negative level, as long as the resource availabilities are respected. We assume that any unused resource quantities have no value and can be disposed of at no cost. The problem is to find a profit-maximizing production plan. We can formulate this problem as the linear-optimization problem

max  c'x
     Ax ≤ b ;        (P)
     x ≥ 0 ,
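As a computational companion to this formulation, here is a hedged MATLAB sketch that solves a problem of this form with linprog; the numbers are invented for illustration and are not the data of the AMPL example appearing later in this chapter.

% A small production problem, max c'x s.t. Ax <= b, x >= 0, solved with
% linprog (which minimizes, so we negate c). Hypothetical data only.
c = [3; 2];                      % profit per unit of each product
A = [2 1; 1 3];                  % resource usage per unit produced
b = [10; 15];                    % resource availabilities
lb = zeros(2,1);
[x, negz, exitflag] = linprog(-c, A, b, [], [], lb, []);
if exitflag == 1
    fprintf('optimal profit z = %g\n', -negz);
    disp(x');
end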
2.2 Norm Minimization

“Norms” are very useful as a measure of the “size” of a vector. In some applications, we are interested in making the “size” small. There are many different “norms” (for example, the Euclidean norm), but two are particularly interesting for linear optimization.

For x ∈ R^n , the ∞-norm (or max-norm) of x is defined as ‖x‖_∞ := max{|x_j| : j = 1, 2, . . . , n} , and the 1-norm is ‖x‖_1 := Σ_{j=1}^n |x_j| . We can find an ∞-norm minimizing solution of the system of equations Ax = b via the linear-optimization problem

min  t
     t − x_i ≥ 0 ,  i = 1, 2, . . . , n ;
     t + x_i ≥ 0 ,  i = 1, 2, . . . , n ;
     Ax = b ,
Now, we would like to formulate the problem of finding a 1-norm minimizing solution of the system of equations Ax = b . This is quite easy, via the linear-optimization problem:

min  Σ_{j=1}^n t_j
     t_j − x_j ≥ 0 ,  j = 1, 2, . . . , n ;
     t_j + x_j ≥ 0 ,  j = 1, 2, . . . , n ;
     Ax = b ,

where t ∈ R^n is a vector of n auxiliary variables. Notice how the minimization “pressure” ensures that an optimal solution (x̂, t̂) has t̂_j = |x̂_j| , for j = 1, 2, . . . , n (again, this would not work for maximization!), and so we will have Σ_{j=1}^n t̂_j = ‖x̂‖_1 .
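Here is a hedged MATLAB sketch of this 1-norm reformulation using linprog on random data; stacking the variables as [x; t] is an implementation choice for the sketch, not notation from the text.

% 1-norm minimizing solution of Ax = b via auxiliary variables t with
% -t <= x <= t. Illustrative random data.
rng(3);
m = 5; n = 12;
A = rand(m,n);  b = rand(m,1);
f   = [zeros(n,1); ones(n,1)];          % minimize sum of t_j
Ain = [ eye(n) -eye(n);                 %  x - t <= 0
       -eye(n) -eye(n)];                % -x - t <= 0
bin = zeros(2*n,1);
Aeq = [A zeros(m,n)];  beq = b;         % Ax = b
[z, val] = linprog(f, Ain, bin, Aeq, beq);
x = z(1:n);  t = z(n+1:end);
[norm(x,1)  val]                        % the two numbers should agree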
2.3 Network Flow

A finite network G is described by a finite set of nodes N and a finite set A of arcs. Each arc e has two key attributes, namely its tail t(e) ∈ N and its head h(e) ∈ N . We think of a (single) commodity as being allowed to “flow” along each arc, from its tail to its head. Indeed, we have “flow” variables x_e , for e ∈ A , and given net supplies b_v , for v ∈ N . A flow is conservative if the net flow out of node v , minus the net flow into node v , is equal to the net supply at node v , for all nodes v ∈ N .
The (single-commodity min-cost) network-flow problem is to find a minimum-cost conservative flow that is non-negative and respects the flow upper bounds on the arcs. We can formulate this as follows:
\[
\begin{array}{ll}
\min & \displaystyle\sum_{e \in A} c_e x_e\\[1.5ex]
 & \displaystyle\sum_{e \in A:\, t(e)=v} x_e \;-\; \sum_{e \in A:\, h(e)=v} x_e \;=\; b_v\,, \quad \forall\, v \in N\,;\\[1.5ex]
 & 0 \le x_e \le u_e\,, \quad \forall\, e \in A\,.
\end{array}
\]
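Before turning to a modeling language, here is a hedged MATLAB sketch that solves a tiny instance of this network-flow problem directly with linprog; the network, costs, capacities and supplies are invented for illustration.

% A tiny min-cost network-flow instance solved as an LP. All data invented.
tails = [1 1 2 3 2];             % t(e) for each arc e
heads = [2 3 3 4 4];             % h(e) for each arc e
c = [2 4 1 3 6]';                % arc costs c_e
u = [4 3 3 5 5]';                % arc capacities u_e
bsupply = [5 0 0 -5]';           % net supplies b_v (they sum to zero)
nN = 4;  nA = numel(tails);
M = zeros(nN, nA);               % node-arc incidence-style matrix
for e = 1:nA
    M(tails(e), e) =  1;         % flow out of the tail
    M(heads(e), e) = -1;         % flow into the head
end
[x, cost] = linprog(c, [], [], M, bsupply, zeros(nA,1), u);
disp(x');  disp(cost);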
2.4 An Optimization Modeling Language
For a simple case, we consider the Production Problem of Section 2.1. Note that (i) on each line of the AMPL input files, anything after a symbol “#” is treated as a comment, and (ii) commands are terminated with “;”. The *.mod file can be production.mod :
At the University of Michigan, College of Engineering, we have AMPL with the solver
CPLEX installed on the CAEN (Computer Aided Engineering Network: caen.engin.umich.edu)
machines running Red Hat Linux.
We check that we have the three files needed to run our Production Problem:
caen-vnc02% ls
Next, we invoke AMPL from a command prompt, and invoke the production.run
file:
caen-vnc02% ampl
ampl: include production.run;
z = 12.125
x(1) = 3.375
x(2) = 1.000
ampl: quit;
caen-vnc02%
set NODES;
param b {NODES};
param c {ARCS};
minimize z:
sum {(i,j) in ARCS} c[i,j] * x[i,j];
param: NODES: b :=
1  12
2   6
3  -2
4   0
5  -9
6  -7 ;
We leave it to the gentle reader to devise an appropriate file flow.run (the optimal
objective function value is 25).
2.5 Exercises
Exercise 2.1 (Dual in AMPL)
Without changing the file production.dat , use AMPL to solve the dual of the Production
Problem example, as described in Section 2.1. You will need to modify production.mod
and production.run .
Exercise 2.2 (Sparse solution for linear equations with AMPL)
In some application areas, it is interesting to find a “sparse solution” — that is, one with
few non-zeros — to a system of equations Ax = b, on say the domain −1 ≤ xj ≤ +1,
for j = 1, 2, . . . , n.
It is empirically well known that a 1-norm minimizing solution is a good heuristic for finding a sparse solution. The moral justification of this is as follows. We define the indicator function I_{≠0} : R → R by
\[
I_{\neq 0}(w) := \begin{cases} 1, & w \neq 0;\\ 0, & w = 0. \end{cases}
\]
It is easy to see (make a graph) that f(w) := |w| is the “best convex function underestimator” of I_{≠0} on the domain [−1, 1]. So we can hope that minimizing Σ_{j=1}^n |x_j| comes close to minimizing Σ_{j=1}^n I_{≠0}(x_j) .
Using AMPL, try this idea out on several large examples, using 1-norm minimization
as a heuristic for finding a sparse solution.
HINT: To get an interesting example, try generating a random m × n matrix A of zeros and ones, perhaps m = 50 equations and n = 500 variables, maybe with probability 1/2 of an entry being equal to one. Next, choose a random z̃ ∈ R^n satisfying −1 ≤ z̃_j ≤ +1, for j = 1, 2, . . . , m/2, and z̃_j = 0 for j = m/2 + 1, . . . , n. Now let b := Az̃. In this way, you will know that there is a solution (i.e., z̃) with only m/2 non-zeros (which is already pretty sparse). Your 1-norm minimizing solution might in fact recover this solution, or it may be sparser, or perhaps less sparse.
We are given a set of tasks, numbered 1, 2, . . . , n that should be completed in the mini-
mum amount of time. For convenience, task 0 is a “start task” and task n + 1 is an “end
task”. Each task, except for the start and end task, has a known duration di . For con-
venience, let d0 := 0 . Any number of tasks can be carried out simultaneously, except
that there are precedences between tasks. Specifically, Ψi is the set of tasks that must
be completed before task i can be started. Let t0 := 0 , and for all other tasks i , let ti be
a decision variable representing its start time.
Formulate the problem, mathematically, as a linear-optimization problem. The ob-
jective should be to minimize the start time tn+1 of the end task. Then, model the prob-
lem with AMPL, make up some data, try some computations, and report on your results.
assumed that money obtained from cashing out the investment at the end of the planning horizon (that is, at the end of period T) is part of v^k_{t,T} . Note that at the start of time period t , the cash available is the external inflow of p_t , plus cash accumulated from all investment vehicles in prior periods that was not reinvested. Finally, assume that cash held over for one time period earns interest of q percent.
Formulate the problem, mathematically, as a linear-optimization problem. Then,
model the problem with AMPL, make up some data, try some computations, and report
on your results.
Chapter 3

Algebra Versus Geometry
matrix A_β := [A_{β_1}, A_{β_2}, . . . , A_{β_m}] is an invertible m × m matrix. The connection with the standard “linear-algebra basis” is that the columns of A_β form a “linear-algebra basis” for R^m . But for us, “basis” almost always refers to β.

We associate a basic solution x̄ ∈ R^n with the basic partition via:

x̄_η := 0 ∈ R^{n−m} ;
x̄_β := A_β^{-1} b ∈ R^m .

We can observe that x̄_β = A_β^{-1} b is equivalent to A_β x̄_β = b , which is the unique way to write b as a linear combination of the columns of A_β . Of course this makes sense, because the columns of A_β form a “linear-algebra basis” for R^m .
Note that every basic solution x̄ satisfies Ax̄ = b , because
\[
A\bar{x} = \sum_{j=1}^{n} A_j \bar{x}_j = \sum_{j\in\beta} A_j \bar{x}_j + \sum_{j\in\eta} A_j \bar{x}_j = A_\beta \bar{x}_\beta + A_\eta \bar{x}_\eta = A_\beta A_\beta^{-1} b + A_\eta 0 = b\,.
\]
A basic solution x̄ is a basic feasible solution if it is feasible for (P). That is, if x̄_β = A_β^{-1} b ≥ 0 .
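A minimal MATLAB sketch of computing a basic solution from a basic partition; the data and the choice of β below are hypothetical (any data with A_β invertible works):

% Compute the basic solution associated with a basic partition (beta, eta).
% A, b, beta are illustrative; any data with A(:,beta) invertible works.
rng(5);
m = 3; n = 6;
A = rand(m,n);  b = rand(m,1);
beta = [1 3 5];                      % basic indices (A(:,beta) invertible)
eta  = setdiff(1:n, beta);           % non-basic indices
xbar = zeros(n,1);
xbar(beta) = A(:,beta) \ b;          % x_beta = A_beta^{-1} b ; x_eta = 0
feasible = all(xbar(beta) >= 0);     % is it a basic *feasible* solution?
disp(xbar');  disp(feasible);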
It is instructive to have a geometry for understanding the algebra of basic solutions, but for standard-form problems, it is hard to draw something interesting in two dimensions. Instead, we observe that the feasible region of (P) is the solution set, in R^n , of

x_β + A_β^{-1} A_η x_η = A_β^{-1} b ;
x_β ≥ 0 , x_η ≥ 0 .
Example 3.1
For this system, it is convenient to draw pictures when n − m = 2 , for example n = 6 and m = 4 . In such a picture, the basic solution x̄ ∈ R^n maps to the origin x̄_η = 0 ∈ R^{n−m} , but other basic solutions (feasible and not) will map to other points.

Suppose that we have the data:
\[
A := \begin{bmatrix}
1 & 2 & 1 & 0 & 0 & 0\\
3 & 1 & 0 & 1 & 0 & 0\\
3/2 & 3/2 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 1
\end{bmatrix},
\]
b := (7, 9, 6, 33/10)' ,
β := (β_1, β_2, β_3, β_4) = (1, 2, 4, 6) ,
η := (η_1, η_2) = (3, 5) .
Then
\[
A_\beta = [A_{\beta_1}, A_{\beta_2}, A_{\beta_3}, A_{\beta_4}] = \begin{bmatrix}
1 & 2 & 0 & 0\\
3 & 1 & 1 & 0\\
3/2 & 3/2 & 0 & 0\\
0 & 1 & 0 & 1
\end{bmatrix},
\qquad
A_\eta = [A_{\eta_1}, A_{\eta_2}] = \begin{bmatrix}
1 & 0\\
0 & 0\\
0 & 1\\
0 & 0
\end{bmatrix},
\]
x_β = (x_1, x_2, x_4, x_6)' and x_η := (x_3, x_5)' . We can calculate
\[
A_\beta^{-1} A_\eta = \begin{bmatrix}
-1 & 4/3\\
1 & -2/3\\
2 & -10/3\\
-1 & 2/3
\end{bmatrix},
\qquad
A_\beta^{-1} b := (1, 3, 3, 3/10)'\,,
\]
and then we have plotted this in Figure 3.1. The plot has x_{η_1} = x_3 as the abscissa, and x_{η_2} = x_5 as the ordinate. In the plot, besides the non-negativity of the variables x_3 and x_5 , the four inequalities of A_β^{-1} A_η x_η ≤ A_β^{-1} b are labeled with their slack variables — these are the basic variables x_1, x_2, x_4, x_6 . The correct matching of the basic variables to the inequalities of A_β^{-1} A_η x_η ≤ A_β^{-1} b is simply achieved by seeing that the i-th inequality has slack variable x_{β_i} .

The feasible region is colored cyan, while basic feasible solutions project to green points and basic infeasible solutions project to red points. We can see that the basic solution associated with the current basis is feasible, because the origin (corresponding to the non-basic variables being set to 0) is feasible.
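The quantities in this example are easy to reproduce; here is a short MATLAB check using the data of Example 3.1:

% Reproduce the computations of Example 3.1.
A = [1 2 1 0 0 0; 3 1 0 1 0 0; 3/2 3/2 0 0 1 0; 0 1 0 0 0 1];
b = [7; 9; 6; 33/10];
beta = [1 2 4 6];  eta = [3 5];
Abeta = A(:,beta);  Aeta = A(:,eta);
Abeta \ Aeta        % should match the 4-by-2 matrix displayed above
Abeta \ b           % should be (1, 3, 3, 3/10)'
all(Abeta \ b >= 0) % the basic solution is feasible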
A set S ⊂ Rn is a convex set if it contains the entire line segment between every pair
of points in S . That is,
Figure 3.1: Feasible region projected into the space of non-basic variables
Theorem 3.2
Every basic feasible solution of standard-form (P) is an extreme point of its feasible
region.
If
x̄ = λx1 + (1 − λ)x2 , with x1 and x2 feasible for (P) and 0 < λ < 1 ,
then 0 = x̄_η = λx1_η + (1 − λ)x2_η and 0 < λ < 1 implies that x1_η = x2_η = 0 . But then A_β xi_β = b implies that xi_β = A_β^{-1} b = x̄_β , for i = 1, 2 . Hence x̄ = x1 = x2 (but we
Theorem 3.3
Every extreme point of the feasible region of standard-form (P) is a basic solution.
Taken together, these last two results give us the main result of this section.
Corollary 3.4
For a feasible point x̂ of standard-form (P), x̂ is extreme if and only if x̂ is a basic
solution.
z̄_η := e_j ∈ R^{n−m} ;
z̄_β := −A_β^{-1} A_{η_j} ∈ R^m .

So
A(x̂ + ε z̄) = b ,
for every feasible x̂ and every ε ∈ R . Moving a positive amount in the direction z̄ corresponds to increasing the value of x_{η_j} , holding the values of all other non-basic variables constant, and making appropriate changes in the basic variables so as to maintain satisfaction of the equation system Ax = b .
There is a related point worth making. We have just seen that for a given basic partition β, η , each of the n − m basic directions is in the null space of A — there is one such basic direction for each of the n − m choices of η_j . It is very easy to check that these basic directions are linearly independent — just observe that they are columns of the n × (n − m) matrix
\[
\begin{bmatrix} I \\ -A_\beta^{-1} A_\eta \end{bmatrix}.
\]
Because the dimension of the null space of A is n − m , these n − m basic directions form a basis for the null space of A .
Now, we focus on the basic feasible solution x̄ determined by the basic partition β, η . The basic direction z̄ is a basic feasible direction relative to the basic feasible solution x̄ if x̄ + ε z̄ is feasible, for sufficiently small positive ε ∈ R . That is, if

A_β^{-1} b − ε A_β^{-1} A_{η_j} ≥ 0 ,
x̄_β − ε Ā_{η_j} ≥ 0 ,
x̄_{β_i} − ε ā_{i,η_j} ≥ 0 ,
Theorem 3.5
For a standard-form problem (P), suppose that x̄ is a basic feasible solution relative to the basic partition β, η . Consider choosing a non-basic index η_j , with associated basic direction

z̄_η := e_j ∈ R^{n−m} ;
z̄_β := −A_β^{-1} A_{η_j} ∈ R^m .

Then z̄ is a feasible direction relative to x̄ if and only if x̄_{β_i} > 0 for every i such that ā_{i,η_j} > 0 .
If the basic direction z̄ is a ray, then we call it a basic feasible ray. We have already seen that Az̄ = 0 . Furthermore, z̄ ≥ 0 if and only if Ā_{η_j} := A_β^{-1} A_{η_j} ≤ 0 .
Therefore, we have the following result:

Theorem 3.6
The basic direction z̄ is a ray of the feasible region of (P) if and only if Ā_{η_j} ≤ 0 .

Recall, further, that z̄ is a basic feasible direction relative to the basic feasible solution x̄ if x̄ + ε z̄ is feasible, for sufficiently small positive ε ∈ R . Therefore, if z̄ is a basic feasible ray, relative to the basic partition β, η , and x̄ is the basic feasible solution relative to the same basic partition, then z̄ is a basic feasible direction relative to x̄ .
A ray ẑ of a convex set S is an extreme ray if we cannot write
Similarly to the correspondence between basic feasible solutions and extreme points for
standard-form problems, we have the following two results.
Theorem 3.7
Every basic feasible ray of standard-form (P) is an extreme ray of its feasible region.
Theorem 3.8
Every extreme ray of the feasible region of standard-form (P) is a positive multiple of
a basic feasible ray.
3.4 Exercises
Exercise 3.1 (Illustrate algebraic and geometric concepts)
Make a small example, say with six variables and four equations, to fully illustrate all
of the concepts in this chapter. In A.2, there are MATLAB scripts and functions that could
be very helpful here.
Exercise 3.3 (Extreme rays are positive multiples of basic feasible rays)
If you are feeling very ambitious, prove Theorem 3.8.
Exercise 3.4 (Dual basic direction — do this if you will be doing Exercise 4.2)
Let β, η be a basic partition for our standard-form problem (P). As you will see on the first page of the next chapter, we can associate with the basis β a dual solution

ȳ' := c'_β A_β^{-1}

of

max  y'b
     y'A ≤ c' .        (D)

It is easy to see that ȳ satisfies the constraints y'A_β ≤ c'_β (of (D)) with equality; that is, the dual constraints indexed from β are “active”.
Let us assume that ȳ is feasible for (D). Now, let β_ℓ be a basic index, and let w̄ := H_{ℓ·} be row ℓ of H := A_β^{-1} . Consider ỹ' := ȳ' − λ w̄ , and explain (with algebraic justification) what is happening to the activity of each constraint of (D), as λ increases. HINT: Think about the cases of (i) i = ℓ, (ii) i ∈ β, i ≠ ℓ, and (iii) j ∈ η .
Chapter 4

The Simplex Algorithm

ȳ' := c'_β A_β^{-1} .

Lemma 4.2
If β is a basis, then the primal basic solution x̄ (feasible or not) and the dual solution ȳ (feasible or not) associated with β have equal objective value.

Proof. The objective value of x̄ is c'x̄ = c'_β x̄_β + c'_η x̄_η = c'_β (A_β^{-1} b) + c'_η 0 = c'_β A_β^{-1} b . The objective value of ȳ is ȳ'b = c'_β A_β^{-1} b , and so the two objective values are equal. □
Definition 4.3
The vector of reduced costs associated with basis β is

c̄' := c' − ȳ'A = c' − c'_β A_β^{-1} A .

Lemma 4.4
The dual solution of (D) associated with basis β is feasible for (D) if
c̄_η ≥ 0 .

Proof. The condition c̄_η ≥ 0 is precisely
ȳ'A_η ≤ c'_η ,
and by the definition of ȳ we have ȳ'A_β = c'_β . So we have
ȳ'[A_β , A_η] ≤ (c'_β , c'_η) ,
or, equivalently,
ȳ'A ≤ c' .
Hence ȳ is feasible for (D). □

Proof. We have already observed that c'x̄ = ȳ'b for the pair of primal and dual solutions associated with the basis β . If these solutions x̄ and ȳ are feasible for (P) and (D), respectively, then by weak duality these solutions are optimal. □
We can also take (P) and transform it into an equivalent form that is quite revealing. Clearly, (P) is equivalent to

x_β + A_β^{-1} A_η x_η = A_β^{-1} b .

Using this equation to substitute for x_β in the objective function, we are led to the linear objective function
\[
c'_\beta A_\beta^{-1} b \;+\; \min\ \left(c'_\eta - c'_\beta A_\beta^{-1} A_\eta\right) x_\eta \;=\; c'_\beta A_\beta^{-1} b \;+\; \min\ \bar{c}'_\eta\, x_\eta\,,
\]
which is equivalent to the original one on the set of points satisfying Ax = b . In this equivalent form, it is now solely expressed in terms of x_η . Now, if c̄_η ≥ 0 , the best we could hope for in minimizing is to set x_η = 0 . But the unique solution having x_η = 0 is the basic feasible solution x̄ . So x̄ is optimal.
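Here is a hedged MATLAB sketch of this optimality test on a small invented problem: compute ȳ, the reduced costs, and check the sufficient optimality criterion c̄_η ≥ 0.

% Given a basis beta, compute the dual solution and reduced costs, and
% test the sufficient optimality criterion cbar_eta >= 0. Invented data.
A = [1 2 1 0; 3 1 0 1];  b = [7; 9];  c = [-2; -3; 0; 0];
beta = [1 2];  eta = [3 4];
xbar = zeros(4,1);
xbar(beta) = A(:,beta) \ b;              % primal basic solution
ybar = (A(:,beta)') \ c(beta);           % ybar' = c_beta' * inv(A_beta)
cbar = c' - ybar'*A;                     % reduced costs
primal_feasible = all(xbar >= 0);
dual_feasible   = all(cbar(eta) >= -1e-9);
if primal_feasible && dual_feasible
    fprintf('optimal basis; objective value = %g\n', c'*xbar);
end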
Example 4.6
This is a continuation of Example 3.1. In Figure 4.1, we have depicted the sufficient
optimality criterion, in the space of a particular choice of non-basic variables — not the
choice previously depicted. Specifically, we consider the equivalent problem
min  c̄'_η x_η
     Ā_η x_η ≤ x̄_β ;
     x_η ≥ 0 .
This plot demonstrates the optimality of β := (5, 4, 6, 3) (η := (2, 1)). The basic direc-
tions available from the basic feasible solution x̄ appear as standard unit vectors in the
space of the non-basic variables. The solution x̄ is optimal because c̄η ≥ 0 ; we can also
think of this as c̄η having a non-negative dot product with each of the standard unit
vectors, hence neither direction is improving.
If the sufficient optimality criterion is not satisfied, then we choose an ηj such that c̄ηj
is negative, and we consider solutions that increase the value of xηj up from x̄ηj = 0 ,
changing the values of the basic variables to insure that we still satisfy the equations
Ax = b , while holding the other non-basic variables at zero.
Operationally, we take the basic direction z̄ ∈ R^n defined by

z̄_η := e_j ∈ R^{n−m} ;
z̄_β := −A_β^{-1} A_{η_j} = −Ā_{η_j} ∈ R^m ,
and we consider solutions of the form x̄ + λz̄ , with λ > 0 . The motivation is based on
the observations that
That is, the objective function changes at the rate of c̄ηj , and we maintain satisfaction
of the Ax = b constraints.
Maximum step — the ratio test and a sufficient unboundedness criterion. By our
choice of direction z̄, all variables that are non-basic with respect to the current choice
of basis remain non-negative (xηj increases from 0 and the others remain at 0). So the
only thing that restricts our movement in the direction z̄ from x̄ is that we have to make sure that the current basic variables remain non-negative. This is easy to take care of. We just make sure that we choose λ > 0 so that x̄_β + λ z̄_β ≥ 0 .

Notice that for i such that ā_{i,η_j} ≤ 0 , there is no limit on how large λ can be. In fact, it can well happen that Ā_{η_j} ≤ 0 . In this case, x̄ + λz̄ is feasible for all λ > 0 and c'(x̄ + λz̄) → −∞ as λ → +∞ , so the problem is unbounded.

Otherwise, to insure that x̄ + λz̄ ≥ 0 , we just enforce
\[
\lambda \le \frac{\bar{x}_{\beta_i}}{\bar{a}_{i,\eta_j}}\,, \quad \text{for } i \text{ such that } \bar{a}_{i,\eta_j} > 0\,.
\]
Finally, to get the best improvement in the direction z̄ from x̄, we let λ equal
\[
\bar{\lambda} := \min_{i\,:\, \bar{a}_{i,\eta_j} > 0}\ \frac{\bar{x}_{\beta_i}}{\bar{a}_{i,\eta_j}}\,.
\]
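To make the entering choice and ratio test concrete, here is a self-contained MATLAB sketch of one pivot on the same small invented problem as in the previous sketch, starting from the slack basis; it is a sketch, not the pivot code of Appendix A.2.

% One simplex step: entering-variable choice, ratio test, basis update.
A = [1 2 1 0; 3 1 0 1];  b = [7; 9];  c = [-2; -3; 0; 0];
beta = [3 4];  eta = [1 2];              % start from the feasible slack basis
xbar = zeros(4,1);  xbar(beta) = A(:,beta) \ b;
ybar = (A(:,beta)') \ c(beta);
cbar = c' - ybar'*A;                     % reduced costs
[cmin, j] = min(cbar(eta));              % candidate entering index eta(j)
if cmin >= 0
    disp('current basis is optimal');
else
    entering = eta(j);
    Abarj = A(:,beta) \ A(:,entering);   % Abar_{eta_j}
    if all(Abarj <= 0)
        disp('problem is unbounded');
    else
        ratios = xbar(beta) ./ Abarj;
        ratios(Abarj <= 0) = Inf;        % only rows with abar > 0 restrict the step
        [lambda, istar] = min(ratios);
        leaving = beta(istar);
        beta(istar) = entering;          % swap entering and leaving indices
        eta(j) = leaving;
        xbar = zeros(numel(c),1);
        xbar(beta) = A(:,beta) \ b;      % basic solution for the new basis
        disp(xbar');
    end
end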
Non-degeneracy. There is a significant issue in even carrying out one iteration of this
algorithm. If x̄βi = 0 for some i such that āi,ηj > 0 , then λ̄ = 0 , and we are not able to
make any change from x̄ in the direction z̄ . Just for now, we will simply assume away
this problem, using the following hypothesis that every basic variable of every basic
feasible solution is positive.
Definition 4.7
The problem (P) satisfies the non-degeneracy hypothesis if for every feasible basis β,
we have x̄βi > 0 for i = 1, 2, . . . , m .
Another basic feasible solution. By our construction, the new solution x̄ + λ̄z̄ is fea-
sible and has lesser objective value than that of x̄ . We can repeat the construction as
long as the new solution is basic. If it is basic, there is a natural guess as to what an ap-
propriate basis may be. The variable xηj , formerly non-basic at value 0 has increased
to λ̄ , so clearly it must become basic. Also, at least one variable that was basic now has
value 0 . In fact, under our non-degeneracy hypothesis, once we establish that the new
solution is basic, we observe that exactly one variable that was basic now has value 0.
Let
\[
i^* := \operatorname{argmin}_{i\,:\,\bar{a}_{i,\eta_j} > 0}\ \frac{\bar{x}_{\beta_i}}{\bar{a}_{i,\eta_j}}\,.
\]
If there is more than one i that achieves the minimum (which can happen if we do not
assume the non-degeneracy hypothesis), then we will see that the choice of i∗ can be
any of these. We can see that xβi∗ has value 0 in x̄ + λ̄z̄ . So it is natural to hope we can
replace xβi∗ as a basic variable with xηj .
Let
β̃ := (β1 , β2 , . . . , βi∗ −1 , ηj , βi∗ +1 , . . . , βm )
and
η̃ := (η1 , η2 , . . . , ηj−1 , βi∗ , ηj+1 , . . . , ηn−m ) .
Lemma 4.8
Aβ̃ is invertible.
Lemma 4.9
The unique solution of Ax = b having xη̃ = 0 is x̄ + λ̄z̄ .
Proof. x̄_η̃ + λ̄ z̄_η̃ = 0 . Moreover, x̄ + λ̄z̄ is the unique solution to Ax = b having x_η̃ = 0 , because A_β̃ is invertible. □
Putting these two lemmata together, we have the following key result.
Theorem 4.10
x̄ + λ̄z̄ is a basic solution; in fact, it is the basic solution determined by β̃ .
min  c'x
     Ax = b ;        (P)
     x ≥ 0 ,
5. GOTO 1.
Theorem 4.11
Under the non-degeneracy hypothesis, the Worry-Free Simplex Algorithm terminates
correctly.
Proof. Under the non-degeneracy hypothesis, every time we visit Step 1, we have a pri-
mal feasible solution with a decreased objective value. This implies that we never revisit
a basic feasible partition. But there are only a finite number of basic feasible partitions,
so we must terminate, after a finite number of pivots. But there are only two places where the algorithm terminates; either in Step 1, where we correctly identify that x̄ and ȳ are optimal by our sufficient optimality criterion, or in Step 3, because of our sufficient unboundedness criterion. □
Remark 4.12
There are two very significant issues remaining:
• How do we handle degeneracy? (see Section 4.3).
• How do we initialize the algorithm in Step 0? (see Section 4.4).
4.3 Anticycling
min  c'x
     Ax = b + B ε⃗ ;        (P_ε(B))
     x ≥ 0 ,

where

• 0 denotes a vector in which all entries are the zero polynomial (in ε);

The ordering <_ε is actually quite simple, but for the sake of precision, we describe it formally.
An ordered ring. The set of polynomials in ε, with real coefficients, forms what is known in mathematics as an “ordered ring”. The ordering <_ε is simple to describe. Let p(ε) := Σ_{j=0}^m p_j ε^j and q(ε) := Σ_{j=0}^m q_j ε^j . Then p(ε) <_ε q(ε) if the least j for which p_j ≠ q_j has p_j < q_j . For example, 2ε − 5ε² <_ε 3ε , because the coefficients first differ at j = 1 , where 2 < 3 . Another way to think about the ordering <_ε is that p(ε) <_ε q(ε) if p(ε) < q(ε) when ε is considered to be an arbitrarily small positive number. Notice how the ordering <_ε is in a certain sense a more refined ordering than < . That is, if p(0) < q(0) , then p(ε) <_ε q(ε) , but we can have p(0) = q(0) without having p(ε) = q(ε) . Finally, we note that the zero polynomial “0” (all coefficients equal to 0) is the zero of this ordered ring, so we can speak, for example, about polynomials that are positive with respect to the ordering <_ε . Concretely, p(ε) ≠ 0 is positive if the least i for which p_i ≠ 0 satisfies p_i > 0 . Emphasizing that <_ε is a more refined ordering than < , we see that p(ε) ≥_ε 0 implies that p(0) = p_0 ≥ 0 .
For an arbitrary basis β, the associated basic solution x̄_ε of (P_ε(B)) has x̄_{ε,β} := A_β^{-1}(b + Bε⃗) = x̄_β + A_β^{-1}Bε⃗ . It is evident that x̄_{ε,β_i} is a polynomial, of degree at most m, in ε, for each i = 1, . . . , m . Because the ordering <_ε refines the ordering < , we have that x̄_{ε,β} ≥_ε 0 implies that x̄_β ≥ 0 . That is, any basic feasible partition for (P_ε(B)) is a basic feasible partition for (P). This implies that applying the Worry-Free Simplex Algorithm to (P_ε(B)), using the ratio test to enforce feasibility of x̄_ε in (P_ε(B)) at each iteration, implies that each associated x̄_β is feasible for (P). That is, the choice of a leaving variable dictated by the ratio test when we work with (P_ε(B)) is valid if we instead do the ratio test working with (P).

The objective value associated with x̄_ε is c'_β A_β^{-1}(b + Bε⃗) = ȳ'b + ȳ'Bε⃗ , which is a polynomial (of degree at most m) in ε . Therefore, we can order basic solutions for (P_ε(B)) using <_ε , and that ordering refines the ordering of the objective values of the corresponding basic solutions of (P). This implies that if x̄_ε is optimal for (P_ε(B)) , then the x̄ associated with the same basis is optimal for (P).
Lemma 4.13
The ε-perturbed problem (P_ε(B)) satisfies the non-degeneracy hypothesis.

Proof. For an arbitrary basis matrix A_β , the associated basic solution x̄_ε has x̄_{ε,β} := A_β^{-1}(b + Bε⃗) = x̄_β + A_β^{-1}Bε⃗ . As we have already pointed out, x̄_{ε,β_i} is a polynomial, of degree at most m, in ε, for each i = 1, . . . , m . x̄_{ε,β_i} = 0 implies that the i-th row of A_β^{-1}B is all zero. But this is impossible for the invertible matrix A_β^{-1}B . □
Theorem 4.14
Let β⁰ be a basis that is feasible for (P). Then the Worry-Free Simplex Algorithm applied to (P_ε(A_{β⁰})), starting from the basis β⁰, correctly demonstrates that (P) is unbounded or finds an optimal basic partition for (P).

Proof. The first important point to notice is that we are choosing the perturbation of the original right-hand side to depend on the choice of a basis that is feasible for (P). Then we observe that x̄_{ε,β⁰} := A_{β⁰}^{-1}(b + A_{β⁰}ε⃗) = A_{β⁰}^{-1}b + ε⃗ . Now because x̄ is feasible for (P), we have A_{β⁰}^{-1}b ≥ 0 . Then, the ordering <_ε implies that x̄_{ε,β⁰} = A_{β⁰}^{-1}b + ε⃗ ≥_ε 0 . Therefore, the basis β⁰ is feasible for (P_ε(A_{β⁰})), and the Worry-Free Simplex Algorithm can indeed
(Pivot.mp4)
Figure 4.2: With some .pdf viewers, you can click above to see or download a short
video. Or just see it on YouTube (probably with an ad) by clicking here.
4.4 Obtaining a Basic Feasible Solution

Next, we will deal with the problem of finding an initial basic feasible solution for the standard-form problem

min  c'x
     Ax = b ;        (P)
     x ≥ 0 .
Otherwise, we have some work to do. We define a new non-negative variable x_{n+1} , which we temporarily adjoin as an additional non-basic variable. So our basic indices remain as
β̃ = (β̃_1 , β̃_2 , . . . , β̃_m) ,
while our non-basic indices are extended to
η̃ = (η̃_1 , η̃_2 , . . . , η̃_{n−m} , η̃_{n−m+1} := n + 1) .

This variable x_{n+1} is termed an artificial variable. The column for the constraint matrix associated with x_{n+1} is defined as A_{n+1} := −A_β̃ 1 . Hence Ā_{n+1} = −1 . Finally, we temporarily put aside the objective function from (P) and replace it with one of minimizing the artificial variable x_{n+1} . That is, we consider the so-called phase-one problem

min  x_{n+1}
     Ax + A_{n+1} x_{n+1} = b ;        (Φ)
     x , x_{n+1} ≥ 0 .

With this terminology, the original problem (P) is referred to as the phase-two problem.
It is evident that any feasible solution x̂ of (Φ) with x̂n+1 = 0 is feasible for (P).
Moreover, if the minimum objective value of (Φ) is greater than 0, then we can conclude
that (P) has no feasible solution. So, toward establishing whether or not (P) has a
feasible solution, we focus our attention on (Φ). We will soon see that we can easily
find a basic feasible solution of (Φ).
Finding a basic feasible solution of (Φ). Choose i∗ so that x̄β̃i∗ is most negative. Then
we exchange β̃i∗ with η̃n−m+1 = n + 1 . That is, our new basic indices are
β := β̃1 , β̃2 , . . . , β̃i∗ −1 , n + 1, β̃i∗ +1 , . . . , β̃m ,
Lemma 4.15
The basic solution of (Φ) associated with the basic partition β, η is feasible for (Φ).
Proof. This pivot, from β̃, η̃ to β, η amounts to moving in the basic direction z̄ ∈ R^{n+1} defined by

z̄_η̃ := e_{n−m+1} ∈ R^{n−m+1} ;
z̄_β̃ := −A_β̃^{-1} A_{n+1} = 1 ∈ R^m ,

in the amount λ := −x̄_{β̃_{i*}} > 0 . That is, x̄ + λz̄ is the basic solution associated with the basic partition β, η . Notice how when we move in the direction z̄ , all basic variables increase at exactly the same rate that x_{n+1} does. So, using this direction to increase x_{n+1} from 0 to −x̄_{β̃_{i*}} > 0 results in all basic variables increasing by exactly −x̄_{β̃_{i*}} > 0 . By the choice of i* , this causes all basic variables to become non-negative, and x_{β̃_{i*}} to become 0 , whereupon it can leave the basis in exchange for x_{n+1} . □
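Here is a small self-contained MATLAB sketch of this construction (the artificial column and the first exchange); the data is invented so that the starting basis is infeasible for (P).

% Build the phase-one problem (Phi): adjoin the artificial column
% -A_betatilde*1 and minimize x_{n+1}. Illustrative data only.
A = [1 -1 1 0; 1 1 0 1];  b = [-2; 3];
betatilde = [3 4];                    % slack basis; x_betatilde = b has a negative entry
[m, n] = size(A);
Acol = -A(:,betatilde) * ones(m,1);   % artificial column A_{n+1}
Aphase = [A, Acol];                   % constraint matrix of (Phi)
cphase = [zeros(n,1); 1];             % phase-one objective: minimize x_{n+1}
xb = A(:,betatilde) \ b;              % = (-2, 3)', so not feasible for (P)
[~, istar] = min(xb);                 % most negative basic variable leaves
beta = betatilde;  beta(istar) = n + 1;     % x_{n+1} enters the basis
xcheck = zeros(n+1,1);
xcheck(beta) = Aphase(:,beta) \ b;
disp(xcheck')                         % a basic feasible solution of (Phi)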
The end game for (Φ). If (P) is feasible, then at the very last iteration of the Worry-
Free Simplex Algorithm on (Φ), the objective value will drop from a positive number to
zero. As this happens, xn+1 will be eligible to leave the basis, but so may other variables
also be eligible. That is, there could be a tie in the ratio-test of Step 4 of the Worry-Free
Simplex Algorithm. As is the case whenever there is a tie, any of the tying indices can
leave the basis — all of the associated variables are becoming zero simultaneously. For
our purposes, it is critical that if there is a tie, we choose i∗ := n + 1 ; that is, xn+1 must
be selected to become non-basic. In this way, we not only get a feasible solution to (P),
we get a basis for it that does not use the artificial variable xn+1 . Now, starting from
this basis, we can smoothly shift to minimizing the objective function of (P).
min  x_{n+1}
     Ax + A_{n+1} x_{n+1} = b + B ε⃗ ;        (Φ_ε)
     x , x_{n+1} ≥ 0 ,
We do need to manage the final iteration a bit carefully. There are two different
ways we can do this.
“Early arrival”. If (P) has a feasible solution, at some point the value of x_{n+1} will decrease to a homogeneous polynomial in ε . That is, the constant term will become 0. At this point, although x_{n+1} may not be eligible to leave the basis for (Φ_ε), it will be eligible to leave for (P). So, at this point we let x_{n+1} leave the basis, and we terminate the solution process for (Φ_ε), having found a feasible basis for (P). In fact, we have just constructively proved the following result.

Theorem 4.16
If standard form (P) has a feasible solution, then it has a basic feasible solution.

Note that because x_{n+1} may not have been eligible to leave the basis for (Φ_ε) when we apply the “early arrival” idea, the resulting basis may not be feasible for (P_ε). So we will have to re-perturb (P).

“Be patient”. Perhaps a more elegant way to handle the situation is to fully solve (Φ_ε). In doing so, if (P) has a feasible solution, then the minimum objective value of (Φ_ε) will be 0 (i.e., the zero polynomial), and x_{n+1} will necessarily be non-basic. That is because, at every iteration, every basic variable in (Φ_ε) is positive. Because x_{n+1} legally left the basis for (Φ_ε) at the final iteration, the resulting basis is feasible for (P_ε). So we do not re-perturb (P), and we simply revert to solving (P_ε) from the final basis of (Φ_ε).
2. Solve the phase-one problem using the Worry-Free Simplex Algorithm, adapted
to algebraically perturbed problems, but always giving preference to xn+1 for
leaving the basis whenever it is eligible to leave for the unperturbed problem.
Go to the next step, as soon as xn+1 leaves the basis;
3. Starting from the feasible basis obtained for the original standard-form problem,
apply an algebraic perturbation (Note that the previous step may have left us with
a basis that is feasible for the original unperturbed problem, but infeasible for the
original perturbed problem — this is why we apply a perturbation anew);
4. Solve the problem using the Worry-Free Simplex Algorithm, adapted to alge-
braically perturbed problems.
It is important to know that the Simplex Algorithm will be used, later, to prove the celebrated Strong Duality Theorem. For that reason, it is important that our algorithm be mathematically complete. But from a practical computational viewpoint, there is substantial overhead in working with the ε-perturbed problems. Therefore, in practice, no computer code that is routinely applied to large instances worries about the potential for cycling associated with degeneracy.
4.6 Exercises
Exercise 4.1 (Carry out the Simplex Algorithm)
In Appendix A.2, there are MATLAB scripts and functions which implement the primitive
steps of the simplex algorithm. Using these primitives only, write a script to carry out
the simplex algorithm. Be sure to read in the data using pivot_setup.m. Do not worry
about degeneracy/anti-cycling. But I do want you to take care of algorithmically finding
an initial feasible basis as described in Section 4.4.1. Make some small examples to fully
illustrate the different possibilities for (P) (i.e., infeasible, optimal, unbounded).
A_β̃ is invertible). Let ỹ be the dual solution associated with the basic partition β̃, η̃ , and let H_{ℓ·} be row ℓ of H := A_β^{-1} . Prove that
\[
\tilde{y}' = \bar{y}' + \frac{\bar{c}_{\eta_j}}{\bar{a}_{\ell,\eta_j}}\, H_{\ell\cdot}\,.
\]
Let θ = 2π/k, with integer k ≥ 5. The idea is to use the symmetry of the geometric
circle, and complete a cycle of the Worry-Free Simplex Algorithm in 2k pivots. Choose
a constant γ satisfying 0 < γ < (sin θ)/(1 − cos θ) . Let
\[
A_1 := \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad A_2 := \begin{bmatrix} 0 \\ \gamma \end{bmatrix}.
\]
Let
\[
R := \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.
\]
We can observe that for odd j , A_j is a rotation of A_1 by (j − 1)π/k radians, and for even j , A_j is a rotation of A_2 by (j − 2)π/k radians.
Let c_j := 1 − a_{1j} − a_{2j}/γ , for j = 1, 2, . . . , 2k , and let b := (0, 0)' . Because b = 0 , the problem is fully degenerate; that is, x̄ = 0 for all basic solutions x̄ . Notice that this implies that either the problem has optimal objective value of zero, or the objective value is unbounded.
For k = 5 , check that you can choose γ = cot θ , and then check that the following
is a sequence of bases β that are legal for the Worry-Free Simplex Algorithm:
You need to check that for every pivot, the incoming basic variable xηj has negative
reduced cost, and that the outgoing variable is legally selected — that is that āi,ηj > 0 .
Feel free to use MATLAB, MATHEMATICA, etc.
If you are feeling ambitious, check that for all k ≥ 5 , we get a cycle of the Worry-Free
Simplex Algorithm.
Note that it may seem hard to grasp the picture at all. But see Section 6.1.3 and the following Mathematica code. Note that the animation can be activated within the Acrobat Reader using the controls below the figure.
t = 2 Pi/5; c = Cos[t]; s = Sin[t]; n = 69; e = .75;
m = {{c, -s, 0}, {s, c, 0}, {(c - 1)/c, s/c, 1}};
x = {{1}, {0}, {0}};
y = {{0}, {Cot[t]}, {0}};
T = Partition[
  Flatten[{x, y, m.x, m.y, m.m.x, m.m.y, m.m.m.x, m.m.m.y, m.m.m.m.x,
    m.m.m.m.y, x}], {3}, {3}];
ListAnimate[
 Table[Graphics3D[
   Table[{GrayLevel[e], Polygon[{{0, 0, 0}, T[[i]], T[[i + 1]]}]},
    {i, 10}], Boxed -> False, PlotRangePadding -> 2.5,
   BoxRatios -> {2, 2, 2}, SphericalRegion -> True,
   ViewPoint -> {Sin[2 Pi*t/n], Cos[2 Pi*t/n], 0},
   Lighting -> False], {t, 0, n}]]
Chapter 5

Duality

min  c'x
     Ax = b ;        (P)
     x ≥ 0

and its dual

max  y'b
     y'A ≤ c' .        (D)
• Weak Optimal Basis Theorem. If β is a feasible basis and c̄_η ≥ 0 , then the primal solution x̄ and the dual solution ȳ associated with β are optimal.

The Weak Duality Theorem directly implies that if x̂ is feasible in (P) and ŷ is feasible in (D), and c'x̂ = ŷ'b , then x̂ and ŷ are optimal. Thinking about it this way, we see that both the Weak Duality Theorem and the Weak Optimal Basis Theorem assert conditions that are sufficient for establishing optimality.

Proof. If (P) has a feasible solution and (P) is not unbounded, then the Simplex Algorithm will terminate with a basis β such that the associated basic solution x̄ and the associated dual solution ȳ are optimal. □

It is important to realize that the Strong Optimal Basis Theorem and the Strong Duality Theorem depend on the correctness of the Simplex Algorithm — this includes: (i) the correctness of the phase-one procedure to find an initial feasible basis of (P), and (ii) the anti-cycling methodology.

5.2 Complementary Slackness
Definition 5.3
With respect to the standard-form problem (P) and its dual (D), the solutions x̂ and ŷ are complementary if

(c_j − ŷ'A_{·j}) x̂_j = 0 , for j = 1, 2, . . . , n ;
ŷ_i (A_{i·} x̂ − b_i) = 0 , for i = 1, 2, . . . , m .

Theorem 5.4
If x̄ is a basic solution (feasible or not) of standard-form (P), and ȳ is the associated dual solution, then x̄ and ȳ are complementary.

Proof. Notice that if x̄ is a basic solution then Ax̄ = b. Then we can see that complementarity of x̄ and ȳ amounts to

c̄_j x̄_j = 0 , for j = 1, 2, . . . , n .

It is clear then that x̄ and ȳ are complementary, because if x̄_j > 0 , then j is a basic index, and c̄_j = 0 for basic indices. □

Theorem 5.5
If x̂ and ŷ are complementary with respect to (P) and (D), then c'x̂ = ŷ'b .

Proof.
c'x̂ − ŷ'b = (c' − ŷ'A)x̂ + ŷ'(Ax̂ − b) ,
which is 0 by complementarity. □

Proof. This immediately follows from Theorem 5.5 and the Weak Duality Theorem. □

Proof. If x̂ and ŷ are optimal, then by the Strong Duality Theorem, we have c'x̂ − ŷ'b = 0 . Therefore, we have
\[
0 = (c' - \hat{y}'A)\hat{x} + \hat{y}'(A\hat{x} - b) = \sum_{j=1}^{n} (c_j - \hat{y}'A_{\cdot j})\hat{x}_j + \sum_{i=1}^{m} \hat{y}_i (A_{i\cdot}\hat{x} - b_i)\,.
\]
Clearly this expression is equal to a non-negative number. Finally, we observe that this expression can only be equal to 0 if

(c_j − ŷ'A_{·j}) x̂_j = 0 , for j = 1, 2, . . . , n .

□
Theorem 5.8
• Weak Duality Theorem: If (x̂_P, x̂_N, x̂_U) is feasible in (G) and (ŷ_G, ŷ_L, ŷ_E) is feasible in (H), then c'_P x̂_P + c'_N x̂_N + c'_U x̂_U ≥ ŷ'_G b_G + ŷ'_L b_L + ŷ'_E b_E .

• Strong Duality Theorem: If (G) has a feasible solution, and (G) is not unbounded, then there exist feasible solutions (x̂_P, x̂_N, x̂_U) for (G) and (ŷ_G, ŷ_L, ŷ_E) for (H) that are optimal. Moreover, c'_P x̂_P + c'_N x̂_N + c'_U x̂_U = ŷ'_G b_G + ŷ'_L b_L + ŷ'_E b_E .

Proof. The Weak Duality Theorem for general problems can be demonstrated as easily as it was for the standard-form problem and its dual. But the Strong Duality Theorem for general problems is most easily obtained by converting our general problem (G) to the standard-form

Definition 5.9
With respect to (G) and its dual (H), the solutions (x̂_P, x̂_N, x̂_U) and (ŷ_G, ŷ_L, ŷ_E) are complementary if

(c_j − ŷ'_G A_{Gj} − ŷ'_L A_{Lj} − ŷ'_E A_{Ej}) x̂_j = 0 , for all j ;
ŷ_i (A_{iP} x̂_P + A_{iN} x̂_N + A_{iU} x̂_U − b_i) = 0 , for all i .

Theorem 5.10
• Weak Complementary Slackness Theorem: If (x̂_P, x̂_N, x̂_U) and (ŷ_G, ŷ_L, ŷ_E) are feasible and complementary with respect to (G) and (H), then (x̂_P, x̂_N, x̂_U) and (ŷ_G, ŷ_L, ŷ_E) are optimal.
• Strong Complementary Slackness Theorem: If (x̂_P, x̂_N, x̂_U) and (ŷ_G, ŷ_L, ŷ_E) are optimal for (G) and (H), then (x̂_P, x̂_N, x̂_U) and (ŷ_G, ŷ_L, ŷ_E) are complementary (with respect to (G) and (H)).

Proof. Similarly to the proof for standard-form (P) and its dual (D), we consider the following expression.
\[
\begin{aligned}
0 = {} & \sum_{j\in P} \underbrace{(c_j - \hat{y}_G'A_{Gj} - \hat{y}_L'A_{Lj} - \hat{y}_E'A_{Ej})}_{\ge 0}\;\underbrace{\hat{x}_j}_{\ge 0} \\
 {}+{} & \sum_{j\in N} \underbrace{(c_j - \hat{y}_G'A_{Gj} - \hat{y}_L'A_{Lj} - \hat{y}_E'A_{Ej})}_{\le 0}\;\underbrace{\hat{x}_j}_{\le 0} \\
 {}+{} & \sum_{j\in U} \underbrace{(c_j - \hat{y}_G'A_{Gj} - \hat{y}_L'A_{Lj} - \hat{y}_E'A_{Ej})}_{=0}\;\hat{x}_j \\
 {}+{} & \sum_{i\in G} \underbrace{\hat{y}_i}_{\ge 0}\;\underbrace{(A_{iP}\hat{x}_P + A_{iN}\hat{x}_N + A_{iU}\hat{x}_U - b_i)}_{\ge 0} \\
 {}+{} & \sum_{i\in L} \underbrace{\hat{y}_i}_{\le 0}\;\underbrace{(A_{iP}\hat{x}_P + A_{iN}\hat{x}_N + A_{iU}\hat{x}_U - b_i)}_{\le 0} \\
 {}+{} & \sum_{i\in E} \hat{y}_i\;\underbrace{(A_{iP}\hat{x}_P + A_{iN}\hat{x}_N + A_{iU}\hat{x}_U - b_i)}_{=0}\,.
\end{aligned}
\]
The results follow easily using the Weak and Strong Duality Theorems for (G) and (H). □
The table below summarizes the duality relationships between the type of each primal constraint and the type of each associated dual variable. Highlighted in yellow are the relationships for the standard-form (P) and its dual (D). It is important to note that the columns are labeled “min” and “max”, rather than primal and dual — the table is not correct if “min” and “max” are interchanged.

                 min        max
constraints      ≥          ≥ 0       variables
                 ≤          ≤ 0
                 =          unres.
variables        ≥ 0        ≤         constraints
                 ≤ 0        ≥
                 unres.     =
Proof. It is easy to see that there cannot simultaneously be a solution x̂ to (I) and ŷ to (II). Otherwise we would have
\[
0 \ge \underbrace{\hat{y}'A}_{\le\, 0'}\;\underbrace{\hat{x}}_{\ge\, 0} = \hat{y}'b > 0\,,
\]
which is a contradiction. Now, suppose that (I) has no solution, and consider the standard-form problem

min  0'x
     Ax = b ;        (P)
     x ≥ 0.

Its dual is

max  y'b
     y'A ≤ 0' .        (D)

Because (P) is infeasible, then (D) is either infeasible or unbounded. But ŷ := 0 is a feasible solution to (D), therefore (D) must be unbounded. Therefore, there exists a feasible solution ŷ to (D) having objective value greater than zero (or even any fixed constant). Such a ŷ is a solution to (II). □
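A hedged MATLAB sketch of deciding which alternative holds for given data (invented A and b, not code from the book): we first try to solve (I), and if it is infeasible we look for a certificate ŷ for (II) by maximizing y'b subject to y'A ≤ 0', bounding y in a box so that the LP stays finite.

% Farkas alternative, numerically: either Ax = b, x >= 0 is feasible,
% or there is y with y'A <= 0' and y'b > 0. Illustrative data only.
A = [1 1 0; 0 1 1];  b = [-1; 2];          % try changing b to [1; 2]
n = size(A,2);  m = size(A,1);
[x,~,flag1] = linprog(zeros(n,1), [], [], A, b, zeros(n,1), []);
if flag1 == 1
    disp('system (I) is feasible:');  disp(x');
else
    % look for a Farkas certificate; bound y in [-1,1]^m so the LP is bounded
    [y,fval] = linprog(-b, A', zeros(n,1), [], [], -ones(m,1), ones(m,1));
    fprintf('certificate for (II): y = [%s], y''*b = %g\n', num2str(y'), -fval);
end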
Remark 5.12
Geometrically, the Farkas Lemma asserts that exactly one of the following holds:
In a similar fashion to the Farkas Lemma, we can develop theorems of this type for
feasible regions of other linear-optimization problems.
Proof. It is easy to see that there cannot simultaneously be a solution x̂ to (I) and ŷ to (II). Otherwise we would have
\[
0 = \underbrace{\hat{y}'A}_{=\,0'}\;\hat{x} \ge \hat{y}'b > 0\,,
\]
which is a contradiction. Now, suppose that (I) has no solution, and consider the problem

min  0'x
     Ax ≥ b .        (P)

Its dual is

max  y'b
     y'A = 0' ;        (D)
     y ≥ 0 .
5.5 Exercises
Exercise 5.1 (Dual picture)
For the standard-form problem (P) and its dual (D), explain aspects of duality and
complementarity using this picture:
Prove that if (P) has an optimal solution, then there are always optimal solutions for (P) and (D) that are overly complementary.
HINT: Let v be the optimal objective value of (P). For each j = 1, 2, . . . , n , consider

max  x_j
     c'x ≤ v ;
     Ax = b ;        (P_j)
     x ≥ 0 .

(P_j) seeks an optimal solution of (P) that has x_j positive. Using the dual of (P_j), show that if no optimal solution x̂ of (P) has x̂_j positive, then there is an optimal solution ŷ of (D) with c_j − ŷ'A_{·j} positive. Once you do this you can conclude that, for any fixed j, there are optimal solutions x̂ and ŷ with the property that exactly one of

Take all of these n pairs of solutions x̂ and ŷ and combine them appropriately to construct optimal x̂ and ŷ that are overly complementary.
\[
\begin{array}{rcll}
A_{GP}\, x_P + A_{GN}\, x_N + A_{GU}\, x_U &\ge& b_G\,; &\\
A_{LP}\, x_P + A_{LN}\, x_N + A_{LU}\, x_U &\le& b_L\,; &\qquad(\mathrm{I})\\
A_{EP}\, x_P + A_{EN}\, x_N + A_{EU}\, x_U &=& b_E\,; &\\
x_P \ge 0\,,\quad x_N \le 0\,. &&&
\end{array}
\]
min  c'x
     Ax ≥ b ;        (P)
     x ≥ 0 .
b) Suppose, further, that the dual (D) of (P) is feasible. Take a feasible solution ŷ of
(D) and a solution ỹ to your system of part (a) and combine them appropriately
to prove that (D) is unbounded.
Chapter 6
Sensitivity Analysis
• Learn how the optimal value of a linear-optimization problem behaves when the
right-hand side vector and objective vector are varied.
f(b) := min  c'x
             Ax = b ;        (P_b)
             x ≥ 0 .

That is, (P_b) is simply (P) with the optimal objective value viewed as a function of its right-hand side vector b .
Consider a fixed basis β for (P_b). Associated with that basis is the basic solution x̄_β = A_β^{-1} b and the corresponding dual solution ȳ' = c'_β A_β^{-1} . Let us assume that ȳ is feasible for the dual of (P_b) — or, equivalently, c'_η − ȳ'A_η ≥ 0' . Considering the set B of b ∈ R^m such that β is an optimal basis, it is easy to see that B is just the set of b such that x̄_β := A_β^{-1} b ≥ 0 . That is, B ⊂ R^m is the solution set of m linear inequalities (in fact, it is a “simplicial cone” — we will return to this point in Section 6.1.3). Now, for b ∈ B , we have f(b) = ȳ'b . Therefore, f is a linear function on b ∈ B . Moreover, as long as b is in the interior of B , we have ∂f/∂b_i = ȳ_i . So we have that ȳ is the gradient of f , as long as b is in the interior of B . Now what does it mean for b to be in the interior of B ? It just means that x̄_{β_i} > 0 for i = 1, 2, . . . , m .
Let us focus our attention on changes to a single right-hand side element bi . Sup-
pose that β is an optimal basis of (P) , and consider the problem
min c0 x
Ax = b + ∆i ei ; (Pi )
x ≥ 0,
where ∆i ∈ R . The basis β is feasible (and hence still optimal) for (Pi ) if A−1
β (b +
−1
∆i ei ) ≥ 0 . Let h := Aβ ei . So
i
[h1 , h2 , . . . , hm ] = A−1
β .
straightforward to check that β is feasible (and hence still optimal) for (Pi ) as long as
∆i is in the interval [Li , Ui ] , where
Li := max −x̄βk /hik ,
k : hik >0
and
Ui := min −x̄βk /hik .
k : hik <0
It is worth noting that it can be the case that hik ≤ 0 for all k, in which case we define
Li := −∞, and it could be the case that hik ≥ 0 for all k, in which case we define
Ui := +∞,
In summary, for all ∆i satisfying Li ≤ ∆i ≤ Ui , β is an optimal basis of (P) .
It is important to emphasize that this result pertains to changing one right-hand side
element and holding all others constant. For a result on simultaneously changing all
right-hand side elements, we refer to Exercise 6.3.
6.1. RIGHT-HAND SIDE CHANGES 65
The domain of f is the set of b for which (Pb ) has an optimal solution. Assuming
that the dual of (Pb ) is feasible (note that this just means that y 0 A ≤ c0 has a solution),
then (Pb ) is never unbounded. So the domain of f is just the set of b ∈ Rm such that
(Pb ) is feasible.
Theorem 6.1
The domain of f is a convex set.
Proof. Suppose that bj is in the domain of f , for j = 1, 2 . Therefore, there exist xj that
are feasible for (Pbj ) , for j = 1, 2 . For any 0 < λ < 1 , let b̂ := λb1 + (1 − λ)b2 , and
consider x̂ := λx1 + (1 − λ)x2 . It is easy to check that x̂ is feasible for (Pb̂ ) , so we can
conclude that b̂ is in the domain of f . t
u
for all u1 , u2 ∈ S and 0 < λ < 1 . That is, f is never underestimated by linear interpo-
lation.
AP function f : Rm → R is an affine function, if it has the form f (u1 , . . . , um ) =
a0 + m i=1 ai ui , for constants a0 , a1 , . . . , am ∈ R . If a0 = 0 , then we say that f is a
linear function. Affine (and hence linear) functions are easily seen to be convex.
A function f : Rm → R having a convex set as its domain is a convex piecewise-
linear function if, on its domain, it is the pointwise maximum of a finite number of affine
functions.
66 CHAPTER 6. SENSITIVITY ANALYSIS
Theorem 6.2
If fˇ is a convex piecewise-linear function, then it is a convex function.
Proof. Let
fˇ(u) := max {fi (u)} ,
1≤i≤k
for u in the domain of fˇ , where each fi is an affine function. That is, fˇ is the pointwise
maximum of a finite number (k) of affine functions.
Then, for 0 < λ < 1 and u1 , u2 ∈ Rm ,
fˇ(λu1 + (1 − λ)u2 ) = max fi (λu1 + (1 − λ)u2 )
1≤i≤k
= max λfi (u1 ) + (1 − λ)fi (u2 ) (using the definition of affine)
1≤i≤k
≤ max λfi (u1 ) + max (1 − λ)fi (u2 )
1≤i≤k 1≤i≤k
= λ max fi (u ) + (1 − λ) max fi (u2 )
1
1≤i≤k 1≤i≤k
= λf (u ) + (1 − λ)fˇ(u ) .
ˇ 1 2
t
u
Theorem 6.3
f is a convex piecewise-linear function on its domain.
f (b) := max y 0 b
(Db )
y 0 A ≤ c0 ;
of (Pb ).
6.1. RIGHT-HAND SIDE CHANGES 67
A basis β is feasible or not for (Db ), independent of b . Thinking about it this way,
we can see that
n o
f (b) = max c0β A−1 β b : β is a dual feasible basis ,
6.1.3 A brief detour: the column geometry for the Simplex Algorithm
In this section, we will describe a geometry for visualizing the Simplex Algorithm.6
The ordinary geometry for a standard-form problem, in the space of the non-basic vari-
ables for same choice of basis, can be visualized when n − m = 2 or 3. The “column
geometry” that we will describe is in Rm+1 , so it can be visualized when m + 1 = 2 or
3. Note that the graph of the function f (b) (introduced at the start of this chapter) is
also in Rm+1 , which is why we take the present detour.
We think of the n points
cj
,
Aj
for j = 1, 2, . . . , n , and the additional so-called requirement line
z
: z∈R .
b
We think of the first component of these points and of the line as the vertical dimension;
so the requirement line is thought of as vertical. It of particular interest to think of the
cone generated by the n points. That is,
( ! )
c0 x
K := ∈ Rm+1 : x ≥ 0 .
Ax
Notice how the top coordinate of a point in the cone gives the objective value of the
associated x for (P). So the goal of solving (P) can be thought of as that of finding a
point on the intersection of the requirement line and the cone that is as low as possible.
68 CHAPTER 6. SENSITIVITY ANALYSIS
“Here is what is needed for Occupy Wall Street to become a force for change: a clear,
and clearly expressed, objective. Or two.” — Elayne Boosler
6.2. OBJECTIVE CHANGES 69
That is, (Pc ) is simply (P) with the optimal objective value viewed as a function of its
objective vector c .
for (Pc ) — or, equivalently, A−1β b ≥ 0 . Considering the set C of c ∈ R such that β is an
n
optimal basis, is it easy to see that this is just the set of c such that cη − c0β A−1
0
β Aη ≥ 0 .
0
That is, C ⊂ Rn is the solution set of n − m linear inequalities (in fact, it is a cone). Now,
for c ∈ C , we have g(c) = c0β x̄β . Therefore, g is a linear function on c ∈ C .
Theorem 6.4
The domain of g is a convex set.
for all u1 , u2 ∈ S and 0 < λ < 1 . That is, f is never overestimated by linear inter-
polation. The function g is a concave piecewise-linear function if it is the pointwise
minimum of a finite number of affine functions.
70 CHAPTER 6. SENSITIVITY ANALYSIS
Theorem 6.5
g is a concave piecewise-linear function on its domain.
Then use the AMPL command solve to solve the problem (in particular, to determine an
optimal basis — though AMPL will not tell you the optimal basis that it found).
AMPL has suffixes to retrieve some local sensitivity information from the solver. For a
constraint named <Constraint>, <Constraint>.current gives the current right-hand
side value, while <Constraint>.down and <Constraint>.up give lower and upper lim-
its on that right-hand side value, such that the optimal basis remains optimal. Changing
a particular right-hand side value, in this way, assumes that the right-hand side values
for the other constraints are held constant (though see Exercise 6.3 for the case of si-
multaneous
P changes). In our notation from Section 6.1.1, if <Constraint> corresponds
to nj=1 aij xj = bi (that is, the i-th constraint of (Pb ), then
bi = <Constraint>.current ,
Li = <Constraint>.down − <Constraint>.current ,
Ui = <Constraint>.up − <Constraint>.current .
6.3 Exercises
Exercise 6.1 (Illustrate local sensitivity analysis)
Make an original example to illustrate the local sensitivity-analysis concepts of this
chapter. Use a combination of hand calculations and AMPL.
Exercise 6.3 (“I feel that I know the change that is needed.” — Mahatma Gandhi)
We are given 2m numbers satisfying Li ≤ 0 ≤ Ui , i = 1, 2, . . . , m . Let β be an optimal
basis for all of the m problems
min c0 x
Ax = b + ∆i ei ; (Pi )
x ≥ 0,
for all ∆i satisfying Li ≤ ∆i ≤ Ui . Let’s be clear on what this means: For each i
individually, the basis β is optimal when the ith right-hand side component is changed
from bi to bi + ∆i , as long as ∆i is in the interval [Li , Ui ] (see Section 6.1.1).
The point of this problem is to be able to say something about simultaneously chang-
ing all of the bi . Prove that we can simultaneously change bi to
Li
b̃i := bi + λi ,
Ui
P
where λi ≥ 0 , when m i=1 λi ≤ 1 . [Note that in the formula above, for each i we can
pick either Li (a decrease) or Ui (an increase)].
• Also, via a study of the “cutting-stock problem,” we will have a first glimpse at
some issues associated with integer-linear optimization.
7.1 Decomposition
73
74 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
Keep in mind that in (I), x̂ is fixed as well as are the x̂j and the ẑ k — the variables are
the λj and the µk . By way of establishing that S ⊂ S 0 , suppose that x̂ ∈ / S 0 — that is,
suppose that (I) has no solution. Applying the Farkas Lemma to (I) , we see that the
system
w0 x̂ + t > 0 ;
w0 x̂j + t ≤ 0 , ∀ j ∈ J ; (II)
0
w ẑ k ≤ 0, ∀k∈K
has a solution, say ŵ, t̂ . Now, consider the linear-optimization problem
min −ŵ0 x
Ax = b ; (P̂)
x ≥ 0.
for x in c0 x and in Ex = h of (Q), and it is easy to see that (M) is equivalent to (Q). t
u
76 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
Decomposition is typically applied in a way such that the constraints defining (S)
are somehow relatively “nice,” and the constraints Ex = h somehow are “complicat-
ing” the situation. For example, we may have a problem where the overall constraint
matrix has the form depicted in Figure 7.1. In such a scenario, we would let
E := ···
and
A := ..
.
Entering variable. With respect to such a dual solution, the reduced cost of a variable
λj is
c0 x̂j − ȳ 0 E x̂j − σ̄ = −σ̄ + c0 − ȳ 0 E x̂j .
It is noteworthy that with the dual solution fixed (at ȳ and σ̄), the reduced cost of λj is
a constant (−σ̄) plus a linear function of x̂j . A variable λj is eligible to enter the basis
if its reduced cost is negative. So we formulate the following optimization problem:
If the “subproblem” (SUB) has as optimal solution, then it has a basic optimal solution
— that is, an x̂j . In such a case, if the optimal objective value of (SUB) is negative, then
the λj corresponding to the optimal x̂j is eligible to enter the current basis of (M) . On
the other hand, if the optimal objective value of (SUB) is non-negative, then we have a
proof that no non-basic λj is eligible to enter the current basis of (M) .
If (SUB) is unbounded, then (SUB) has a basic feasible ray ẑ k having negative ob-
jective value. That
is, 0(c −0 ȳ E)k ẑ < 0 . Amazingly, the reduced cost of µk is precisely
0 0 k
of (M) .
Leaving variable. To determine the choice of leaving variable, let us suppose that B
is the basis matrix for (M) . Note that B consists of at least one column of the form
E x̂j
1
Calculation of basic primal and dual solutions. It is helpful to explain a bit about the
calculation of basic primal and dual solutions. As we have said, B consists of at least
one column of the form
E x̂j
1
and columns of the form
E ẑ k
.
0
So organizing the basic variables λj and µk into a vector ζ , with their order appropri-
ately matched with the columns of B , the vector ζ̄ of values of ζ is precisely the solution
of
h
Bζ = .
1
78 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
That is,
−1 h
ζ̄ = B .
1
Finally, organizing the costs c0 x̂j and c0 ẑ k of the basic variables λj and µk into a vector
ξ , with their order appropriately matched with the columns of B , the associated dual
solution (ȳ, σ̄) is precisely the solution of
(y 0 , σ)B = ξ 0 .
That is,
(ȳ 0 , σ̄) = ξ 0 B −1 .
Starting basis. It is not obvious how to construct a feasible starting basis for (M) ;
after all, we may not have at hand any basic feasible solutions and rays of (S) . Next, we
give a simple recipe. We assume that the problem is first in a slightly different form,
where the complicating constraints are inequalities:
min c̃0 x̃
Ẽ x̃ ≤ h ;
(Q̃)
Ãx̃ = b ;
x̃ ≥ 0 ,
where x̃ is a vector of n − m variables, Ẽ ∈ Rm×(n−m) , Ã ∈ Rp×(n−m) , and all of the
other data has dimensions that conform.
Introducing m slack variables for the Ẽ x̃ ≤ h constraints, we have the equivalent
problem
min c̃0 x̃
Ẽ x̃ + s = h ;
Ãx̃ = b;
x̃ , s ≥ 0 .
Now, it is convenient to define x ∈ Rn by
x̃j , for j = 1, . . . , n − m ;
xj :=
sj−(n−m) , for j = n − m + 1, . . . , n ,
and to define E ∈ Rm×n by
Ẽj , for j = 1, . . . , n − m ;
Ej :=
ej−(n−m) , for j = n − m + 1, . . . , n ,
and A ∈ Rp×n by
Ãj , for j = 1, . . . , n − m ;
Aj :=
0 , for j = n − m + 1, . . . , n .
That is, x = x̃s , E = [Ẽ, Im ] , A = [Ã, 0] , so now our system has the familiar
equivalent form
min c0 x
Ex = h;
(Q)
Ax = b;
x ≥ 0.
7.1. DECOMPOSITION 79
Now, we have everything organized to specify how to get an initial basis for (M).
First, we take as x̂1 any basic feasible solution of (P) . Such a solution can be readily ob-
tained by using our usual (phase-one) methodology of the Simplex Algorithm. Next,
we observe that for k = 1, 2 . . . , m , ẑ k := en−(m+k) is a basic feasible ray of S . The
ray ẑ k corresponds to increasing the slack variable sk (arbitrarily); because, the slack
variable sk = xn−m+k does not truly enter into the Ax = b constraints (i.e., the coeffi-
cients of sk is zero in those constraints), we do not have to alter the values of any other
variables when sk is increased.
So, we have as an initial set of basic variables µ1 , . . . , µm (corresponding to ẑ 1 , . . . , ẑ m )
and λ1 (corresponding to x1 ). Notice that E ẑ k = ek , for k = 1, 2 . . . , m . Organizing
our basic variables in the order µ1 , µ2 , . . . , µm , λ1 , we have the initial basis matrix
Im E x̂1
B= .
00 1
A demonstration implementation.
It is not completely trivial to write a small MATLAB code for the Decomposition Algo-
rithm. First of all, we solve the subproblems (SUB) using functionality of MATLAB. With
this, if a linear-optimization problem is unbounded, MATLAB does not give us access to
the basic-feasible ray on which the objective is decreasing. Because of this, our MATLAB
code quits if it encounters an unbounded subproblem.
Another point is that rather than carry out the simplex method at a detailed level
on the Master Problem (M), we just accumulate all columns of (M) that we generate,
and always solve linear-optimization problems, using functionality of MATLAB, with all
of the columns generated thus far. In this way, we do not maintain bases ourselves,
and we do not carry out the detailed pivots of the Simplex Algorithm. Note that the
linear-optimization functionality of MATLAB does give us a dual solution, so we do not
compute that ourselves.
80 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
options = optimoptions('linprog','Algorithm','dual−simplex');
rng('default');
rng(1); % set seed
E = rand(m1,n); % random m1−by−n matrix
A = rand(m2,n); % random m2−by−n matrix
% first we will calculate the optimal value z of the original LP, just to
% compare later.
disp('Optimizing the original LP without decomposition');
[x,z,exitflag,output,lambda]=linprog([c;zeros(m1,1)],[],[], ...
[horzcat(E,eye(m1)); horzcat(A,zeros(m2,m1))],[h; b], ...
zeros(n+m1,1),[],[],options);
if (exitflag < 1)
disp('fail 1: original LP without decomposition did not solve correctly');
return;
end;
disp('Optimal value of the original LP:');
z
%disp('dual solution:');
%disp(− lambda.eqlin); % Need to flip the signs due to a Matlab convention
% OK, now we are going to solve a phase−one problem to drive the artificial
% variable to zero. Of course, we need to do this with decomposition.
k = 1; % iteration counter
subval = −Inf;
sigma = 0.0;
while −sigma + subval < −0.00000001 && k <= MAXIT
% Set the dual variables by solving the restricted master
[zeta,masterval,exitflag,output,dualsol] = linprog(xi,[],[],B, ...
[h ; 1],zeros(numcol,1),[],options);
masterval
% Need to flip the signs due to a Matlab convention
y = − dualsol.eqlin(1:m1);
sigma = − dualsol.eqlin(m1+1);
% OK, now we can set up and solve the phase−one subproblem
[x,subval,exitflag] = linprog((−y'*E)',[],[],A,b,zeros(n,1),[],options);
if (exitflag < 1)
disp('fail 2: phase−one subproblem did not solve correctly');
return;
end;
% Append [E*x; 1] to B, and set up new restricted master
numcol = numcol + 1;
numx = numx + 1;
X = [X x];
CX = [CX ; c'*x];
B = [B [E*x ; 1]];
% Oh, we should build out the cost vector as well
xi = [xi ; 0];
k = k+1;
end;
% sanity check
if (−sigma + subval < −0.00000001)
disp('fail 3: we hit the max number of iterations for phase−one');
return;
end;
% sanity check
if (zeta(artind) > 0.00000001)
disp('fail 4: we thought we finished phase−one');
disp(' but it seems the aritifical variable is still positive');
82 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
disp(zeta(artind));
return;
end;
% phase−one clean−up: peel off the last column −−− which was not improving,
% and also delete the artifical column
B(:,numcol) =[];
numcol = numcol − 1;
X(:,numx) =[];
CX(numx,:) =[];
numx = numx − 1;
B(:,artind) = [];
numcol = numcol − 1;
end;
%
% Now time for phase−two
disp('*** Starting phase−two');
results1 = zeros(MAXIT,1);
results2 = zeros(MAXIT,1);
k = 0; % iteration counter
subval = −Inf;
sigma = 0.0;
while −sigma + subval < −0.00000001 && k <= MAXIT
k = k + 1;
% Set the dual variables by solving the restricted master
obj = [zeros(m1,1) ; CX];
[zeta,masterval,exitflag,output,dualsol] = linprog(obj,[],[],B, ...
[h ; 1],zeros(numcol,1),[],options);
masterval
% Need to flip the signs due to a Matlab convention
y = − dualsol.eqlin(1:m1);
sigma = − dualsol.eqlin(m1+1);
results1(k)=k;
results2(k)=masterval;
% OK, now we can set up and solve the phase−two subproblem
[x,subval,exitflag] = linprog((c'−y'*E)',[],[],A,b,zeros(n,1),[],options);
if (exitflag < 1)
disp('fail 5: phase−two subproblem did not solve correctly');
return;
end;
% Append [E*x; 1] to B, and set up new restricted master
numcol = numcol + 1;
numx = numx + 1;
X = [X x];
CX = [CX ; c'*x];
B = [B [E*x ; 1]];
end;
% sanity check
if (−sigma + subval < −0.00000001)
disp('fail 6: we hit the max number of iterations for phase−two');
return;
7.1. DECOMPOSITION 83
end;
DWz = CX'*(zeta(m1+1:numcol))
if ((abs(DWz − z) < 0.01) && (abs(c'*xdw − DWz) < 0.00001) && ...
(norm(b−A*xdw) < 0.00001) && (norm(min(h−E*xdw,0)) < 0.00001))
disp('IT WORKED!!!'); % exactly what did we check?
end
In Figure 7.2, we see quite good behavior for (“phase-two” of) the Decomposition
Algorithm.
Again, we consider
z := min c0 x
Ex = h ;
(Q)
Ax = b ;
x ≥ 0,
but our focus now is on efficiently getting a good lower bound on z , with again the
view that we are able to quickly solve many linear-optimization problems having only
the constraints: Ax = b , x ≥ 0 .
Note that the only variables in the minimization are x , because we consider ŷ to be
fixed.
Theorem 7.3
v(ŷ) ≤ z , for all ŷ in the domain of v.
Proof. Let x∗ be an optimal solution for (Q). Clearly x∗ is feasible for (Lŷ ). Therefore
The last equation uses the fact that x∗ is optimal for (Q), so z = c0 x∗ and Ex∗ = h . t
u
Theorem 7.4
Suppose that x∗ is optimal for (Q) , and suppose that ŷ and π̂ are optimal for the dual
of (Q) . Then x∗ is optimal for (Lŷ ) , π̂ is optimal for the dual of (Lŷ ) , ŷ is a maximizer
of v , and the maximum value of v is z .
86 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
Proof. x∗ is clearly feasible for (Lŷ ) . Because ŷ and π̂ are feasible for the dual of (Q) ,
we have π̂ 0 A + ŷ 0 E ≤ c0 , and so π̂ is feasible for the dual of (Lŷ ) .
Using the Strong Duality Theorem for (Q) implies that c0 x∗ = ŷ 0 h + π̂ 0 b . Using
that E x̂∗ = h (feasibility of x∗ in (Q)), we then have that (c0 − ŷ 0 E) x∗ = π̂ 0 b . Finally,
using the Weak Duality Theorem for (Lŷ ) , we have that x∗ is optimal for (Lŷ ) and π̂ is
optimal for the dual of (Lŷ ) .
Next,
Therefore the first inequality is an equation, and so ŷ is a maximizer of v and the max-
imum value is z . t
u
Theorem 7.5
Suppose that ŷ is a maximizer of v , and suppose that π̂ is optimal for the dual of (Lŷ ) .
Then ŷ and π̂ are optimal for the dual of (Q) , and the optimal value of (Q) is v(ŷ) .
Proof.
The third equation follows from taking the dual of the inner (minimization) problem.
The last equation follows from seeing that the final maximization (over y and π simul-
taneously) is just the dual of (Q).
So, we have established that the optimal value z of (Q) is v(ŷ) . Looking a bit more
closely, we have established that z = ŷ 0 h + π̂ 0 b , and because π̂ 0 A ≤ c0 − ŷ 0 E , we have
that ŷ and π̂ are optimal for the dual of (Q) . t
u
Note that the conclusion of Theorem 7.5 gives us an optimal ŷ and π̂ for the dual of
(Q), but not an optimal x∗ for (Q) itself.
7.2. LAGRANGIAN RELAXATION 87
Theorem 7.6
Suppose that we fix ŷ , and solve for v(ŷ) . Let x̂ be the solution of (Lŷ ) . Let γ̂ :=
h − E x̂ . Then
v(ỹ) ≤ v(ŷ) + (ỹ − ŷ)0 γ̂ ,
for all ỹ in the domain of v .
Proof.
v(ŷ) + (ỹ − ŷ)0 γ̂ = ŷ 0 h + (c0 − ŷ 0 E)x̂ + (ỹ − ŷ)0 (h − E x̂)
= ỹ 0 h + (c0 − ỹ 0 E)x̂
≥ v(ỹ) .
The last equation follows from the fact that x̂ is feasible (but possible not optimal) for
(Lỹ ). t
u
Subgradient. What is v(ŷ)+(ỹ− ŷ)0 γ̂ ? It is a linear estimation of v(ỹ) starting from the
actual value of v at ŷ . The direction ỹ − ŷ is what we add to ŷ to move to ỹ . The choice
of γ̂ := h − E x̂ is made so that Theorem 7.6 holds. That is, γ̂ is chosen in such a way
that the linear estimation is always an upper bound on the value v(ỹ) of the function,
for all ỹ in the domain of f . The nice property of γ̂ demonstrated with Theorem 7.6
has a name: we say that γ̂ := h − E x̂ is a subgradient of (the concave function) v at ŷ
(because it satisfies the inequality of Theorem 7.6).
Convergence. We have neglected, thus far, to fully specify the Subgradient Optimiza-
tion Algorithm. We can stop if, at some iteration k , we have γ̂ k = 0, because the al-
gorithm will make no further progress if this happens, and indeed we will have found
that ŷ k is a maximizer of v . But this is actually very unlikely to happen. In practice, we
may stop if k reaches some pre-specified iteration limit, or if after many iterations, v is
barely increasing.
We are interested in mathematically analyzing the convergence behavior of the al-
gorithm, letting the algorithm iterate infinitely. We will see that the method converges
(in a certain sense), if we take a sequence P of λ2 k > 0 that in P∞ some sense slowly di-
verges; Specifically, we will require that ∞ λ
k=1 k < +∞ and k=1 λk = +∞ . That is,
“square summable, but not summable.” For example, taking λk := α/(β + k) , with
α > 0 and β ≥ 0 , we get a sequence of step sizes satisfying P this property; in par-
ticular, for αP= 1 and β = 0 we havePthe harmonic series ∞ k=1 1/k which satisfies
∞ ∞
ln(k + 1) < k=1 1/k < ln(k) + 1 and k=1 1/k = π /6 . 2 2
To prove convergence of the algorithm, we must first establish a key technical lemma.
Lemma 7.7
Let y ∗ be any maximizer of v . Suppose that λk > 0 , for all k . Then
k
X k
X
ky ∗ − ŷ k+1 k2 ≤ ky ∗ − ŷ 1 k2 − 2 λi v(y ∗ ) − v(ŷ i ) + λ2i kγ̂ i k2 .
i=1 i=1
Proof. The proof is by induction on k . The inequality is trivially true for the base case
of k = 0 . So, consider general k > 0 .
ky ∗ − ŷ k+1 k2 = k y ∗ − ŷ k − λk γ̂ k k2
0
= y ∗ − ŷ k − λk γ̂ k y ∗ − ŷ k − λk γ̂ k
0
= ky ∗ − ŷ k k2 + kλk γ̂ k k2 − 2λk y ∗ − ŷ k γ̂ k
≤ ky ∗ − ŷ k k2 + λ2k kγ̂ k k2 − 2λk v(y ∗ ) − v(ŷ k )
" k−1 k−1
#
X X
∗ 1 2 ∗ i 2 i 2
≤ ky − ŷ k − 2 λi v(y ) − v(ŷ ) + λi kγ̂ k
i=1 i=1
+ λ2k kγ̂ k k2 ∗
− 2λk v(y ) − v(ŷ ) k
.
The penultimate inequality uses the assumption that λk > 0 and the subgradient in-
equality:
v(ỹ) ≤ v(ŷ k ) + (ỹ − ŷ k )0 γ̂ k ,
plugging in y ∗ for ỹ. The final inequality uses the inductive hypothesis. The final ex-
pression easily simplifies to the right-hand side of the inequality in the statement of the
lemma. t
u
7.2. LAGRANGIAN RELAXATION 89
Now, let
vk∗ := maxki=1 v(ŷ i ) , for k = 1, 2, . . .
That is, vk∗ is the best value seen up through the k-th iteration.
Proof. Because the left-hand side of the inequality in the statement of Lemma 7.7 is
non-negative, we have
k
X k
X
∗
∗ 1 2
2 i
λi v(y ) − v(ŷ ) ≤ ky − ŷ k + λ2i kγ̂ i k2 .
i=1 i=1
k
! k
X X
∗
2 λi (v(y ) − vk∗ ) ∗ 1 2
≤ ky − ŷ k + λ2i kγ̂ i k2 ,
i=1 i=1
or P
∗ ky ∗ − ŷ 1 k2 + ki=1 λ2i kγ̂ i k2
v(y ) − vk∗ ≤ P .
2 ki=1 λi
Next, we observe that kγ̂ i k2 is bounded by some constant Γ , independent of i , because
our algorithm takes γ̂ = h−E x̂ , where x̂ is a basic solution of a Lagrangian subproblem.
There are only a finite number of bases. Therefore, we can take
Γ = max kh − E x̂k2 : x̂ is a basic solution of Ax = b , x ≥ 0 .
So, we have P
∗ ky ∗ − ŷ 1 k2 + Γ ki=1 λ2i
v(y ) − vk∗ ≤ P .
2 ki=1 λi
P
Now, we get our result by observing that ky ∗ − ŷ 1 k2 is a constant, ki=1 λ2i is converging
P
to a constant and ki=1 λi goes to +∞ (as k increases without limit), and so the right-
hand side of the final inequality converges to zero. The result follows. t
u
options = optimoptions('linprog','Algorithm','dual−simplex');
rng('default');
rng(1); % set seed
E = rand(m1,n); % random m1−by−n matrix
A = rand(m2,n); % random m2−by−n matrix
results1 = zeros(MAXIT,1);
results2 = zeros(MAXIT,1);
k = 0; % iteration counter
y = zeros(m1,1); % initialize y as you like
g = zeros(m1,1); % initialize g arbitrarily −−− it will essentially be ignored
stepsize = 0; % just to have us really start at the initialized y
bestlb = −Inf;
while k < MAXIT
k = k + 1; % increment the iteration counter
7.2. LAGRANGIAN RELAXATION 91
bestlb
pi = − dualsol.eqlin;
Total_Dual_Infeasibility = norm(min(c' − y'*E − pi'*A,zeros(1,n)))
Practical steps. Practically speaking, in order to get a ŷ with a reasonably high value
of v(ŷ) , it can be better to choose a sequence of λk that depends on a “good guess” of
the optimal value of v(ŷ), taking bigger steps when one is far away, and smaller steps
when one is close (try to develop this idea in Exercise 7.3). Then, the method is usually
stopped after a predetermined number of iterations or after progress becomes very
slow.
Dual estimation. From Theorem 7.5, we see that the Subgradient Optimization Method
is a way to try and quickly find estimates of an optimal solution to the dual of (Q). But
note that we give something up — we do not get an x∗ that solves (Q) from a ŷ that
maximizes v and a π̂ that is optimal for the dual of (Lŷ ) . There is no guarantee that a
x̂ that is optimal for (Lŷ ) will be feasible for (Q) . Moreover, we need for the Subgra-
92 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
dient Optimization Method to have nearly converged to certify that ŷ and π̂ are nearly
optimal for the dual of (Q).
The cutting-stock problem is a nice concrete topic at this point. We will develop a
technique for it, using column generation, but the context is different than for decom-
7.3. THE CUTTING-STOCK PROBLEM 93
position. Moreover, the topic is a nice segue into integer linear optimization — the topic
of the next chapter.
The story is as follows. We have stock rolls of some type of paper of (integer) width
W . But we encounter (integer) demand di for rolls of (integer) width wi < W , for
i = 1, 2, . . . , m . The cutting-stock problem is to find a plan for satisfying demand,
using as few stock rolls as possible.8
We endeavor to compute a basic optimum (x̄, t̄) . Because of the nature of the formula-
tion, we can see that dx̄e is feasible for (CSP). Moreover, we have produced a solution
using 10 dx̄e stock rolls, and we can give an a priori bound on its quality. Specifically,
as we will see in the next theorem, the solution that we obtain wastes at most m − 1
stock rolls, in comparison with an optimal solution. Moreover, we have a practically-
computable bound on the number of wasted rolls, which is no worse than the worst-
case bound of m − 1. That is, our waste is at worst 10 dx̄e − dze .
94 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
Theorem 7.9
We suppose that we have a feasible basis of (CSP) and that we have, at hand, the as-
sociated dual solution ȳ . For each i , 1 ≤ i ≤ m , the reduced cost of ti is simply ȳi .
Therefore, if ȳi < 0 , then ti is eligible to enter the basis.
So, moving forward, we may assume that ȳi ≥ 0 for all i . We now want to examine
the reduced cost of an xj variable. The reduced cost is simply
m
X
1 − ȳ 0 Aj = 1 − ȳi aij .
i=1
P
The variable xj is eligible to enter the basis then if 1 − m
i=1 ȳi aij < 0 . Therefore, to
check whether there is some column xj with negative reduced cost, we can solve the
so-called knapsack problem
Pm
max i=1 ȳi ai
Pm
i=1 wi ai ≤ W;
ai ≥ 0 integer, i = 1, . . . , m ,
and check whether the optimal value is greater than one. If it is, then the new variable
that we associate with this solution pattern (i.e., column of the constraint matrix) is
eligible to enter the basis.
7.3. THE CUTTING-STOCK PROBLEM 95
Our algorithmic approach for the knapsack problem is via recursive optimization
(known popularly as dynamic programming9 ). We will solve this problem for all positive
integers up through W . That is, we will solve
P
f (s) := max P m i=1 ȳi ai
m
i=1 wi ai ≤ s ;
ai ≥ 0 integer, i = 1, . . . , m ,
It is important to note that we can always calculate f (s) provided that we have already
calculated f (s0 ) for all s0 < s . Why does this work? It follows from a very simple
observation: If we have optimally filled a knapsack of capacity s and we remove any
item i, then what remains optimally fills a knapsack of capacity s − wi . If there were a
better way to fill the knapsack of capacity s−wi , then we could take such a way, replace
the item i , and we would have found a better way to fill a knapsack of capacity s . Of
course, we do not know even a single item that we can be sure is in an optimally filled
knapsack of capacity s , and this is why in the recursion, we maximize over all items
that can fit in (i.e., i : wi ≤ s).
The recursion appears to calculate the value of f (s) , but it is not immediate how to
recover optimal values of the ai . Actually, this is rather easy.
1. While (s > 0)
2. Return ai , for i = 1, . . . , m .
Note that in Step 1.a, there must be such an ı̂ , by virtue of the recursive formula for
calculating f (s) . In fact, if we like, we can save an appropriate ı̂ associated with each s
at the time that we calculate f (s) .
96 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
Basic solutions: dual and primal. At any iteration, the basis matrix B has some columns
corresponding to patterns and possibly other columns for ti variables. The column cor-
responding to ti is −ei .
Organizing the basic variables xj and ti into a vector ζ , with their order appropri-
ately matched with the columns of B , the vector ζ̄ of values of ζ is precisely the solution
of
Bζ = d .
That is,
ζ̄ = B −1 d .
The cost of an xj is 1, while the cost of a ti is 0. Organizing the costs of the basic
variables into a vector ξ , with their order appropriately matched with the columns of
B , the associated dual solution ȳ is precisely the solution of
y0B = ξ0 .
That is,
ȳ 0 = ξ 0 B −1 .
# −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
# KNAPSACK SUBPROBLEM FOR CUTTING STOCK
# −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
minimize Reduced_Cost:
1 − sum {i in WIDTHS} ybar[i] * a[i];
subj to Width_Limit:
sum {i in WIDTHS} w[i] * a[i] <= W;
110
5
70 205
40 2321
55 143
25 1089
35 117
reset;
option solution_round 6;
option solver_msg 0;
model csp.mod;
#data csp.dat;
98 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
let nPAT := 0;
for {i in WIDTHS} {
let nPAT := nPAT + 1;
let A[i,nPAT] := floor (W/w[i]);
let {i2 in WIDTHS: i2 <> i} A[i2,nPAT] := 0;
};
repeat {
solve Cutting_Opt > crap;
let {i in WIDTHS} ybar[i] := FillDemand[i].dual;
solve Pattern_Gen > crap;
if Reduced_Cost < −0.00001 then {
printf "\nImproving column generated. Reduced cost = %11.2e ", Reduced_Cost;
let nPAT := nPAT + 1;
let {i in WIDTHS} A[i,nPAT] := a[i];
}
else break;
};
print " "; print "LP solved! Solution: "; print " ";
display X;
display RollsUsed;
printf "\nLower bound = %11i (LP obj value rounded up) \n\n", ceil(RollsUsed);
option Cutting_Opt.relax_integrality 0;
option Cutting_Opt.presolve 10;
solve Cutting_Opt;
7.3. THE CUTTING-STOCK PROBLEM 99
X [*] :=
1 0
2 0
3 71.5
4 0
5 0
6 205
7 1033.38
8 18.5385
9 49.2308
;
RollsUsed = 1377.65
CUT 72 OF:
1 0
2 0
3 2
4 0
5 0
100 CHAPTER 7. LARGE-SCALE LINEAR OPTIMIZATION
CUT 19 OF:
1 0
2 0
3 0
4 3
5 1
CUT 50 OF:
1 0
2 1
3 0
4 0
5 2
CPLEX 12.2.0.0:
CUT 72 OF:
1 0
2 0
3 2
4 0
5 0
CUT 19 OF:
1 0
2 0
3 0
4 3
5 1
CUT 49 OF:
1 0
2 1
3 0
4 0
5 2
Our algorithm gives a lower bound of 1378 on the minimum number of stock rolls
needed to cover demand, and it gives us an upper bound (feasible solution) of 1380.
By solving a further integer-linear optimization problem to determine the best way to
cover demand using all patterns generated in the course of our algorithm, we improve
the upper bound to 1379. It remains unknown as to whether the optimal solution to
this instance is 1378 or 1379.
7.4 Exercises
Exercise 7.1 (Dual solutions)
Refer to (Q) and (M) defined in the Decomposition Theorem (i.e., Corollary 7.2) What
is the relationship between optimal dual solutions of (Q) and (M) ?
Also, Theorem 7.5, together with the fact that Subgradient Optimization converges
in the limit to a a solution of the Lagrangian Dual together, tells us that the ŷ and π̂
produced should be optimal for the dual of (Q). Check this out: see to what extent ŷ
and π̂ are optimal for the dual of (Q) (i.e., feasible and objective value near z).
Integer-Linear Optimization
• to learn the fundamentals of the ideas that most solvers employ to handle integer
variables;
103
104 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
for v ∈ N . A flow is conservative if the net flow out of node v , minus the net flow into
node v , is equal to the net supply at node v , for all nodes v ∈ N .
The (single-commodity min-cost) network-flow problem is to find a minimum-
cost conservative flow that is non-negative and respects the flow upper bounds on the
arcs. We can formulate this as follows:
X
min ce xe
e∈A
X X
xe − xe = bv , ∀ v ∈ N ;
e∈A : e∈A :
t(e)=v h(e)=v
0 ≤ xe ≤ ue , ∀e∈A.
As we have stated this, it is just a structured linear-optimization problem. But there are
many situations where the given net supplies at the nodes and the given flow capacities
on the arcs are integer, and we wish to constrain the flow variables to be integers.
We will see that it is useful to think of the network-flow problem in matrix-vector
language. We define the network matrix of G to be a matrix A having rows indexed
from N , columns indexed from A , and entries
1 , if v = t(e) ;
ave := −1 , if v = h(e);
0 , if v ∈
/ {t(e), h(e)},
A finite bipartite graph G is described by two finite sets of vertices V1 and V2 , and
a set E of ordered pairs of edges, each one of which is of the form (i, j) with i ∈ V1 and
j ∈ V2 . A perfect matching M of G is a subset of E such that each vertex of the graph
meets exactly one edge in M . We assume that there are given edge weights
and our goal is to find a perfect matching that has minimum (total) weight.
We can define
for all (i, j) ∈ E . Then we can model the problem of finding a perfect matching of G
having minimum weight via the formulation:
X
min cij xij
(i,j)∈E
X
xij = 1, ∀ i ∈ V1 ;
j∈V2 :
(i,j)∈E
X
xij = 1, ∀ j ∈ V2 ;
i∈V1 :
(i,j)∈E
for v ∈ V1 ∪V2 , (i, j) ∈ E . With this notation, and organizing the cij , xij and as column-
vectors indexed accordingly with the columns of A , we can rewrite the assignment-
problem formulation as
min c0 x
Ax = 1 ;
x ∈ {0, 1}E ,
106 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
As we have stated it, this staffing problem is really a very general type of integer-
linear-optimization problem because we have not restricted the form of A beyond it
being 0, 1-valued. In some situations, however, it may may be reasonable to assume
that shifts must consist of a consecutive set of time periods. In this case, the 1’s in each
column of A occur consecutively, so we call A a consecutive-ones matrix.
In this section we explore the essential properties of a constraint matrix so that basic
solutions are guaranteed to be integer. This has important implications for the network-
flow, assignment, and staffing problems that we introduced.
8.1. INTEGRALITY FOR FREE 107
Definition 8.1
Let A be an m × n real matrix. A basis matrix Aβ is unimodular if det(Aβ ) = ±1 .
Checking whether a large unstructured matrix has all of its basis matrices unimod-
ular is not a simple matter. Nonetheless, we will see that this property is very useful
for guaranteeing integer optimal of linear-optimization problems, and certain structured
constraint matrices have this property.
Theorem 8.2
If A is an integer matrix, all basis matrices of A are unimodular, and b is an integer
vector, then every basic solution x̄ of
Ax = b ;
x ≥ 0
is an integer vector.
Theorem 8.3
Let A be an integer matrix in Rm×n . If the system
Ax = b ;
x ≥ 0
has integer basic feasible solutions for every integer vector b ∈ Rm , then all basis
matrices of A are unimodular.
It is important to note that the hypothesis of Theorem 8.3 is weaker than the con-
clusion of Theorem 8.2. For Theorem 8.3, we only require integrality for basic feasible
solutions.
108 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
It is trivial to see that if A is TU, then every basis matrix of A is unimodular. But
note that even for integer A , every basis matrix of A could be unimodular, but A need
not be TU. For example,
2 1
1 1
has only itself as a basis matrix, and its determinant is 1, but there is a 1 × 1 submatrix
with determinant 2, so A is not TU. Still, as the next result indicates, there is a way to
get the TU property from unimodularity of basis matrices.
Theorem 8.5
If every basis matrix of [A, Im ] is unimodular, then A is TU.
columns of B , and then the m − r identity columns that have their ones in rows other
than those used by B . If we permute the rows of A so that B is within the first r rows,
then we can put the identity columns to the right, in their natural order, and the basis
we construct is !
B 0
H= .
× Im−r
Clearly B and H have the same determinant. Therefore, the fact that every basis matrix
has determinant 1 or −1 implies that B does as well. t
u
Next, we point out some simple transformations that preserve the TU property.
Theorem 8.6
If A is TU, then all of the following leave A TU.
(iii) appending standard-unit columns (that is, all entries equal to 0 except a single
entry of 1) ;
Remark 8.7
Relationship with transformations of linear-optimization problems. The significance
of Theorem 8.6 for linear-optimization problems can be understood via the following
observations:
• (i) allows for reversing the sense of an inequality (i.e., switching between “≤”
and “≥”) or variable (i.e., switching between non-negative and non-positive) in
a linear-optimization problem with constraint matrix A .
• (ii) together with (i) allows for replacing an equation with a pair of oppositely
senses inequalities and for replacing a sign-unrestricted variable with the differ-
ence of a pair of non-negative variables.
• (iii) allows for adding a non-negative slack variable for a “≤” inequality, to trans-
form it into an equation. Combining (iii) with (i) , we can similarly subtract a
non-negative surplus variable for a “≥” inequality, to transform it into an equa-
tion.
• (iv) allows for taking the dual of a linear-optimization problem with constraint
matrix A .
110 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
Theorem 8.8
If A is a network matrix, then A is TU.
Proof. A network matrix is simply a 0 , ±1-valued matrix with exactly one +1 and one
−1 in each column.
Let B be an r × r invertible submatrix of the network matrix A . We will demon-
strate that det(B) = ±1 , by induction on r . For the base case, r = 1 , the invertible
submatrices have a single entry which is ±1 , which of course has determinant ±1 .
Now suppose that r > 1 , and we inductively assume that all (r − 1) × (r − 1) invertible
submatrices of A have determinant ±1 .
Because we assume that B is invertible, it cannot have a column that is a zero-vector.
Moreover, it cannot be that every column of B has exactly one +1 and one −1 . Be-
cause, by simply adding up all the rows of B , we have a non-trivial linear combination
of the rows of B which yields the zero vector. Therefore, B is not invertible in this case.
So, we only need to consider the situation in which B has a column with a single
non-zero ±1 . By expanding the determinant along such a column, we see that, up to a
sign, the determinant of B is the same as the determinant of an (r−1)×(r−1) invertible
submatrix of A . By the inductive hypothesis, this is ±1 . t
u
Corollary 8.9
The (single-commodity min-cost) network-flow formulation
X
min ce xe
e∈A
X X
xe − xe = bv , ∀ v ∈ N ;
e∈A : e∈A :
t(e)=v h(e)=v
0 ≤ xe ≤ ue , ∀e∈A.
has an integer optimal solution if: (i) it has an optimal solution, (ii) each bv is an
integer, and (iii) each ue is an integer or is infinite.
min c0 x
Ax = b ;
x ≤ u;
x ≥ 0,
8.1. INTEGRALITY FOR FREE 111
where A is a network matrix. For the purpose of proving the theorem, we may as well
assume that the linear-optimization problem has an optimal solution. Next, we trans-
form the formulation into standard form:
min c0 x
Ax = b;
x + s = u;
x , s ≥ 0.
A 0
The constraint matrix has the form . This matrix is TU, by virtue of the fact
I I
that A is TU, and that it arises from A using operations that preserve the TU property.
Finally, we delete any redundant equations from this system of equations, and we delete
any rows that have infinite right-hand side ue . The resulting constraint matrix is TU,
and the right-hand side is integer, so an optimal basic solution exists and will be integer.
t
u
Assignments.
Theorem 8.10
If A is the vertex-edge incidence matrix of a bipartite graph, then A is TU.
Proof. The constraint matrix A for the formulation has its rows indexed by the vertices
of G . With each edge having exactly one vertex in V1 and exactly one vertex in V2 , the
constraint matrix has the property that for each column, the only non-zeros are a single
1 in a row indexed from V1 and a single 1 in a row indexed from V2 .
Certainly multiplying any rows (or columns) of a matrix does not bear upon whether
or not it is TU. It is easy to see that by multiplying the rows of A indexed from V1 , we
obtain a network matrix, thus by Theorem 8.8, the result follows. t
u
Corollary 8.11
The continuous relaxation of the following formulation for finding a minimum-weight
perfect matching of the bipartite graph G has an 0, 1-valued solution whenever it is
feasible. X
min cij xij
(i,j)∈E
X
xij = 1 , ∀ i ∈ V1 ;
j∈V2 :
(i,j)∈E
X
xij = 1 , ∀ j ∈ V2 ;
i∈V1 :
(i,j)∈E
xij ≥ 0 , ∀ (i, j) ∈ E .
112 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
Proof. After deleting any redundant equations, the resulting formulation as a TU con-
straint matrix and integer right-hand side. Therefore, its basic solutions are all integer.
The constraints imply that no variable can be greater than 1, therefore the optimal value
is not unbounded, and the only integer solutions have all xij ∈ {0, 1} . The result fol-
lows. t
u
Proof. We can formulate the problem of finding the maximum cardinality of a matching
of G as follows:
X
max xij
(i,j)∈E
X
xij ≤ 1 , ∀ i ∈ V1 ;
j∈V2 :
(i,j)∈E
X
xij ≤ 1 , ∀ j ∈ V2 ;
i∈V1 :
(i,j)∈E
yi + yj ≥ 1 , ∀ (i, j) ∈ E ;
yv ≥ 0 , ∀ v ∈ V .
It is easy to see that after putting this into standard form via the subtraction of sur-
plus variables, the constraint matrix has the form [A0 , −I] , where A is the vertex-edge
incidence matrix of G . This matrix is TU, therefore an optimal integer solution exists.
8.1. INTEGRALITY FOR FREE 113
Next, we observe that because of the minimization objective and the form of the
constraints, an optimal integer solution will be 0, 1-valued; just observe that if ȳ is an
integer feasible solution and ȳv > 1 , for some v ∈ V , then decreasing ȳv to 1 (holding
the other components of ȳ constant, produces another integer feasible solution with a
lesser objective value. This implies that every integer feasible solution ȳ with any ȳv > 1
is not optimal.
Next, let ŷ be an optimal 0, 1-valued solution. Let
W := {v ∈ V : ŷv = 1} .
P
It is easy to see that W is a vertex cover of G and that |W | = v∈V ŷv . The result now
follows from the strong duality theorem. t
u
Staffing.
Theorem 8.13
If A is a consecutive-ones matrix, then A is TU.
where F and G are the submatrices indicated. Note that F and G are each consecutive-
ones matrices.
Next, we subtract the top row from all other rows that have a 1 in the first column.
Such row operations do not change the determinant of B , and we get a matrix of the
form
1 1···1 0···0
0 0···0
.. .. ..
. . .
0 0···0 .
0
.
G
.
0
.
F
Note that this resulting matrix need not be a consecutive-ones matrix — but that is
not needed. By expanding the determinant of this latter matrix along the first column,
we see that the determinant of this matrix is the same as that of the matrix obtained by
striking out its first row and column,
0···0
.. ..
. .
0···0
.
G
F
Corollary 8.14
Let A be a shift matrix such that each shift is a contiguous set of time periods, let c be
a vector of non-negative costs, and let b be a vector of non-negative integer demands
for workers in the time periods. Then there is an optimal solution x̄ of the continuous
relaxation
min c0 x
Ax ≥ b ;
x ≥ 0
of the staffing formulation that has x̄ integer, whenever the relaxation is feasible.
Proof. A is a consecutive-ones matrix when each shift is a contiguous set of time periods.
Therefore A is TU. After subtracting surplus variables to put the problem into standard
form, the constraint matrix takes the form [A, −I] , which is also T U . The result follows.
t
u
8.2.1 Disjunctions
Example 8.15
Suppose that we have a single variable x ∈ R , and we want to model the disjunction
−12 ≤ x ≤ 2 or 5 ≤ x ≤ 20 .
By introducing a binary variable y ∈ {0, 1} , we can model the disjunction as
x ≤ 2 + M1 y ,
x + M2 (1 − y) ≥ 5 ,
where the constant scalars M1 and M2 (so-called big M’s) are chosen to be appropri-
ately large. A little analysis tell us how large. Considering our assumption that x could
be as large as 20 , we see that M1 should be at least 18 . Considering our assumption
that x could be as small as −12 , we see that M2 should be at least 17 . In fact, we should
choose these constants to be as small as possible so as make the feasible region with
y ∈ {0, 1} relaxed to 0 ≤ y ≤ 1 as small as possible. So, the best model for us is:
x ≤ 2 + 18y ,
x + 17(1 − y) ≥ 5 .
It is interesting to see a two-dimensional graph of this in x − y space; see Figures 8.1
and 8.2.
cij := cost for satisfying all of customer j’s demand from facility i ,
the total cost. The problem is “uncapacitated” in the sense that each facility has no limit
on its ability to satisfy demand from even all customers.
We formulate this optimization problem with
for i = 1, . . . , m , and
for i = 1, . . . , m , j = 1, . . . , n .
Our formulation is as follows:
Pm Pm Pn
min i=1 fi yi + i=1 j=1 cij xij
Pm
i=1 xij = 1, for j = 1, . . . , n ;
for i = 1, . . . , m ,
−yi + xij ≤ 0,
j = 1, . . . , n ;
for i = 1, . . . , m ,
xij ≥ 0 ,
j = 1, . . . , n .
These constraints simply enforce that for any feasible solution x̂, ŷ , we have that ŷi = 1
whenever x̂ij > 0 . It is an interesting point that this could also be enforced via the m
constraints:
Xn
−nyi + xij ≤ 0 , for i = 1, . . . , m . (W)
j=1
We can view the coefficient −n of yi as a “big M”, rendering the constraint vacuous
when yi = 1 .
Despite the apparent parsimony of the latter formulation, it turns out that the orig-
inal formulation is preferred.
This adjacency condition means that we “activate” the interval [ξ j , ξ j+1 ] for approxi-
mating f (x) . That is, we will approximate f (x) by
λj f (ξ j ) + λj+1 f (ξ j+1 ) ,
with
λj + λj+1 = 1 ;
λj , λj+1 ≥ 0 .
for j = 1, 2, . . . , n − 1 .
The situation is depicted in Figure 8.3, where the red curve graphs the non-linear
function f .
We only want to allow one of the n − 1 intervals to be activated, so we use the
constraint
n−1
X
yj = 1 .
j=1
We only want to allow λ1 > 0 if the first interval [ξ 1 , ξ 2 ] is activated. For an internal
breakpoint ξ j , 1 < j < n , we only want to allow λj > 0 if either [ξ j−1 , ξ j ] or [ξ j , ξ j+1 ]
is activated. We only want to allow λn > 0 if the last interval [ξ n−1 , ξ n ] is activated. We
can accomplish these restrictions with the constraints
λ1 ≤ y1 ;
λj ≤ yj−1 + yj , for j = 2, . . . , n − 1 ;
λn ≤ yn−1 .
Notice how if yk is 1 , for some k (1 ≤ k ≤ n), and necessarily all of the other yj are
0 (j 6= k), then only λk and λk+1 can be positive.
How do we actually use this? If we have a model involving such
P a non-linear f (x),
then wherever we have f (x) in the model, we simply substitute nj=1 λj f (ξ j ) , and we
120 CHAPTER 8. INTEGER-LINEAR OPTIMIZATION
value for n , so as to get an accurate approximation. And higher values for n imply
more binary variables yj , which come at a high computational cost.
8.3. A PRELUDE TO ALGORITHMS 121
8.4 Branch-and-Bound
• If ȳi is integer for all i ∈ I , then we have solved the selected problem. In this case,
if ȳ 0 b > LB , then we
– reset LB to ȳ 0 b ;
– reset ȳLB to ȳ .
• Finally, if ȳ 0 b > LB and ȳi is not integer for all i ∈ I , then (it is possible that this
selected problem has a feasible solution that is better than ȳLB , so) we
– place two new “child” problems on the list, one with the constraint yi ≤ bȳi c
appended (the so-called down branch), and the other with the constraint
yi ≥ dȳi e appended (the so-called up branch).
(observe that every feasible solution to a parent is feasible for one of its chil-
dren, if it has children.)
Theorem 8.18
Suppose that the original (P) is feasible. Then at termination of branch-and-bound,
we have LB= −∞ if (DI ) is infeasible or with ȳLB being an optimal solution of (DI ).
Finite termination. If the feasible region of the continuous relaxation (D) of (DI ) is
a bounded set, then we can guarantee finite termination. If we do not want to make
such an assumption, then if we assume that the data for the formulation is rational, it is
possible to bound the region that needs to be searched, and we can again assure finite
termination.
Solving continuous relaxations. Some remarks are in order regarding the solution
of continuous relaxations. Conceptually, we apply the Simplex Algorithm to the dual
(P̃) of the continuous relaxation (D̃) of a problem (D̃I ) selected and removed from
the list. At the outset, for an optimal basis β of (P̃), the optimal dual solution is given
by ȳ 0 := c0β A−1
β . If i ∈ I is chosen, such that ȳi is not an integer, then we replace the
selected problem (D̃I ) with one child having the additional constraint yi ≤ bȳi c (the
down branch) and another with the constraint yi ≥ dȳi e appended (the up branch).
Adding a constraint to (D̃) adds a variable to the standard-form problem (P̃). So, a
basis for (P̃) remains feasible after we introduce such a variable.
• The down branch: The constraint yi ≤ bȳi c , dualizes to a new variable xdown in
(P̃). The variable xdown has a new column Adown := ei and a cost coefficient of
cdown := bȳi c . Notice that the fact that ȳi is not an integer (and hence ȳ violates
yi ≤ bȳi c) translates into the fact that the reduced cost c̄down of xdown is c̄down =
cdown − ȳ 0 Adown = bȳi c − ȳi < 0 , so xdown is eligible to enter the basis.
In either case, provided that we have kept the optimal basis for the (P̃) associated
with a problem (D̃ ), the Simplex Algorithm picks up on the (P̃) ˜ associated with a
I
˜ ) of that problem , with the new variable of the child’s (P̃)
child (D̃ ˜ entering the basis.
I
Notice that the (P̃) associated with a problem (D̃I ) on the list could be unbounded.
But this just implies that the problem (D̃I ) is infeasible.
Partially solving continuous relaxations. Notice that as the Simplex Algorithm is ap-
plied to the (P̃) associated with any problem (D̃I ) from the list, we generate a sequence
of non-increasing objective values, each one of which is an upperbound on the optimal
objective value of (D̃I ). That is, for any such (P̃), we start with the upperbound value
of its parent, and then we gradually decrease it, step-by-step of the Simplex Algorithm.
At any point in this process, if the objective value of the Simplex Algorithm falls at or
below the current LB, we can immediately terminate the Simplex Algorithm on such a
(P̃) — its optimal objective value will be no greater than LB — and conclude that the
optimal objective value of (D̃I ) is no greater than LB.
A global upper bound. As the algorithm progresses, if we let UBbetter be the maximum,
over all problems on the list, of the objective value of the continuous relaxations, then
any feasible solution ŷ with objective value greater than that LB satisfies ŷ 0 b ≤ UBbetter .
Of course, it may be that no optimal solution is feasible to any problem on the list —
for example if it happens that LB = z . But we can see that
It may be useful to have UB at hand, because we can always stop the computation
early, say when UB − LB < τ , returning the feasible solution ȳLB , with the knowledge
0 b ≤ τ . But notice that we do not readily have the objective value of the con-
that z − ȳLB
tinuous relaxation for problems on the list — we only solve the continuous relaxation
for such a problem after it is selected (for processing). But, for every problem on the
list, we can simply keep track of the optimal objective value of its parent’s continuous
relaxation, and use that instead. Alternatively, we can re-organize our computations a
bit, solving continuous relaxations of subproblems before we put them on the list.
Selecting a subproblem from the list. Which subproblem from the list should we
process next?
• A strategy of last-in/first-out, known as diving, often results in good increases in
LB. To completely specify such a strategy, one would have to decide which of the
two children of a subproblem is put on the list last (i.e., the down branch or the
up branch). A good choice can affect the performance of this rule, and such a
good choice depends on the type of model being solved.
seeking a decrease in UB. If such a rule is desired, then it is best to solve continuous
relaxations of subproblems before we put them on the list.
A hybrid strategy, doing mostly diving at the start (to get a reasonable value of LB)
and shifting more and more to best bound (to work on proving that LB is at or near the
optimal value) has rather robust performance.
Selecting a branching variable. Probably very many times, we will need to choose
an i ∈ I for which ȳi is fractional, in order to branch and create the child subproblems.
Which such i should we choose? Naïve rules such as choosing randomly or the so-called
most fractional rule of choosing an i that maximizes min{ȳi − bȳi c , dȳi e − ȳi } seem to
have rather poor performance. Better rules are based on estimates of how the objective
value of the children will change relative to the parent.
Using dual variables to bound the “other side” of an inequality. Our constraint
system y 0 A ≤ c0 can be viewed as y 0 Aj ≤ cj , for j = 1, 2, . . . , n ; that is, cj is an upper
bound on y 0 Aj . We may wonder if we can also derive lower bounds on y 0 Aj .
Theorem 8.19
Let LB be the objective value of any feasible solution of (DI ). Let x̄ be an optimal
solution of (P), and assume that x̄j > 0 for some j . Then
LB − c0 x̄
cj + ≤ y 0 Aj
x̄j
z(∆j ) := max y 0 b
y 0 A ≤ c0 + ∆j e0j ;
(DI (∆j ))
y ∈ Rm ;
yi integer, for i ∈ I .
Let zR (∆j ) be defined the same way as z(∆j ), but with integrality relaxed. Using ideas
from Chapters 6 and 7, we can see that zR is a concave (piecewise-linear) function on
its domain, and x̄j is a subgradient of zR at ∆j = 0 . It follows that
8.5.1 Pure
In this section, we assume that all yi variables are constrained to be integer. That is,
I = {1, 2, . . . , m}
We can choose any non-negative w ∈ Rn , and we see that
w ≥ 0 and y 0 A ≤ c0 =⇒ y 0 (Aw) ≤ c0 w .
Note that this inequality is valid for all solutions of y 0 A ≤ c0 , integer or not. Next, if Aw
is integer, we can exploit the integrality of y . We see that
Aw ∈ Zm , y ∈ Zm =⇒ y 0 (Aw) ≤ bc0 wc ,
Aw ∈ Zm by then just choosing w ∈ Zn . In fact, for the remained of this section, we will
assume that A and c are integer.
Of course, it is by no means clear how to choose appropriate w, and this is critical
for getting useful inequalities. We should also bear in mind that there are examples
for which Chvátal-Gomory are rather ineffectual. Trying to apply such cuts to Example
8.16 reveals that infeasible integer points can “guard” Chvátal-Gomory cuts from getting
close to any feasible integer points.
We would like to develop a concrete algorithmic scheme for generating Chvátal-
Gomory cuts. We will do this via basic solutions. Let β be any basis for P. The asso-
ciated dual basic solution (for the continuous relaxation (D)) is ȳ 0 := c0β A−1 β . Suppose
that ȳi is not an integer. Our goal is to derive a valid cut for (DI ) that is violated by ȳ.
8.5. CUTTING PLANES 127
Let
b̃ := ei + Aβ r,
where r ∈ Zm , and, as usual, ei denotes the i-th standard unit vector in Rm . Note that
by construction, b̃ ∈ Zm .
Theorem 8.21
ȳ'b̃ is not an integer, and so y'b̃ ≤ ⌊ȳ'b̃⌋ cuts off ȳ.
At this point, we have an inequality y'b̃ ≤ ⌊ȳ'b̃⌋ which cuts off ȳ, but we have not
established its validity for (DI).
Let H·i := Aβ⁻¹ei , the i-th column of Aβ⁻¹. Now let

w := H·i + r .

We will choose r ∈ Zm so that

w = H·i + r ≥ 0 ,    (∗≥)

and, in particular, we may take the componentwise-minimal such choice,

r := ⌈−H·i⌉ .    (∗=)
Theorem 8.22
Choosing r ∈ Zm satisfying (∗≥), we have that y'b̃ ≤ ⌊ȳ'b̃⌋ is valid for (DI).
Proof. With r satisfying (∗≥), we have w = H·i + r ≥ 0, and so the inequalities y'Aβ ≤ cβ' give

y'Aβ(Aβ⁻¹ei + r) ≤ cβ'(Aβ⁻¹ei + r) ,

even for the continuous relaxation (D) of (DI). Simplifying this, we have that y'b̃ ≤ ȳ'b̃ is valid even for (D). Finally, observing that b̃ ∈ Zm and y is
constrained to be in Zm for (DI), we can round down the right-hand side and get the
result. □
So, given any non-integer basic dual solution ȳ, we have a way to produce a valid
inequality for (DI) that cuts it off. This cut for (DI) is used as a column for (P): the
column is b̃ with objective coefficient ⌊ȳ'b̃⌋. Taking β to be an optimal basis for (P), the
new variable corresponding to this column is the unique variable eligible to enter the
basis in the context of the primal simplex algorithm applied to (P) — the reduced cost
is precisely

⌊ȳ'b̃⌋ − ȳ'b̃ < 0 .

The new column for A is b̃ which is integer. The new objective coefficient for c is ⌊ȳ'b̃⌋
which is an integer. So the original assumption that A and c are integer is maintained,
and we can repeat. In this way, we get a legitimate cutting-plane framework for (DI) —
though we emphasize that we do our computations as column generation with respect
to (P).
There is clearly a lot of flexibility in how r can be chosen. Next, we demonstrate that
in a very concrete sense, it is always best to choose a minimal r ∈ Zm satisfying (∗≥).
Theorem 8.23
Let r ∈ Zm be defined by (∗=), and suppose that r̂ ∈ Zm satisfies (∗≥) and r ≤ r̂. Then the cut determined by r
dominates the cut determined by r̂.
Proof. Noting that cβ' − y'Aβ ≥ 0 for all y that are feasible for (D), we see that the strongest
inequality is obtained by choosing r ∈ Zm to be minimal. □
Example 8.24
Let
A = [ 7 8 −1 1 3 ; 5 6 −1 2 1 ] ,   b = (26, 19)' ,

and c' = (126, 141, −10, 5, 67) .
So, the integer program (DI) which we seek to solve is defined by five inequalities in
the two variables y1 and y2 . For the basis of (P), β = (1, 2), we have

Aβ = [ 7 8 ; 5 6 ] ,  and hence  Aβ⁻¹ = [ 3 −4 ; −5/2 7/2 ] ,

and for the non-basis η = {3, 4, 5}, we have c̄η' = (5, 1/2, 1), which are all non-
negative, and so this basis is optimal for (P). The associated dual basic solution is
ȳ' = (51/2, −21/2), and the objective value is z = 463 1/2.
Because both ȳ1 and ȳ2 are not integer, we can derive a cut for (DI) from either.
Recalling the procedure, for any fractional ȳi , we start with the i-th column H·i of H :=
Aβ⁻¹ , and we get a new A·j := ei + Aβ r. Throughout we will choose r via (∗=). So we
have

H·1 = (3, −5/2)'  ⇒  r = (−3, 3)'  ⇒  b̃ = (1, 0)' + [ 7 8 ; 5 6 ] (−3, 3)' = (4, 3)' =: A·6 ;
H·2 = (−4, 7/2)'  ⇒  r = (4, −3)'  ⇒  b̃ = (0, 1)' + [ 7 8 ; 5 6 ] (4, −3)' = (4, 3)' .
In fact, for this iteration of this example, we get the same cut for either choice of i. To
calculate the right-hand side of the cut, we have

ȳ'b̃ = (51/2, −21/2) (4, 3)' = 70 1/2 ,

so the cut is y'b̃ ≤ 70. Appending b̃ as the column A·6 of (P), with objective coefficient
⌊ȳ'b̃⌋ = 70, this new column enters the basis (index 2 leaves), and we arrive at the basis
β = (1, 6), with objective value 462, a decrease. At this point, index 5 has a negative reduced cost,
and index 1 leaves the basis. So we now have β = (5, 6), which turns out to be optimal
for the current (P). We have

ȳ' = (131/5, −58/5) , and the objective value is z = 460 4/5 .
We observe that the objective function has decreased, but unfortunately both ȳ1 and ȳ2
are not integers. So we must continue. We have

Aβ = [ 3 4 ; 1 3 ] ,  and hence  Aβ⁻¹ = [ 3/5 −4/5 ; −1/5 3/5 ] .

Because both ȳ1 and ȳ2 are not integers, we can again derive a cut for (DI) from either. We calculate

H·1 = (3/5, −1/5)'  ⇒  r = (0, 1)'  ⇒  b̃ = (1, 0)' + [ 3 4 ; 1 3 ] (0, 1)' = (5, 3)' =: A·7 ;
H·2 = (−4/5, 3/5)'  ⇒  r = (1, 0)'  ⇒  b̃ = (0, 1)' + [ 3 4 ; 1 3 ] (1, 0)' = (3, 2)' =: A·8 .
Choosing to incorporate both as columns for (P), and letting index 8 enter the basis,
index 5 leaves (according to the ratio test), and it turns out that we reach an optimal
basis β = (8, 6) after this single pivot. At this point, we have
ȳ' = (25, −10) , and the objective value is z = 460.
Not only has the objective decreased, but now all of the ȳi are integers, so we have an
optimal solution for (DI ).
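The calculations of Example 8.24 are easy to script. Here is a minimal MATLAB sketch (separate from the script pure_gomory.m of Appendix A.3) that reproduces the first cut; we read (∗=) as the componentwise choice r = ⌈−H·i⌉, which is what the calculations above use.

% One pure Gomory cut, produced as a column for (P), for the data of Example 8.24.
A = sym([7 8 -1 1 3; 5 6 -1 2 1]);
c = sym([126 141 -10 5 67]');
beta = [1 2];                          % optimal basis for (P)
A_beta = A(:,beta);
ybar = linsolve(A_beta', c(beta));     % dual basic solution: (51/2, -21/2)'
H = inv(A_beta);                       % H(:,i) is the i-th column of inv(A_beta)
i = 1;                                 % ybar(1) is fractional
r = ceil(-H(:,i));                     % componentwise-minimal r with H(:,i) + r >= 0
ei = sym(zeros(2,1)); ei(i) = 1;
btilde = ei + A_beta*r                 % new column for A: (4, 3)'
newcost = floor(ybar'*btilde)          % new cost coefficient: floor(141/2) = 70

Appending btilde and newcost to (P) and re-optimizing (for example, with the pivot scripts of Appendix A.3) then continues the iterations above.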
8.5.2 Mixed
In this section, we no longer assume that all yi variables are constrained to be integer.
That is, we only assume that I is non-empty and I ⊂ {1, 2, . . . , m}. The cuts from the previous
section cannot be guaranteed to be valid, so we start anew.
Let β be any basis partition for (P), and let ȳ be the associated dual basic solution.
Suppose that ȳi ∉ Z, for some i ∈ I. We aim to find a cut, valid for (DI) and violated
by ȳ.
Let

b̃¹ := ei + Aβ r ,

and let w¹ be the basic solution associated with the basis β and the “right-hand side” b̃¹ ;
that is, w¹β = h·i + r and w¹η = 0, where h·i denotes the i-th column of Aβ⁻¹. By choosing
r ≥ −h·i , we can make w¹ ≥ 0. Moreover, c'w¹ = cβ'(h·i + r) = cβ'h·i + cβ'r = ȳi + cβ'r ,
so because we assume that ȳi ∉ Z, choosing r ∈ Zm we have that c'w¹ ∉ Z.
Next, let

b̃² := Aβ r .

Let w² be the basic solution associated with the basis β and the “right-hand side” b̃² .
So, now further choosing r ≥ 0, we have w²β = r ≥ 0, w²η = 0, and c'w² = cβ'r .
So, we choose r ∈ Zm so that

r ≥ −h·i  and  r ≥ 0 .    (∗≥+)

As in the pure case, w¹ ≥ 0 and w² ≥ 0 give us the valid inequalities

y'b̃¹ ≤ c'w¹    (I1)

and

y'b̃² ≤ c'w² .    (I2)

Letting αj' denote the j-th row of Aβ , and letting z := Σj≠i (αj'r) yj , we can rewrite (I1) and (I2),
in the variables yi and z, as

(1 + αi'r) yi + z ≤ ȳi + ȳ'Aβ r ,    with slope −1/(1 + αi'r) ;    (B1)
(αi'r) yi + z ≤ ȳ'Aβ r ,    with slope −1/(αi'r) .    (B2)

Note that the intersection point (yi∗, z∗) of the lines associated with these inequalities
(subtract the second equation from the first) has yi∗ = ȳi and z∗ = Σj≠i (αj'r) ȳj . Also,
the “slopes” indicated regard yi as the ordinate and z as the abscissa.
Bearing in mind that we choose r ∈ Zm and that A is assumed to be integer, we have
that αi'r ∈ Z. There are now two cases to consider:
• αi'r ≥ 0, in which case the first line has negative slope and the second line has
more negative slope (or infinite slope, when αi'r = 0);
• αi'r ≤ −1, in which case the second line has positive slope and the first line has
more positive slope (or infinite slope, when αi'r = −1).
Let (z¹, yi¹) be the point on the line of (B1) with yi¹ = ⌊ȳi⌋ + 1, and let (z², yi²) be the
point on the line of (B2) with yi² = ⌊ȳi⌋. Subtracting, we have

z¹ − z² = (ȳi − ⌊ȳi⌋) − (1 + αi'r) ,

where the first term is in (0, 1) and the second term is an integer, so we see that:
z¹ < z² precisely when αi'r ≥ 0; z² < z¹ precisely when αi'r ≤ −1.
Moreover, the slope of the line through the pair of points (z¹, yi¹) and (z², yi²) is just

1/(z¹ − z²) = 1/((ȳi − ⌊ȳi⌋) − (1 + αi'r)) .
[Figure: the lines (B1) and (B2), the points (z¹, yi¹) and (z², yi²), and the cut (F-BMI), with yi as the ordinate and z as the abscissa.]
The cut that we seek is the inequality whose boundary is the line through these two points:

(1 + αi'r − (ȳi − ⌊ȳi⌋)) yi + z ≤ ȳ'Aβ r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ .    (F-BMI)
Lemma 8.25
(F-BMI) is satisfied at equality by both of the points (z¹, yi¹) and (z², yi²).
Lemma 8.26
(F-BMI) is valid for

{ (yi, z) ∈ R² : (B1), yi ≥ ⌈ȳi⌉ } ∪ { (yi, z) ∈ R² : (B2), yi ≤ ⌊ȳi⌋ } .
Lemma 8.27
(F-BMI) is violated by the point (yi∗, z∗).
Proof. Plugging (yi∗, z∗) into (F-BMI), and making some if-and-only-if manipulations,
we obtain

(ȳi − ⌊ȳi⌋ − 1) (ȳi − ⌊ȳi⌋) ≥ 0 ,

which is not satisfied. □
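As a quick numeric check of Lemmas 8.25–8.27, here is a minimal MATLAB sketch with hypothetical values for the quantities involved (ȳi = 5/2, αi'r = 2, ȳ'Aβ r = 10).

% Both points satisfy (F-BMI) at equality; the intersection point violates it.
ybari = 5/2;  air = 2;  yAbr = 10;      % hypothetical ybar_i, alpha_i'r, ybar'A_beta*r
f = ybari - floor(ybari);               % fractional part, here 1/2
lhs = @(yi,z) (1 + air - f)*yi + z;     % left-hand side of (F-BMI)
rhs = yAbr - (f - 1)*floor(ybari);      % right-hand side of (F-BMI), here 11
y1 = floor(ybari) + 1;  z1 = ybari + yAbr - (1 + air)*y1;   % point on the line of (B1)
y2 = floor(ybari);      z2 = yAbr - air*y2;                 % point on the line of (B2)
[lhs(y1,z1), lhs(y2,z2), lhs(ybari, yAbr - air*ybari)] - rhs   % gives [0 0 0.25]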
Next, we express the cut in the space of the y variables. Substituting z = y'Aβ r − (αi'r) yi
into (F-BMI) and simplifying, we obtain

−(ȳi − ⌊ȳi⌋ − 1) yi + y'Aβ r ≤ ȳ'Aβ r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ ,

which finally has the convenient form

y'(Aβ r − (ȳi − ⌊ȳi⌋ − 1) ei) ≤ cβ' r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ .    (F-GMI)
We immediately have:
Theorem 8.28
(F-GMI) is violated by the point ȳ.
Finally, we have:
Theorem 8.29
(F-GMI) is valid for the following relaxation of the feasible region of (DI):

{ y ∈ Rm : y'Aβ ≤ cβ' , yi ≤ ⌊ȳi⌋ } ∪ { y ∈ Rm : y'Aβ ≤ cβ' , yi ≥ ⌊ȳi⌋ + 1 } .
Proof. The proof, maybe obvious, is by a simple disjunctive argument. We will argue
that (F-GMI) is valid for both S1 := {y ∈ Rm : y'Aβ ≤ cβ' , −yi ≤ −⌊ȳi⌋ − 1} and
S2 := {y ∈ Rm : y'Aβ ≤ cβ' , yi ≤ ⌊ȳi⌋}.
The inequality (F-BMI) is simply the sum of (B1) and the scalar ȳi − ⌊ȳi⌋ times
−yi ≤ −⌊ȳi⌋ − 1. It follows then that taking (I1) plus ȳi − ⌊ȳi⌋ times −yi ≤ −⌊ȳi⌋ − 1,
we get an inequality equivalent to (F-GMI).
Similarly, it is easy to check that the inequality (F-BMI) is simply (B2) plus 1 − (ȳi −
⌊ȳi⌋) times yi ≤ ⌊ȳi⌋. It follows then that taking (I2) plus 1 − (ȳi − ⌊ȳi⌋) times yi ≤ ⌊ȳi⌋,
we also get an inequality equivalent to (F-GMI). □
In our algorithm, we append columns to (P), rather than cuts to (D). The column
for (P) corresponding to (F-GMI) is

Aβ r − (ȳi − ⌊ȳi⌋ − 1) ei ,

and the associated cost coefficient is

cβ' r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ .

So Aβ⁻¹ times the column is

r − (ȳi − ⌊ȳi⌋ − 1) h·i .
Agreeing with what we calculated in Lemma 8.27, we have the following result.
Proposition 8.30
The reduced cost of the column for (P) corresponding to (F-GMI) is

(ȳi − ⌊ȳi⌋ − 1) (ȳi − ⌊ȳi⌋) < 0 .

Proof.

cβ' r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ − cβ'(r − (ȳi − ⌊ȳi⌋ − 1) h·i)
= (ȳi − ⌊ȳi⌋ − 1) (cβ'h·i − ⌊ȳi⌋)
= (ȳi − ⌊ȳi⌋ − 1) (ȳi − ⌊ȳi⌋) . □
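Correspondingly, here is a minimal MATLAB sketch (separate from the script mixed_gomory.m of Appendix A.3) that builds the (F-GMI) column and cost coefficient for (P). The data below is hypothetical (borrowing the numbers of Example 8.24), and the componentwise choice of r is our reading of a minimal r satisfying (∗≥+).

% The column for (P) corresponding to (F-GMI), for a given basis and fractional index i.
A = sym([7 8 -1 1 3; 5 6 -1 2 1]);      % hypothetical data
c = sym([126 141 -10 5 67]');
beta = [1 2];  i = 1;                    % basis, and an index i in I with ybar(i) fractional
A_beta = A(:,beta);
ybar = linsolve(A_beta', c(beta));       % dual basic solution
h = inv(A_beta);                         % h(:,i) is the i-th column of inv(A_beta)
r = ceil(-h(:,i));
r = (r + abs(r))/2;                      % i.e., r = max(0, ceil(-h(:,i))), componentwise
f = ybar(i) - floor(ybar(i));            % fractional part of ybar(i)
ei = sym(zeros(2,1)); ei(i) = 1;
newcol  = A_beta*r - (f - 1)*ei          % column for (P)
newcost = c(beta)'*r - (f - 1)*floor(ybar(i))   % associated cost coefficient
redcost = newcost - ybar'*newcol         % equals (f - 1)*f < 0, as in Proposition 8.30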
Theorem 8.31
Let r ∈ Zm be the componentwise-minimal vector satisfying (∗≥+), and suppose that r̂ ∈ Zm satisfies (∗≥+) and r ≤ r̂. Then the cut determined by r
dominates the cut determined by r̂.
Proof. Observing that cβ' − y'Aβ ≥ 0 for y that are feasible for (D), we see that the tightest
inequality of this type, satisfying (∗≥+), arises by choosing a minimal r. The result
follows. □
8.5.4 Branch-and-Cut
State-of-the-art algorithms for (mixed-)integer linear optimization combine cuts with
branch-and-bound. There are a lot of software design and tuning issues that make this
work successfully.
8.6 Exercises
Exercise 8.1 (Task scheduling, continued)
Consider again the “task scheduling” Exercise 2.5. Take the dual of the linear-optimization
problem that you formulated. Explain how this dual can be interpreted as a kind of net-
work problem. Using AMPL, solve the dual of the example that you created for Exercise
2.5 and interpret the solution.
S1 : 2x1 + 2x2 + x3 + x4 ≤ 2 ;
xj ≤ 1;
−xj ≤ 0.
S2 : x1 + x2 + x3 ≤ 1 ;
x1 + x2 + x4 ≤ 1 ;
−xj ≤ 0.
S3 : x1 + x2 ≤ 1 ;
x1 + x3 ≤ 1 ;
x1 + x4 ≤ 1 ;
x2 + x3 ≤ 1 ;
x2 + x4 ≤ 1 ;
−xj ≤ 0.
Notice that each system has precisely the same set of integer solutions. In fact, each
system chooses, via its feasible integer (0/1) solutions, the “vertex packings” of the
following graph.
[Figure: the graph on vertices 1, 2, 3, 4, with edges {1,2}, {1,3}, {1,4}, {2,3}, {2,4}.]
A vertex packing of a graph is a set of vertices with no edges between them. For this
particular graph we can see that the packings are: ∅, {1}, {2}, {3}, {4}, {3, 4}.
Compare the feasible regions Si of the continuous relaxations, for each pair of these
systems. Specifically, for each choice of pair i 6= j , demonstrate whether or not the
solution set of Si is contained in the solution set of Sj . HINT: To prove that the solution
set of Si is contained in the solution set of Sj , it suffices to demonstrate that every
inequality of Sj is a non-negative linear combination of the inequalities of Si . To prove
that the solution set of Si is not contained in the solution set of Sj , it suffices to give a
solution of Si that is not a solution of Sj .
Which formulation is stronger? That is, compare (both computationally and analyti-
cally) the strength of the two associated continuous relaxations (i.e., when we relax
yi ∈ {0, 1} to 0 ≤ yi ≤ 1 , for i = 1, . . . , m). In Appendix A.4, there is AMPL code for
trying computational experiments.
Exercise 8.5 (Comparing piecewise-linear formulations)
We have seen that the adjacency condition for piecewise-linear univariate functions can
be modeled by
λ1 ≤ y1 ;
λj ≤ yj−1 + yj , for j = 2, . . . , n − 1 ;
λn ≤ yn−1 .
An alternative formulation is
Σ_{i=1}^{j} yi ≤ Σ_{i=1}^{j+1} λi , for j = 1, . . . , n − 2 ;
Σ_{i=j}^{n−1} yi ≤ Σ_{i=j}^{n} λi , for j = 2, . . . , n − 1 .
Explain why this alternative formulation is valid, and compare its strength to the original
formulation, when we relax yi ∈ {0, 1} to 0 ≤ yi ≤ 1 , for i = 1, . . . , n − 1 . (Note
that for both formulations, we require λi ≥ 0, for i = 1, . . . , n , Σ_{i=1}^{n} λi = 1 , and
Σ_{i=1}^{n−1} yi = 1.)
Exercise 8.6 (Variable fixing)
Prove Corollary 8.20.
Exercise 8.7 (Gomory cuts)
Prove that we need at least k Chvátal-Gomory cuts to solve Example 8.16.
Exercise 8.8 (Solve pure integer problems using Gomory cuts)
Extend what you did for Exercise 4.1 to now solve integer problems using Gomory cuts.
In Appendix A.3, there is a MATLAB script pure_gomory.m for generating Gomory cuts.
Using only this additional primitive, extend your script from Exercise 4.1 to solve pure
integer problems using Gomory cuts. Be sure to read in the data using pivot_setup.m.
As before, do not worry about degeneracy/anti-cycling. Make some small examples to
fully illustrate your code.
Exercise 8.9 (Make amends)
Find an interesting applied problem, model it as a pure- or mixed- integer linear-optimization
problem, and test your model with AMPL.
Credit will be given for deft modeling, sophisticated use of AMPL, testing on meaningfully-
large instances, and insightful analysis. Try to play with CPLEX integer solver options
(they can be set through AMPL) to get better behavior of the solver.
Your grade on this problem will replace your grades on up to 6 homework problems
(i.e., up to 6 homework problems on which you have lower grades than you get on this
one). I will not consider any re-grades on this one! If you already have all or mostly A’s (or
not), do a good job on this one because you want to impress me, and because you are
ambitious, and because this problem is what we have been working towards all during
the course, and because you should always finish strong.
Take rest
Appendices
This template can serve as a starting point for learning LATEX. You may download MiKTeX from
miktex.org to get started. Look at the source file for this document (in Section 5) to see how to get
all of the effects demonstrated.
The equations are automatically numbered, like x.y, where x is the section number and y is the y-th
equation in section x. By tagging the equations with labels, we can refer to them later, like (2.3) and (2.1).
Theorem 2.1. This is my favorite theorem.
Proof. Unfortunately, the space here does not allow for including my ingenious proof of Theorem 2.1.
min c'x
(P)    Ax = b ;
       x ≥ 0 .
Notice that in this example, there are 4 columns separated by 3 &’s. The ‘rrcl’ organizes justification
within a column. Of course, one can make more columns.
4. Graphics
This is how to include and refer to Figure 1 with pdfLaTeX.
\title{\LaTeX~ Template}
\date{\today}
\maketitle
\href{mailto:youremail@umich.edu,anotheremail@umich.edu}
{Your actual name (youremail@umich.edu),
Another actual name (anotheremail@umich.edu)}
%
%\medskip
%
%(this identifies your work and it \emph{greatly} help’s me in returning homework to you by email
%---- just plug in the appropriate replacements in the \LaTeX~ source; then when I click on the
%hyperlink above, my email program opens up starting a message to you)
\bigskip
% ----------------------------------------------------------------
This template can serve as a starting point for learning \LaTeX. You may download MiKTeX from
{\tt miktex.org}
to get started. Look at the source file for this
document (in Section \ref{sec:appendix})
to see how to get all of the effects demonstrated.
You can typeset math inline, like $\sum_{j=1}^n a_{ij} x_j$, by just enclosing the math in dollar signs.
But if you want to \emph{display} the math, then you do it like this:
\[
\sum_{j=1}^n a_{ij} x_j~ \forall~ i=1,\ldots,m.
\]
\begin{thm}\label{Favorite}
This is my favorite theorem.
\end{thm}
\begin{proof}
Unfortunately, the space here does not allow for including my ingenious proof
of Theorem \ref{Favorite}.
\end{proof}
\[
\tag{P}
\begin{array}{rrcl}
\min & c’x & & \\
& Ax & = & b~; \\
& x & \geq & \mathbf{0}~.
\end{array}
\]
\section{Graphics}
\begin{figure}[h!!]
\includegraphics[width=0.5\textwidth]{yinyang.jpg}
\caption{Another duality}\label{nameoffigure}
\end{figure}
Look at the \LaTeX~ commands in this section to see how each of the elements
of this document was produced. Also, this section serves to show
how text files (e.g., programs) can be included in a \LaTeX~ document verbatim.
\bigskip
\hrule
\small
\verbatiminput{LaTeX_Template.tex}
\normalsize
\hrule
\bigskip
% ----------------------------------------------------------------
\end{document}
% ----------------------------------------------------------------
• pivot_setup.m : Script sets up all of the data structures and executes the script
pivot_input.m (or a user-created .m file — file name input at MATLAB command
prompt) to read in the input. Must execute pivot_algebra.m after setup.
• pivot_input.m : Script holding the default input; it (or an alternative user input
file) is executed by the script pivot_setup.m . Do not run pivot_input.m from a
MATLAB command prompt.
• pivot_algebra.m : Script to carry out the linear algebra after setup and after a
swap.
clear all;
try
reply = input('NAME of NAME.m file holding input? [pivot_input]: ', 's');
if isempty(reply)
reply = 'pivot_input';
end
catch
reply = 'pivot_input'; % need catch for execution on MathWorks Cloud
end
eval(reply);
if (size(b) ~= m)
display('size(b) does not match number of rows of A')
return
end
if (size(c) ~= n)
display('size(c) does not match number of columns of A')
return
end
if(size(setdiff(beta,1:n)) > 0)
display('beta has elements not in 1,2,...,n')
return
end
if(size(setdiff(eta,1:n)) > 0)
display('eta has elements not in 1,2,...,n')
return
end
if (size(beta) ~= m)
display('size(beta) does not match number of rows of A')
return
end
if (size(eta) ~= n-m)
display('size(eta) does not match number of cols minus number of rows of A')
return
end
zbar = sym(zeros(n,1));
ratios = sym(zeros(m,1));
theta= 2*pi/5;
gamma = cot(theta);
b = [0 0]';
b=sym(b);
beta = [1,2];
[m,n] = size(A);
eta = setdiff(1:n,beta); % lazy eta initialization
%
% uncomment next line to lexically perturb the rhs
b = b + A(:,beta)*[t t^2]'; % lex feasible if beta is feasible
A_beta = A(:,beta);
% if (rank(A_beta,0.0001) < size(A_beta,1))
% display('error: A_beta not invertible')
% return;
% end;
A_eta = A(:,eta);
c_beta = c(beta);
c_eta = c(eta);
ybar = linsolve(A_beta',c_beta);
xbar = sym(zeros(n,1));
xbar_beta = linsolve(A_beta,b);
xbar(beta) = xbar_beta;
xbar_eta = sym(zeros(n-m,1));
xbar(eta) = xbar_eta;
Abar_eta = linsolve(A_beta,A_eta);
function pivot_direction(j)
global beta eta m n Abar_eta zbar
if (j > n-m)
display('error: not so many directions')
return
end
zbar = sym(zeros(n,1));
zbar(beta) = -Abar_eta(:,j);
zbar(eta(j))=1;
zbar
end
function pivot_ratios(j)
global xbar_beta Abar_eta m n ratios;
if (j>n-m)
display('error: j out of range.')
else
ratios = xbar_beta ./ Abar_eta(:,j)
end
function swap(j,i)
global beta eta m n;
if ((i>m) | (j>n-m))
display('error: i or j out of range. swap not accepted');
else
savebetai = beta(i);
beta(i) = eta(j);
eta(j) = savebetai;
display('swap accepted --- new partition:')
display(beta);
display(eta);
display('*** MUST APPLY pivot_algebra! ***')
end
if (n-m ~= 2)
display('error: cannot plot unless n-m = 2')
return
end
clf;
hold on
set(x_eta1_axis,'LineStyle','-','Color',[0 0 0],'LineWidth',2);
set(x_eta2_axis,'LineStyle','-','Color',[0 0 0],'LineWidth',2);
jcmap = colormap(lines);
for i = 1:m
% a bit annoying but apparently necessary to symbolically perturb (below)
% in case a line is parallel to an axis.
p(i) = ezplot(strcat(char(h(i)),'+0*x_eta1+0*x_eta2'));
set(p(i),'Color',jcmap(i,:),'LineWidth',2);
name{i} = ['x_{\beta_{',num2str(i),'}} = x_{' int2str(beta(i)),'}'];
end
if (xbar_beta >= 0)
plot([0],[0],'Marker','o','MarkerFaceColor','green','MarkerEdgeColor',...
'black','MarkerSize',5);
polygonlist=[0,0];
else
plot([0],[0],'Marker','o','MarkerFaceColor','red','MarkerEdgeColor',...
'black','MarkerSize',5);
polygonlist=[];
end
for i = 1:m+1
for j = i+1:m+2
[M, d] = equationsToMatrix(h(i) == 0, h(j) == 0, [x_eta1, x_eta2]);
if (rank(M) == 2)
intpoint=linsolve(M,d);
[warnmsg, msgid] = lastwarn;
if ~strcmp(msgid,'symbolic:sym:mldivide:warnmsg2')
pivot_directions = zeros(n,2);
pivot_direction(1);
pivot_directions(:,1)=zbar;
pivot_direction(2);
pivot_directions(:,2)=zbar;
pivot_directions;
quiver([0],[0],cbar_eta(1),cbar_eta(2),'LineWidth',2,'MarkerSize',2, ...
'Color','magenta','LineStyle','-');
>> pivot_setup
>> pivot_algebra
>> pivot_plot
>> pivot_ratios(2)
>> pivot_swap(2,4)
>> pivot_algebra
>> pivot_ratios(1)
>> pivot_swap(1,1)
>> pivot_algebra
>> pivot_ratios(2)
>> pivot_swap(2,2)
>> pivot_algebra
Note that the MATLAB GUI allows one to pan beyond the initial display range:
A = [7 8 -1 1 3; 5 6 -1 2 1];
A= sym(A);
c = [126 141 -10 5 67]';
c=sym(c);
b = [26 19]';
b=sym(b);
beta = [1,2];
[m,n] = size(A);
eta = setdiff(1:n,beta); % lazy eta initialization
function pure_gomory(i)
global A c A_beta ybar m n beta eta;
function mixed_gomory(i)
global A c A_beta c_beta ybar m n beta eta;
minimize Total_Cost:
sum {(i,j) in LINKS} shipping_cost[i,j] * X[i,j]
+ sum {i in FACILITIES} facility_cost[i] * Y[i];
reset;
option randseed 412837;
option display_1col 0;
option display_transpose -20;
option show_stats 1;
option solver cplex;
option cplex_options 'presolve=0';
model uflstrong.mod;
data uflstrong.dat;
option relax_integrality 1;
solve;
#display X;
display Y;
display Total_Cost;
param M := 30;
param N := 100;
minimize Total_Cost:
sum {(i,j) in LINKS} shipping_cost[i,j] * X[i,j]
+ sum {i in FACILITIES} facility_cost[i] * Y[i];
reset;
option randseed 412837;
option display_1col 0;
option display_transpose -20;
option show_stats 3;
option solver cplex;
option cplex_options 'presolve=0';
model uflweak.mod;
data uflweak.dat;
option relax_integrality 1;
solve;
#display X;
display Y;
display Total_Cost;
End Notes
1. “The reader will find no figures in this work. The methods which I set forth do not require either
constructions or geometrical or mechanical reasonings: but only algebraic operations, subject to a regular
and uniform rule of procedure.” — Joseph-Louis Lagrange, Preface to “Mécanique Analytique,” 1815.
2. “Il est facile de voir que...”, “il est facile de conclure que...” [“It is easy to see that...”, “it is easy
to conclude that...”], etc. — Pierre-Simon Laplace, frequently in “Traité de Mécanique Céleste.”
3. “One would be able to draw thence well some corollaries that I omit for fear of boring you.” — Gabriel
Cramer, Letter to Nicolas Bernoulli, 21 May 1728. Translated from “Die Werke von Jakob Bernoulli,” by
R.J. Pulskamp.
4. “Two months after I made up the example, I lost the mental picture which produced it. I really regret
this, because a lot of people have asked me your question, and I can’t answer.” — Alan J. Hoffman, private
communication with J. Lee, August, 1994.
5. “Fourier hat sich selbst vielfach um Ungleichungen bemüht, aber ohne erheblichen Erfolg.” [“Fourier
himself made many efforts concerning inequalities, but without substantial success.”] — Gyula Farkas,
“Über die Theorie der Einfachen Ungleichungen,” Journal für die Reine und Angewandte Mathematik,
vol. 124:1–27.
6. “The particular geometry used in my thesis was in the dimension of the columns instead of the rows.
This column geometry gave me the insight that made me believe the Simplex Method would be a very
efficient solution technique for solving linear programs. This I proposed in the summer of 1947 and by
good luck it worked!” — George B. Dantzig, “Reminiscences about the origins of linear programming,”
Operations Research Letters vol. 1 (1981/82), no. 2, 43–48.
7. “George would often call me in and talk about something on his mind. One day in around 1959,
he told me about a couple of problem areas: something that Ray Fulkerson worked on, something else
whose details I forget. In both cases, he was using a linear programming model and the simplex method
on a problem that had a tremendous amount of data. Dantzig in one case, Fulkerson in another, had
devised an ad hoc method of creating the data at the moment it was needed to fit into the problem. I
reflected on this problem for quite awhile. And then it suddenly occurred to me that they were all doing
the same thing! They were essentially solving a linear programming problem whose data - whose columns
- being an important part of the data, were too many to write down. But you could devise a procedure
for creating one when you needed it, and creating one that the simplex method would choose to work
with at that moment. Call it the column-generation method. The immediate, lovely looking application
was to the linear programming problem, in which you have a number of linear programming problems
connected only by a small number of constraints. That fit in beautifully with the pattern. It was a way
of decomposing such a problem. So we referred to it as the decomposition algorithm. And that rapidly
became very famous.” — Philip Wolfe, interviewed by Irv Lustig ∼2003.
8. “So they have this assortment of widths and quantities, which they are somehow supposed to make
out of all these ten-foot rolls. So that was called the cutting stock problem in the case of paper. So Paul
[Gilmore] and I got interested in that. We struck out (failed) first on some sort of a steel cutting problem,
but we seemed to have some grip on the paper thing, and we used to visit the paper mills to see what
they actually did. And I can tell you, paper mills are so impressive. I mean they throw a lot of junk in at
one end, like tree trunks or something that’s wood, and out the other end comes – swissssssh – paper! It’s
one damn long machine, like a hundred yards long. They smell a lot, too. We were quite successful. They
didn’t have computers; believe me, no computer in the place. So we helped the salesman to sell them the
first computer.” — Ralph E. Gomory, interviewed by William Thomas, New York City, July 19, 2010.
9. “I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision
processes. An interesting question is, Where did the name, dynamic programming, come from? The
1950s were not good years for mathematical research. We had a very interesting gentleman in Washington
named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the
word research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would
turn red, and he would get violent if people used the term research in his presence. You can imagine
how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force,
and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson
and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What
title, what name, could I choose? In the first place I was interested in planning, in decision making, in
thinking. But planning, is not a good word for various reasons. I decided therefore to use the word
‘programming’. I wanted to get across the idea that this was dynamic, this was multistage, this was time-
varying I thought, lets kill two birds with one stone. Lets take a word that has an absolutely precise
meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an
adjective, and that is its impossible to use the word dynamic in a pejorative sense. Try thinking of some
combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic
programming was a good name. It was something not even a Congressman could object to. So I used it as
an umbrella for my activities.” — Richard E. Bellman, “Eye of the Hurricane: An Autobiography,” 1984.
10. “Vielleicht noch mehr als der Berührung der Menschheit mit der Natur verdankt die Graphentheorie
der Berührung der Menschen untereinander.” [“Perhaps even more than to the contact of mankind with
nature, graph theory owes its existence to the contact of people with one another.”] — Dénes König,
“Theorie Der Endlichen Und Unendlichen Graphen,” 1936.
The Afterward
Bibliography
[1] Stephen P. Bradley, Arnoldo C. Hax, and Thomas L. Magnanti. Applied mathematical
programming. Addison Wesley, 1977. Available at http://web.mit.edu/15.053/
www/.
[2] Qi He and Jon Lee. Another pedagogy for pure-integer gomory. RAIRO-Operations
Research, 51(1):189–197, 2017.
[3] Jon Lee. A first course in combinatorial optimization. Cambridge Texts in Applied
Mathematics. Cambridge University Press, Cambridge, 2004.
[4] Jon Lee and Angelika Wiegele. Another pedagogy for mixed-integer gomory.
EURO Journal on Computational Optimization, 2017 (to appear).
[5] Katta G. Murty. Linear and combinatorial programming. John Wiley & Sons Inc., New
York, 1976.
Index of definitions