
Chapter 8: Dynamic Programming

Dynamic programming was invented by Richard Bellman in the 1950s as a general method
for optimizing multistage decision processes. Bellman stated a general principle that
relates to optimization problems, called the principle of optimality:
“An optimal solution to any instance of an optimization problem is composed of
optimal solutions to its subinstances.”

In computer science, this strategy is used to solve problems that have overlapping
subproblems and a recurrence relation between the solution of the larger problem and
the solutions of its smaller subproblems.

Example: Computing the nth Fibonacci number.
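
The naive recursion F(n) = F(n − 1) + F(n − 2) recomputes the same subproblems
exponentially many times; a bottom-up table computes each of them once. A minimal C++
sketch (assuming the usual convention F(0) = 0, F(1) = 1):

#include <vector>

// Bottom-up Fibonacci: each entry f[i] is computed exactly once
// from the two entries before it.
long long fib(int n) {
    if (n < 2) return n;
    std::vector<long long> f(n + 1);
    f[0] = 0; f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}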

Example: Computing the binomial coefficient C(n, k).

The binomial coefficient C(n, k) is given by the formula:

    C(n, k) = n! / (k! (n − k)!)    where 0 ≤ k ≤ n

or by the following recursive formula:

    C(n, k) = C(n − 1, k − 1) + C(n − 1, k)    if 0 < k < n
    C(n, k) = 1                                if k = 0 ∨ k = n

Algorithm (Recursive version)


Binomial(n, k) {
if (k == 0 || k == n)
return 1;
return Binomial(n - 1, k - 1) + Binomial(n - 1, k);
}

The generalized formula for computing the binomial coefficient C(i, j) is as follows:

    C[i, j] = C[i − 1, j − 1] + C[i − 1, j]    if 0 < j < i
    C[i, j] = 1                                if j = 0 ∨ j = i

        0   1   2   …   j−1           j          …   k−1   k
  0     1
  1     1   1
  2     1   2   1
  …
  i−1   1   …       C[i−1, j−1]   C[i−1, j]
  i     1   …                     C[i, j]
  …
  k     1                                        …         1
  …
  n−1   1   …                     C[n−1, k−1]    C[n−1, k]
  n     1   …                                    C[n, k]

Algorithm (Dynamic programming version)

Binomial(n, k) {
    C[0 .. n, 0] = 1;                  // first column
    C[i, i] = 1 for all 0 ≤ i ≤ k;     // main diagonal
    for (i = 1; i ≤ n; i++)
        for (j = 1; j < (i ≤ k ? i : k + 1); j++)
            C[i, j] = C[i - 1, j - 1] + C[i - 1, j];
    return C[n, k];
}
Analysis of the algorithm: 𝐴(𝑛, 𝑘) ∈ Θ(𝑛𝑘)
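
A runnable C++ version of the same table filling (a sketch, assuming 0 ≤ k ≤ n and no
overflow for the given inputs):

#include <algorithm>
#include <vector>

// Table-filling binomial coefficient: column 0 and the main diagonal
// hold 1; every other cell is the sum of the two cells above it.
long long binomial(int n, int k) {
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; i++) {
        C[i][0] = 1;
        if (i <= k) C[i][i] = 1;
        for (int j = 1; j < std::min(i, k + 1); j++)
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
    return C[n][k];
}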

Example: Maximum sum of a path in a right number triangle.

Given a right triangle of numbers, find the largest sum of the numbers that appear
on a path starting from the top towards the base. The next number on the path is located
directly below or below-and-one-place-to-the-right. Show the path discovered.

(a) the input triangle       (c) the table S

    7                        i\j   0   1   2   3   4   5
    8 8                      0     0   0
    4 6 0                    1     0   7   0
    7 0 8 5                  2     0  15  15   0
    0 5 4 9 4                3     0  19  21  15   0
                             4     0  26  21  29  20   0
                             5     0  26  31  33  38  24

The maximum value on the last row of S is 38.

Let’s denote by S(i, j) the largest sum of the numbers that appear on a path starting
from the top to the location with row index i and column index j:

    S(i, j) = a[i, j]                                       if i = j = 1
    S(i, j) = S(i − 1, j) + a[i, j]                         if i ≠ j ∧ j = 1
    S(i, j) = S(i − 1, j − 1) + a[i, j]                     if i = j ∧ i ≠ 1
    S(i, j) = max(S(i − 1, j − 1), S(i − 1, j)) + a[i, j]   otherwise

A two-dimensional array S of size (n + 1) × (n + 1) is used to compute all
potential paths starting from the top. Initially,

    S[0 .. n, 0] = 0,   S[j, j + 1] = 0   where 0 ≤ j ≤ n − 1.

Algorithm
… Initialize table S with the input data …

S[0 .. n, 0] = S[0 .. n - 1, 1 .. n] = 0;
for (i = 1; i ≤ n; i++)
    for (j = 1; j ≤ i; j++)
        S[i][j] = max(S[i - 1][j - 1], S[i - 1][j]) + a[i][j];
return “the maximum value on the nth row”

Analysis of the algorithm: T(n) ∈ Θ(n²)
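
The statement also asks to show the path discovered. One way to do that (a C++ sketch,
one of several possible reconstructions) is to fill S as above and then walk upward from
the best cell on row n, at each step moving to whichever parent supplied the maximum:

#include <algorithm>
#include <iostream>
#include <vector>
using std::vector;

// a[i][j] holds the triangle, 1-indexed as in the notes (row/column 0 unused).
// Returns the maximum path sum and prints one path achieving it.
int maxTrianglePath(const vector<vector<int>>& a, int n) {
    vector<vector<int>> S(n + 1, vector<int>(n + 2, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            S[i][j] = std::max(S[i - 1][j - 1], S[i - 1][j]) + a[i][j];

    int best = 1;                        // best column on the last row
    for (int j = 2; j <= n; j++)
        if (S[n][j] > S[n][best]) best = j;

    vector<int> path;                    // parent of (i, j) is (i-1, j-1) or (i-1, j)
    for (int i = n, j = best; i >= 1; i--) {
        path.push_back(a[i][j]);
        if (j > 1 && S[i - 1][j - 1] >= S[i - 1][j]) j--;
    }
    for (auto it = path.rbegin(); it != path.rend(); ++it)
        std::cout << *it << ' ';
    std::cout << '\n';
    return S[n][best];
}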



The change-making problem


Given k denominations d₁, d₂, …, dₖ, where d₁ = 1, find the minimum number of
coins (of certain denominations) that add up to a given amount of money n. The smallest
denomination is always a one-cent coin. Show the exact coins that build up the amount of
money.

Let’s denote by C(n) the minimum number of coins (of certain denominations) that
add up to n. Then,

    C(n) = min{ C(n − dᵢ) + 1 : 1 ≤ i ≤ k, n ≥ dᵢ },   C(0) = 0

Algorithm
int changeCoinsDP(int d[], int k, int money) {
    int C[0 .. money] = 0;

    for (cents = 1; cents ≤ money; cents++) {
        int minCoins = cents;    // worst case: cents one-cent coins
        for (i = 1; i ≤ k; i++) {
            if (d[i] > cents)
                continue;
            if (C[cents - d[i]] + 1 < minCoins)
                minCoins = C[cents - d[i]] + 1;
        }
        C[cents] = minCoins;
    }
    return C[money];
}

Analysis of the algorithm: Θ(𝑘𝑛)
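
The statement also asks to show the exact coins. A hedged C++ sketch: record, for each
amount, the denomination chosen last, then walk backwards from the full amount (here d
is 0-indexed and d[0] = 1 is assumed):

#include <iostream>
#include <vector>
using std::vector;

// C[c] = minimum coins for amount c; used[c] = index of the denomination
// chosen last for c. Walking back from `money` prints one optimal multiset.
void changeCoins(const vector<int>& d, int money) {
    vector<int> C(money + 1, 0), used(money + 1, 0);
    for (int c = 1; c <= money; c++) {
        C[c] = c;                         // all one-cent coins
        used[c] = 0;
        for (size_t i = 0; i < d.size(); i++)
            if (d[i] <= c && C[c - d[i]] + 1 < C[c]) {
                C[c] = C[c - d[i]] + 1;
                used[c] = (int)i;
            }
    }
    std::cout << C[money] << " coins:";
    for (int c = money; c > 0; c -= d[used[c]])
        std::cout << ' ' << d[used[c]];
    std::cout << '\n';
}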



Longest Monotonically Increasing Subsequence (LMIS) problem

Find the length of the longest subsequence of a given sequence of positive integers
such that all elements of the subsequence are in monotonically increasing order. Print the
longest subsequence.
Note: There may be more than one LMIS combination; it is only necessary to return the
length.
Formally we look for the longest sequence of indexes i₁, i₂, …, iₖ such that:

    1 ≤ i₁ < i₂ < ⋯ < iₖ ≤ n ∧ aᵢ₁ < aᵢ₂ < ⋯ < aᵢₖ

Let’s denote by L(i) the length of the longest increasing subsequence whose last
element is aᵢ. Of course, aᵢ is greater than all other elements in this subsequence.

    L(i) = max{ L(j) : 1 ≤ j < i, aⱼ < aᵢ } + 1

where L(1) = 1 (and L(i) = 1 when no such j exists).
Algorithm (recursive version)
lis(a[1 .. n], i) {
    if (i == 1)
        return 1;

    tmpMax = 1;
    for (j = 1; j < i; j++)
        if (a[j] < a[i]) {
            res = lis(a, j);
            if (res + 1 > tmpMax)
                tmpMax = res + 1;
        }

    if (max < tmpMax)    // max is a global variable
        max = tmpMax;
    return tmpMax;
}
for (i = 1; i ≤ n; i++)
    lis(a, i);
print(max);

Analysis of the algorithm:

The complexity of the algorithm depends on the distribution of the input data. For
simplicity, let’s assume that the given sequence contains no duplicate values.
Let’s denote by T(i) the running time of the call lis(a, i), ∀i ∈ [1, n].
• The worst case: Θ(2ⁿ)
• The best case: Θ(n²)
• The average case: …

Now, we design a dynamic programming algorithm for this problem. Let’s denote by
L[i] the length of the longest increasing subsequence whose last element is aᵢ. It’s not
difficult to obtain the following formula:

    L[i] = max{ L[j] : 1 ≤ j < i, aⱼ < aᵢ } + 1

Algorithm (Dynamic programming version)

lis_DP(a[1 .. n]) {
    L[1 .. n] = 1;

    for (i = 2; i ≤ n; i++)
        for (j = 1; j < i; j++)
            if ((a[j] < a[i]) && (L[j] + 1 > L[i]))
                L[i] = L[j] + 1;

    return “the biggest element in L”;
}
cout << lis_DP(a);

Analysis of the algorithm: Θ(n²)
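
To print one LMIS, as the problem statement asks, a predecessor array can be maintained
alongside L. A C++ sketch (0-indexed, one possible implementation):

#include <iostream>
#include <vector>
using std::vector;

// L[i] = LMIS length ending at a[i]; prev[i] = previous index on one
// such subsequence (-1 if none). Prints the subsequence and its length.
void printLMIS(const vector<int>& a) {
    if (a.empty()) return;
    int n = (int)a.size();
    vector<int> L(n, 1), prev(n, -1);
    int best = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++)
            if (a[j] < a[i] && L[j] + 1 > L[i]) {
                L[i] = L[j] + 1;
                prev[i] = j;
            }
        if (L[i] > L[best]) best = i;
    }
    vector<int> seq;
    for (int i = best; i != -1; i = prev[i])
        seq.push_back(a[i]);
    for (auto it = seq.rbegin(); it != seq.rend(); ++it)
        std::cout << *it << ' ';
    std::cout << "(length " << L[best] << ")\n";
}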

Note: A subsequence of a string is a new string generated from the original string with
some characters (can be none) deleted without changing the relative order of the remaining
characters.

Longest Common Subsequence (LCS) Problem


Given two strings 𝑆 = 𝑠1 𝑠2 … 𝑠𝑚 and 𝑇 = 𝑡1 𝑡2 … 𝑡𝑛 , find the length of their longest
common subsequence and print it.
Note: A common subsequence of two strings is a subsequence that is common to both
strings.
Formally we look for the longest sequences of indexes i₁, i₂, …, iₖ and j₁, j₂, …, jₖ
such that:

    1 ≤ i₁ < i₂ < ⋯ < iₖ ≤ m ∧ 1 ≤ j₁ < j₂ < ⋯ < jₖ ≤ n ∧ s[iₕ] = t[jₕ], ∀h ∈ [1, k]

Example: Given S = XYXZPQ, T = YXQYXP. The longest common subsequence
is XYXP, of length 4.

Let’s denote by L(i, j) the length of the longest common subsequence of the two
prefixes s₁s₂…sᵢ and t₁t₂…tⱼ.
• If sᵢ = tⱼ: L(i, j) = 1 + L(i − 1, j − 1)
• If sᵢ ≠ tⱼ: L(i, j) = max{ L(i − 1, j), L(i, j − 1) }
where L(i, 0) = L(0, j) = 0.

Algorithm (recursive version)


int LCS(char S[], int i, char T[], int j) {
if ((i == 0) || (j == 0))
return 0;

if (S[i] == T[j])
return 1 + LCS(S, i - 1, T, j - 1);
else
return max(LCS(S, i - 1, T, j), LCS(S, i, T, j - 1));
}
cout << LCS(S, m, T, n);

Analysis of the algorithm: Θ(2ⁿ)



Now, let’s design a dynamic programming algorithm for this problem. A two-
dimensional array L of size (m + 1) × (n + 1) is used to hold the results of subproblems.

The cell L[i, j] contains the length of the longest common subsequence of the two
prefixes s₁s₂…sᵢ and t₁t₂…tⱼ:
• If sᵢ = tⱼ: L[i, j] = L[i − 1, j − 1] + 1
• If sᵢ ≠ tⱼ: L[i, j] = max{ L[i − 1, j], L[i, j − 1] }
where L[0, j] = L[i, 0] = 0.

Obviously, L[m, n] contains the final result.

Algorithm
LCS_Dyn(char S[], int m, char T[], int n) {
    L[0 .. m][0] = L[0][0 .. n] = 0;

    for (i = 1; i ≤ m; i++)
        for (j = 1; j ≤ n; j++)
            if (S[i] == T[j])
                L[i][j] = 1 + L[i - 1][j - 1];
            else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);

    return L[m][n];
}
Analysis of the algorithm: Θ(𝑚𝑛).
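
Printing the subsequence itself can be done by backtracking through the finished table.
A C++ sketch (0-indexed strings, so s[i − 1] plays the role of sᵢ in the notes):

#include <algorithm>
#include <string>
#include <vector>
using std::string; using std::vector;

// Builds the (m+1) x (n+1) table, then walks back from L[m][n]:
// equal characters are part of the LCS; otherwise follow the larger entry.
string lcs(const string& s, const string& t) {
    int m = (int)s.size(), n = (int)t.size();
    vector<vector<int>> L(m + 1, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            L[i][j] = (s[i-1] == t[j-1]) ? L[i-1][j-1] + 1
                                         : std::max(L[i-1][j], L[i][j-1]);
    string out;
    for (int i = m, j = n; i > 0 && j > 0; ) {
        if (s[i-1] == t[j-1])            { out += s[i-1]; i--; j--; }
        else if (L[i-1][j] >= L[i][j-1]) i--;
        else                             j--;
    }
    std::reverse(out.begin(), out.end());
    return out;
}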

Floyd’s Algorithm for the All-Pairs Shortest-Paths Problem

Given a weighted connected graph (undirected or directed), the all-pairs shortest-
paths problem asks to find the distances - i.e., the lengths of the shortest paths - from
each vertex to all other vertices.

Assuming that the given graph has n vertices, a two-dimensional array D of
size n × n, called the distance matrix, is used to record the lengths of shortest paths:
the element dᵢⱼ indicates the length of the shortest path from vertex i to vertex j
(1 ≤ i ≠ j ≤ n).

Floyd’s algorithm computes the distance matrix D of a weighted graph with n
vertices through a series of n × n matrices:

    D⁽⁰⁾, D⁽¹⁾, …, D⁽ᵏ⁻¹⁾, D⁽ᵏ⁾, …, D⁽ⁿ⁾ ≡ D

The element dᵢⱼ⁽ᵏ⁾ ∈ D⁽ᵏ⁾ (k = 0, 1, …, n; i, j = 1, 2, …, n) is equal to the length of
the shortest path among all paths from vertex i to vertex j with each intermediate
vertex, if any, numbered not higher than k. Hence, for k ≥ 1,

    dᵢⱼ⁽ᵏ⁾ = min{ dᵢⱼ⁽ᵏ⁻¹⁾, dᵢₖ⁽ᵏ⁻¹⁾ + dₖⱼ⁽ᵏ⁻¹⁾ }

Algorithm
Floyd(W[1 .. n, 1 .. n]) {
    D = W;

    for (k = 1; k ≤ n; k++)
        for (i = 1; i ≤ n; i++)
            for (j = 1; j ≤ n; j++)
                D[i, j] = min{D[i, j], D[i, k] + D[k, j]};

    return D;
}

Analysis of the algorithm: Θ(n³)
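
A runnable C++ sketch of the same triple loop (assuming ∞ is represented by a sentinel
small enough that one addition cannot overflow):

#include <vector>
using std::vector;

const long long INF = 1e15;   // "no edge" sentinel; INF + INF still fits in long long

// In-place Floyd over a 0-indexed adjacency matrix (D[i][i] == 0).
vector<vector<long long>> floyd(vector<vector<long long>> D) {
    int n = (int)D.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (D[i][k] < INF && D[k][j] < INF &&
                    D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];
    return D;
}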



Matrix Chain Multiplication

Find the most efficient way to multiply a sequence of n matrices A₁ × A₂ × … × Aₙ,
assuming that the sizes (in order) of these matrices are d₀ × d₁, d₁ × d₂, …, dₙ₋₁ × dₙ,
respectively.
Note: The problem is not actually to perform the multiplications, but merely to decide in
which order to perform the multiplications.

Example: Multiplying 4 matrices A × B × C × D of the sizes (in order) 50 × 20,
20 × 1, 1 × 10, 10 × 100. Here are three among the five different orders to perform the
multiplications:

Multiplication order     The number of operations
A × ((B × C) × D)        20 × 1 × 10 + 20 × 10 × 100 + 50 × 20 × 100 = 120200
(A × (B × C)) × D        20 × 1 × 10 + 50 × 20 × 10 + 50 × 10 × 100 = 60200
(A × B) × (C × D)        50 × 20 × 1 + 1 × 10 × 100 + 50 × 1 × 100 = 7000

Convention: If 𝐴𝑘 × 𝐴𝑘+1 where 1 ≤ 𝑘 < 𝑛 then the sizes of 𝐴𝑘 and 𝐴𝑘+1 are 𝑑𝑘−1 × 𝑑𝑘
and 𝑑𝑘 × 𝑑𝑘+1 , respectively. In addition, the size of the matrix which is the product of 𝐴𝑘
and 𝐴𝑘+1 is 𝑑𝑘−1 × 𝑑𝑘+1 .

Let’s consider the sequence of multiplications Aᵢ × Aᵢ₊₁ × … × Aⱼ, where
1 ≤ i ≤ j ≤ n. Let’s denote by C(i, j) the lowest cost of this sequence. We have the
following recurrence relation:

    C(i, j) = min{ C(i, k) + C(k + 1, j) + dᵢ₋₁ × dₖ × dⱼ : i ≤ k < j }   if i < j
    C(i, j) = 0                                                          if i = j

To compute C[i][j], the entries C[i][i], C[i][i+1], …, C[i][j−1] in row i and
C[i+1][j], C[i+2][j], …, C[j][j] in column j must already be available, which is why
the table is filled diagonal by diagonal.

Algorithm
ChainMatrixMult(d[0 .. n], P[1 .. n][1 .. n]) {
    C[1 .. n, 1 .. n] = 0;
    for (diag = 1; diag < n; diag++)
        for (i = 1; i ≤ n - diag; i++) {
            j = i + diag;
            C[i, j] = min over i ≤ k < j of { C[i, k] + C[k + 1, j] + d[i - 1] × d[k] × d[j] };
            P[i, j] = the value of k that minimizes C[i, j];
        }
    return C[1, n];
}

Analysis of the algorithm: Θ(n³)

How to print the most efficient way to multiply the sequence of matrices?

Algorithm
order(i, j) {
if (i == j)
cout << "A" << i;
else {
k = P[i][j];
cout << "(";
order(i, k);
order(k + 1, j);
cout << ")";
}
}
order(1, n);
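
For the four-matrix example above, the table gives P[1, 4] = 2, P[1, 2] = 1, and
P[3, 4] = 3, so order(1, 4) prints ((A1A2)(A3A4)), i.e. the cheapest order
(A × B) × (C × D) with 7000 operations.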

Optimal Binary Search Trees

An optimal binary search tree is a binary search tree for which the average
number of comparisons in a search is the smallest possible. Hence, in order to
construct an optimal binary search tree, the probabilities of searching for elements of a
set must be known.
Note: For simplicity, we limit our discussion to minimizing the average number of
comparisons in a successful search.

Conventions:
• Let k₁, k₂, …, kₙ be the keys of a binary search tree. Assume that
  k₁ < k₂ < ⋯ < kₙ.
• ∀i ∈ [1, n]: Let pᵢ be the probability of searching for kᵢ.
• ∀i ∈ [1, n]: Let cᵢ be the number of comparisons executed to find kᵢ:
  cᵢ = level(kᵢ) + 1.
• The average number of comparisons in a successful search in the tree is

    Cost = Σ_{i=1}^{n} (cᵢ × pᵢ)

The tree must be designed and constructed in such a way that the value of Cost is
minimized.

Example: Consider the 5 possibilities of constructing a binary search tree that contains
three keys k₁ < k₂ < k₃. The probabilities of searching for them are p₁ = 0.7, p₂ = 0.2,
p₃ = 0.1.

[Figure: the five possible tree shapes, from the left-skewed chain rooted at k₃ (1)
through the balanced tree rooted at k₂ (3) to the right-skewed chain rooted at k₁ (5).]

1. 3(0.7) + 2(0.2) + 1(0.1) = 2.6
2. 2(0.7) + 3(0.2) + 1(0.1) = 2.1
3. 2(0.7) + 1(0.2) + 2(0.1) = 1.8
4. 1(0.7) + 3(0.2) + 2(0.1) = 1.5
5. 1(0.7) + 2(0.2) + 3(0.1) = 1.4

The last tree, with the most probable key at the root, is optimal.

As you will see, the total number of binary search trees with n keys is equal to the
nth Catalan number,

    c(n) = C(2n, n) / (n + 1)   for n > 0,   c(0) = 1,

which grows to infinity as fast as 4ⁿ/n^1.5. So, an exhaustive-search approach is
unrealistic.

Let’s denote by C(i, j) the smallest average number of comparisons made in a successful
search in an optimal binary search tree T(i, j) made up of keys kᵢ, …, kⱼ, where i, j are
some integer indices, 1 ≤ i ≤ j ≤ n.
• If i = j: The tree T(i, j) contains only one node.
  C(i, i) = cᵢ × pᵢ = 1 × pᵢ = pᵢ
• If i > j: The tree T(i, j) is considered to be empty and C(i, j) = 0.
• If i < j: We will consider all possible ways to construct T(i, j). Let T^t(i, j) be a
  T(i, j) whose root contains key kₜ, where i ≤ t ≤ j. The left subtree T(i, t − 1)
  contains keys kᵢ, …, kₜ₋₁ optimally arranged, and the right subtree T(t + 1, j)
  contains keys kₜ₊₁, …, kⱼ, also optimally arranged.

[Figure: the tree T^t(i, j) with root kₜ, left subtree T(i, t − 1) and right subtree T(t + 1, j).]

Since T(i, t − 1) and T(t + 1, j) are two optimal binary search trees, C(i, t − 1) and
C(t + 1, j) are the smallest average numbers of comparisons made in a successful search
in T(i, t − 1) and T(t + 1, j), respectively:

    C(i, t − 1) = Σ_{s=i}^{t−1} cₛ × pₛ

    C(t + 1, j) = Σ_{s=t+1}^{j} cₛ × pₛ

Obviously, the smallest average number of comparisons made in a successful search
in T^t(i, j) is as follows (every key in the two subtrees moves one level deeper under
the new root kₜ, hence the extra comparison for each of them, while searching for kₜ
itself takes a single comparison):

    Σ_{s=i}^{t−1} (cₛ + 1) × pₛ + Σ_{s=t+1}^{j} (cₛ + 1) × pₛ + pₜ

    = C(i, t − 1) + Σ_{s=i}^{t−1} pₛ + C(t + 1, j) + Σ_{s=t+1}^{j} pₛ + pₜ

    = C(i, t − 1) + C(t + 1, j) + Σ_{s=i}^{j} pₛ

Thus, we have the recurrence:

    C(i, j) = min_{i ≤ t ≤ j} { C(i, t − 1) + C(t + 1, j) + Σ_{s=i}^{j} pₛ }

            = min_{i ≤ t ≤ j} { C(i, t − 1) + C(t + 1, j) } + Σ_{s=i}^{j} pₛ

where 1 ≤ i ≤ j ≤ n.

Example: Consider table C of the size (1 .. n + 1) × (0 .. n) where n = 10.

  i\j   0    1    2    3    4    5    6    7      8    9    10
  1     0    p1                                               ?
  2          0    p2
  3               0    p3
  4                    0    p4              C(4,7)
  5                         0    p5
  6                              0    p6
  7                                   0    p7
  8                                        0      p8
  9                                               0    p9
  10                                                   0    p10
  11                                                        0

Algorithm
OptimalBST(p[1 .. n]) {
    int C[1 .. n + 1, 0 .. n], R[1 .. n + 1, 0 .. n];

    for (i = 0; i ≤ n; i++)
        C[i + 1, i] = R[i + 1, i] = 0;

    for (i = 1; i ≤ n; i++) {
        C[i, i] = p[i];
        R[i, i] = i;
    }

    for (diag = 1; diag < n; diag++)
        for (i = 1; i ≤ n - diag; i++) {
            j = i + diag;
            val = min over i ≤ t ≤ j of (C[i, t - 1] + C[t + 1, j]);
            R[i, j] = the value of t that minimizes val;
            C[i, j] = val + Σ_{s=i}^{j} p[s];
        }

    return <C[1, n], R>;
}

Analysis of the algorithm: Θ(n³)

Algorithm (for constructing the tree)


tree(i, j) {
t = R[i, j];
if (t == 0) return NULL;
p = new node;
p->key = key[t];
p->left = tree(i, t - 1);
p->right = tree(t + 1, j);
return p;
}
root = tree(1, n);

Subset-Sum Problem
Find a subset of a given set A = {a₁, a₂, …, aₙ} of n positive integers whose sum is
equal to a given positive integer k.

Let’s assume that SubsetSums(A, k) is the function that finds a subset of the set
A whose sum is equal to a given positive integer k. The function returns a boolean value
depending on the existence of such a subset. It is naturally a recursive function.

Algorithm (recursive version)

SubsetSums(a[], n, k) {
    if (k == 0)
        return true;
    if (n == 0)
        return false;
    if (a[n] > k)
        return SubsetSums(a, n - 1, k);

    return SubsetSums(a, n - 1, k - a[n]) || SubsetSums(a, n - 1, k);
}
SubsetSums(a, n, k);

Analysis of the algorithm: O(2ⁿ)

Now, let’s design a dynamic programming algorithm for this problem. A table
V[0 .. n, 0 .. k] is used to hold the results of subproblems:
• If there is a subset of the set {a₁, a₂, …, aᵢ} whose sum is j (1 ≤ j ≤ k): V[i, j] = 1
• Otherwise: V[i, j] = 0

We have the recurrence:

    V[i, j] = 1   if V[i − 1, j] = 1 ∨ (j ≥ aᵢ ∧ V[i − 1, j − aᵢ] = 1)
    V[i, j] = 0   otherwise

where V[0, 1 .. k] = 0 and V[0 .. n, 0] = 1.



Algorithm
SubsetSumsDP(a[1 .. n], n, k) {
    int V[0 .. n, 0 .. k];

    V[0 .. n, 0] = 1;
    V[0, 1 .. k] = 0;
    for (i = 1; i ≤ n; i++)
        for (j = 1; j ≤ k; j++) {
            tmp = 0;
            if (j ≥ a[i])
                tmp = V[i - 1, j - a[i]];
            V[i, j] = V[i - 1, j] || tmp;
        }
    return V[n, k];
}

SubsetSumsDP(a, n, k);

Analysis of the algorithm: Θ(𝑛𝑘).

How to print the subset itself?

Algorithm
if (V[n, k]) {
    while (k) {
        if (V[n - 1, k] == 0) {    // the sum k cannot be formed without a[n]
            cout << a[n] << " ";
            k -= a[n];
        }
        n--;
    }
}

Knapsack Problem

Given n items of known weights w₁, w₂, …, wₙ (∈ ℤ⁺) and values v₁, v₂, …, vₙ (∈ ℝ⁺),
and a knapsack of capacity C (∈ ℤ⁺), find the most valuable subset of the items that fits
into the knapsack.

Let’s denote by T(i, j) the value of the most valuable subset of the first i items that
fit into a knapsack of capacity j:

    T(i, j) = max{ T(i − 1, j), vᵢ + T(i − 1, j − wᵢ) }   if j ≥ wᵢ
    T(i, j) = T(i − 1, j)                                 if j < wᵢ

    T(0, j) = 0   for j ≥ 0
    T(i, 0) = 0   for i ≥ 0

Example: Consider a set of 4 items:

    {w₁ = 2, v₁ = 12}
    {w₂ = 1, v₂ = 10}
    {w₃ = 3, v₃ = 20}
    {w₄ = 2, v₄ = 15}

and C = 5. Filling the table row by row gives:

  i\j   0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0  12  12  12  12
  2     0  10  12  22  22  22
  3     0  10  12  22  30  32
  4     0  10  15  25  30  37

Algorithm
Knapsack(w[1 .. n], v[1 .. n], C) {
    int T[0 .. n, 0 .. C];

    T[0 .. n, 0] = T[0, 1 .. C] = 0;
    for (i = 1; i ≤ n; i++)
        for (j = 1; j ≤ C; j++)
            if (j ≥ w[i])
                T[i, j] = max{ T[i - 1, j], v[i] + T[i - 1, j - w[i]] };
            else
                T[i, j] = T[i - 1, j];
    return T[n, C];
}

How to print the subset itself?
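
A hedged sketch of one answer: item i belongs to an optimal subset exactly when
T[i, j] ≠ T[i − 1, j], so the subset can be read off by walking back from T[n, C]:

#include <iostream>
#include <vector>
using std::vector;

// T is the filled (n+1) x (C+1) table, w the weights (1-indexed
// conceptually; index 0 unused). Prints the items of one optimal subset.
void printKnapsackSubset(const vector<vector<double>>& T,
                         const vector<int>& w, int n, int C) {
    int j = C;
    for (int i = n; i >= 1; i--)
        if (T[i][j] != T[i - 1][j]) {    // item i was taken
            std::cout << "item " << i << '\n';
            j -= w[i];
        }
}

On the example above this prints items 4, 2, and 1 (value 37, weight 5).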



Traveling Salesman Problem

A tour (also called a Hamiltonian circuit) in a directed graph is a path from a vertex
to itself that passes through each of the other vertices exactly once.

An optimal tour in a weighted, directed graph is such a path of minimum length.

The traveling salesperson problem is to find an optimal tour in a weighted, directed
graph when at least one tour exists.

Note: Given a graph G = (V, E) where V = {v₁, v₂, …, vₙ}. Since the starting vertex is
irrelevant to the length of an optimal tour, we will consider v₁ to be the starting vertex.

Remark:
“If 𝑣𝑖 is the first vertex after 𝑣1 on an optimal tour, the subpath of that tour from 𝑣𝑖
to 𝑣1 must be a shortest path from 𝑣𝑖 to 𝑣1 that passes through each of the other vertices
exactly once.”

Let’s denote:
− W: An adjacency-matrix representation of the graph G. The element W[i, j] is the
weight of the edge connecting vertex vᵢ with vertex vⱼ; W[i, j] = ∞ if there is no edge
connecting the two vertices vᵢ and vⱼ, and W[i, j] = 0 if i = j.
− A (⊆ V): A subset of V.
− D[vᵢ, A]: Length of a shortest path from vᵢ to v₁ passing through each vertex in A
exactly once.

In general, ∀i ∈ [2, n], vᵢ ∉ A:

    D[vᵢ, A] = min{ W[i, j] + D[vⱼ, A \ {vⱼ}] : vⱼ ∈ A }   if A ≠ ∅
    D[vᵢ, A] = W[i, 1]                                     if A = ∅

Algorithm
TSP(n, W[1 .. n, 1 .. n], P[1 .. n, 1 .. n]) {
    D[1 .. n, subset of V \ {v1}];
    P[1 .. n, subset of V \ {v1}];

    D[2 .. n, ∅] = W[2 .. n, 1];

    for (k = 1; k ≤ n - 2; k++)
        for (all subsets A ⊆ V \ {v1} containing k vertices)
            for (i such that i ≠ 1 and vi ∉ A) {
                D[i, A] = min over vj ∈ A of { W[i, j] + D[j, A \ {vj}] };
                P[i, A] = the value of j that gave the minimum;
            }

    D[1, V \ {v1}] = min over 2 ≤ j ≤ n of { W[1, j] + D[j, V \ {v1, vj}] };
    P[1, V \ {v1}] = the value of j that gave the minimum;

    return { D[1, V \ {v1}], P };
}

Analysis of the algorithm: T(n) ∈ Θ(n²2ⁿ), M(n) ∈ Θ(n2ⁿ)

How to print the optimal tour?


Algorithm
TSP_Tour(P[1 .. n, 1 .. n]) {
    cout << v1;
    A = V \ {v1};
    k = 1;
    while (A ≠ ∅) {
        k = P[k, A];
        cout << vk;
        A = A \ {vk};
    }
}
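
In an actual implementation the subsets A are usually encoded as bitmasks, so D and P
become arrays indexed by an integer mask. A C++ sketch of the Held-Karp recurrence
(note it builds paths forward from the start vertex, the mirror image of the D[vᵢ, A]
formulation above):

#include <algorithm>
#include <vector>
using std::vector;

const long long INF = 1e15;

// dp[mask][i] = length of the shortest path that starts at vertex 0,
// visits exactly the vertices in `mask` (which always contains 0),
// and ends at vertex i. W is a 0-indexed adjacency matrix.
long long tsp(const vector<vector<long long>>& W) {
    int n = (int)W.size();
    vector<vector<long long>> dp(1 << n, vector<long long>(n, INF));
    dp[1][0] = 0;                               // start at vertex 0
    for (int mask = 1; mask < (1 << n); mask++)
        for (int i = 0; i < n; i++) {
            if (dp[mask][i] == INF || !(mask & (1 << i))) continue;
            for (int j = 0; j < n; j++)
                if (!(mask & (1 << j)) && W[i][j] < INF)
                    dp[mask | (1 << j)][j] =
                        std::min(dp[mask | (1 << j)][j], dp[mask][i] + W[i][j]);
        }
    long long best = INF;                       // close the tour back to 0
    for (int i = 1; i < n; i++)
        if (dp[(1 << n) - 1][i] < INF && W[i][0] < INF)
            best = std::min(best, dp[(1 << n) - 1][i] + W[i][0]);
    return best;
}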

Memoization (or Memory Function)

Dynamic programming and divide-and-conquer both solve problems that have a
recurrence relation. Divide-and-conquer is a top-down technique. It has the
disadvantage that it solves common subproblems multiple times, which leads to poor
efficiency (typically, exponential or worse). Dynamic programming is a bottom-up
technique that solves every subproblem exactly once. It has the disadvantage that
solutions to some of the subproblems are often not necessary for getting a solution to
the given problem.

We would like to have the best of both worlds, i.e. all the necessary subproblems
solved, and solved only once. This is possible using memory functions – a hybrid
technique that combines the strengths of divide-and-conquer and dynamic programming.

The hybrid technique uses a top-down approach with a table of subproblem solutions.
Before determining a solution recursively, the algorithm checks whether the subproblem
has already been solved by consulting the table. If the table has a valid value, the
algorithm uses the table value; otherwise it proceeds with the recursive solution.

Example: Finding the nth Fibonacci number.

Algorithm
Fib(f[0 .. n], n) {
if (f[n] < 0)
f[n] = Fib(f, n - 1) + Fib(f, n - 2);
return f[n];
}

Fib_Memo(n) {
f[0] = 0;
f[1] = 1;
f[2 .. n] = -1;
return Fib(f, n);
}

Example: Matrix chain multiplication

The recurrence relation is as follows:

    C(i, j) = min{ C(i, k) + C(k + 1, j) + dᵢ₋₁ × dₖ × dⱼ : i ≤ k < j }   if i < j
    C(i, j) = 0                                                          if i = j

Algorithm
ChainMatrixMult_Memo(d[0 .. n]) {
    C[1 .. n, 1 .. n] = ∞;
    return MMM(C, d, 1, n);
}

MMM(C[1 .. n, 1 .. n], d[0 .. n], i, j) {
    if (C[i, j] < ∞)
        return C[i, j];

    if (i == j)
        C[i, j] = 0;
    else
        for (k = i; k < j; k++) {
            t = MMM(C, d, i, k) + MMM(C, d, k + 1, j) + d[i - 1] * d[k] * d[j];
            if (t < C[i, j])
                C[i, j] = t;
        }
    return C[i, j];
}

Example: The knapsack problem

The recurrence relation is as follows:

    T[i, j] = max{ T[i − 1, j], vᵢ + T[i − 1, j − wᵢ] }   if j ≥ wᵢ
    T[i, j] = T[i − 1, j]                                 if j < wᵢ

where T[0, j] = 0 for j ≥ 0 and T[i, 0] = 0 for i ≥ 0.

Algorithm
Knapsack(T[0 .. n, 0 .. W], i, j) {
    if (T[i, j] < 0) {
        if (j < w[i])
            tmp = Knapsack(T, i - 1, j);
        else
            tmp = max(Knapsack(T, i - 1, j),
                      v[i] + Knapsack(T, i - 1, j - w[i]));
        T[i, j] = tmp;
    }
    return T[i, j];
}

Knapsack_Memo() {    // global variables: w[1 .. n], v[1 .. n], W
    T[0 .. n, 0 .. W] = -1;
    T[0, 0 .. W] = T[0 .. n, 0] = 0;
    return Knapsack(T, n, W);
}

Remark: For the input data of the earlier example, a set of 4 items

    {w₁ = 2, v₁ = 12}
    {w₂ = 1, v₂ = 10}
    {w₃ = 3, v₃ = 20}
    {w₄ = 2, v₄ = 15}

and C = 5, the memory function computes only the entries it actually needs; the cells
still holding −1 were never touched:

  i\j   0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0  12  12  12  12
  2     0  −1  12  22  −1  22
  3     0  −1  −1  22  −1  32
  4     0  −1  −1  −1  −1  37
