
UNIT-III

Dynamic Programming: General Method, All pairs shortest paths, Single Source Shortest
Paths– General Weights (Bellman Ford Algorithm), Optimal Binary Search Trees, 0/1
Knapsack, Travelling Salesperson problem
Introduction
 Dynamic Programming was introduced by Richard Ernest Bellman in 1953.
 Richard Ernest Bellman was an American applied mathematician.
 Dynamic Programming is a useful design technique for solving multi-stage optimization problems.
 An optimization problem has an objective function and a set of constraints.
 An optimization problem deals with maximization or minimization of the objective function as per the requirements of the problem.
 In multi-stage optimization problems, decisions are made at successive stages to obtain a global solution for the given problem.
 Dynamic Programming divides a problem into a set of subproblems and establishes a recursive relationship between the original problem and its subproblems.
 A subproblem that represents a small part of the original problem is solved to obtain an optimal solution.
 Then the scope of the subproblem is enlarged to find the optimal solution of a new subproblem.
 This process continues until it encompasses the whole original problem; the solution of the whole problem is then obtained by combining the optimal solutions of its subproblems.

Approach
Dynamic Programming consists of three steps for solving a problem.

 Step-1: The given problem is divided into a number of subproblems. These subproblems may be interrelated, hence they overlap each other.

 Step-2: To avoid re-computing the same overlapping subproblems repeatedly, a table is created. Whenever a subproblem is solved, its solution is stored in the table so that it can be reused in the future.

 Step-3: The solutions of the subproblems are combined in a bottom-up manner to obtain the final solution of the given problem.
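The three steps above can be sketched with a small illustrative problem (Fibonacci numbers, chosen here only as a stand-in example; it is not part of the syllabus):

```python
# Tabulation sketch of the three DP steps, using Fibonacci
# numbers as an illustrative stand-in problem.

def fib(n):
    # Step-1: fib(n) is divided into the overlapping subproblems
    # fib(0), fib(1), ..., fib(n - 1).
    table = [0] * (n + 1)          # Step-2: table stores solved subproblems
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        # Step-3: combine the stored solutions in a bottom-up manner
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Without the table, the plain recursive version recomputes the same subproblems exponentially many times; storing each solution once makes the computation linear in n.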
All-Pairs Shortest-Paths Algorithm
Approach
• Let G = (V, E) be a directed graph with n vertices.
• Let cost be a cost adjacency matrix for G such that cost(i, j) = 0 when i = j, 1 ≤ i ≤ n, 1 ≤ j ≤ n.
• Then cost(i, j) is the length of edge <i, j> if <i, j> ∈ E(G), and cost(i, j) = ∞ if i ≠ j
and <i, j> ∉ E(G).
• The All-Pairs Shortest-Path problem is to determine a matrix A such that A(i, j)
is the length of a shortest path from i to j.
• At each stage, we can obtain a matrix by solving n single-source problems.
Algorithm
Algorithm AllPairs(cost, A, n)
{
    for i := 1 to n do
        for j := 1 to n do
            if i = j then A[i, j] := 0;
            else A[i, j] := cost[i, j];
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
A^k(i, j) = min{A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j)}, k ≥ 1
A^1(2,3) = min{A^0(2,3), A^0(2,1) + A^0(1,3)} = min{2, 8 + ∞} = 2
A^1(2,4) = min{A^0(2,4), A^0(2,1) + A^0(1,4)} = min{∞, 8 + 7} = 15
A^1(3,2) = min{A^0(3,2), A^0(3,1) + A^0(1,2)} = min{∞, 5 + 3} = 8
A^1(3,4) = min{A^0(3,4), A^0(3,1) + A^0(1,4)} = min{1, 5 + 7} = 1
A^1(4,2) = min{A^0(4,2), A^0(4,1) + A^0(1,2)} = min{∞, 2 + 3} = 5
A^1(4,3) = min{A^0(4,3), A^0(4,1) + A^0(1,3)} = min{∞, 2 + ∞} = ∞
A^2(1,3) = min{A^1(1,3), A^1(1,2) + A^1(2,3)} = min{∞, 3 + 2} = 5
A^2(1,4) = min{A^1(1,4), A^1(1,2) + A^1(2,4)} = min{7, 3 + 15} = 7
A^2(3,1) = min{A^1(3,1), A^1(3,2) + A^1(2,1)} = min{5, 8 + 8} = 5
A^2(3,4) = min{A^1(3,4), A^1(3,2) + A^1(2,4)} = min{1, 8 + 15} = 1
A^2(4,1) = min{A^1(4,1), A^1(4,2) + A^1(2,1)} = min{2, 5 + 8} = 2
A^2(4,3) = min{A^1(4,3), A^1(4,2) + A^1(2,3)} = min{∞, 5 + 2} = 7
A^3(1,2) = min{A^2(1,2), A^2(1,3) + A^2(3,2)} = min{3, 5 + 8} = 3
A^3(1,4) = min{A^2(1,4), A^2(1,3) + A^2(3,4)} = min{7, 5 + 1} = 6
A^3(2,1) = min{A^2(2,1), A^2(2,3) + A^2(3,1)} = min{8, 2 + 5} = 7
A^3(2,4) = min{A^2(2,4), A^2(2,3) + A^2(3,4)} = min{15, 2 + 1} = 3
A^3(4,1) = min{A^2(4,1), A^2(4,3) + A^2(3,1)} = min{2, 7 + 5} = 2
A^3(4,2) = min{A^2(4,2), A^2(4,3) + A^2(3,2)} = min{5, 7 + 8} = 5
A^4(1,2) = min{A^3(1,2), A^3(1,4) + A^3(4,2)} = min{3, 6 + 5} = 3
A^4(1,3) = min{A^3(1,3), A^3(1,4) + A^3(4,3)} = min{5, 6 + 7} = 5
A^4(2,1) = min{A^3(2,1), A^3(2,4) + A^3(4,1)} = min{7, 3 + 2} = 5
A^4(2,3) = min{A^3(2,3), A^3(2,4) + A^3(4,3)} = min{2, 3 + 7} = 2
A^4(3,1) = min{A^3(3,1), A^3(3,4) + A^3(4,1)} = min{5, 1 + 2} = 3
A^4(3,2) = min{A^3(3,2), A^3(3,4) + A^3(4,2)} = min{8, 1 + 5} = 6
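The computation above can be sketched in Python. The original graph figure is not reproduced in these notes, so the cost matrix below is inferred from the A^0 values used in the computation (an assumption, not part of the original):

```python
# Floyd-Warshall sketch for the 4-vertex worked example above.
# The cost matrix is inferred from the A^0 values in the notes;
# the original graph figure is not included.
INF = float('inf')

def all_pairs(cost):
    n = len(cost)
    A = [row[:] for row in cost]          # A^0 is the cost matrix
    for k in range(n):                    # allow intermediate vertex k
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

cost = [[0, 3, INF, 7],
        [8, 0, 2, INF],
        [5, INF, 0, 1],
        [2, INF, INF, 0]]
A = all_pairs(cost)
print(A[0])  # [0, 3, 5, 6] — matches A^4(1, j) above
```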
Example:
Given a weighted digraph G = (V, E), determine the length of the shortest path between all pairs of vertices in G. Here we assume that there are no cycles with zero or negative cost.
Step 1: Solving the equation for k = 1:
A^1(1,1) = min{(A^0(1,1) + A^0(1,1)), c(1,1)} = min{0 + 0, 0} = 0
A^1(1,2) = min{(A^0(1,1) + A^0(1,2)), c(1,2)} = min{(0 + 4), 4} = 4
A^1(1,3) = min{(A^0(1,1) + A^0(1,3)), c(1,3)} = min{(0 + 11), 11} = 11
A^1(2,1) = min{(A^0(2,1) + A^0(1,1)), c(2,1)} = min{(6 + 0), 6} = 6
A^1(2,2) = min{(A^0(2,1) + A^0(1,2)), c(2,2)} = min{(6 + 4), 0} = 0
A^1(2,3) = min{(A^0(2,1) + A^0(1,3)), c(2,3)} = min{(6 + 11), 2} = 2
A^1(3,1) = min{(A^0(3,1) + A^0(1,1)), c(3,1)} = min{(3 + 0), 3} = 3
A^1(3,2) = min{(A^0(3,1) + A^0(1,2)), c(3,2)} = min{(3 + 4), ∞} = 7
A^1(3,3) = min{(A^0(3,1) + A^0(1,3)), c(3,3)} = min{(3 + 11), 0} = 0

Step 2: Solving the equation for k = 2:

A^2(1,1) = min{(A^1(1,2) + A^1(2,1)), A^1(1,1)} = min{(4 + 6), 0} = 0
A^2(1,2) = min{(A^1(1,2) + A^1(2,2)), A^1(1,2)} = min{(4 + 0), 4} = 4
A^2(1,3) = min{(A^1(1,2) + A^1(2,3)), A^1(1,3)} = min{(4 + 2), 11} = 6
A^2(2,1) = min{(A^1(2,2) + A^1(2,1)), A^1(2,1)} = min{(0 + 6), 6} = 6
A^2(2,2) = min{(A^1(2,2) + A^1(2,2)), A^1(2,2)} = min{(0 + 0), 0} = 0
A^2(2,3) = min{(A^1(2,2) + A^1(2,3)), A^1(2,3)} = min{(0 + 2), 2} = 2
A^2(3,1) = min{(A^1(3,2) + A^1(2,1)), A^1(3,1)} = min{(7 + 6), 3} = 3
A^2(3,2) = min{(A^1(3,2) + A^1(2,2)), A^1(3,2)} = min{(7 + 0), 7} = 7
A^2(3,3) = min{(A^1(3,2) + A^1(2,3)), A^1(3,3)} = min{(7 + 2), 0} = 0

Step 3: Solving the equation for k = 3:

A^3(1,1) = min{(A^2(1,3) + A^2(3,1)), A^2(1,1)} = min{(6 + 3), 0} = 0
A^3(1,2) = min{(A^2(1,3) + A^2(3,2)), A^2(1,2)} = min{(6 + 7), 4} = 4
A^3(1,3) = min{(A^2(1,3) + A^2(3,3)), A^2(1,3)} = min{(6 + 0), 6} = 6
A^3(2,1) = min{(A^2(2,3) + A^2(3,1)), A^2(2,1)} = min{(2 + 3), 6} = 5
A^3(2,2) = min{(A^2(2,3) + A^2(3,2)), A^2(2,2)} = min{(2 + 7), 0} = 0
A^3(2,3) = min{(A^2(2,3) + A^2(3,3)), A^2(2,3)} = min{(2 + 0), 2} = 2
A^3(3,1) = min{(A^2(3,3) + A^2(3,1)), A^2(3,1)} = min{(0 + 3), 3} = 3
A^3(3,2) = min{(A^2(3,3) + A^2(3,2)), A^2(3,2)} = min{(0 + 7), 7} = 7
A^3(3,3) = min{(A^2(3,3) + A^2(3,3)), A^2(3,3)} = min{(0 + 0), 0} = 0

0/1 Knapsack Problem


• A solution to the Knapsack problem can be obtained by making a sequence of decisions
on the variables x1, x2, …, xn.
• A decision variable xi can have one of the two possible values 0 or 1. Fractional values
are not accepted.
• Let us assume the decisions on the variables xi are made in the order xn, xn-1, …, x1.
• After the decision on xn, the remaining capacity is M - wn, and a profit of pn is
earned if xn = 1.
• Hence, it is clear that the decisions on xn-1, …, x1 must be optimal with respect to the
problem state resulting from the decision on xn.
• Let fi(y) be the value of an optimal solution to KNAP(1, i, y). Then
fn(M) = max{fn-1(M), fn-1(M - wn) + pn}
• For an arbitrary fi(y), i > 0,
fi(y) = max{fi-1(y), fi-1(y - wi) + pi}
• The above equation is solved for fn(M) by beginning with the knowledge f0(y) = 0
for all y ≥ 0 and fi(y) = -∞ for y < 0.
• Then f1, f2, …, fn can be successively computed.
• fi(y) is an ascending step function, hence we use the ordered set
S^i = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y).
• Each element of S^i is a pair (P, W), where P = fi(yj) and W = yj.
• The solution of the Knapsack problem can be started with S^0 = {(0, 0)}.
• S1^i is computed from S^i as:
S1^i = {(P, W) | (P - pi+1, W - wi+1) ∈ S^i}
• Then S^(i+1) is computed by merging the pairs of S^i and S1^i together.
• Suppose S^(i+1) contains two pairs (Pj, Wj) and (Pk, Wk) with the property that
Pj ≤ Pk and Wj ≥ Wk. Then the pair (Pj, Wj) can be discarded; this is called the
purging rule or dominance rule.
Formally, the Knapsack problem can be stated as:
maximize Σ(1 ≤ i ≤ n) pi xi
subject to Σ(1 ≤ i ≤ n) wi xi ≤ M
and xi = 0 or 1, 1 ≤ i ≤ n
Example: Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) =
(1, 2, 5) and M = 6.
Solution: Initially, f0(y) = 0 for all y, and fi(y) = -∞ if y < 0.
fn(M) = max{fn-1(M), fn-1(M - wn) + pn}
f3(6) = max{f2(6), f2(6 - 4) + 5} = max{f2(6), f2(2) + 5}
f2(6) = max{f1(6), f1(6 - 3) + 2} = max{f1(6), f1(3) + 2}
f1(6) = max{f0(6), f0(6 - 2) + 1} = max{0, 0 + 1} = 1
f1(3) = max{f0(3), f0(3 - 2) + 1} = max{0, 0 + 1} = 1
Therefore, f2(6) = max{1, 1 + 2} = 3
f2(2) = max{f1(2), f1(2 - 3) + 2} = max{f1(2), -∞ + 2}
f1(2) = max{f0(2), f0(2 - 2) + 1} = max{0, 0 + 1} = 1
f2(2) = max{1, -∞ + 2} = 1
Finally, f3(6) = max{3, 1 + 5} = 6
The same instance can also be solved with the ordered-set method. Apply Dynamic Programming on the instance of Knapsack, n = 3, M = 6, (w1, w2, w3) = (2, 3, 4)
and (p1, p2, p3) = (1, 2, 5):
S^0 = {(0, 0)}

S1^0 = S^0 + (p1, w1)
     = {(0, 0)} + (1, 2)
     = {(0 + 1, 0 + 2)} = {(1, 2)}
S^1 = S^0 ∪ S1^0
    = {(0, 0)} ∪ {(1, 2)}
    = {(0, 0), (1, 2)}
S1^1 = S^1 + (p2, w2)
     = {(0, 0), (1, 2)} + (2, 3)
     = {(0 + 2, 0 + 3), (1 + 2, 2 + 3)}
     = {(2, 3), (3, 5)}
S^2 = S^1 ∪ S1^1
    = {(0, 0), (1, 2)} ∪ {(2, 3), (3, 5)}
    = {(0, 0), (1, 2), (2, 3), (3, 5)}
S1^2 = S^2 + (p3, w3)
     = {(0, 0), (1, 2), (2, 3), (3, 5)} + (5, 4)
     = {(0 + 5, 0 + 4), (1 + 5, 2 + 4), (2 + 5, 3 + 4), (3 + 5, 5 + 4)}
     = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3 = S^2 ∪ S1^2
    = {(0, 0), (1, 2), (2, 3), (3, 5)} ∪ {(5, 4), (6, 6), (7, 7), (8, 9)}
    = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}

Purging or dominance rule: If S^n has two pairs (Pj, Wj) and (Pk, Wk) such that Pj ≤ Pk and
Wj ≥ Wk, then the pair (Pj, Wj) is discarded.
Consider S^3: the pairs (3, 5) and (5, 4) fall under the purging rule,
i.e., 3 ≤ 5 and 5 ≥ 4 both hold.
Hence, the pair (3, 5) is discarded, and
S^3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6), (7, 7), (8, 9)}
After applying the purging rule, the solution is traced back as follows:
if (P, W) ∈ S^n and (P, W) ∉ S^(n-1) then xn = 1, otherwise xn = 0.
Here, the capacity of the Knapsack is 6, hence we start from the pair (6, 6):
(6, 6) ∈ S^3 and (6, 6) ∉ S^2, therefore x3 = 1
Then, (6, 6) - (p3, w3) = (6, 6) - (5, 4) = (1, 2)

(1, 2) ∈ S^2 and (1, 2) ∈ S^1, therefore x2 = 0

(1, 2) ∈ S^1 and (1, 2) ∉ S^0, therefore x1 = 1

∴ (x1, x2, x3) = (1, 0, 1)
∴ Maximized profit Σ pi xi = p1 x1 + p2 x2 + p3 x3
= 1 * 1 + 2 * 0 + 5 * 1
= 1 + 0 + 5
= 6 units
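The pair-merging method above can be sketched in Python. Each S^i is kept as a list of (profit, weight) pairs sorted by weight, with dominated and overweight pairs purged at every step:

```python
# Sketch of the ordered-set (pair-merging) knapsack method above.
# Each S is a list of (profit, weight) pairs, purged by dominance.

def knapsack_pairs(profits, weights, capacity):
    S = [(0, 0)]                               # S^0 = {(0, 0)}
    history = [S]
    for p, w in zip(profits, weights):
        S1 = [(P + p, W + w) for (P, W) in S]  # S1^i = S^i + (p, w)
        merged = sorted(S + S1, key=lambda pw: pw[1])
        # Purging rule: drop (Pj, Wj) when an earlier pair already has
        # profit >= Pj with weight <= Wj; also drop overweight pairs.
        purged = []
        best_profit = -1
        for P, W in merged:
            if W <= capacity and P > best_profit:
                purged.append((P, W))
                best_profit = P
        S = purged
        history.append(S)
    return max(P for P, W in S), history

best, history = knapsack_pairs([1, 2, 5], [2, 3, 4], 6)
print(best)  # 6 — matches f3(6) = 6 above
```

The traceback (x1, x2, x3) = (1, 0, 1) can be recovered from `history` exactly as in the worked example, by checking whether the optimal pair belongs to the previous set.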
Optimal Binary Search Trees
 An OBST is a binary search tree which provides the smallest possible expected search time for a given
sequence of accesses.
 The search time is minimized by placing the most frequently used data at the root or
close to the root, while placing the least frequently used data near or in the leaves.

 Given a set of identifiers {a1, a2, …, an}, suppose we need to construct a binary search tree,
and let p(i) be the probability with which we search for ai.
 If a binary search tree represents n identifiers, then there will be exactly n internal nodes
and n + 1 external nodes.
 Every internal node represents a point where a successful search may terminate.
 Every external node represents a point where an unsuccessful search may terminate.
 If a successful search terminates at an internal node at level l, then l comparisons are
needed. Hence the expected cost contribution from the internal node for ai is
p(i) * level(ai).
 The identifiers not in the binary search tree can be partitioned into n + 1 equivalence
classes Ei, 0 ≤ i ≤ n. If the failure node for Ei is at level l, then only l - 1 comparisons are
needed.
 Let q(i) be the probability that the identifier x being searched for is in Ei. The cost
contribution of the failure node for Ei is q(i) * (level(Ei) - 1).
 The cost of the binary search tree is therefore
cost = Σ(1 ≤ i ≤ n) p(i) * level(ai) + Σ(0 ≤ i ≤ n) q(i) * (level(Ei) - 1),
and an optimal binary search tree is one that minimizes this cost.

 The possible binary search trees for the identifier set (a1, a2, a3) = (do, if, while), with equal
probabilities p(i) = q(i) = 1/7 for all i, can be compared using the recurrences

w(i, j) = p(j) + q(j) + w(i, j - 1)
c(i, j) = min(i < k ≤ j) {c(i, k - 1) + c(k, j)} + w(i, j)

• We first compute all c(i, j) such that j - i = 1 (note c(i, i) = 0 and w(i, i) = q(i), 0 ≤ i ≤ n). Next we can compute
all c(i, j) such that j - i = 2, then all c(i, j) with j - i = 3, and so on.
• If during this computation we record the root r(i, j) of each tree tij, then an optimal binary
search tree can be constructed from these r(i, j).
Example:
Let n = 4 and (a1,a2,a3,a4) = (do, if, int, while). Let p(1:4) = (3,3,1,1) and q(0:4) =
(2,3,1,1,1).
Initially, we have
w(i, i)= q(i),
c(i, i)= 0 and
r(i, i)= 0, 0 ≤ i≤ 4
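The tables w, c and r for this example can be computed with a short sketch (the probabilities are kept as the scaled integers used in the notes):

```python
# Sketch of the OBST computation for the example above
# (p indexed 1..4, q indexed 0..4, values as scaled integers).

def obst(p, q, n):
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                 # w(i, i) = q(i); c(i, i) = r(i, i) = 0
    for d in range(1, n + 1):          # solve for j - i = 1, 2, ..., n
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # choose the root k minimizing c(i, k-1) + c(k, j)
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i][k - 1] + c[k][j])
            r[i][j] = best_k
            c[i][j] = c[i][best_k - 1] + c[best_k][j] + w[i][j]
    return c, w, r

p = [0, 3, 3, 1, 1]        # p(1:4), with a dummy 0 at index 0
q = [2, 3, 1, 1, 1]        # q(0:4)
c, w, r = obst(p, q, 4)
print(c[0][4], r[0][4])    # 32 2 — optimal cost 32, root a2 = "if"
```

Reading off r(0, 4) = 2 says the overall root is a2; recursing on r(0, 1) and r(2, 4) reconstructs the whole optimal tree.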
Travelling Salesperson Problem
 Let G = (V, E) be a directed graph with edge costs cij.
 The cost cij is defined such that cij > 0 for all i and j with i ≠ j, cij = 0 if i = j, and cij = ∞ if
<i, j> ∉ E.
 Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every
vertex in V.
 The cost of a tour is the sum of the costs of the edges on the tour.
 The traveling salesperson problem is to find a tour of minimum cost.
 The tour is to be a simple path that starts and ends at vertex 1.
 Let g (i, S) be the length of shortest path starting at vertex i, going through all vertices in
S, and terminating at vertex 1.
 The function g(1, V - {1}) is the length of an optimal salesperson tour. From the principle
of optimality it follows that

g(1, V - {1}) = min(2 ≤ k ≤ n) {c1k + g(k, V - {1, k})}   ----- (1)

 Generalizing equation (1), we obtain (for i ∉ S)

g(i, S) = min(j ∈ S) {cij + g(j, S - {j})}   ----- (2)
Example: For the following graph find minimum cost tour for the traveling salesperson
problem

Clearly, g (i, ϕ) = ci1 , 1 ≤ i ≤ n. So,


g (2, ϕ) = C21 = 5
g (3, ϕ) = C31 = 6
g (4, ϕ) = C41 = 8
From the above equation
g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}
g (2, {3, 4}) = min {c23 + g (3, {4}), c24 + g (4, {3})}= min {9 + g (3, {4}), 10 + g (4, {3})}
g (3, {4}) = min {c34 + g (4, ϕ)} = 12 + 8 = 20
g (4, {3}) = min {c43 + g (3, ϕ)} = 9 + 6 = 15
Therefore, g (2, {3, 4}) = min {9 + 20, 10 + 15} = min {29, 25} = 25
g (3, {2, 4}) = min {(c32 + g (2, {4}), (c34 + g (4,{2})}
g (2, {4}) = min {c24 + g (4, ϕ)} = 10 + 8 = 18
g (4, {2}) = min {c42 + g (2, ϕ)} = 8 + 5 = 13
Therefore, g (3, {2, 4}) = min {13 + 18, 12 + 13} = min {31, 25} = 25
g (4, {2, 3}) = min {c42 + g (2, {3}), c43 + g (3, {2})}
g (2, {3}) = min {c23 + g (3, ϕ)} = 9 + 6 = 15
g (3, {2}) = min {c32 + g (2, ϕ)} = 13 + 5 = 18
Therefore, g (4, {2, 3}) = min {8 + 15, 9 + 18} = min {23, 27} =23
g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}
= min {10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35

The optimal tour for the graph has length = 35


The optimal tour is: 1, 2, 4, 3, 1.
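The g(i, S) computation above can be sketched with the Held-Karp formulation. The graph figure is not reproduced in these notes, so the cost values below are inferred from the cij values used in the computation (an assumption, not part of the original):

```python
# Held-Karp sketch for the worked TSP example; cost values are
# inferred from the cij values used in the computation (vertices 1..4).
from itertools import combinations

c = {(1, 2): 10, (1, 3): 15, (1, 4): 20,
     (2, 1): 5,  (2, 3): 9,  (2, 4): 10,
     (3, 1): 6,  (3, 2): 13, (3, 4): 12,
     (4, 1): 8,  (4, 2): 8,  (4, 3): 9}

def tsp(n):
    # g[(i, S)] = length of a shortest path from i through all of S to 1
    g = {(i, frozenset()): c[(i, 1)] for i in range(2, n + 1)}  # g(i, φ) = ci1
    vertices = range(2, n + 1)
    for size in range(1, n - 1):                # |S| = 1, 2, ..., n - 2
        for S in map(frozenset, combinations(vertices, size)):
            for i in vertices:
                if i in S:
                    continue
                g[(i, S)] = min(c[(i, j)] + g[(j, S - {j})] for j in S)
    full = frozenset(vertices)
    return min(c[(1, k)] + g[(k, full - {k})] for k in vertices)

print(tsp(4))  # 35 — matches g(1, {2, 3, 4}) above
```

Recording which k attains each minimum would recover the tour 1, 2, 4, 3, 1 as in the worked example.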

Single Source Shortest Path with General Weights (Bellman-Ford Algorithm)
 The single-source shortest path algorithm with general weights finds the shortest paths
from a source vertex to all other vertices, even when some edge weights are negative.
 The single-source shortest path problem can also be solved using Dijkstra's algorithm.
Consider the following graph (the original figure is not reproduced; from the computation
below, edge 1 → 2 costs 7, edge 1 → 3 costs 5 and edge 2 → 3 costs -5):
When Dijkstra's algorithm terminates,
dist[2] = 7 and dist[3] = 5.
But the shortest path from 1 to 3 is 1 - 2 - 3, which gives 7 - 5 = 2.
This computation cannot be done by Dijkstra's algorithm, because
by default Dijkstra's algorithm assumes all edge costs are positive.
Synthesis:
• Negative edges are accepted.
• Negative cycle weight is not allowed.
Consider the following cycle (from the original figure, with edge weights 5, 3 and -10):

Total weight of this cycle is 5 + 3 - 10 = -2. A path touching such a negative-weight cycle can be made arbitrarily short, so shortest paths are not defined.


The basic idea of the Bellman-Ford algorithm is systematic recomputation:
 For increasing lengths k, compute shortest paths with at most k edges.
 A shortest path can have at most n - 1 edges, where n is the number of vertices.
• A path with n or more edges must contain a cycle.
• Hence, cycles are eliminated and the computation continues with paths of at most n - 1 edges.
Algorithm:
Algorithm BellmanFord(v, cost, dist, n)
// Single-source shortest paths with general (possibly negative) edge costs
{
    for i := 1 to n do
        dist[i] := cost[v, i];      // initialize with length-one paths
    for k := 2 to n - 1 do
        for each u such that u ≠ v and u has at least one incoming edge do
            for each edge (i, u) in the graph do
                if dist[u] > dist[i] + cost[i, u] then
                    dist[u] := dist[i] + cost[i, u];
}

Recurrence
• A path of length k from v to u breaks up as:
• a path of length k - 1 from v to some neighbour w of u, plus
• the edge from w to u.
• Relaxing all the edges (n - 1) times:
if (dist[u] + cost(u, v) < dist[v]) then
    dist[v] := dist[u] + cost(u, v);

The edges of the graph are relaxed in the order: (A, B), (A, C), (A, D), (B, E), (C, B), (C, E), (D, C), (D, F), (E, F)
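The relaxation idea can be sketched on the small negative-edge example from the start of this section. The edge costs below are inferred from the Dijkstra discussion (1 → 2 costs 7, 1 → 3 costs 5, 2 → 3 costs -5), since the original figure is not included:

```python
# Bellman-Ford sketch on the negative-edge example above
# (edge costs inferred from the Dijkstra discussion in the notes).

def bellman_ford(n, edges, source):
    INF = float('inf')
    dist = {v: INF for v in range(1, n + 1)}
    dist[source] = 0
    for _ in range(n - 1):            # relax all edges (n - 1) times
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
    # one extra pass: any further improvement reveals a negative cycle
    for u, v, cost in edges:
        if dist[u] + cost < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

edges = [(1, 2, 7), (1, 3, 5), (2, 3, -5)]
print(bellman_ford(3, edges, 1))  # {1: 0, 2: 7, 3: 2}
```

Note that dist[3] correctly becomes 2 via the path 1 - 2 - 3, which Dijkstra's algorithm would miss.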