Dynamic Programming Approach
Why DP?
• Going bottom-up is a common strategy for dynamic programming problems, which are problems where the solution is composed of solutions to the same problem with smaller inputs (as with the Fibonacci problem, above). The other common strategy for dynamic programming problems is memoization.
Dynamic Programming vs. Recursion and Divide & Conquer
• In a recursive program, a problem of size n is solved by first solving a sub-problem of size n−1.
• In a divide & conquer program, you solve a problem of size n by first solving a sub-problem of size k and another of size n−k, where 1 ≤ k < n.
• In dynamic programming, you solve a problem of size n by first solving all sub-problems of all sizes k, where k < n.
Recursive approach

Store (Memoize)

Bottom-up approach
Two main properties of a problem
• Dynamic programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of the subproblems to avoid computing the same results again. The following are the two main properties of a problem that suggest it can be solved using dynamic programming:
1) Overlapping subproblems
2) Optimal substructure
Calculating Binomial Coefficient
• In mathematics, particularly in combinatorics, a binomial coefficient is the coefficient of one of the terms in the expansion of (a + b)^n.
• It is denoted by C(n, k) or $\binom{n}{k}$, where 0 ≤ k ≤ n.
Calculating Binomial Coefficient
• Recurrence: C(n, k) = C(n−1, k−1) + C(n−1, k) for n > k > 0 .............. (1)
  Base cases: C(n, 0) = 1 and C(n, n) = 1 for n ≥ 0 .............. (2)
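The recurrence translates directly into a recursive function (a minimal sketch of ours); without memoization it recomputes the same subproblems repeatedly, which the table-based DP described later avoids:

def C(n: int, k: int) -> int:
    if k == 0 or k == n:                      # base cases (2)
        return 1
    return C(n - 1, k - 1) + C(n - 1, k)      # recurrence (1)

assert C(4, 2) == 6    # the value derived in the worked example below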
Calculating Binomial Coefficient
• Compute C(4, 2). Here n = 4 and k = 2.
  C(4, 2) = C(4−1, 2−1) + C(4−1, 2)
  C(4, 2) = C(3, 1) + C(3, 2) -------------------------- (1)
• As there are two unknowns, C(3, 1) and C(3, 2), in the above equation, we will compute these sub-instances of C(4, 2) first.
Calculating Binomial Coefficient
• Now let us compute C(3, 1) and C(3, 2).
  C(3, 1) = C(2, 0) + C(2, 1)
  As C(n, 0) = 1, we can write C(2, 0) = 1, so
  C(3, 1) = 1 + C(2, 1) ------------------------ (2)
• Hence let us compute C(2, 1), with n = 2 and k = 1:
  C(2, 1) = C(2−1, 1−1) + C(2−1, 1)
          = C(1, 0) + C(1, 1) ----------------------- (3)
• But as C(n, 0) = 1 and C(n, n) = 1, we get C(1, 0) = 1 and C(1, 1) = 1.
Calculating Binomial Coefficient
• Substituting these values into equation (3), we get
  C(2, 1) = C(1, 0) + C(1, 1) = 1 + 1
  C(2, 1) = 2 ---------------------- (4)
• Substituting C(2, 1) = 2 into equation (2) gives
  C(3, 1) = 1 + 2 = 3 ---------------------- (5)
• Now, to solve equation (1), we compute C(3, 2) with n = 3 and k = 2:
  C(3, 2) = C(2, 1) + C(2, 2)
Calculating Binomial Coefficient
• But as C(n, n) = 1, we have C(2, 2) = 1. Putting the value of C(2, 1) from equation (4) together with C(2, 2) into C(3, 2), we get
  C(3, 2) = C(2, 1) + C(2, 2) = 2 + 1
  C(3, 2) = 3 ------------------- (6)
• Finally, substituting equations (5) and (6) into equation (1):
  C(4, 2) = C(3, 1) + C(3, 2) = 3 + 3 = 6
How is the DP approach used?
• While computing C(n, k), smaller overlapping instances are generated by C(n−1, k−1) and C(n−1, k).
• These overlapping, smaller instances of the problem must be solved first. The solutions obtained by solving these instances ultimately build up the final solution.
• Thus DP is used for computing the binomial coefficient.
• If we record the binomial coefficients for n and k values ranging from 0 to n and 0 to k, the table looks like Pascal's triangle.
• To compute C(n, k) we fill the table row by row, starting each row i with C(i, 0) = 1 and ending with C(i, i) = 1.
• Each cell of the current row is calculated from the two adjacent cells of the previous row, as sketched below.
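A minimal sketch of this row-by-row table fill (illustrative Python; the function name is ours):

def binomial(n: int, k: int) -> int:
    # C[i][j] holds C(i, j); fill row by row, as in Pascal's triangle.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:                # C(i, 0) = C(i, i) = 1
                C[i][j] = 1
            else:                               # two adjacent cells of row i-1
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

assert binomial(4, 2) == 6    # matches the worked example above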
Calculating Binomial Coefficient: Time Complexity
• In calculating the binomial coefficient the basic operation is addition, i.e.,
  C[i, j] = C[i−1, j−1] + C[i−1, j]
• Let A(n, k) denote the total number of additions made in computing C(n, k). Rows i ≤ k have i − 1 cells that need an addition, and rows i > k have k such cells:

  A(n, k) = \sum_{i=1}^{k} \sum_{j=1}^{i-1} 1 + \sum_{i=k+1}^{n} \sum_{j=1}^{k} 1
          = \sum_{i=1}^{k} (i − 1) + \sum_{i=k+1}^{n} k        (since \sum_{j=1}^{m} 1 = m)
          = [1 + 2 + … + (k − 1)] + k(n − k)
          = k(k − 1)/2 + k(n − k)
          = nk − k²/2 − k/2

  A(n, k) ∈ Θ(nk)
Thief Story (Knapsack)
• The problem is often given as a story:
• A thief breaks into a house.
• Around the thief are various objects: a diamond ring, a silver candle lamp, a radio, and a large portrait of Elvis painted on a black velvet background (a "velvet-elvis").
• The thief has a knapsack that can only hold a certain capacity.
• Each item has a size and a value, and the knapsack cannot hold all of the items:

  Item             Size  Value
  1 – ring           1    15
  2 – candle lamp    5    10
  3 – radio          3     9
  4 – elvis          4     5
• Finally, the values of the table are filled in, from left to right and from top to bottom. For each cell, the total worth of the knapsack is either the worth of the knapsack without the current item (the value directly to the left of the current cell) or the value of the knapsack with the current item added into it, whichever is larger.
Example (see the sketch below):
  Object: 1, 2, 3, 4
  Profit: 1, 4, 5, 7
  Weight: 1, 3, 4, 5
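A minimal sketch of the table fill for this example (illustrative Python; the slide does not state a knapsack capacity, so W = 7 below is an assumed value):

def knapsack(profits, weights, W):
    # dp[w] = best total profit achievable with capacity w.
    dp = [0] * (W + 1)
    for p, wt in zip(profits, weights):
        # Go through capacities downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + p)
    return dp[W]

# Objects 1-4 with the profits and weights from the example above.
print(knapsack([1, 4, 5, 7], [1, 3, 4, 5], W=7))   # -> 9 (objects 2 and 3)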
Bellman-Ford Algorithm
• The Bellman-Ford algorithm returns a boolean:
• TRUE if and only if there is no negative-weight cycle reachable from the source, i.e., no simple cycle ⟨v₀, v₁, …, v_k⟩ with v₀ = v_k and
  \sum_{i=1}^{k} w(v_{i−1}, v_i) < 0
• FALSE otherwise
Negative-Weight Edges
• Negative-weight edges may form negative-weight cycles.
Negative-Weight Edges
[Figure: a weighted directed graph with source s, illustrating paths and cycles that involve negative-weight edges]
• s → a: only one path, so δ(s, a) = w(s, a) = 3
• s → b: only one path, so δ(s, b) = w(s, a) + w(a, b) = 3 + (−4) = −1
• s → c: infinitely many paths: ⟨s, c⟩, ⟨s, c, d, c⟩, ⟨s, c, d, c, d, c⟩, …
Cycles
• Can shortest paths contain cycles?
  • Negative-weight cycles: no!
  • Zero-weight cycles: no reason to use them; removing the cycle yields a path of the same weight.
Single-Source Shortest Paths: The General Case (with negative edges)
• Bellman-Ford algorithm.
• Running time? O(VE).
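A minimal sketch of the algorithm (our own Python; edges are (u, v, w) triples and vertices are numbered 0 … n−1):

def bellman_ford(n, edges, source):
    # Relax every edge n-1 times; a further improvement afterwards
    # means a negative-weight cycle is reachable from the source.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None, False         # negative cycle detected -> FALSE
    return dist, True                  # dist[v] = delta(source, v) -> TRUE

The two nested passes over all edges give the O(VE) running time noted above.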
All-Pairs Shortest Paths
• Given a directed graph G = (V, E), weight function w : E → R, |V| = n.
• Assume no negative-weight cycles.
• Goal: create an n × n matrix of shortest-path distances δ(u, v).
• Could run BELLMAN-FORD once from each vertex:
  • O(V²E), which is O(V⁴) if the graph is dense (E = Θ(V²)).
• If there are no negative-weight edges, could run Dijkstra's algorithm once from each vertex:
  • O(VE lg V) with a binary heap, which is O(V³ lg V) if dense.
• We will see how to do it in O(V³) in all cases, with no fancy data structures.
All-Pairs Shortest Paths: Floyd-Warshall Algorithm
• Dynamic programming approach.
• Uses the optimal substructure of shortest paths: any subpath of a shortest path is itself a shortest path.
• Create a 3-dimensional table:
  • Let d_ij^(k) = the shortest-path weight of any path from i to j in which all intermediate vertices are from the set {1, 2, …, k}.
  • Ultimately, we would like to know the values of d_ij^(n).
Computing d_ij^(k)
• Either vertex k is not an intermediate vertex of the shortest path (weight d_ij^(k−1)), or it is (weight d_ik^(k−1) + d_kj^(k−1)):
  d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) ), with d_ij^(0) = w(i, j)
• Running time? O(n³).
• Memory required? O(n²) (if we drop the superscripts).
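A minimal sketch using this recurrence with the superscripts dropped (our own Python; dist is an n × n matrix with w(i, j) for edges, float('inf') for non-edges, and 0 on the diagonal):

def floyd_warshall(dist):
    # After iteration k, dist[i][j] is the shortest i-to-j path weight
    # using only intermediate vertices from {0, ..., k}.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

The three nested loops give the O(n³) running time, and updating a single matrix in place gives the O(n²) memory.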
Example
[Figure: the distance matrix after each step k = 1, …, 5 of the algorithm]
All-Pairs Shortest Paths: Johnson's Algorithm
• Idea: if the graph is sparse (|E| ≪ |V|²), it pays to run Dijkstra's algorithm once from each vertex:
  • O(VE log V) using a binary heap, O(V² log V + VE) using a Fibonacci heap.
• But Dijkstra's algorithm does not handle negative edges.
• Johnson's algorithm: reweight the edges to form an equivalent graph with non-negative edges (see the note after this list).
• Floyd-Warshall still has advantages:
  • very simple implementation
  • no fancy data structures
  • a small constant factor.
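For reference, the standard reweighting construction (from the textbook treatment; the slide does not spell it out): add a new source s with zero-weight edges to every vertex, compute h(v) = δ(s, v) with Bellman-Ford, and define

  ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0,

which is non-negative because shortest-path weights satisfy h(v) ≤ h(u) + w(u, v). Every path p from u to v then satisfies ŵ(p) = w(p) + h(u) − h(v), so all paths between a fixed pair shift by the same constant and shortest paths are preserved; Dijkstra can then be run from each vertex on the reweighted graph.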
https://www.youtube.com/watch?v=KQ9zlKZ5Rzc&t=27s
Assembly Line Scheduling
[Figure: two assembly lines with stations S_{i,j}, entry times e_i, exit times x_i, and transfer times t_{i,j}]
Assembly Line Scheduling
Where:
  e_i    = entry time of the chassis onto line i
  a_{1j} = assembly time at station j of line 1
  a_{2j} = assembly time at station j of line 2
  t_{ij} = time required to change the assembly line after station j of line i
  x_i    = exit time of the chassis from line i
• There is no cost to stay on the same line, but if the chassis must change assembly line, the time required is t_{ij}.
• Using the dynamic programming approach, we have to determine which stations to choose from line 1 and from line 2 so that the auto is produced in the minimum amount of time.
• This problem can be solved by applying the steps of dynamic programming.
Assembly Line Scheduling
• Step 1: Characterize the structure of an optimal solution.
  The goal of this problem is to compute the fastest assembly time. Hence we need to know the fastest time from entry through station S_{1,n} and from entry through station S_{2,n}, for assembly lines 1 and 2 respectively. Then we have to consider the two exit points x_1 and x_2.
Assembly Line Scheduling
• Step 2: The fastest possible times can be obtained using the recurrences
  f1[j] = min{ f1[j−1] + a_{1j}, f2[j−1] + t_{2,j−1} + a_{1j} },  j ≥ 2
  f2[j] = min{ f1[j−1] + t_{1,j−1} + a_{2j}, f2[j−1] + a_{2j} },  j ≥ 2
  f1[1] = e1 + a_{11}
  f2[1] = e2 + a_{21}
  with the overall fastest time f* = min{ f1[n] + x1, f2[n] + x2 }.
• Step 3: Compute the fastest time for assembly using the above equations, for example as sketched below.
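A minimal sketch of these recurrences (our own Python, 0-indexed; in the usage example the station and entry times come from the slides, while the remaining transfer times and the exit times are assumed illustrative values, since they are not fully legible in the figure):

def fastest_way(a, t, e, x):
    # a[i][j]: time at station j of line i; t[i][j]: transfer time after
    # station j of line i; e[i], x[i]: entry and exit times of line i.
    n = len(a[0])
    f1 = [e[0] + a[0][0]] + [0] * (n - 1)
    f2 = [e[1] + a[1][0]] + [0] * (n - 1)
    l1, l2 = [0] * n, [0] * n
    for j in range(1, n):
        stay, switch = f1[j - 1] + a[0][j], f2[j - 1] + t[1][j - 1] + a[0][j]
        f1[j], l1[j] = (stay, 1) if stay <= switch else (switch, 2)
        stay, switch = f2[j - 1] + a[1][j], f1[j - 1] + t[0][j - 1] + a[1][j]
        f2[j], l2[j] = (stay, 2) if stay <= switch else (switch, 1)
    if f1[n - 1] + x[0] <= f2[n - 1] + x[1]:
        return f1[n - 1] + x[0], 1, l1, l2
    return f2[n - 1] + x[1], 2, l1, l2

a = [[2, 8, 9, 3, 4, 1], [6, 11, 2, 2, 7, 3]]   # station times (from slides)
t = [[3, 1, 2, 1, 3], [3, 1, 2, 2, 1]]           # t[i][0] = 3 from slides, rest assumed
e, x = [4, 2], [3, 7]                            # e from slides; x assumed
print(fastest_way(a, t, e, x)[0])                # fastest total time f*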
Assembly Line Scheduling Example
[Figure: station times a_{1j} = 2, 8, 9, 3, 4, 1 on line 1 and a_{2j} = 6, 11, 2, 2, 7, 3 on line 2, with entry times e1 = 4 and e2 = 2, exit times x1 and x2, and transfer times t_{ij} between the lines (t11 = 3, t21 = 3, …)]
Assembly Line Scheduling Example
Using the formulas defined in Step 2, we can compute the fastest times for assembly:
  f1[1] = e1 + a11 = 4 + 2 = 6
  f2[1] = e2 + a21 = 2 + 6 = 8
  f1[2] = min{ f1[1] + a12, f2[1] + t21 + a12 }
        = min{ 6 + 8, 8 + 3 + 8 }
        = min{ 14, 19 } = 14
  f2[2] = min{ f1[1] + t11 + a22, f2[1] + a22 }
        = min{ 6 + 3 + 11, 8 + 11 }
        = min{ 20, 19 } = 19
The remaining values f1[j] and f2[j] for j = 3, …, 6 are computed in the same way.
Assembly Line Scheduling Example
• Now, to obtain the path that gives the fastest time for producing the auto, we use the l_i[j] values.
• Let i = l* = 1, so we start by printing the message "line 1, station 6". Then:
  for (j = n; j ≥ 2; j--)
  {
      i = l_i[j]
      print "line " i " station " j−1
  }
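The same loop as runnable Python (our own sketch; l1 and l2 are 1-indexed lists as in the pseudocode, so index 0 is unused):

def print_stations(l1, l2, l_star, n):
    # Reconstruct the chosen stations from last to first.
    i = l_star
    print(f"line {i}, station {n}")
    for j in range(n, 1, -1):
        i = l1[j] if i == 1 else l2[j]     # i = l_i[j]
        print(f"line {i}, station {j - 1}")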
Assembly Line Scheduling Example
• The first message printed is line 1, station 6.
• Let j = 6; then i = l1[6] = 1, so the next message is line 1, station 5.
• Let j = 5; then i = l1[5] = 2, so the next message is line 2, station 4.
• Let j = 4; then i = l2[4] = 2, so the next message is line 2, station 3.
• Let j = 3; then i = l2[3] = 1, so the next message is line 1, station 2.
• Let j = 2; then i = l1[2] = 1, so the last message is line 1, station 1.
• Thus the optimal path in assembly line scheduling is:
  Line 1, station 6
  Line 1, station 5
  Line 2, station 4
  Line 2, station 3
  Line 1, station 2
  Line 1, station 1
Optimal Path in Assembly Line Scheduling
[Figure: the two lines with the optimal path highlighted: stations 1 and 2 on line 1, stations 3 and 4 on line 2, then stations 5 and 6 on line 1]
Assembly Line Scheduling Algorithm
Algorithm Fastest_time_computation(a[][], t[][], e[], x[], n)
{
    // Problem description: this algorithm computes the fi[j] and li[j] values.
    f1[1] <- e[1] + a[1][1]
    f2[1] <- e[2] + a[2][1]
    for (j <- 2 to n) do
    {
        if (f1[j-1] + a[1][j] ≤ f2[j-1] + t[2][j-1] + a[1][j]) then
        {
            f1[j] <- f1[j-1] + a[1][j]
            l1[j] <- 1
        }
        else
        {
            f1[j] <- f2[j-1] + t[2][j-1] + a[1][j]
            l1[j] <- 2
        }
        if (f2[j-1] + a[2][j] ≤ f1[j-1] + t[1][j-1] + a[2][j]) then
        {
            f2[j] <- f2[j-1] + a[2][j]
            l2[j] <- 2
        }
        else
        {
            f2[j] <- f1[j-1] + t[1][j-1] + a[2][j]
            l2[j] <- 1
        }
    } // end of for loop
    if (f1[n] + x[1] ≤ f2[n] + x[2]) then
    {
        f_star <- f1[n] + x[1]
        l_star <- 1
    }
    else
    {
        f_star <- f2[n] + x[2]
        l_star <- 2
    }
} // end of the algorithm

Assembly Line Scheduling: Analysis
• The basic operation in assembly line scheduling is the computation of the fi[j] and li[j] values for all the stations.
• Hence the total running time is Θ(n).
Master Theorem Proof