Unit-4 (Dynamic Programming)
Dynamic Programming
Prepared by: Bhavini Tandel
Dynamic programming is a strategy for designing algorithms.
It is used when a problem breaks down into recurring small subproblems.
Dynamic programming is typically applied to optimization problems. In
such problems there can be many solutions. Each solution has a value,
and we have to find a solution with the optimal value.
Dynamic Programming vs. Divide-and-Conquer
Dynamic programming: find out all possible solutions and then pick the
best (optimal) solution. It is used for solving optimization problems.
The idea of dynamic programming is to avoid calculating the same thing
twice, usually by keeping a table of known results that fills up as
subinstances are solved. For any problem there may be many feasible
solutions, so we find the solutions of the subproblems and then choose
the optimal one.
Divide and conquer is a top-down method.
Dynamic programming, like the divide-and-conquer method, solves problems
by combining the solutions to subproblems.
Dynamic programming, on the other hand, is a bottom-up technique.
We usually start with the smallest and simplest subinstances.
By combining their solutions, we obtain the answers to subinstances of
increasing size, until finally we arrive at the solution of the original
instance.
Dynamic programming always gives the optimal answer.
Dynamic programming adopts the tabulation method and the memoization
method.
Principle of Optimality
The principle of optimality states that an optimal solution to a problem
contains within it optimal solutions to its subproblems.
By storing the results of function calls in an array (or table), we avoid
making the same call again and again. This is memoization.
The solution is computed in a bottom-up fashion.
Example: Fibonacci numbers
For finding fibbo(6), the total number of function calls is 25.
If we count all function calls, then since fibbo(n) calls itself twice,
via fibbo(n-1) and fibbo(n-2) (approximating n-2 as n-1), we get:
Recurrence relation: T(n) = 2T(n-1) + 1
Time complexity: O(2^n)
To reduce this time, we take an array and store the value of each fibbo()
call in it.
Then the total number of calls for fibbo(6) is only 7.
In general, fibbo(n) makes n+1 calls.
Time complexity: O(n)
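The memoized version can be sketched as follows (a minimal Python illustration; the dictionary used as the result table is our own choice):

```python
def fibbo(n, memo=None):
    """Fibonacci with memoization: each value is computed only once."""
    if memo is None:
        memo = {}          # table of known results
    if n in memo:
        return memo[n]     # reuse a stored result instead of recomputing
    if n <= 1:
        memo[n] = n
    else:
        memo[n] = fibbo(n - 1, memo) + fibbo(n - 2, memo)
    return memo[n]
```

Now fibbo(6) performs only seven distinct computations, one for each of fibbo(0) through fibbo(6).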
Tabulation Method
In dynamic programming we mostly write iterative functions, which fill up
the table with values starting from the smallest subproblems onwards.
It is a bottom-up approach.
The tabulation method is the one mostly used in dynamic programming.
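As a sketch, the same Fibonacci computation in the tabulation style: an iterative function fills the table from the smallest values upward (the name fibbo_table is our own):

```python
def fibbo_table(n):
    """Fibonacci by tabulation: fill the table bottom-up, no recursion."""
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # each entry depends only on the two entries already filled in
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```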
Binomial Coefficients
[The binomial coefficient table from the original slides is not legible in this text version.]
Coin Change Problem (Minimum Number of Coins)
Given coin denominations {1, 5, 6, 9}, find the minimum number of coins
needed to make the amount 10. Each row gives the minimum number of coins
for each amount using the denominations up to that row:

Coin | 0  1  2  3  4  5  6  7  8  9  10
  1  | 0  1  2  3  4  5  6  7  8  9  10
  5  | 0  1  2  3  4  1  2  3  4  5   2
  6  | 0  1  2  3  4  1  1  2  3  4   2
  9  | 0  1  2  3  4  1  1  2  3  1   2
When we finally reach 0 (the initial state), no further coins are selected.
So 2 coins of Rs. 5 are selected, which is the optimal solution.
Time Complexity: O(n * w),
where n = number of coin denominations and w = amount.
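The table above can be produced by the following sketch; here T[j] holds the minimum number of coins for amount j, a compact one-dimensional form of the row-by-row table (the names are our own):

```python
def min_coins(coins, amount):
    """Minimum number of coins (unlimited supply) needed to make 'amount'."""
    INF = float('inf')
    T = [0] + [INF] * amount          # T[0] = 0: amount 0 needs no coins
    for j in range(1, amount + 1):
        for c in coins:
            if c <= j and T[j - c] + 1 < T[j]:
                T[j] = T[j - c] + 1   # use coin c plus the best for j - c
    return T[amount]
```

For coins {1, 5, 6, 9} and amount 10 this returns 2 (two 5-rupee coins), as in the table.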
0/1 Knapsack Problem
This problem is similar to the ordinary knapsack problem, but we may not
take a fraction of an object.
We are given N objects with weights Wi and profits Pi, where i varies from
1 to N, and a knapsack with capacity M.
The problem is to fill the bag using the N objects so that the resulting
profit is maximum.
Formally, the problem can be stated as: maximize Σ Pi·Xi subject to
Σ Wi·Xi ≤ M, where the variables Xi are constrained to Xi ∈ {0, 1}.
Xi is required to be 0 or 1: if the object is selected then Xi = 1, and if
the object is rejected then Xi = 0. That is why it is called the 0/1
knapsack problem.
0 = object is absent
1 = object is present
You have to pick an item completely; it is not divisible. Either you
select the object or you do not; you cannot use a fraction of it.
Rules to fill the table:
If i = 1 and j < w(i), then T(i,j) = 0.
If i = 1 and j ≥ w(i), then T(i,j) = p(i); the cell is filled with the
profit p(i), since at most this one object can be selected.
If i > 1 and j < w(i), then T(i,j) = T(i-1,j); the cell is filled with the
profit of the previous objects, since the current object does not fit.
If i > 1 and j ≥ w(i), then T(i,j) = max{T(i-1,j), p(i) + T(i-1, j-w(i))};
since at most one unit of the object can be selected, we take the better of
skipping the current object and taking its profit plus the best profit of
the previous objects for the remaining capacity of the bag.
Example:
Weights: {3, 4, 6, 5}
Profits: {2, 3, 1, 4}
W = 8

Pi  Wi  i\j | 0  1  2  3  4  5  6  7  8
         0  | 0  0  0  0  0  0  0  0  0
 2   3   1  | 0  0  0  2  2  2  2  2  2
 3   4   2  | 0  0  0  2  3  3  3  5  5
 4   5   3  | 0  0  0  2  3  4  4  5  6
 1   6   4  | 0  0  0  2  3  4  4  5  6

Maximum Profit = T(4, 8) = 6
To find which objects are selected, trace back from T(4,8):
T(4,8) = T(3,8) (same), so object 4 is not selected.
T(3,8) ≠ T(2,8) (different), so object 3 (p = 4, w = 5) is selected; move
to T(2, 8-5) = T(2,3).
T(2,3) = T(1,3) (same), so object 2 is not selected.
T(1,3) ≠ T(0,3) (different), so object 1 (p = 2, w = 3) is selected.
The selected objects give the maximum profit 4 + 2 = 6.
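The table-filling rules above translate directly into code; a sketch with 0-indexed lists (the names are our own):

```python
def knapsack_01(profits, weights, W):
    """0/1 knapsack: T[i][j] = best profit using the first i objects
    with remaining capacity j."""
    n = len(profits)
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        p, w = profits[i - 1], weights[i - 1]
        for j in range(W + 1):
            if j < w:
                T[i][j] = T[i - 1][j]          # object i does not fit
            else:
                # either skip object i, or take it once and fill the
                # remaining capacity with the previous objects
                T[i][j] = max(T[i - 1][j], p + T[i - 1][j - w])
    return T[n][W]
```

knapsack_01([2, 3, 4, 1], [3, 4, 5, 6], 8) returns 6, matching the table.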
List out all edges: (1,2), (1,3), (1,4), (2,5), (3,2), (3,5), (4,3), (4,6), (5,7), (6,7)
If a value d[v] fails to converge after |V|-1 passes, there exists a
negative-weight cycle in G reachable from the source s.
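The edge weights are not shown in this excerpt, so the sketch below uses made-up illustrative weights; the relaxation passes and the negative-cycle check follow the rule just stated:

```python
def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths; edges are (u, v, weight) triples."""
    INF = float('inf')
    d = {v: INF for v in range(1, num_vertices + 1)}
    d[source] = 0
    for _ in range(num_vertices - 1):     # |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:                 # any further improvement means a
        if d[u] + w < d[v]:               # reachable negative-weight cycle
            raise ValueError("negative-weight cycle reachable from source")
    return d

# Edge list from the notes; these weights are illustrative only.
edges = [(1, 2, 4), (1, 3, 2), (1, 4, 7), (2, 5, 3), (3, 2, 1),
         (3, 5, 6), (4, 3, 2), (4, 6, 5), (5, 7, 2), (6, 7, 1)]
dist = bellman_ford(7, edges, source=1)
```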
Example:
A1 = 3 Χ 2, A2 = 2 Χ 4, A3 = 4 Χ 2, A4 = 2 Χ 5
A1 A2 A3 A4
3Χ2 2Χ4 4Χ2 2Χ5
Dimensions: d0 = 3, d1 = 2, d2 = 4, d3 = 2, d4 = 5
Cost table M:
M | 1   2   3   4
1 | 0  24  28  58
2 |     0  16  36
3 |         0  40
4 |             0

Split table K (best split point k):
K | 1  2  3  4
1 | 0  1  1  3
2 |    0  2  3
3 |       0  3
4 |          0

The minimum cost is M[1,4] = 58 scalar multiplications; K[1,4] = 3 means
the last split is between A3 and A4, and K[1,3] = 1 splits off A1, giving
the parenthesization (A1(A2A3))A4.
Algorithm:
Time Complexity:
Here we are preparing only half of the table, so about n(n+1)/2 entries
are generated, i.e. O(n^2) entries.
But for each entry we try all possible split points and then choose the
minimum, which takes at most n time per entry.
O(n^2) entries × O(n) per entry = O(n^3)
Time complexity of matrix chain multiplication: O(n^3)
Example:
For the given sequence {4, 10, 3, 12, 20, 7}, the matrices have sizes
4×10, 10×3, 3×12, 12×20, 20×7. Compute M[i,j] for 1 ≤ i ≤ j ≤ 5, with
M[i,i] = 0 for all i.
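A sketch of the algorithm referred to above: fill the half-table by increasing chain length, trying every split point k (the table names M and K match the tables shown; the function name is our own):

```python
def matrix_chain(d):
    """d[0..n]: matrix Ai has dimensions d[i-1] x d[i].
    Returns the cost table M and split table K (both 1-indexed)."""
    n = len(d) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    K = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = float('inf')
            for k in range(i, j):             # try every split point
                cost = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < M[i][j]:
                    M[i][j], K[i][j] = cost, k
    return M, K

M, K = matrix_chain([3, 2, 4, 2, 5])   # A1..A4 from the first example
```

Here M[1][4] is 58 and K[1][4] is 3, matching the tables above.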
Longest Common Subsequence (LCS)
If the characters match: c[i,j] = c[i-1,j-1] + 1
If they do not match: c[i,j] = max(c[i-1,j], c[i,j-1])
Find the longest common subsequence of X = bd and Y = abcd using dynamic
programming.

 i\j    0  1  2  3  4
           a  b  c  d
  0     0  0  0  0  0
b 1     0  0  1  1  1
d 2     0  0  1  1  2

Maximum length of the common subsequence is 2; LCS = bd
Algorithm:
Time Complexity: O(m × n),
where m = number of characters in the first string and n = number of
characters in the second string.
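The recurrence above can be sketched as follows, with a traceback to recover the subsequence itself (the names are our own):

```python
def lcs(X, Y):
    """Fill the LCS length table c, then trace back to return one
    longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                   # matched
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                      # not matched
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk back from c[m][n], collecting the matched characters
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

lcs('bd', 'abcd') returns 'bd', as in the example; the table is filled in O(m × n).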
Assembly Line Scheduling
Example:
[Figure: two assembly lines with six stations each.
Entry times: e1 = 2, e2 = 4; exit times: x1 = 3, x2 = 2.
Station times: a1 = (7, 9, 3, 4, 8, 4), a2 = (8, 5, 6, 4, 5, 7).
Transfer times: t1 = (2, 3, 1, 3, 4), t2 = (2, 1, 2, 2, 1).]

 j      1   2   3   4   5   6
F1[j]   9  18  20  24  32  35
F2[j]  12  16  22  25  30  37

F* = 38, l* = 1

 j      1   2   3   4   5   6
L1[j]   1   1   2   1   1   2
L2[j]   2   1   2   1   2   2
Algorithm
Where,
a[i,j] = assembly time at the jth station on line i
t[i,j] = transfer time from the jth station on line i to the (j+1)th station on the other line
e[i] = entry time onto line i
x[i] = exit time from line i
n = number of stations on each line
The F2 values are computed as:
F2[2] = Min{16, 17} = 16
F2[3] = Min{27, 22} = 22
F2[4] = Min{25, 26} = 25
F2[5] = Min{32, 30} = 30
F2[6] = Min{43, 37} = 37
F* = Min{35 + 3, 37 + 2} = 38
l* = 1
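The computation above can be sketched as follows (0-indexed lists; the parameter names follow the notation defined in the Where-clause):

```python
def fastest_way(a, t, e, x):
    """Assembly line scheduling: a[i][j] station times, t[i][j] transfer
    times after station j of line i, e/x entry and exit times."""
    n = len(a[0])
    f1 = [0] * n
    f2 = [0] * n
    f1[0] = e[0] + a[0][0]
    f2[0] = e[1] + a[1][0]
    for j in range(1, n):
        # stay on the same line, or pay the transfer time from the other
        f1[j] = min(f1[j - 1], f2[j - 1] + t[1][j - 1]) + a[0][j]
        f2[j] = min(f2[j - 1], f1[j - 1] + t[0][j - 1]) + a[1][j]
    return min(f1[-1] + x[0], f2[-1] + x[1])

a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
fastest = fastest_way(a, t, e=[2, 4], x=[3, 2])
```

fastest is 38, matching F* above; the whole computation takes only O(n) time.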
d0, d1, d2, …, dn are dummy keys representing values not in K: d0
represents all values less than k1, dn represents all values greater than
kn, and for i = 1, 2, …, n-1 the dummy key di represents all values
between ki and ki+1. The figure shows two binary search trees for a set
of n = 5 keys.
Each key ki is an internal node and each dummy key di is a leaf. Every
search is either successful (finding some key ki) or unsuccessful
(finding some dummy key di).
Optimal Binary Search Tree: if the keys and their probabilities are given,
then we have to generate the BST whose cost is minimum.
The cost of the tree depends on the height of the binary tree.
If there are n nodes in the tree, the total number of possible trees is
the Catalan number 2nCn / (n+1).
So dynamic programming is an easier and faster method: it tries out all
possible trees and picks the best one without explicitly constructing all
of them; every tree is tried indirectly, not directly.
For the base case, compute w[i, i-1] = q(i-1) for 1 ≤ i ≤ n+1. For j ≥ i,
compute w[i,j] = w[i,j-1] + p(j) + q(j).
[Figure: the optimal BST read off from the root table R. The root of the
whole tree is k2 (K = 2 for R[0,4]); its left subtree covers R[0,1] with
root k1 (K = 1), its right subtree covers R[2,4] with root k3 (K = 3), and
within that R[3,4] has root k4 (K = 4). The values 10, 30 and 40 appear as
node labels in the original figure.]
Time Complexity: O(n^3)
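Since the key probabilities of the example are not legible in this excerpt, here is a simplified sketch that ignores dummy keys and uses plain access frequencies (the function name and the example frequencies are our own assumptions):

```python
def optimal_bst(freq):
    """cost[i][j]: minimum weighted search cost of a BST over keys i..j;
    R[i][j]: index of the root of that optimal subtree (0-indexed)."""
    n = len(freq)
    prefix = [0]
    for f in freq:                     # prefix sums give subtree weights
        prefix.append(prefix[-1] + f)
    cost = [[0] * n for _ in range(n)]
    R = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
        R[i][i] = i
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            w = prefix[j + 1] - prefix[i]   # every key moves one level deeper
            cost[i][j] = float('inf')
            for r in range(i, j + 1):       # try every key as the root
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                if left + right + w < cost[i][j]:
                    cost[i][j] = left + right + w
                    R[i][j] = r
    return cost[0][n - 1], R

best_cost, R = optimal_bst([4, 2, 6, 3])
```

Only the upper triangle of the tables is filled, and each entry tries up to n roots, which gives the O(n^3) bound stated above.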