Dynamic Programming
Dynamic programming is an algorithmic technique used to solve complex problems by
breaking them down into smaller overlapping subproblems and solving each subproblem
only once. The solutions to the subproblems are stored in a data structure (such as an array)
so that they can be reused when needed, eliminating redundant computations and
improving overall efficiency.
Dynamic programming is especially useful for optimization problems and problems with
optimal substructure, where the optimal solution can be constructed from optimal solutions
to smaller subproblems. The technique is widely used in various fields, including computer
science, operations research, economics, and engineering.
Suppose we were asked to compute the square of 25.
If we remember the answer, we can skip the computation next time. Dynamic programming
works on a similar concept: the idea is to remember the result of a computation and reuse it
later if required.
Dynamic programming has two concepts:
1. Overlapping subproblems: The problem is broken down into smaller subproblems, and
the same subproblems are solved multiple times in the process of solving the main
problem. Dynamic programming aims to avoid redundant calculations by storing the results
of solved subproblems for future use.
2. Optimal substructure: The optimal solution to the main problem can be constructed from
optimal solutions to smaller subproblems. This property allows dynamic programming to
build the solution incrementally and find the best possible result.
A given problem has Optimal Substructure Property if the optimal solution of the given
problem can be obtained by using optimal solutions of its subproblems.
Let's understand the concept of overlapping subproblem through an example.
Consider an example of the Fibonacci series.
The following series is the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55,…
The numbers in the above series are not randomly chosen. Mathematically, we can
write each term using the recurrence
F(n) = F(n − 1) + F(n − 2), for n ≥ 2
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the
above relationship.
For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:
It breaks down the complex problem into simpler subproblems.
It finds the optimal solution to these subproblems.
It stores the results of the subproblems (memoization).
It reuses the stored results so that the same subproblem is not calculated more than once.
Finally, it calculates the result of the complex problem.
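The steps above can be sketched in Python, both top-down (memoization) and bottom-up (tabulation), using the Fibonacci example; the function names are illustrative:

```python
def fib_memo(n, memo=None):
    """Top-down DP: recursion plus a cache of already-solved subproblems."""
    if memo is None:
        memo = {}
    if n in memo:                 # reuse a stored result instead of recomputing
        return memo[n]
    if n < 2:                     # base cases F(0) = 0, F(1) = 1
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    """Bottom-up DP: solve subproblems in order, smallest first."""
    if n < 2:
        return n
    prev, curr = 0, 1             # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib_memo(10), fib_bottom_up(10))  # 55 55
```

Both versions compute each F(i) exactly once, so the exponential naive recursion becomes linear.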
Applications of Dynamic Programming:
Fibonacci sequence and related problems
Shortest path algorithms (e.g., Bellman–Ford, Floyd–Warshall)
Knapsack problem and its variants
Longest common subsequence (LCS) problem
Matrix chain multiplication
Resource allocation problems
Recursion vs Dynamic Programming
Recursion is a general programming technique involving self-calling functions to solve smaller
instances of the same problem, while dynamic programming is a specific algorithmic
optimization technique that aims to avoid redundant computations by reusing the results of
smaller overlapping subproblems.
Dynamic programming often uses recursion (top-down) or iteration (bottom-up) to achieve its
goal, but the key difference lies in its focus on optimizing efficiency by storing and reusing
computed results.
Greedy vs Dynamic Programming
Greedy algorithms make locally optimal choices at each step, hoping to reach a globally
optimal solution, whereas dynamic programming systematically solves overlapping
subproblems and guarantees finding the optimal solution.
Greedy algorithms are simpler and faster but may not always produce the best solution, while
dynamic programming is more complex and time-consuming but ensures optimality for
problems with certain characteristics.
The choice between the two techniques depends on the problem's nature and the trade-offs
between optimality and efficiency.
The Principle of Optimality
A dynamic-programming algorithm solves every subproblem just once and then saves its
answer in a table.
It avoids the work of re-computing the answer every time the subproblem is encountered.
The dynamic programming algorithm obtains the solution using the principle of optimality.
The principle of optimality states that “in an optimal sequence of decisions or choices, each
subsequence must also be optimal”. i.e. the optimal solution to a dynamic optimization
problem can be found by combining the optimal solutions to its sub-problems.
If it is not possible to apply the principle of optimality then it is almost impossible to
obtain the solution using the dynamic programming approach.
Binomial Coefficient
The binomial coefficient, often denoted as "n choose k," is a mathematical concept that
represents the number of ways to choose 'k' elements from a set of 'n' distinct elements
without regard to the order of the chosen elements.
It is used to solve problems related to combinations.
The binomial coefficient is represented as C(n, k) or nCk, and it is calculated using the
formula:
C(n, k) = n! / (k! * (n - k)!) where,
n is a non-negative integer representing the total number of elements in the set (the "pool" of
choices).
k is a non-negative integer representing the number of elements to be chosen from the set.
Introduction
Suppose you want to form a 2-person committee from a group of four people. How many
different combinations are possible?
The number of ways to do this is given by C(4, 2) = 4! / (2! · 2!) = 6.
Specifically, the binomial coefficient C(n, k) counts the number of ways to form an unordered
collection of k items chosen from a collection of n distinct items.
The recursive definition of the binomial coefficient is:
C(n, k) = 1, if k = 0 or k = n
C(n, k) = C(n − 1, k − 1) + C(n − 1, k), if 0 < k < n
C(n, k) = 0, otherwise
function C(n, k)
if k=0 or k=n then return 1
else return C(n-1, k-1) + C(n-1, k)
Binomial coefficients have numerous applications in various fields, such as probability,
statistics, algebra, and computer science.
They are used, for example, in calculating probabilities, solving combinatorial problems,
generating Pascal's triangle, and implementing algorithms like the binomial theorem and
binomial distribution.
Pascal's Triangle is a triangular arrangement of numbers in which each row starts and ends
with the number 1, and each number in the interior of the triangle is the sum of the two
numbers directly above it.
Pascal's Triangle is used to represent binomial coefficients, which are the coefficients of the
terms in the expansion of a binomial expression (a + b)^n. The binomial coefficient C(n, k) can
be found in the nth row and kth position (0-indexed) of Pascal's Triangle.
Pascal's Triangle has several interesting properties:
Symmetry: The triangle is symmetric about its center, meaning that the numbers on the left
half of each row are the same as the numbers on the right half.
Rows: The numbers in row n are the binomial coefficients C(n, 0), …, C(n, n). For example,
row 2 gives the coefficients of (a + b)^2: 1, 2, 1.
Sum of Rows: The sum of all numbers in each row is equal to 2^n, where n is the row number
(0-indexed). For example, the sum of the numbers in the 4th row (1 + 4 + 6 + 4 + 1) is 2^4 =
16.
PASCAL'S TRIANGLE (rows n = 0..4, columns k = 0..n):

n = 0:   1
n = 1:   1   1
n = 2:   1   2   1
n = 3:   1   3   3   1
n = 4:   1   4   6   4   1

Each interior entry is the sum of the two entries above it: C(n, k) = C(n − 1, k − 1) + C(n − 1, k).
function C(n, k)
if k=0 or k=n then return 1
else return C(n-1, k-1) + C(n-1, k)
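The recursive definition can be tabulated bottom-up instead, filling Pascal's triangle row by row. A minimal Python sketch (the function name is illustrative) that keeps only one row of the triangle at a time:

```python
def binomial(n, k):
    """Bottom-up DP: build row n of Pascal's triangle using
    C(i, j) = C(i-1, j-1) + C(i-1, j), storing only the current row."""
    if k < 0 or k > n:
        return 0
    c = [0] * (k + 1)   # c[j] holds C(i, j) for the row i being built
    c[0] = 1            # C(i, 0) = 1 for every row
    for i in range(1, n + 1):
        # update right-to-left so c[j-1] still holds the previous row's value
        for j in range(min(i, k), 0, -1):
            c[j] = c[j] + c[j - 1]
    return c[k]

print(binomial(4, 2))  # 6
```

Unlike the plain recursion above, each table entry is computed once, giving O(n·k) time and O(k) space.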
Generalized Solution using Dynamic Programming
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Making Change Problem
We need to generate a table c[n][N], where
1. n = number of denominations,
2. N = amount for which you need to make change,
and c[i][j] = minimum number of coins needed to make amount j using the first i denominations.
Step-1: c[i][0] = 0 for all i.
Step-2: If i = 1 then c[i][j] = 1 + c[i][j − d_i].
Step-3: If j < d_i then c[i][j] = c[i − 1][j].
Step-4: Otherwise c[i][j] = min(1 + c[i][j − d_i], c[i − 1][j]).
Denominations: d = {1, 4, 6}. Make a change of Rs. 8.
Step-1: Make c[i][0] = 0 for all i.
Step-2: If i = 1 then c[i][j] = 1 + c[i][j − d_i].
Step-3: If j < d_i then c[i][j] = c[i − 1][j].
Step-4: Otherwise c[i][j] = min(1 + c[i][j − d_i], c[i − 1][j]).

j (Amount):      0  1  2  3  4  5  6  7  8
i = 1 (d = 1):   0  1  2  3  4  5  6  7  8
i = 2 (d = 4):   0  1  2  3  1  2
i = 3 (d = 6):   0

For example,
c[2][4] = min(c[1][4], 1 + c[2][0]) = min(4, 1 + 0) = min(4, 1) = 1
c[2][5] = min(c[1][5], 1 + c[2][1]) = min(5, 1 + 1) = min(5, 2) = 2
Denominations: d = {1, 4, 6}. Make a change of Rs. 8. The completed table:

j (Amount):      0  1  2  3  4  5  6  7  8
i = 1 (d = 1):   0  1  2  3  4  5  6  7  8
i = 2 (d = 4):   0  1  2  3  1  2  3  4  2
i = 3 (d = 6):   0  1  2  3  1  2  1  2  2

The answer is c[3][8] = 2: two coins suffice.
We can also find the coins to be included in the solution set as follows:
1. Start at c[3][8] = c[2][8] = 2 ⟹ do not include a coin of denomination 6; move to c[2][8].
2. c[2][8] = 1 + c[2][8 − 4] = 1 + c[2][4] ⟹ include a coin of denomination 4; move to c[2][4].
3. c[2][4] = 1 + c[2][0] ⟹ include another coin of denomination 4; c[2][0] = 0, so stop.
The solution contains 2 coins of denomination 4 (4 + 4 = 8).
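The table-filling and coin trace-back can be sketched in Python, using the example denominations {1, 4, 6} and amount 8; function and variable names are illustrative:

```python
def make_change(denoms, amount):
    """Bottom-up DP: c[j] = minimum number of coins needed for amount j."""
    INF = float("inf")
    c = [0] + [INF] * amount          # c[0] = 0; everything else unknown
    choice = [None] * (amount + 1)    # coin used in an optimal solution for j
    for j in range(1, amount + 1):
        for d in denoms:
            if d <= j and c[j - d] + 1 < c[j]:
                c[j] = c[j - d] + 1
                choice[j] = d
    # walk back through the recorded choices to list the coins used
    coins, j = [], amount
    while j > 0 and choice[j] is not None:
        coins.append(choice[j])
        j -= choice[j]
    return c[amount], coins

print(make_change([1, 4, 6], 8))  # (2, [4, 4])
```

The trace-back mirrors the table walk above: whenever a coin d was the winning choice for amount j, we subtract d and continue from j − d.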
0/1 Knapsack Problem
0/1 Knapsack Problem - Dynamic Programming Solution
We need to generate a table V[n + 1][W + 1], where
1. n = number of objects and W = knapsack capacity,
2. V[i][j] = maximum value achievable using the first i objects with capacity j.
Step-1: V[i][0] = 0 for all i.
Step-2: If j < w_i then V[i][j] = V[i − 1][j].
Step-3: Otherwise V[i][j] = max(V[i − 1][j], V[i − 1][j − w_i] + v_i).
Solve the following knapsack problem using the dynamic programming technique.
1. n = 5 objects and knapsack capacity W = 11.

Object i:     1   2   3   4   5
Value v_i:    1   6  18  22  28
Weight w_i:   1   2   5   6   7
If j < w_i then V[i][j] = V[i − 1][j]
else V[i][j] = max(V[i − 1][j], V[i − 1][j − w_i] + v_i)

Partially filled table (columns j = 0..4 of the capacity range 0..11):

j:                     0  1  2  3  4
i = 1 (w=1, v=1):      0  1  1  1  1
i = 2 (w=2, v=6):      0  1  6  7  7
i = 3 (w=5, v=18):     0  1  6  7  7
i = 4 (w=6, v=22):     0  1  6  7  7
i = 5 (w=7, v=28):     0  1  6  7  7
If j < w_i then V[i][j] = V[i − 1][j]. So, e.g., V[3][4] = V[2][4] = 7.
Else V[i][j] = max(V[i − 1][j], V[i − 1][j − w_i] + v_i). So, e.g.,
V[3][11] = max(V[2][11], V[2][6] + 18) = max(7, 25) = 25.

Completed table (capacity j = 0..11):

j:                     0  1  2  3  4   5   6   7   8   9  10  11
i = 1 (w=1, v=1):      0  1  1  1  1   1   1   1   1   1   1   1
i = 2 (w=2, v=6):      0  1  6  7  7   7   7   7   7   7   7   7
i = 3 (w=5, v=18):     0  1  6  7  7  18  19  24  25  25  25  25
i = 4 (w=6, v=22):     0  1  6  7  7  18  22  24  28  29  29  40
i = 5 (w=7, v=28):     0  1  6  7  7  18  22  28  29  34  35  40
We can also find the objects to be carried in the knapsack as follows:
1. Start at V[5][11] = V[4][11] ⟹ do not include object 5.
2. Next, V[4][11] ≠ V[3][11], and V[4][11] = V[3][11 − w4] + v4 = V[3][5] + 22 ⟹ include object 4.
3. Now V[3][5] ≠ V[2][5], and V[3][5] = V[2][5 − w3] + v3 = V[2][0] + 18 ⟹ include object 3.
4. Finally V[2][0] = V[1][0] and V[1][0] = V[0][0] ⟹ objects 2 and 1 are not included.
V[i][j] = max(V[i − 1][j], V[i − 1][j − w_i] + v_i)

j:                     0  1  2  3  4   5   6   7   8   9  10  11
i = 1 (w=1, v=1):      0  1  1  1  1   1   1   1   1   1   1   1
i = 2 (w=2, v=6):      0  1  6  7  7   7   7   7   7   7   7   7
i = 3 (w=5, v=18):     0  1  6  7  7  18  19  24  25  25  25  25
i = 4 (w=6, v=22):     0  1  6  7  7  18  22  24  28  29  29  40
i = 5 (w=7, v=28):     0  1  6  7  7  18  22  28  29  34  35  40
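The table-filling and object trace-back can be sketched in Python (names illustrative), using the example instance v = (1, 6, 18, 22, 28), w = (1, 2, 5, 6, 7), W = 11:

```python
def knapsack(values, weights, W):
    """0/1 knapsack: V[i][j] = best value using the first i objects, capacity j."""
    n = len(values)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for j in range(W + 1):
            if j < w:                                   # object i does not fit
                V[i][j] = V[i - 1][j]
            else:                                       # skip it or take it
                V[i][j] = max(V[i - 1][j], V[i - 1][j - w] + v)
    # trace back: a changed cell means object i was taken (objects are 1-indexed)
    chosen, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            chosen.append(i)
            j -= weights[i - 1]
    return V[n][W], sorted(chosen)

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # (40, [3, 4])
```

The trace-back reproduces the four steps above: objects 3 and 4 are included, for a total value of 40.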
Longest Common Subsequence
Introduction
A subsequence is a sequence that appears in the same relative order, but not necessarily
contiguous.
Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and
Y if Z is a subsequence of both X and Y.
E.g., if X = ABCBDAB and Y = BDCABA, then BCA is a common subsequence.
Use the dynamic programming technique to find the longest common subsequence (LCS).
LCS – Optimal Sub-structure
We need to generate a table c[m + 1][n + 1],
where m = length of string Y and n = length of string X.
Here X = ABCBDAB and Y = BDCABA (rows indexed by Y, columns by X):

        –  A  B  C  B  D  A  B
   –    0  0  0  0  0  0  0  0
   B    0  0  1  1  1  1  1  1
   D    0  0  1  1  1  2  2  2
   C    0  0  1  2  2  2  2  2
   A    0  1  1  2  2  2  3  3
   B    0  1  2  2  3  3  3  4
   A    0  1  2  2  3  3  4  4
LCS – Dynamic Programming Solution
If the current characters match, c[i][j] = c[i − 1][j − 1] + 1;
else c[i][j] = max(c[i − 1][j], c[i][j − 1]).

        –  A  B  C  B  D  A  B
   –    0  0  0  0  0  0  0  0
   B    0  0  1  1  1  1  1  1
   D    0  0  1  1  1  2  2  2
   C    0  0  1  2  2  2  2  2
   A    0  1  1  2  2  2  3  3
   B    0  1  2  2  3  3  3  4
   A    0  1  2  2  3  3  4  4
Tracing back from c[6][7]: for X = ABCBDAB and Y = BDCABA, the LCS length is 4;
one longest common subsequence is BDAB (BCBA is another).
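The recurrence and trace-back can be sketched in Python; the strings below are the classic textbook pair assumed to match the table above (names illustrative):

```python
def lcs(X, Y):
    """LCS via DP: c[i][j] = LCS length of Y[:i] and X[:j]."""
    m, n = len(Y), len(X)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if Y[i - 1] == X[j - 1]:                    # characters match
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # reconstruct one LCS by walking back from c[m][n]
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if Y[i - 1] == X[j - 1]:
            out.append(Y[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # (4, 'BDAB')
```

Note that an LCS need not be unique; the tie-breaking rule in the walk-back decides which of the equally long subsequences is reported.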
All Pairs Shortest Path – Floyd's Algorithm
Introduction
Given a directed, connected, weighted graph G = (V, E), a weight w(e) is associated with each
edge e.
The all-pairs shortest paths problem is to find a shortest path from u to v for every pair of
vertices u and v in V.
Floyd’s algorithm is used to find all pair shortest path problem from a given weighted graph.
As a result of this algorithm, it will generate a matrix, which will represent the minimum
distance from any node to all other nodes in the graph.
At first, the output matrix is the same as the given cost matrix of the graph.
As the algorithm proceeds, the output matrix will be updated with each vertex as an
intermediate vertex.
The time complexity of this algorithm is O(n³), where n is the number of vertices in the graph.
Floyd Algorithm - Example
Step 1: Initialization. The initial matrix D0 is set to the cost (adjacency) matrix of the graph:
D0[i][j] is the edge weight from i to j, 0 on the diagonal, and ∞ where no edge exists. Then
D1, D2, …, Dn are computed by allowing vertex 1, 2, …, n in turn as an intermediate vertex.
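The update rule can be sketched in Python; the 4-vertex cost matrix below is illustrative, not the one from the slide's figure:

```python
INF = float("inf")

def floyd(dist):
    """Floyd's algorithm: after iteration k, d[i][j] is the shortest i -> j
    distance using only vertices 0..k as intermediates."""
    n = len(dist)
    d = [row[:] for row in dist]          # start from the cost matrix (D0)
    for k in range(n):                    # try each vertex as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative cost matrix: INF marks a missing edge, 0 the diagonal
cost = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd(cost))
```

The triple loop is the whole algorithm: each pass relaxes every pair (i, j) through the new intermediate vertex k, giving the stated O(n³) running time.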
Multiplying a p × q matrix by a q × r matrix takes p · q · r scalar multiplications.
For example, with p = 2, q = 3, r = 2:
p ∗ q ∗ r = 2 ∗ 3 ∗ 2 = 12
Matrix Chain Multiplication
Now, we want to calculate the product of more than two matrices. Matrix multiplication is
associative, so every order of multiplication gives the same result, but the number of scalar
multiplications can differ greatly.
The product of four matrices A1 A2 A3 A4 can be fully parenthesized in 5 distinct ways:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)
The goal is to choose the parenthesization with the minimum total cost.
Matrix Chain Multiplication using Dynamic Programming
The matrix table M[i][j] stores the minimum cost of multiplying the chain A_i … A_j.
Step-1: If i = j then M[i][j] = 0.
Step-2: Otherwise M[i][j] = min over i ≤ k < j of (M[i][k] + M[k + 1][j] + P_{i−1} ∗ P_k ∗ P_j).
Here the dimensions are P = ⟨5, 4, 6, 2, 7⟩, i.e., A1 is 5×4, A2 is 4×6, A3 is 6×2, A4 is 2×7. So,
M[1][2] = P0 ∗ P1 ∗ P2 = 5 ∗ 4 ∗ 6 = 120
M[2][3] = P1 ∗ P2 ∗ P3 = 4 ∗ 6 ∗ 2 = 48
M[3][4] = P2 ∗ P3 ∗ P4 = 6 ∗ 2 ∗ 7 = 84
1 2 3 4
1 0 120
2 -- 0 48
3 -- -- 0 84
4 -- -- -- 0
Here the dimensions are P = ⟨5, 4, 6, 2, 7⟩.
Step 3: M[1][3] = min(M[1][1] + M[2][3] + P0 ∗ P1 ∗ P3, M[1][2] + M[3][3] + P0 ∗ P2 ∗ P3)
= min(0 + 48 + 40, 120 + 0 + 60) = 88

     1    2    3    4
1    0  120   88
2   --    0   48
3   --   --    0   84
4   --   --   --    0
Step 3: M[2][4] = min(M[2][2] + M[3][4] + P1 ∗ P2 ∗ P4, M[2][3] + M[4][4] + P1 ∗ P3 ∗ P4)
= min(0 + 84 + 168, 48 + 0 + 56) = 104

     1    2    3    4
1    0  120   88
2   --    0   48  104
3   --   --    0   84
4   --   --   --    0
Step 3: M[1][4] = min(M[1][1] + M[2][4] + P0 ∗ P1 ∗ P4, M[1][2] + M[3][4] + P0 ∗ P2 ∗ P4,
M[1][3] + M[4][4] + P0 ∗ P3 ∗ P4) = min(0 + 104 + 140, 120 + 84 + 210, 88 + 0 + 70) = 158

     1    2    3    4
1    0  120   88  158
2   --    0   48  104
3   --   --    0   84
4   --   --   --    0
How do we parenthesize the matrices? The minimum for M[1][4] is achieved at split point
k = 3, so the chain A B C D splits as (A B C)(D); within A B C, the minimum for M[1][3] is at
k = 1, giving (A(BC)). The full parenthesization is ((A(BC))D).

     1    2    3    4
1    0  120   88  158
2   --    0   48  104
3   --   --    0   84
4   --   --   --    0
Here the dimensions are P = ⟨5, 4, 6, 2, 7⟩. The optimal order multiplies the matrices
A (5×4), B (4×6), C (6×2), and D (2×7) as ((A(BC))D), with minimum cost M[1][4] = 158
scalar multiplications.
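The recurrence can be sketched in Python, using the dimension vector P = ⟨5, 4, 6, 2, 7⟩ implied by the costs 120, 48, and 84 computed above (function name illustrative):

```python
def matrix_chain(p):
    """M[i][j] = min scalar multiplications to compute A_i .. A_j,
    where A_i has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]     # best split points
    for length in range(2, n + 1):                # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = float("inf")
            for k in range(i, j):                 # try every split A_i..A_k | A_k+1..A_j
                cost = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < M[i][j]:
                    M[i][j] = cost
                    s[i][j] = k

    def paren(i, j):
        """Rebuild the optimal parenthesization from the split table."""
        if i == j:
            return "A%d" % i
        k = s[i][j]
        return "(" + paren(i, k) + paren(k + 1, j) + ")"

    return M[1][n], paren(1, n)

print(matrix_chain([5, 4, 6, 2, 7]))  # (158, '((A1(A2A3))A4)')
```

The split table s is what makes the parenthesization recoverable: M alone gives only the cost.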
Assembly Line Scheduling
Introduction
Assembly line scheduling is a manufacturing problem.
In automobile industries, assembly lines are used to transfer parts from one station to another.
Manufacturing of large items like cars and trucks generally goes through multiple stations,
where each station is responsible for assembling a particular part only.
The entire product is ready after it goes through the predefined n stations in sequence.
For example, the manufacturing of a car may be done in several stages like engine fitting,
painting, light fitting, fixing of the control system, gates, seats, and many other things.
A particular task is carried out at the station dedicated to that task only. Based on requirements,
there may be more than one assembly line.
In the case of two assembly lines, if the load at station j of assembly line 1 is very high, then
components are transferred to station j of assembly line 2; the converse is also true. This helps
to speed up the manufacturing process.
The time to transfer the partial product from one station to the next station on the same
assembly line is negligible.
During a rush, the factory manager may transfer a partially completed auto from one assembly
line to the other, to complete the manufacturing as quickly as possible. A time penalty t occurs
when the product is transferred from line 1 to 2 or from 2 to 1.
Determine which station should be selected from assembly line 1 and which to choose from
assembly line 2 in order to minimize the total time to build the entire product.
An automobile chassis enters each assembly line, has parts added to it at a number of
stations, and a finished auto exits at the end of the line.
There are two assembly lines, numbered i = 1, 2.
Each assembly line has n stations, numbered j = 1, 2, …, n.
We denote the j-th station on line i by S_{i,j}.
Corresponding stations S_{1,j} and S_{2,j} perform the same function but can take different
amounts of time, a_{1,j} and a_{2,j}.
Entry times are e1 and e2; exit times are x1 and x2.
After going through a station, a chassis can either:
stay on the same line at no cost, or
transfer to the other line: the cost after S_{i,j} is t_{i,j}, for j = 1, …, n − 1.
Assembly Line Scheduling - Example
Using the dynamic programming technique, find the fastest time to get through the entire
factory for the given instance.
Step 1:
f* : the fastest time to get through the entire factory.
f_i[j] : the fastest time to get from the starting point through station S_{i,j}.
Base case: f_i[1] = e_i + a_{i,1} (getting through station 1).
Assembly Line Scheduling – Dynamic Programming Solution
General case (recursive solution), for j = 2, 3, …, n and i = 1, 2:
f1[j] = min(f1[j − 1] + a_{1,j}, f2[j − 1] + t_{2,j−1} + a_{1,j})
f2[j] = min(f2[j − 1] + a_{2,j}, f1[j − 1] + t_{1,j−1} + a_{2,j})
f* = min(f1[n] + x1, f2[n] + x2)
For the example: f* = min(35 + 3, 37 + 2) = 38
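The recurrences can be sketched in Python. The station data below is an assumed standard textbook instance, chosen because it is consistent with the final step above (f1[n] = 35, f2[n] = 37, f* = 38); names are illustrative:

```python
def assembly_line(a, t, e, x):
    """a[i][j]: time at station j+1 on line i+1; t[i][j]: transfer cost after
    station j+1 of line i+1; e[i]/x[i]: entry and exit times for line i+1."""
    n = len(a[0])
    f1 = e[0] + a[0][0]          # base case: through station 1 on each line
    f2 = e[1] + a[1][0]
    for j in range(1, n):
        # simultaneous assignment so both updates see the previous f1, f2
        f1, f2 = (min(f1 + a[0][j], f2 + t[1][j - 1] + a[0][j]),
                  min(f2 + a[1][j], f1 + t[0][j - 1] + a[1][j]))
    return min(f1 + x[0], f2 + x[1])

# Assumed instance (consistent with f1[n] = 35, f2[n] = 37 above)
a = [[7, 9, 3, 4, 8, 4],
     [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4],
     [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
print(assembly_line(a, t, e, x))  # 38
```

Only the two values f1 and f2 from the previous station are needed at each step, so the solution runs in O(n) time and O(1) extra space.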
Thank You!