Dynamic Programming
3: DYNAMIC PROGRAMMING
Date:24/03/2023
Aim: Write C programs to implement the following using dynamic programming:
a. Multistage graph
b. All-pairs shortest path: Floyd-Warshall algorithm
c. Single-source shortest path: Bellman-Ford algorithm
d. Optimal binary search tree
e. 0/1 knapsack problem
THEORY:
Dynamic programming:
Dynamic programming is a technique that breaks a problem into subproblems and saves their
results so that they do not have to be computed again. The property that an optimal overall
solution can be built by combining optimal solutions to the subproblems is known as optimal
substructure. The main use of dynamic programming is to solve optimization problems, that
is, problems where we are trying to find the minimum or the maximum solution. Dynamic
programming is guaranteed to find an optimal solution to a problem if one exists.
In other words, dynamic programming solves a complex problem by first breaking it into a
collection of simpler subproblems, solving each subproblem just once, and then storing the
solutions to avoid repeated computation.
Dynamic programming typically follows these steps:
1. Break the problem into smaller subproblems.
2. Express the solution of the problem as a recurrence over those subproblems.
3. Solve each subproblem just once, storing (memoizing) its result.
4. Reuse the stored results to build solutions to larger subproblems.
5. Construct the final (optimal) solution from the stored results.
Dynamic programming is applicable to problems that have overlapping subproblems and
optimal substructure. Here, optimal substructure means that the solution of an optimization
problem can be obtained by combining the optimal solutions of its subproblems.
With dynamic programming, the space complexity increases because intermediate results are
stored, but the time complexity decreases because no subproblem is solved twice.
Algorithm
I] Multistage graph (forward approach)
Time complexity :
The time complexity of solving the multistage graph problem using dynamic programming is
typically O(V^2), where V is the number of vertices (nodes) in the graph.
In the dynamic programming approach for the multistage graph problem, we iterate over
each vertex and calculate the optimal cost based on the costs of its successors. This process
involves considering all possible outgoing edges from each vertex.
Since there are V vertices in the graph, the outer loop runs V times. For each vertex, we need
to consider all possible outgoing edges, which can be up to V-1 edges in the worst case (if
every vertex is connected to all subsequent stages). Therefore, the inner loop also runs V-1
times.
As a result, the time complexity of the dynamic programming approach for the multistage
graph problem is O(V * (V-1)), which simplifies to O(V^2) asymptotically.
Space Complexity :
The space complexity for the dynamic programming approach to the multistage graph
problem depends on the specific implementation. However, in general, the space complexity
is typically O(V), where V is the number of vertices (nodes) in the graph.
The algorithm itself keeps a cost array of length V (and usually a decision array of the same
length for reconstructing the optimal path). Any further bookkeeping structures are small
compared to these arrays, so the overall space complexity remains O(V).
It's worth noting that if the multistage graph is represented by an adjacency matrix or
adjacency list, the space complexity for storing the graph itself can be O(V^2) or O(V + E),
respectively, where E is the number of edges in the graph. However, the space complexity of
the dynamic programming algorithm specifically refers to the additional space used by the
algorithm itself, not the input graph representation.
Code :
Program (i):
/* Code for multistage graph: Forward Method */
#include <stdio.h>
int adj[100][100], C[100][100], p[100], cost[100], d[100];
OUTPUT :
PS D:\SE SEM IV\MADF LAB> gcc fgraph.c
PS D:\SE SEM IV\MADF LAB> ./a.exe
Enter the number of vertices of the graph, and the stages.
12
5
Enter the edges of the graph and their respective cost.
Enter the edge,( 0 0 randval) to quit :1 2 3
Enter the edge,( 0 0 randval) to quit :1 3 8
Enter the edge,( 0 0 randval) to quit :1 4 4
Program (ii):
/* Code for multistage graph: Backward Method */
#include <stdio.h>
int adj[100][100], C[100][100], p[100], bcost[100], d[100];
OUTPUT :
Date:28/03/2023
Problem Statement:
Write a C program to implement the all-pairs shortest path (Floyd-Warshall) algorithm on the following graph:
Algorithm
Time complexity :
In the dynamic programming approach, we build a table or matrix to store the shortest
distances between all pairs of vertices. The algorithm iterates over each vertex as an
intermediate node and updates the shortest distances between all pairs of vertices based on
the intermediate node.
There are typically three nested loops involved in this process. The outermost loop selects the
intermediate vertex, the middle loop selects the source vertex, and the innermost loop selects
the destination vertex. Each loop iterates V times, as we need to consider each vertex as a
potential intermediate, source, or destination vertex.
Hence, the overall time complexity of the dynamic programming approach for all-pairs
shortest path is O(V^3); this triple-loop formulation is exactly the Floyd-Warshall algorithm.
For sparse graphs, running a single-source algorithm from every vertex (e.g. Johnson's
algorithm, which combines Bellman-Ford and Dijkstra) can be faster in practice.
Space complexity :
The space complexity of the dynamic programming approach for solving the all-pairs
shortest path problem is typically O(V^2), where V is the number of vertices (nodes) in the
graph.
In the dynamic programming approach, we use a matrix to store the shortest distances
between all pairs of vertices. The matrix has dimensions V x V, representing the distances
from each source vertex to each destination vertex.
Therefore, the space required to store the matrix is O(V^2), as it grows quadratically with the
number of vertices in the graph.
Additionally, if we also need to store the predecessor information to reconstruct the shortest
paths, an additional matrix of the same size, i.e., V x V, would be required. This would
further contribute to the space complexity.
Overall, the space complexity of the dynamic programming approach for all-pairs shortest
path is O(V^2) due to the space required for the distance matrix and potentially the
predecessor matrix.
Code
#include <stdio.h>
#include <limits.h>
#define sint(x) scanf("%d", &(x))
#define N 30
#define inf 99999
Write a C program to find the shortest path using the Bellman-Ford algorithm on the following graph:
ALGORITHM :
Time complexity :
The time complexity of the Bellman-Ford algorithm for finding the shortest path in a graph is
typically O(V * E), where V is the number of vertices (nodes) and E is the number of edges in
the graph: the algorithm performs V - 1 relaxation passes, and each pass examines all E edges.
Space complexity :
The space complexity of the Bellman-Ford algorithm for finding the shortest path in a graph
is typically O(V), where V is the number of vertices (nodes) in the graph.
The algorithm uses an array or list to store the distances from the source vertex to each vertex
in the graph. This array has a length of V, representing the distances for each vertex.
In addition to the distance array, the algorithm may also use other data structures to store
auxiliary information, such as the predecessor of each vertex in the shortest path. The space
required for these additional data structures is typically small compared to the main distance
array and does not significantly affect the overall space complexity.
Therefore, the space complexity of the Bellman-Ford algorithm is O(V) due to the space
required for the distance array and any additional auxiliary data structures.
Code
#include<stdio.h>
#include<limits.h>
#include<stdlib.h>
struct Edge
{
    int source;
    int destination;
    struct Edge *next;
};
struct Edge *HEAD = NULL; /* head of the global edge list */
void Insert_Edge(int src, int des);
int main()
{
int vertices;
vertices = 5;
    int graph[vertices][vertices];
    for (int i = 0; i < vertices; i++)
    {
        for (int j = 0; j < vertices; j++)
        {
            graph[i][j] = INT_MAX;
        }
    }
    printf("**********************************************************************\n");
    // Reading the edges, storing their weights and inserting them in the linked list.
    int edges;
    printf("Enter the number of edges:: ");
    scanf("%d", &edges);
    printf("Enter each edge as: source destination weight\n");
    for (int e = 0; e < edges; e++)
    {
        int u, v, w;
        scanf("%d %d %d", &u, &v, &w);
        graph[u][v] = w;
        Insert_Edge(u, v);
    }
int source;
printf("Enter the source node:: ");
scanf("%d",&source);
int shortest_path[vertices];
for(int i=0;i<vertices;i++)
{
shortest_path[i]=INT_MAX;
}
shortest_path[source]=0;
    for (int i = 1; i < vertices; i++)
    {
        struct Edge *temp = HEAD;
        while (temp != NULL)
        {
            if (shortest_path[temp->source] != INT_MAX)
            {
                if (shortest_path[temp->source] + graph[temp->source][temp->destination]
                    < shortest_path[temp->destination])
                {
                    shortest_path[temp->destination] = shortest_path[temp->source]
                        + graph[temp->source][temp->destination];
                }
            }
            temp = temp->next;
        }
    }
    printf("*****************************************************************\n");
    for (int i = 0; i < vertices; i++)
    {
        if (shortest_path[i] == INT_MAX)
        {
            printf("Node [%c] to [%c] is unreachable\n", source + 97, i + 97);
            continue;
        }
        else
        {
            printf("Node [%c] TO [%c] MINIMUM COST IS:: %d\n", source + 97, i + 97, shortest_path[i]);
        }
    }
return 0;
}
void Insert_Edge(int src, int des)
{
struct Edge *ptr = (struct Edge*)malloc(sizeof(struct Edge));
struct Edge *temp=HEAD;
ptr->source=src;
ptr->destination=des;
if(HEAD==NULL)
{
HEAD=ptr;
HEAD->next=NULL;
}
else
{
while(temp->next!=NULL)
{
temp=(struct Edge*)temp->next;
}
temp->next=ptr;
ptr->next=NULL;
}
return ;
}
OUTPUT :
N=5
a1,a2,a3,a4,a5{apr,mar,may,oct,sept}
p1,p2,p3,p4,p5= {3,4,3,2,4}
q0,q1,q2,q3,q4,q5={4,4,5,4,5,4}
Algorithm
Time complexity :
The time complexity of constructing an optimal binary search tree using dynamic
programming is typically O(n^3), where n is the number of keys in the search tree.
In the dynamic programming approach, we build a table or matrix to store the optimal costs
for subtrees of the binary search tree. The algorithm iterates over different subtree sizes and
considers all possible roots for each subtree.
Three loops are involved in this process. The outer loop determines the size of the subtree,
ranging from 1 to n (the number of keys); the middle loop selects the starting index of the
subtree within the list of keys; and the inner loop tries every key in the subtree as a potential
root, computing the optimal cost from the previously computed values of the two resulting
sub-subtrees. Each loop can run up to n times, giving O(n^3) overall.
It's worth noting that optimizations exist, most notably Knuth's optimization, which restricts
the range of candidate roots using the roots of neighbouring subproblems and reduces the
running time to O(n^2). However, the basic dynamic programming approach has a time
complexity of O(n^3).
Space complexity :
The space complexity of constructing an optimal binary search tree using dynamic
programming is typically O(n^2), where n is the number of keys in the search tree.
In the dynamic programming approach, we build a table or matrix to store the optimal costs
for subtrees of the binary search tree. The table has dimensions n x n, representing the
different combinations of subtree sizes and starting indices.
Additionally, we may use additional arrays or data structures to store auxiliary information,
such as the probabilities or frequencies of the keys, or to track the optimal tree structure. The
space required for these additional data structures is typically small compared to the main
table and does not significantly affect the overall space complexity.
Therefore, the space complexity of the optimal binary search tree construction algorithm
using dynamic programming is O(n^2) due to the space required for the main table or matrix
and any additional auxiliary data structures.
Code
#include <stdio.h>
#include <string.h>
#include <math.h>
#define MAX 100
char iden[MAX][MAX];
int p[MAX], q[MAX];
int printlchild(int r[MAX][MAX], int i, int j, int level)
{
if (r[i][r[i][j] - 1] != 0)
{
if (level == 0)
printf("%d = %s ", r[i][r[i][j] - 1], iden[r[i][r[i][j]
- 1]]);
return 1;
}
return 0;
}
int printrchild(int r[MAX][MAX], int i, int j, int level)
{
if (r[r[i][j]][j] != 0)
{
if (level == 0)
printf("%d = %s ", r[r[i][j]][j], iden[r[r[i][j]][j]]);
return 1;
}
return 0;
}
void printchild(int r[MAX][MAX], int i, int j, int n, int level)
{
int a, b;
    a = printlchild(r, i, j, level);
    b = printrchild(r, i, j, level);
}
OUTPUT :
N=7
M=15
p1,p2,p3,p4,p5,p6,p7={3,2,3,4,5,2,3}
w1,w2,w3,w4,w5,w6,w7={15,14,16,21,17,14,13}
Algorithm
Time complexity :
The time complexity of solving the 0/1 knapsack problem using dynamic programming is
typically O(nW), where n is the number of items and W is the maximum weight or capacity
of the knapsack.
In the dynamic programming approach, we create a table or matrix to store the maximum
value achievable for different combinations of items and knapsack capacities. The table has
dimensions (n+1) x (W+1), representing the number of items and the range of knapsack
capacities.
We iterate over each item and consider two possibilities: including the item in the knapsack
or excluding it. For each item and knapsack capacity, we calculate the maximum value based
on these two possibilities and the values and weights of the items.
Space complexity :
The space complexity of solving the 0/1 knapsack problem using dynamic programming is
typically O(nW), where n is the number of items and W is the maximum weight or capacity
of the knapsack.
The table has dimensions (n+1) x (W+1), one cell for each combination of a prefix of the
items and a knapsack capacity. It is filled row by row, with each row depending only on the
previous one. Storing the full table therefore requires O(nW) space.
It's worth noting that there are space optimization techniques that can reduce the space
complexity to O(W). These techniques exploit the fact that we only need to store the
information for the current row and the previous row of the table at any given time.
Code
#include <stdio.h>
#include <limits.h>
#define N 200
#define obj 100
int count = 0;
int size;
struct dp
{
int p, w, o[obj];
};
{
for (int i = 0; i < size; i++)
{
for (int j = 0; j < size; j++)
size = 1;
if (s[k].w > m)
{
s[k].w = s[k].p = -1;
continue;
}
OUTPUT :
PS D:\SE SEM IV\MADF LAB> gcc knap.c
PS D:\SE SEM IV\MADF LAB> ./a.exe
Enter the knapsack capacity: 6
Enter the number of elements: 3
Enter the profit for each item
1
2
5
Enter the weights for each item
2
3
4
S^0 = { (0, 0) }
Max Profit: 6
Knapsack filled at: 6
Objects: ( 1 0 1 )
CONCLUSION: