What Is Dynamic Programming?
You have probably been playing with recursion a lot: that computationally expensive way of calling your
function again and again to make your code look elegant and clean. Dynamic programming is similar in
spirit, but with added ingredients such as backtracking and recurrence relations. The concept works on the
principle of dividing a problem into smaller subproblems, solving each of them individually, and then
storing those solutions using the two core techniques of Dynamic Programming: Memoization and
Tabulation. It is an extremely useful concept with many real-life applications, including flight control,
robotics control, and CPU time scheduling, and it underlies classic problems such as the knapsack problem,
mathematical optimization, all-pairs shortest paths, reliability design, and the longest common subsequence
problem. Above all, it is one of the favorite areas of interviewers, because it tests your ability to break a
problem apart and find the solution.
Recursion: This is the fundamental building block of Dynamic Programming. It works on the principle of
calling a function repeatedly until a specific set of pre-defined conditions is met.
Backtracking: This technique builds on recursion and evaluates multiple possibilities before arriving at an
optimal solution. We stop only when we either find a final solution or reach a dead end, from which we
backtrack (simply move one step back to try an alternative path toward the next possible solution).
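A minimal sketch of recursion with backtracking, using a hypothetical subset-sum question (not from the text above): decide whether any subset of a list of numbers adds up to a target. Each call tries including the current number; if that branch fails, we step back and try excluding it.

```python
# Recursion with backtracking: does any subset of `nums` sum to `target`?
def subset_sum(nums, target):
    def solve(i, remaining):
        if remaining == 0:       # pre-defined stopping condition: solution found
            return True
        if i == len(nums) or remaining < 0:
            return False         # dead end: backtrack to try another branch
        # try including nums[i]; if that fails, exclude it and try again
        return solve(i + 1, remaining - nums[i]) or solve(i + 1, remaining)
    return solve(0, target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```

Note how the `or` encodes the backtracking: the second recursive call only runs after the "include" branch has failed and we have moved one step back.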
In the rest of this article, ‘DP’ is used in place of ‘Dynamic Programming’.
After this introduction you might be wondering what the difference is between Dynamic Programming and
the Divide and Conquer strategy. They are actually identical in the first half of their implementation: both
divide the problem into smaller, manageable subproblems, but then they proceed in their own ways. The
Divide and Conquer approach solves the same subproblems again and again, every time it encounters them;
in DP, we store the result of each subproblem and thereby build a database of solutions that we can consult
at any time.
Now the question arises: how do you know a problem can be solved using Dynamic Programming? The
answer is simple. You just have to look for two main characteristics.
Overlapping Subproblems: The problem can be divided into subproblems whose results are needed over and
over, so those results are systematically stored in some data container such as a table. This approach can
only be applied where such overlap exists. For example, in the Fibonacci series, Fib(n) is computed
recursively from the two subproblems Fib(n-1) + Fib(n-2), and those subproblems share further
subproblems; binary search, by contrast, never revisits a subproblem, so DP gains it nothing.
Optimal Substructure: A problem satisfies the Optimal Substructure property if and only if an optimal
solution to the whole problem can be assembled from optimal solutions to its subproblems. A common
example: suppose we have to travel the shortest path from Point A to Point B, and a hotel at Point C lies on
that path. Then the shortest path from A to C combined with the shortest path from C to B is the shortest
path from A to B. This is the classic shortest path problem.
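This substructure is exactly what the all-pairs shortest path algorithm (Floyd-Warshall, one of the DP problems listed earlier) exploits. A minimal sketch with made-up distances for A, C, and B:

```python
# Floyd-Warshall on a tiny hypothetical graph: A (index 0), C (1), B (2).
# The direct A->B road costs 9, but going via the hotel at C costs 2 + 3.
dist = [
    [0, 2, 9],  # distances from A
    [2, 0, 3],  # distances from C
    [9, 3, 0],  # distances from B
]
n = 3
for k in range(n):          # allow node k as an intermediate stop
    for i in range(n):
        for j in range(n):
            # optimal substructure: best i->j is built from best i->k and k->j
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

print(dist[0][2])  # 5: the shortest A -> B route goes through C (2 + 3)
```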
If a problem does not satisfy both of these properties, then you cannot solve it using Dynamic
Programming.
What should be your approach towards solving DP problems?
1. Study the problem and find possible patterns.
2. Identify the state (a set of parameters that uniquely identifies the position and standing of the problem).
3. Devise a recurrence relation.
4. Recursively find a naïve solution.
5. Optimize the solution (using Memoization).
6. Remove the overhead of recursion using a bottom-up approach (using Tabulation).
Dynamic Programming Methods.
There are two types of methods used in Dynamic Programming:
1. Memoization (Top-Down Approach): This method of DP combines recursion with caching and is usually
easier to implement than Tabulation. Let’s take the Fibonacci series example again. Using recursion, Fib(n)
is computed as Fib(n-1) + Fib(n-2). Now try to calculate Fib(5) this way: the naive recursion makes 15
function calls to produce a result. If instead we memoize, storing each value the first time it is computed,
only six calls do real work (one for each of Fib(0) through Fib(5)); every repeated call is answered from the
cache. The stored values act as a cached memory: whenever a new call is made, we first ask the cache
whether it already holds the result. Here we have traded storage for speed, and nowadays the running time
of a program usually matters more than the extra storage. For caching the output we can use a 1D array, as
in this example, or a 2D array for problems such as Knapsack.
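A sketch of the memoized version of the Fibonacci example, using a dictionary as the cache (a 1D array of size n+1 would work equally well):

```python
# Memoized (top-down) Fibonacci: same recursion, but each result is
# cached the first time it is computed, so every value from Fib(0) to
# Fib(n) is computed exactly once.
def fib_memo(n, cache=None):
    if cache is None:
        cache = {}
    if n in cache:                # ask the cache before recursing
        return cache[n]
    result = n if n < 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result             # store the answer for future calls
    return result

print(fib_memo(50))  # 12586269025, computed instantly; the naive
                     # recursion would need billions of calls for this
```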
2. Tabulation (Bottom-Up Approach): In this method of DP there is no recursion at all, so the answers to the
subproblems have to be stored somewhere else: in a table (an array or matrix). We then work bottom-up,
from the smallest subproblems toward the original problem, to find the optimal solution. This is usually
more efficient in practice because it avoids the overhead and stack space of recursive calls. As discussed in
the approach section, tabulation is the last step in solving a problem with Dynamic Programming:
memoization is easier to implement first, and tabulation then removes the remaining recursive overhead.
To understand tabulation, suppose you are given a problem that a function F solves, and the call F(n)
depends on F(n-1), F(n-1) on F(n-2), and so on down to F(0). How can we obtain the value of F(n) if we do
not yet know F(n-1), F(n-2), F(n-3), and so on? The solution is to first compute the lowermost value, F(0),
and then work your way up the ladder, finding F(1), F(2), and so on up to F(n). Hence this is called the
bottom-up approach.
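The ladder-climbing described above can be sketched for Fibonacci as follows: fill a table from F(0) upward, so each entry is ready before it is needed, with no recursion anywhere.

```python
# Tabulated (bottom-up) Fibonacci: start from the lowermost values and
# climb the ladder until table[n] is filled.
def fib_table(n):
    if n < 2:
        return n
    table = [0] * (n + 1)     # table[i] will hold Fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(10))  # 55
```

Since each entry only depends on the previous two, the table could even be shrunk to two variables, but the full table shows the general pattern used by harder DP problems.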
Summary:
Both the tabulated and the memoized methods store the solutions to the subproblems of a single big
problem. The memoized version stores results on demand, whereas the tabulated version stores the
solutions of all subproblems up front and then uses that data to assemble the complete solution.
These are the basic properties of dynamic programming; more advanced techniques include combining bit
masking with Dynamic Programming. There are also many classic problems solved on the principle of DP,
such as the knapsack problem, mathematical optimization, all-pairs shortest paths, reliability design, and the
longest common subsequence problem. DP is used extensively in the software industry and is a must-have
skill for software developers and competitive programmers. The best way to learn is to practice
implementing both approaches on DP-based problems.