
Master Data Science and Artificial Intelligence Semester : S1

Course: Advanced Operations Research Prof. LAYEB

Dynamic Programming
1. Introduction to Dynamic Programming
1.1 What is Dynamic Programming?
Dynamic Programming (DP) is a method for solving complex problems by breaking
them down into simpler subproblems. It is a powerful technique that combines the
correctness of complete search and the efficiency of greedy algorithms.
Key characteristics of Dynamic Programming:
• Solves problems by combining solutions to subproblems
• Stores the results of subproblems to avoid redundant calculations
• Typically applies to optimization problems
The term "programming" in this context refers to a tabular method, not to writing
computer code.
1.2 History and Background
• Developed by Richard Bellman in the 1950s
• Originally used to solve optimization problems in economics
• Bellman's "Principle of Optimality" forms the core of dynamic programming
• Has since been applied to various fields including computer science,
operations research, and bioinformatics
1.3 Comparison with Other Algorithmic Paradigms
1. Divide and Conquer:
o Similarity: Both break problems into smaller subproblems
o Difference: DP reuses solutions to subproblems, while divide and
conquer solves each subproblem independently
2. Greedy Algorithms:
o Similarity: Both make choices at each step to find the optimal solution
o Difference: DP considers all possible choices and their future
consequences, while greedy algorithms make the locally optimal
choice without considering the future
3. Brute Force:
o Similarity: Both can potentially explore all possible solutions
o Difference: DP stores and reuses intermediate results, significantly
reducing time complexity
1.4 When to Use Dynamic Programming

Dynamic Programming is particularly useful when a problem has the following
characteristics:
1. Optimal Substructure: The optimal solution to the problem can be
constructed from optimal solutions of its subproblems.
2. Overlapping Subproblems: The problem can be broken down into
subproblems which are reused several times.
3. Recursive Objective Function: The objective value can be defined recursively in
terms of the objective values of subproblems.
Examples of problems suitable for DP:
• Fibonacci sequence calculation
• Shortest path problems
• Knapsack problem
• Sequence alignment in bioinformatics
Advantages of using DP:
• Can solve complex problems efficiently
• Often provides polynomial-time solutions to problems that would be
exponential with naive approaches
• Systematically considers all possible solutions, ensuring optimality
Challenges in using DP:
• Identifying the appropriate subproblems and state representation
• Determining the correct order to solve subproblems
• Managing memory usage for storing subproblem solutions
In the next sections, we'll delve deeper into these concepts and start solving
problems using dynamic programming techniques.
2. Fundamental Concepts
2.1 Optimal Substructure
Optimal substructure is a key property that allows a problem to be solved using
dynamic programming.
Definition: A problem has optimal substructure if an optimal solution to the problem
contains optimal solutions to its subproblems.
Characteristics:
• The overall optimal solution can be constructed from optimal solutions of its
subproblems
• Allows for a recursive definition of the problem

Example: Shortest path problem
• If the shortest path from A to C goes through B, then the path from A to B
must be the shortest path between A and B
Importance:
• Enables breaking down complex problems into simpler, manageable
subproblems
• Forms the basis for the recursive formulation in dynamic programming
2.2 Overlapping Subproblems
Overlapping subproblems occur when a recursive algorithm revisits the same
subproblems repeatedly.
Characteristics:
• The same subproblems are solved multiple times
• Solutions to these subproblems can be stored and reused
Example: Fibonacci sequence calculation
Fib(n) = Fib(n-1) + Fib(n-2), with Fib(1) = 1 and Fib(0) = 0
• Computing fib(5) requires computing fib(4) and fib(3)
• Computing fib(4) again requires computing fib(3), which was already
computed
Importance:
• Identifying overlapping subproblems is crucial for applying dynamic
programming
• Allows for significant optimization by storing and reusing solutions
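The overlap is easy to observe by counting calls. A minimal sketch instrumenting the naive recursive Fibonacci (the counter `calls` is ours, added purely for illustration):

```python
from collections import Counter

calls = Counter()  # illustrative: how often each subproblem fib(n) is solved

def fib(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
# fib(3) is solved 2 times, fib(2) 3 times, fib(1) 5 times
```

Without memoization the number of calls grows exponentially with n; storing each result collapses this to one computation per subproblem.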
2.3 DP Techniques
There are two main approaches to implementing DP:
2.3.1 Memoization
Memoization is a top-down dynamic programming technique that involves storing
the results of expensive function calls and returning the cached result when the
same inputs occur again.
Key points:
• "Top-down" because it starts with the main problem and recursively solves
subproblems
• Uses a data structure (usually a hash table or an array) to store computed
results

• Checks the cache before computing a result; if not found, computes and then
caches the result
Example implementation (Python):
def fibonacci(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)
    return memo[n]

Advantages:
• Easy to implement, often just an addition to a recursive solution
• Computes only necessary subproblems
Disadvantages:
• Still uses recursive calls, which can lead to stack overflow for very large inputs
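In Python, the same memoization can also be obtained from the standard library with `functools.lru_cache`, which sidesteps the shared mutable default argument — a sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the standard library caches results for us
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# fibonacci(50) → 12586269025, with each subproblem computed only once
```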
2.3.2 Tabulation
Tabulation is a bottom-up dynamic programming technique that involves building a
table of results for subproblems and using those results to solve larger problems.
Key points:
• "Bottom-up" because it starts with the smallest subproblems and builds up to
the main problem
• Uses an n-dimensional table to store results, where n is the number of
parameters that change in the subproblems
• Typically implemented using iteration rather than recursion
Example implementation (Python):
def fibonacci(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

Advantages:
• Often more efficient in terms of space complexity
• Avoids recursive overhead and potential stack overflow issues
Disadvantages:
• May compute unnecessary subproblems
• Can be less intuitive to implement for some problems
3. Steps to Solve a DP Problem

• Define the Problem:
o Identify what you are trying to compute and break it into subproblems.
o Formulate the problem as a recurrence relation.
• Choose State Variables:
o Determine what values define the state of each subproblem.
o Commonly, this might include indices, sums, or counts.
• Formulate the Recurrence Relation:
o Define how the current state can be derived from previous states.
• Identify Base Cases:
o Determine the smallest subproblems that can be solved directly.
• Implement Using Memoization or Tabulation:
o Either recursively solve and memoize (store) subproblem solutions, or
iteratively compute solutions and store them in a table.
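As an illustration, the steps above applied to the Climbing Stairs problem (how many ways to reach step n taking 1 or 2 steps at a time; the function name is ours):

```python
def climb_stairs(n):
    # State: current step i; dp value: number of ways to reach step i.
    # Recurrence: ways(i) = ways(i-1) + ways(i-2); base cases ways(0) = ways(1) = 1.
    if n <= 1:
        return 1
    a, b = 1, 1  # ways(0), ways(1)
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

# climb_stairs(4) → 5 (1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2)
```

Note how each of the five steps shows up: the state variable is the step index, the recurrence and base cases are stated in the comments, and the implementation is a tabulation that keeps only the last two values.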

4. Basic Dynamic Programming Problems


In this section, we'll explore four classic dynamic programming problems. For each
problem, we'll provide a description, analyze its structure, and present both
recursive (with memoization) and iterative (tabulation) solutions.
• Fibonacci: Simple recursive structure with overlapping subproblems
• Climbing Stairs: Similar to Fibonacci, shows how DP can solve counting
problems
• Coin Change: Optimization problem, shows how to handle multiple choices at
each step
• LCS: Introduces 2D DP tables and string-based problems
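As a taste of the optimization flavour, here is a tabulation sketch for Coin Change (minimum number of coins summing to a target amount; the function name and interface are ours):

```python
def min_coins(coins, amount):
    INF = float('inf')
    dp = [0] + [INF] * amount       # dp[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:             # try every coin as the last one used
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

# min_coins([1, 2, 5], 11) → 3 (5 + 5 + 1)
```

At each amount the algorithm considers every possible last coin — the "multiple choices at each step" mentioned above.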

5. How to Solve a Problem with the Tabular Dynamic Programming Technique


Based on the previous example, we will now analyze the properties common to
dynamic programming problems.
1. The problem can be broken down into stages, and a decision must be made at
each stage.
In the traveler example, the problem is divided into 4 stages; the policy decision
at each stage is to choose the next destination.
2. Each stage corresponds to a certain number of states. States are the various
possible conditions under which the system might be at a stage of the problem.
The number of states can be finite or infinite.

3. At each stage, the decision made transforms the current state into a state
associated with the next stage. In this example, being in a given city, the traveler
decides to go to another city that is a state of the next step.
4. Given a state, an optimal strategy for the remaining steps is independent of the
decisions made in the previous steps.
5. The procedure for finding the optimal solution starts with finding the optimal
decision for the last step.
6. A recurrence relation identifies the optimal strategy for step n, given the
optimal strategy for step n + 1.

In our example the relation is

f n*(S) = min over xn { C(S, xn) + f n+1*(xn) }

where C(S, xn) is the cost or distance from state S to state xn. The optimal
strategy, given that we are in state S at step n, requires finding the value of xn
which minimizes the above expression.

The recurrence relation always has the form f n*(S) = max over xn {fn(S, xn)} or
f n*(S) = min over xn {fn(S, xn)}.

The precise form of the recursive relation differs from one problem to another.
However, a notation similar to the one introduced in the previous example is to be
used, as summarized below the notation used:
1. n = label of the current step (n = 1, 2, . . . , N ).
2. sn = current state of step n.
3. xn = decision variable of step n. xn* = optimal value of xn (knowing
sn).
4. fn(sn, xn) = contribution of steps n, n + 1, . . . , N to the objective function if
the system starts in state sn at step n, the immediate decision is xn, and
optimal decisions are made thereafter; f n*(sn) = fn(sn, xn*).
5. The recurrence relation will always be of the form
f n*(sn) = max over xn {fn(sn, xn)} or f n*(sn) = min over xn {fn(sn, xn)}.

6. Using this recurrence relation, the algorithm proceeds step by step, starting
with the last step and working backwards to the first step. In any dynamic
programming problem, one can construct at each step a table analogous to the
following.
Stage n

 S \ xn | fn(S, xn) | f n*(S) | xn*

Each row is a state S of stage n; each column xn holds fn(S, xn), the cost or
distance of that decision (a state of stage n+1). The column f n*(S) gives the
optimal path length from S to the end state, and xn* the best state of stage n+1
along the final optimal path.

Reading the solution: the final step of the algorithm yields only the optimal value
of the initial problem; it does not directly give the solution achieving this value. To
recover the solution, we read the tables in the opposite order of the calculations:
starting from the stage-1 table, we follow the optimal decisions xn* through the
successive stages.

Example 5.3: Distribution of new researchers on teams.


A government space project is researching a certain engineering problem that needs to
be solved before people can safely fly to Mars. Three research teams are currently trying
three different approaches to solve this problem. It has been estimated that, under the
current circumstances, the probability that the respective teams – 1, 2 and 3 – will not
succeed is 0.40, 0.60 and 0.80, respectively. Thus, the current probability of failure of
the three teams is (0.40) (0.60) (0.80) = 0.192. Since the goal is to minimize the
probability of failure, two other top scientists have been assigned to the project. The
following table gives the estimated probability that the respective teams will fail when
0, 1 or 2 additional scientists are added to this team. Only whole numbers of scientists
are taken into account because each new scientist will have to devote his or her full
attention to a team. The problem is figuring out how to assign the two additional
scientists to minimize the likelihood that all three teams will fail.

Probability of failure

Number of new researchers    Team 1    Team 2    Team 3
            0                 0.4       0.6       0.8
            1                 0.2       0.4       0.5
            2                 0.15      0.2       0.3

Solution: We start with the problem modeling.

1. Stages: 3 stages, where the states represent the number of available researchers.
2. Decision variable: xn represents the number of researchers to be allocated to
research team n, n = 1, 2, 3. The contribution of decision xn is the probability
that team n will fail after receiving xn additional researchers.
3. The recurrence relation: f n*(S) is the minimum probability that teams
n, . . . , 3 will fail in their research:

f n*(S) = min over xn ≤ S of fn(S, xn),  n = 1, 2, 3

with fn(S, xn) = pn(xn) × f n+1*(S − xn), where pn(xn) is the contribution of
decision xn, and f 4*(S) = 1.

At step n, being in state S and allocating xn researchers leads to state S − xn at
step n+1, with contribution pn(xn):

fn(S, xn) = pn(xn) × f n+1*(S − xn)

In this example the recurrence relation is not additive but multiplicative.

Stage 3: f3(S, x3) = p3(x3)

 S \ x3     0       1       2      f3*(S)   x3*
   0       0.8      -       -      0.8       0
   1       0.8     0.5      -      0.5       1
   2       0.8     0.5     0.3     0.3       2

Stage 2: f2(S, x2) = p2(x2) × f3*(S − x2)

 S \ x2     0        1        2      f2*(S)   x2*
   0       0.48      -        -      0.48      0
   1       0.3      0.32      -      0.3       0
   2       0.18     0.2      0.16    0.16      2

Stage 1: f1(S, x1) = p1(x1) × f2*(S − x1)

 S \ x1     0         1        2       f1*(S)   x1*
   2       0.064     0.06     0.072    0.06      1

The optimal strategy is x1* = 1, x2* = 0 and x3* = 1. The probability that all
three research teams fail is 0.06.
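For readers who prefer code, a minimal recursive sketch of this backward recursion (the function name and structure are ours; the probability table is the one from the example):

```python
# Failure probabilities p[n][x]: team n+1 with x additional researchers.
p = [[0.40, 0.20, 0.15],
     [0.60, 0.40, 0.20],
     [0.80, 0.50, 0.30]]

def solve(S, n=0):
    """Minimum failure probability for teams n..3 with S researchers left, plus the plan."""
    if n == 3:
        return 1.0, []                      # boundary: f4*(S) = 1
    best_val, best_plan = float('inf'), []
    for x in range(S + 1):                  # every feasible allocation xn <= S
        sub_val, sub_plan = solve(S - x, n + 1)
        val = p[n][x] * sub_val             # multiplicative recurrence
        if val < best_val:
            best_val, best_plan = val, [x] + sub_plan
    return best_val, best_plan

# solve(2) recovers the plan [1, 0, 1] with failure probability 0.06 (up to rounding)
```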
Exercise: Solve the knapsack problem using dynamic programming.

8. Time and Space Complexity Optimization


1. Optimizing Time Complexity

a. Identifying Redundant Calculations


• Avoiding Nested Loops in Recurrence Relations:
Some DP problems have nested loops in their recurrence relations, which can
increase time complexity. Where possible, find mathematical shortcuts or
rearrange the recurrence to reduce nested operations.
Example: In calculating the sum of subsets, rather than recalculating the same
subset sums, store them in a dictionary for constant-time retrieval.
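A sketch of this idea for subset sums (the helper name is ours): each achievable sum is computed once and reused, rather than re-derived per subset:

```python
def subset_sums(nums):
    sums = {0}                              # the empty subset
    for x in nums:
        sums |= {s + x for s in sums}       # extend every known sum once by x
    return sums

# subset_sums([3, 5, 7]) → {0, 3, 5, 7, 8, 10, 12, 15}
```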
b. Using Efficient Data Structures
• Hash Maps for Memoization:
Memoization with a dictionary provides O(1) average-time complexity for
storing and retrieving subproblem results, reducing recomputation time for
recursive DP problems.
• Binary Indexed Trees (BITs) and Segment Trees:
In some DP problems, such as range queries and modifications, data
structures like BITs and segment trees can improve efficiency by enabling
O(log n) updates and queries.
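As a sketch of the structure itself (a standard Fenwick tree; the class is ours and not tied to any one DP problem):

```python
class BIT:
    """Binary Indexed Tree: point updates and prefix sums in O(log n)."""
    def __init__(self, n):
        self.tree = [0] * (n + 1)           # 1-based indexing
    def update(self, i, delta):
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                     # jump to the next responsible node
    def query(self, i):
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i                     # strip the lowest set bit
        return s

bit = BIT(8)
bit.update(3, 5)
bit.update(5, 2)
# bit.query(4) → 5, bit.query(8) → 7
```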
2. Space Complexity Optimization
a. Rolling Arrays for Space Reduction

Many DP problems only require results from a limited number of previous states (e.g.,
Fibonacci, minimum path sum). By using a rolling array or a constant number of
variables, space can be reduced from O(n) or O(n^2) to O(1).

b. Bitmasking for State Compression


In problems where states can be represented by combinations (e.g., subsets or
selections), a bitmask is a compact way to store state information, reducing space
complexity from O(2^n * n) to O(2^n).
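A classic instance is the Held-Karp algorithm for TSP, where a bitmask encodes the set of visited cities — a sketch under the assumption of a square distance matrix and small n:

```python
def tsp(dist):
    n = len(dist)
    INF = float('inf')
    # dp[mask][last]: cheapest way to visit exactly the cities in `mask`, ending at `last`
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                                # start at city 0 (bit 0 set)
    for mask in range(1 << n):
        for last in range(n):
            if dp[mask][last] == INF:
                continue
            for nxt in range(n):
                if (mask >> nxt) & 1:           # already visited
                    continue
                new_mask = mask | (1 << nxt)
                cost = dp[mask][last] + dist[last][nxt]
                if cost < dp[new_mask][nxt]:
                    dp[new_mask][nxt] = cost
    full = (1 << n) - 1                         # all cities visited
    return min(dp[full][last] + dist[last][0] for last in range(1, n))

# tsp([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) → 4
```

The state space is O(2^n · n) rather than the O(n!) of enumerating tours.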

c. Divide and Conquer with DP

The divide and conquer with DP technique is useful for optimization problems over
ranges (such as in the convex hull trick or Knuth's optimization). By dividing the DP
table into smaller sections, you can often reduce time complexity from O(n^2) to
O(n log n) or better.

9. Real-world Applications of Dynamic Programming


Summary of Dynamic Programming Applications

Application Area               | Example Problems                            | DP Techniques Used
Route Planning and Navigation  | Shortest Path, TSP                          | Memoization, Bitmasking
Resource Allocation            | Budget Allocation, Portfolio Optimization   | Knapsack Problem, Capital Investment Models
Machine Learning               | Reinforcement Learning, HMMs                | Q-learning, Viterbi Algorithm
Supply Chain                   | Inventory Management, Production Scheduling | Inventory DP, Scheduling Models
Bioinformatics                 | Sequence Alignment, Protein Folding         | Needleman-Wunsch, Energy Minimization
Finance                        | Option Pricing, Portfolio Management        | Binomial Models, Risk Optimization
Robotics                       | Path Planning, Control Systems              | A*, MPC
Text Processing                | Spell Checking, Parsing                     | Levenshtein Distance, CKY Parsing
Data Compression               | File Compression, Video Encoding            | Huffman, Lempel-Ziv
Medical Diagnosis              | Treatment Planning, Disease Control         | Personalized Medicine, Outbreak Models

Recent Research Directions


1. DP in Deep Learning: Researchers are exploring connections between
dynamic programming and deep learning, such as using DP to optimize neural
network architectures.
2. Quantum Dynamic Programming: With the advent of quantum computing,
there's ongoing research into quantum algorithms for dynamic programming
problems, potentially offering exponential speedups for certain DP problems.
3. Online Dynamic Programming: Developing DP algorithms that can handle
streaming data or online decision-making scenarios where the entire input is
not available at once.
4. DP in Bioinformatics: Advanced DP techniques are being developed for
complex sequence alignment problems in genomics and proteomics.
5. Robust Dynamic Programming: Extending DP to handle uncertainties and
perturbations in the input data, making solutions more resilient to real-world
variabilities.