Dynamic Programming

Dynamic programming is an optimization technique that solves problems by breaking them down into overlapping subproblems and storing the results of these subproblems to avoid recomputing them. It builds up a solution from previously found subsolutions, working in a bottom-up fashion: it determines optimal solutions to subproblems, then combines these subsolutions in an optimal way. The principle of optimality states that if a sequence of decisions is optimal, then the subsequence formed by its last k decisions must also be optimal. Dynamic programming uses this principle to efficiently solve problems like the knapsack problem and the traveling salesperson problem by considering all possible decision sequences to find the optimal solution.
Dynamic Programming
• Dynamic programming is applied to optimization problems.
• It solves problems with overlapping subproblems.
• Each subproblem is solved only once and its result is recorded in a table.
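As a minimal illustration of these bullets (Fibonacci is not in the slides; it is used here only as a toy example), a bottom-up computation where each subproblem is solved once and recorded in a table:

```python
def fib(n):
    """Compute F(n) bottom-up: each subproblem F(i) is solved exactly
    once and its result is recorded in a table, so nothing is recomputed."""
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

For example, fib(10) returns 55 after computing each of F(2)..F(10) once, whereas the naive recursion would solve the same subproblems exponentially many times.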
Divide and Conquer vs Dynamic Programming
• Divide and conquer: subproblems are independent. Dynamic programming: subproblems are overlapping.
• Divide and conquer: recomputations are performed. Dynamic programming: no need to recompute; results are stored.
• Divide and conquer: less efficient due to rework. Dynamic programming: more efficient.
• Divide and conquer: recursive method (top-down approach to problem solving). Dynamic programming: iterative method (bottom-up approach to problem solving).
• Divide and conquer: splits its input at specific deterministic points, usually in the middle. Dynamic programming: splits its input at every possible split point.
Greedy vs Dynamic Programming
• Greedy: used to obtain an optimum solution. Dynamic programming: also obtains an optimum solution.
• Greedy: picks the optimum solution from a set of feasible solutions. Dynamic programming: no special set of feasible solutions.
• Greedy: the optimum selection is made without revising previously generated selections. Dynamic programming: considers all possible decision sequences to obtain the optimal solution.
• Greedy: only one decision sequence is ever generated. Dynamic programming: many decision sequences may be generated.
• Greedy: no guarantee of obtaining the optimum solution. Dynamic programming: guarantees the optimum solution by using the principle of optimality.
Principle of Optimality
• Principle of Optimality: suppose that in solving a problem we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.
• E.g. the shortest path problem:
• If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.
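The shortest path example can be made concrete with the Floyd–Warshall algorithm (not covered in these slides; shown as an illustrative sketch). Its correctness rests directly on the principle of optimality: a shortest i-to-j path that passes through k is built from shortest i-to-k and k-to-j subpaths.

```python
INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths. dist[i][j] is the direct edge weight
    (INF if there is no edge). By the principle of optimality, a shortest
    i-to-j path through intermediate vertex k consists of a shortest
    i-to-k path and a shortest k-to-j path, which is what the update uses."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```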
The Traveling Salesperson Problem
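The slide content for this section was not extracted. As a sketch of how dynamic programming applies here, the standard formulation (Held–Karp) stores, for each subset S of cities and each endpoint j, the cheapest path that starts at city 0, visits exactly the cities in S, and ends at j — applying the principle of optimality to tour prefixes:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP. dist[i][j] = cost of travelling i -> j.
    Returns the minimum cost of a tour starting and ending at city 0."""
    n = len(dist)
    # C[(S, j)] = cheapest path from 0 visiting all cities in frozenset S
    # (0 not in S) and ending at j, for j in S.
    C = {}
    for j in range(1, n):
        C[(frozenset([j]), j)] = dist[0][j]
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                # Optimal path ending at j extends an optimal path
                # over S - {j} ending at some k (principle of optimality).
                C[(Sf, j)] = min(C[(Sf - {j}, k)] + dist[k][j]
                                 for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```

This runs in O(n^2 * 2^n) time, far better than the n! cost of enumerating every tour, though still exponential.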
0/1 Knapsack
• A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, x3, …, xn. A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it.

• Let fi(y) be the value of an optimal solution to KNAP(1, i, y). Since the principle of optimality holds, we obtain
  fn(m) = max{ fn-1(m), fn-1(m - wn) + pn }      ---- (5.14)
  fi(y) = max{ fi-1(y), fi-1(y - wi) + pi }      ---- (5.15)
• f0(y) = 0 for all non-negative y, and fi(y) = -∞ when y < 0.
• We use the ordered set Si = { (f(yj), yj) | 1 ≤ j ≤ k } to represent fi(y).
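Recurrence (5.15) can also be tabulated directly. The following Python sketch is an illustration, not part of the original slides; it replaces the fi(y) = -∞ boundary case with an explicit bounds check on y - wi:

```python
def knapsack(p, w, m):
    """Tabulate recurrence (5.15): f[i][y] is the best profit achievable
    using items 1..i with remaining capacity y."""
    n = len(p)
    f = [[0] * (m + 1) for _ in range(n + 1)]   # f[0][y] = 0 for all y >= 0
    for i in range(1, n + 1):
        for y in range(m + 1):
            f[i][y] = f[i - 1][y]               # decision x_i = 0
            if y >= w[i - 1]:                   # x_i = 1 feasible only if y - w_i >= 0
                f[i][y] = max(f[i][y], f[i - 1][y - w[i - 1]] + p[i - 1])
    return f[n][m]
```

The bounds check plays the role of fi(y) = -∞ for y < 0: an infeasible choice of xi = 1 is simply never considered.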
Note:
Si+1 can be computed by merging the pairs of Si and S1i+1 together.
If Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) such that Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) is dominated and can be discarded.
Algorithm DKP(p, w, n, m)
{
    S0 := {(0, 0)};
    for i := 1 to n - 1 do
    {
        S1i := {(P, W) | (P - pi, W - wi) ∈ Si-1 and W ≤ m};
        Si := MergePurge(Si-1, S1i);
    }
    (PX, WX) := last pair in Sn-1;
    (PY, WY) := (P' + pn, W' + wn) where W' is the largest W in
                any pair in Sn-1 such that W + wn ≤ m;
    // Trace back for xn, xn-1, ..., x1.
    if (PX > PY) then xn := 0;
    else xn := 1;
    TraceBackFor(xn-1, ..., x1);
}

Algorithm 5.6 Informal knapsack algorithm
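A runnable version of the set-of-pairs approach behind Algorithm 5.6, sketched in Python. The MergePurge and trace-back details below are one possible reading of the informal algorithm, not the book's exact code:

```python
def dkp(p, w, m):
    """Set-of-pairs 0/1 knapsack. p: profits, w: weights, m: capacity.
    Returns (maximum profit, decision vector x)."""
    n = len(p)
    # S[i] holds the dominance-free (profit, weight) pairs reachable
    # using items 1..i, sorted by weight with strictly increasing profit.
    S = [[(0, 0)]]
    for i in range(n):
        prev = S[-1]
        # S1i: pairs of S[i] shifted by item i, kept only while feasible.
        S1 = [(P + p[i], W + w[i]) for (P, W) in prev if W + w[i] <= m]
        # MergePurge: merge by weight, then drop dominated pairs
        # (a pair is dominated if an earlier pair already has >= profit).
        merged = sorted(prev + S1, key=lambda pw: (pw[1], -pw[0]))
        purged = []
        for P, W in merged:
            if not purged or P > purged[-1][0]:
                purged.append((P, W))
        S.append(purged)
    # The last pair of S[n] has the maximum profit. Trace back each xi:
    # if the current pair already exists in S[i], item i+1 was not needed.
    x = [0] * n
    P, W = S[n][-1]
    best = P
    for i in range(n - 1, -1, -1):
        if (P, W) in S[i]:
            x[i] = 0
        else:
            x[i] = 1
            P, W = P - p[i], W - w[i]
    return best, x
```

The purging step is what keeps the pair sets small in practice: any pair that costs at least as much weight for no more profit can never appear in an optimal solution.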