Dynamic Programming (DP) 02 - Class Notes

Uploaded by mr.shrey02
© All Rights Reserved

CS & IT 2025
ENGINEERING
Algorithms

Dynamic Programming (DP)

Lecture No. 02    By - Aditya sir


Recap of Previous Lecture

Topic: PYQs on Dijkstra SSSP Algo
Topic: Dijkstra SSSP Code
Topic: Dynamic Programming - Intro
Topics to be Covered

Topic: Concept of Dynamic Programming
Topic: Applications
DP vs Greedy:
1. 0/1 Knapsack
2. Coin Change problem - greedy may or may not give the optimal solution
3. Multistage graph - DP gives the optimal solution

Greedy - local optimisation
DP - global optimisation
Topic : Dynamic Programming (DP)

Example - multistage graph: a greedy choice of path may not be optimal. Enumeration (brute force) considers every possible path and always finds the optimal one, but its drawback is excessive complexity. DP cuts down the brute-force drawback (complexity).
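The multistage-graph idea can be sketched as a small DP. The graph below is a made-up illustration (vertex names and weights are mine, not from the lecture); each cached subresult replaces a whole family of enumerated paths:

```python
from functools import lru_cache

# Hypothetical 4-stage graph: cost[u] maps each successor v to the edge weight.
cost = {
    'S': {'A': 2, 'B': 1},
    'A': {'C': 2, 'D': 3},
    'B': {'C': 3, 'D': 1},
    'C': {'T': 2},
    'D': {'T': 4},
}

@lru_cache(maxsize=None)
def min_cost(u):
    # Optimal cost from u to the sink T: each subresult is cached,
    # so we avoid enumerating every path (the brute-force drawback).
    if u == 'T':
        return 0
    return min(w + min_cost(v) for v, w in cost[u].items())

print(min_cost('S'))  # 6, e.g. S -> A -> C -> T costs 2 + 2 + 2
```

Brute force would walk every S-to-T path; the memoized recursion touches each vertex once.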
Topic : Dynamic Programming (DP)
• One way of solving a problem, in which it is not possible to make a sequence of decisions step-wise leading to the optimal solution, is to enumerate all decision sequences (brute force) and then pick the best (optimal) solution.
• But the drawback with brute force/enumeration is excessive time/space requirements.
• Dynamic Programming (DP), based on enumeration, often tries to reduce the amount of enumeration by curtailing those decision sequences from which there is no possibility of getting an optimal solution. (That is how it may bring down the time complexity.)
• In Dynamic Programming, this set of optimal decisions is made by applying the Principle of Optimality (global optimality).
Topic : Dynamic Programming (DP)
• Principle of Optimality: states that whatever the initial state and decision are, the remaining sequence of decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

• The essential difference between the Greedy method and Dynamic Programming (DP) is that the Greedy method always generates only one decision sequence, whereas in DP enumeration many decision sequences can be generated.
Topic : Dynamic Programming (DP)
• Another important feature of DP is that optimal solutions of the subproblems are retained (cached/stored in a table) to avoid recomputing their values. (Invariably this feature also leads to a saving of time.)

DP implementation:
1. Memoization (Top-Down) - recursive
2. Tabulation (Bottom-Up) - iterative
Topic : The Elements/Properties of DP
(i) Splitting of the original problem into subproblems: be able to split the original problem into subproblems in a recursive manner (so that the subproblems can be further divided into sub-subproblems). This process of splitting should continue till the subproblems become small.

(ii) Subproblem Optimality (Optimal Substructure): an optimal solution to the problem must result from optimal solutions to the subproblems via a combine operation.

(iii) Overlapping Subproblems: many subproblems themselves contain common sub-subproblems. Therefore it is desirable to solve the small problems and store/cache their results (1. tabulate, 2. memoise), so that they can be reused in other subproblems.
Topic : Dynamic Programming (DP)
Examples
Fibonacci Number:

Fib series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
(each term is the sum of the previous two terms)

Recursive equation:
Fib(n) = Fib(n-1) + Fib(n-2),  n > 1
Fib(n) = 1,  n = 1    (terminating/base condition: small problem)
Fib(n) = 0,  n = 0    (terminating/base condition: small problem)
Topic : Dynamic Programming (DP)

Normal recursive implementation of Fib(n):

Algo Fib (int n)
{
    if (n ≤ 1) return (n);    // base condition
    else
    {
        return (Fib(n-1) + Fib(n-2));
    }
}
Time complexity - recurrence:

T(n) = c,                      n ≤ 1
T(n) = T(n-1) + T(n-2) + a,    n > 1

Approximation: T(n) ≈ 2T(n-1) + a
= 2[2T(n-2) + a] + a = 2^2 T(n-2) + 2a + a
= 2^3 T(n-3) + 2^2 a + 2^1 a + 2^0 a
...
General term: T(n) = 2^k T(n-k) + (2^k - 1)a

Put n - k = 1, i.e. k = n - 1:
T(n) = 2^(n-1) T(1) + (2^(n-1) - 1)a = O(2^n)
Normal recursive approach - tree of function calls for F(5): the subproblems F(3), F(2), F(1), F(0) appear repeatedly in the tree. There are only n + 1 unique subproblems, yet the tree contains many duplicate calls.
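The duplicate subproblems can be made visible by instrumenting the plain recursion with a call counter (a quick sketch, not from the notes; `calls` is my own helper):

```python
calls = {}

def fib(n):
    # Plain recursion, instrumented to count how often each
    # subproblem fib(n) is recomputed.
    calls[n] = calls.get(n, 0) + 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls)  # {5: 1, 4: 1, 3: 2, 2: 3, 1: 5, 0: 3} -- 15 calls in total
```

fib(1) alone is recomputed five times for n = 5; the duplication grows exponentially with n.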
Topic : Dynamic Programming (DP)

Top-down - Memoized implementation of Fib(n):

Algo memFib (n)
// m[0..n] initialised to "undefined"
{
    if (m[n] is undefined)
    {
        if (n ≤ 1) result = n;
        else
            result = memFib(n-1) + memFib(n-2);
        m[n] = result;    // memoizing (caching)
    }
    return (m[n]);
}
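The memoized pseudocode above can be rendered in Python roughly as follows (a sketch; the dict `m` plays the role of the table):

```python
def mem_fib(n, m=None):
    # Top-down memoization: m[n] caches each result, so every
    # subproblem is solved only once -- O(n) time, O(n) space.
    if m is None:
        m = {}
    if n not in m:                  # "m[n] is undefined"
        m[n] = n if n <= 1 else mem_fib(n - 1, m) + mem_fib(n - 2, m)
    return m[n]

print(mem_fib(5))   # 5
print(mem_fib(50))  # 12586269025 -- instant, unlike the plain recursion
```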
Walkthrough of memFib(5): each memFib(i) computes its value once, caches it in m[i], and every later call for the same i returns the cached value directly (no second set of recursive calls). The function-call tree for this approach therefore has only O(n) calls, so the time complexity is O(n).
Space complexity: recursion stack O(n) (stack depth) + auxiliary array m[0..n] O(n) ⇒ O(n) overall.
Topic : Dynamic Programming (DP)

Approach 2: Tabulation - Bottom-up approach of DP for Fib(n):

Algo tabFib (n)
{
    M[0] = 0;    // initialisation
    M[1] = 1;
    for i = 2 to n
    {
        M[i] = M[i-1] + M[i-2];
    }
    return (M[n]);
}
Walkthrough for Fib(5), bottom-up:
i = 2: M[2] = M[1] + M[0] = 1 + 0 = 1
i = 3: M[3] = M[2] + M[1] = 1 + 1 = 2
i = 4: M[4] = M[3] + M[2] = 2 + 1 = 3
i = 5: M[5] = M[4] + M[3] = 3 + 2 = 5

TC = O(n); SC = O(n)
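The bottom-up version translates directly to Python (a sketch mirroring the table M above):

```python
def tab_fib(n):
    # Bottom-up tabulation: fill M[0..n] from the base cases upward.
    if n <= 1:
        return n
    M = [0] * (n + 1)
    M[1] = 1
    for i in range(2, n + 1):
        M[i] = M[i - 1] + M[i - 2]
    return M[n]

print(tab_fib(5))  # 5
```

Since M[i] depends only on the previous two entries, the O(n) table could be reduced to two variables for O(1) space; the notes keep the full table for clarity.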
Topic : Dynamic Programming (DP)

Dynamic Programming vs Greedy Method vs Divide & Conquer:
• In all three methods the problem is divided into subproblems.
• Greedy Method: the solution to the problem is built up step-wise (incrementally) by applying local options only (local optimality).
• Divide & Conquer: break a problem into separate (independent) subproblems, solve each subproblem separately (i.e. independently), and combine the subproblem solutions to get the solution of the original problem.
• Dynamic Programming: break a problem into a series of overlapping subproblems and build up solutions of larger and larger subproblems.
Topic : Dynamic Programming (DP)
• Unlike Divide and Conquer, DP typically involves solving all subproblems rather than a small portion of the subproblems.
• DP tends to solve each subproblem only once, since the results of the subproblems are stored and used again later when required. This reduces the computation drastically (in most cases, the complexity).

Ex - Fib(n): brute force O(2^n); DP implementation O(n).

Important differences (D&C vs DP):
1. D&C: independent subproblems, solved separately.
2. DP: overlapping subproblems.
Example: Merge sort splits a problem into independent subproblems (Divide & Conquer), whereas the recursion tree of Fibonacci - F(5) calling F(4) and F(3) - contains overlapping subproblems (DP).

Applications of DP:
Single Source Shortest Paths (SSSP):
1. Dijkstra - greedy
2. Bellman-Ford - DP

Both are SSSP algos; the distinction between them, and which to use when, are important points for GATE.
1. Dijkstra's SSSP (greedy) algo always gives the optimal solution to the SSSP problem, provided the edges in the given graph are of non-negative weight.

2. If the graph has any negative-weight edge, then Dijkstra's SSSP algo may or may not give optimal solution path costs.
3. If the graph has one or more negative-weight edges but NOT a negative-weight cycle, then the Bellman-Ford SSSP algo (DP based) always gives the optimal solution to the SSSP problem.

4. If the graph has any negative-weight cycle that is reachable from the source, then no SSSP algo works (shortest paths are not defined).
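Points 3 and 4 can be sketched together: Bellman-Ford relaxes every edge |V|-1 times, and one extra pass detects a reachable negative-weight cycle. The edge lists and vertex numbering below are my own illustration:

```python
def bellman_ford(edges, n, src):
    # edges: list of (u, v, w); vertices are numbered 0 .. n-1.
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):              # |V|-1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative-weight
    # cycle reachable from src, so shortest paths are not defined.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

print(bellman_ford([(0, 1, 4), (0, 2, 5), (2, 1, -3)], 3, 0))  # [0, 2, 5]
print(bellman_ford([(0, 1, 1), (1, 2, -2), (2, 1, 0)], 3, 0))  # None
```

The first graph has a negative edge but no negative cycle, so distances settle; the second contains the negative cycle 1 → 2 → 1, so the extra pass still finds an improvement and the function reports failure.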
Example - negative-weight cycle: if a cycle between A and B has negative net weight, every additional traversal of the cycle lowers the total path cost further, so the minimum path weight is not defined. [Board example with edge weights between A and B.]
Example - graph with a negative-weight edge but NO negative-weight cycle: Dijkstra's SSSP does not guarantee the optimal solution, whereas Bellman-Ford gives the optimal solution. [Board example with vertices A, B, C.]
[Worked example on the board: applying the Dijkstra SSSP algo on this graph gives a suboptimal cost for some vertices (e.g. 7 where the actual minimum is 5).]

Important notes:
1. Dijkstra SSSP failed to give the optimal path cost to a few vertices because, once a vertex is selected in Dijkstra, it is considered relaxed and is NOT relaxed further.
2. In Bellman-Ford, relaxation is carried out w.r.t. the edges in multiple iterations.
Idea behind repeated relaxation: an edge relaxed in a later iteration can improve a distance computed earlier (e.g. d(A, B) improves from 20 to 16 once the path through C is relaxed); when an iteration produces no relaxation, the distances have settled. [Board example with vertices A, B, C, D.]
[Bellman-Ford walkthrough on the board: distances are relaxed over successive iterations until they settle.]

Important observation: Dijkstra's greedy SSSP algo failed to give the shortest path to a few vertices, but the Bellman-Ford SSSP algo gives shortest paths (the optimal solution) to all the vertices, even though negative-weight edges are present but no negative-weight cycle.
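The observation can be reproduced in code: on a graph with a negative edge but no negative cycle, standard Dijkstra (which never revisits a finalized vertex) returns a suboptimal cost while Bellman-Ford returns the optimum. The tiny graph is my own example:

```python
import heapq

def dijkstra(graph, src):
    # Standard Dijkstra: once a vertex is popped it is final and is
    # never relaxed again -- exactly why negative edges can break it.
    dist = {v: float('inf') for v in graph}
    dist[src] = 0
    done = set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u]:
            if v in done:
                continue                # finalized: Dijkstra won't revisit it
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def bellman_ford(graph, src):
    # Relax every edge |V|-1 times; optimal when there is
    # no negative-weight cycle.
    dist = {v: float('inf') for v in graph}
    dist[src] = 0
    edges = [(u, v, w) for u in graph for v, w in graph[u]]
    for _ in range(len(graph) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# S->A (4), S->B (5), B->A (-3): the true shortest path S->B->A costs 2.
g = {'S': [('A', 4), ('B', 5)], 'B': [('A', -3)], 'A': []}
print(dijkstra(g, 'S')['A'])      # 4 -- A was finalized too early
print(bellman_ford(g, 'S')['A'])  # 2 -- optimal
```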

THANK - YOU

Telegram Link for Aditya Jain sir:


https://t.me/AdityaSir_PW
