Dynamic Programming

Dynamic Programming (DP) is an algorithmic technique that solves complex problems by breaking them into overlapping subproblems and storing their results to avoid redundant calculations. It can be implemented using two approaches, Top-Down (Memoization) and Bottom-Up, and is characterized by two properties: Overlapping Subproblems and Optimal Substructure. These notes also discuss practical applications of DP, including calculating binomial coefficients, solving the 0-1 Knapsack Problem, shortest-path algorithms (Bellman-Ford, Floyd-Warshall, Johnson), and assembly line scheduling.

Dynamic Programming (DP)


• Dynamic programming (usually referred to as DP) is a very powerful technique
to solve a particular class of problems.
• It demands an elegant formulation of the approach; once the idea is clear, the
coding part is easy.
• The idea is very simple: if you have solved a problem for a given input, then
save the result for future reference, so as to avoid solving the same problem
again; in short, "Remember your past." :)
• If the given problem can be broken into smaller subproblems, and these
smaller subproblems can in turn be divided into still smaller ones, and in this
process you observe some overlapping subproblems, then that is a big hint
for DP.
• Also, the optimal solutions to the subproblems contribute to the optimal solution
of the given problem (referred to as the Optimal Substructure Property).
Motivating example: a weighted graph
(figure slides omitted: a naïve greedy approach is shown first, with the question
"Is there a better way?", followed by a bottom-up approach that calculates the
value for all remaining nodes)


DP
• There are two ways of doing this.
• 1.) Top-Down: Start solving the given problem by breaking it down. If you see
that a subproblem has been solved already, then just return the saved answer. If it
has not been solved, solve it and save the answer. This is usually easy to think of
and very intuitive. This is referred to as Memoization.
• 2.) Bottom-Up: Analyze the problem, determine the order in which the sub-
problems are solved, and start solving from the trivial subproblem, up towards the
given problem. In this process, it is guaranteed that the subproblems are solved
before the problems that depend on them. This is referred to as Dynamic Programming.
• Note that divide and conquer is a slightly different technique: there, we divide
the problem into non-overlapping subproblems and solve them independently,
as in merge sort and quicksort.

Why DP?
• Going bottom-up is a common strategy for dynamic
programming problems, which are problems where the solution is
composed of solutions to the same problem with smaller inputs (as
with the Fibonacci problem, below). The other common strategy for
dynamic programming problems is memoization.
Dynamic Programming vs. Recursion and
Divide & Conquer
• In a recursive program, a problem of size n is solved by first solving a
sub-problem of size n-1.
• In a divide & conquer program, you solve a problem of size n by first
solving a sub-problem of size k and another of size n-k, where 1 ≤ k < n.
• In dynamic programming, you solve a problem of size n by first solving
all sub-problems of all sizes k, where k < n.
Fibonacci: recursive approach, memoization (store), and bottom-up approach
(the original slides illustrate these three approaches with figures, omitted here;
a code sketch follows)
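As a concrete illustration, here is a minimal Python sketch of the three approaches
to computing Fibonacci numbers; the function names are ours, chosen for illustration.

from functools import lru_cache

def fib_naive(n):
    # Plain recursion: recomputes the same subproblems exponentially often.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down (memoization): each subproblem is solved once and cached.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    # Bottom-up: solve trivial subproblems first, building up to n.
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib_naive(10), fib_memo(10), fib_bottom_up(10))  # 55 55 55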
Two main properties of a problem
• Dynamic Programming is an algorithmic paradigm that solves a given
complex problem by breaking it into subproblems and storing the results of
those subproblems to avoid computing the same results again. The following are
the two main properties of a problem that suggest that the given problem can
be solved using Dynamic Programming:

1) Overlapping Subproblems

2) Optimal Substructure
Calculating Binomial Coefficient
• In mathematics, particularly in combinatorics, a binomial coefficient is a
coefficient of one of the terms in the expansion of (a + b)^n.
• It is denoted by C(n, k) or (n choose k), where 0 ≤ k ≤ n.

• Binomial coefficients are the coefficients of the binomial formula:

(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n

• C(n, k) is the number of combinations of k elements from an n-element set
(0 ≤ k ≤ n).
Calculating Binomial Coefficient
• Recurrence: C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0
  C(n, 0) = 1,
  C(n, n) = 1 for n ≥ 0 .................. (2)

• The value of C(n, k) can be computed by filling a table:

          0     1     2   . . .    k-1        k
    0     1
    1     1     1
    .
    .
    .
    n-1                        C(n-1,k-1)  C(n-1,k)
    n                                      C(n,k)

• The dynamic algorithm constructs an n x k table, with the first column and the
diagonal filled out using equation (2).
• Construct the table: the table is then filled out iteratively, row by row, using the
recurrence relation.
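A minimal bottom-up sketch of this table-filling in Python (the function name and
table layout are ours, following equation (2) and the recurrence above):

def binomial(n, k):
    # C[i][j] holds C(i, j); first column and diagonal are 1 by equation (2).
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                # Recurrence: C(i, j) = C(i-1, j-1) + C(i-1, j)
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(4, 2))  # 6, matching the worked example below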
Calculating Binomial Coefficient
• Compute C(4, 2).

C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0

C(n, 0) = 1,
C(n, n) = 1 for n ≥ 0

• Here n = 4 and k = 2.
C(4, 2) = C(4-1, 2-1) + C(4-1, 2)
C(4, 2) = C(3, 1) + C(3, 2) -------------------------- (1)
• As there are two unknowns, C(3, 1) and C(3, 2), in the above equation, we will
compute these subinstances of C(4, 2).
Calculating Binomial Coefficient
• Now let us compute C(3, 1) and C(3, 2).
C(3, 1) = C(2, 0) + C(2, 1)
As C(n, 0) = 1, we can write
C(2, 0) = 1
C(3, 1) = 1 + C(2, 1) ------------------------ (2)
• Hence let us compute C(2, 1), with
n = 2, k = 1:
C(2, 1) = C(2-1, 1-1) + C(2-1, 1)
        = C(1, 0) + C(1, 1) ----------------------- (3)
• But as C(n, 0) = 1 and C(n, n) = 1, we get
C(1, 0) = 1 and C(1, 1) = 1
Calculating Binomial Coefficient
• Substituting these values in equation (3) above, we get
C(2, 1) = C(1, 0) + C(1, 1)
        = 1 + 1
C(2, 1) = 2 ---------------------- (4)

• Putting this value in equation (2), we get

C(3, 1) = 1 + 2
C(3, 1) = 3 --------------------- (5)

• Now, to solve equation (1), we will first compute C(3, 2) with n = 3 and k = 2.
C(3, 2) = C(2, 1) + C(2, 2)
Calculating Binomial Coefficient
• As C(n, n) = C(2, 2) = 1, we put the value of C(2, 1) from equation (4) and
C(2, 2) into C(3, 2) to get
C(3, 2) = C(2, 1) + C(2, 2)
        = 2 + 1
C(3, 2) = 3 ------------------- (6)

• Putting equations (5) and (6) in equation (1), we get

C(4, 2) = C(3, 1) + C(3, 2)
        = 3 + 3
C(4, 2) = 6, which is the final answer.
How is the DP approach used?
• While computing C(n, k), smaller overlapping instances are generated by
C(n-1, k-1) and C(n-1, k).
• These overlapping, smaller instances of the problem need to be solved first. The
solutions obtained by solving these instances ultimately generate
the final solution.
• Thus DP is used for computing binomial coefficients.
• If we record the binomial coefficients for n and k values ranging from 0 to n and
0 to k, the table looks like Pascal's triangle.
• To compute C(n, k) we fill up the table row by row, starting each row with
C(i, 0) = 1 and ending on the diagonal with C(i, i) = 1.
• Each cell in the current row is calculated from two adjacent cells of the previous
row.
Calculating Binomial Coefficient: Time Complexity
• In calculating the binomial coefficient, the basic operation is addition, i.e.,
C[i, j] = C[i-1, j-1] + C[i-1, j]
• Let A(n, k) denote the total additions made in computing C(n, k). Rows 1 to k
have i-1 interior cells each, and rows k+1 to n have k interior cells each:

A(n, k) = Σ_{i=1}^{k} Σ_{j=1}^{i-1} 1 + Σ_{i=k+1}^{n} Σ_{j=1}^{k} 1

        = Σ_{i=1}^{k} (i - 1) + Σ_{i=k+1}^{n} k

        = [ 1 + 2 + 3 + ... + (k-1) ] + k ( n - (k+1) + 1 )

        = k(k-1)/2 + k(n - k)

        = k^2/2 - k/2 + nk - k^2

A(n, k) = Θ(nk)

• Hence the time complexity of computing the binomial coefficient is Θ(nk).
Thief Story
• The problem is often given as a story:
• A thief breaks into a house.
• Around the thief are various objects: a diamond ring, a silver candle lamp, a
radio, and a large portrait of Elvis painted on a black velvet background (a
"velvet-elvis").
• The thief has a knapsack that can only hold a certain capacity.
• Each of the items has a value and a size, and the knapsack cannot hold all of
the items.


Knapsack Problem

Item               Size   Value
1 - ring            1      15
2 - candle lamp     5      10
3 - radio           3       9
4 - elvis           4       5


• The problem is: which items should the thief take?
• If the knapsack were large enough, the thief could take all of the items and run.
• But that is not the case (the problem states that the knapsack cannot hold all of
the items).
• There are three types of "thieves" that we shall consider:
• a greedy thief,
• a foolish and slow thief,
• a wise thief.
• Each of these thieves has a knapsack that can hold a total size of 8.


Greedy Thief
• The greedy thief breaks in through the window and sees the items.
• He makes a mental list of the items available, and grabs the most
expensive item first.
• The ring goes in first, leaving a capacity of 7 and a value of 15.
• Next, he grabs the candle lamp, leaving a remaining capacity of 2 and a value of
25. No other items will fit in his knapsack, so he leaves with a total value of 25.


Foolish and Slow Thief
• The foolish and slow thief climbs in the window and sees the items.
• This thief is a slow programmer without knowledge of design techniques.
• Possessing a solid background in Boolean logic, he figures that he can simply
compute all combinations of the objects and choose the best.
• So, he starts going through the binary combinations of objects - all 2^4 of them.
• While he is still drawing the truth table, the police show up and arrest him.
• Although his solution would certainly have given him the best answer, it just
took too long to compute.


Wise Thief
• The wise thief appears and observes the items.
• He notes that an empty knapsack has a value of 0.
• Further, he notes that a knapsack can either contain each item, or not.
• His decision to include an item is based on a quick comparison: either the
knapsack with some combination of the previous items is worth more, or the
knapsack that leaves room for the current item, plus the item itself, is worth more.
• So, he does this quick computation and figures out that the best knapsack he
can take is made up of items 1, 3, and 4, for a total value of 29.


Dynamic Programming
• The wise thief used a technique that is known as "dynamic programming."
• In this case, a table was made to track "the best knapsack so far."
• The complete table shown in subsequent examples is for demonstration
purposes.
• In the given example, there is a column that indicates a range of values from 0
up to the knapsack capacity (8).
• This corresponds to the "target weight" of the knapsack.
• The table stops at the maximum capacity of the knapsack. There are
then n+1 columns: one for the empty set, and one for each item that can be
selected.


Dynamic Programming
• The first column is initialized to zero. Logically, this corresponds to a knapsack
with zero items having zero worth. The first row is also initialized to zero,
corresponding to a knapsack of zero capacity.

• Finally, the values of the table are filled in, from left to right and from top to
bottom. For each cell, the total worth of the knapsack is determined as either the
worth of the knapsack without the current item (the value directly to the left of
the current cell), or the value of the knapsack with the current item added into
it, whichever is larger.


Solution:
0-1 Knapsack Problem

Ex:

Obj    : 1, 2, 3, 4
Profit : 1, 4, 5, 7
Weight : 1, 3, 4, 5
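Since no capacity is stated for this exercise, here is a minimal bottom-up sketch in
Python run instead on the thief's instance above (sizes 1, 5, 3, 4; values 15, 10, 9, 5;
capacity 8); the function name and table layout are ours:

def knapsack_01(sizes, values, capacity):
    n = len(sizes)
    # best[i][c] = best value using the first i items with capacity c.
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            # Option 1: skip item i (the "best knapsack so far" without it).
            best[i][c] = best[i - 1][c]
            # Option 2: take item i, if it fits.
            if sizes[i - 1] <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - sizes[i - 1]] + values[i - 1])
    return best[n][capacity]

# Thief's items: ring, candle lamp, radio, elvis.
print(knapsack_01([1, 5, 3, 4], [15, 10, 9, 5], 8))  # 29 (items 1, 3, 4)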


Negative Weighted Single-Source
Shortest Path Algorithm
(Bellman-Ford Algorithm)

The Bellman-Ford algorithm

Returns a boolean:
• TRUE if and only if there is no negative-weight cycle reachable
from the source: a simple cycle <v0, v1, ..., vk>, where v0 = vk
and
Σ_{i=1}^{k} w(v_{i-1}, v_i) < 0

• FALSE otherwise

If it returns TRUE, it also produces the shortest paths
Negative-Weight Edges
• Negative-weight edges may form negative-weight cycles.
• If such cycles are reachable from the source, then δ(s, v) is not properly
defined!
• Keep going around the cycle, and we get w(s, v) = -∞ for all v on the cycle.
(weighted-graph figure omitted)
Negative-Weight Edges
• s → a: only one path, so
δ(s, a) = w(s, a) = 3
• s → b: only one path, so
δ(s, b) = w(s, a) + w(a, b) = 3 + (-4) = -1
• s → c: infinitely many paths:
<s, c>, <s, c, d, c>, <s, c, d, c, d, c>, ...
The cycle <c, d, c> has positive weight (6 - 3 = 3), so
<s, c> is the shortest path, with weight δ(s, c) = w(s, c) = 5
(figure omitted)
Negative-Weight Edges
• s → e: infinitely many paths:
<s, e>, <s, e, f, e>, <s, e, f, e, f, e>, ...
• The cycle <e, f, e> has negative weight: 3 + (-6) = -3,
so we can find paths from s to e with arbitrarily large
negative weights.
• δ(s, e) = -∞, so no shortest path exists between s and e.
• Similarly: δ(s, f) = -∞ and δ(s, g) = -∞.
• The vertices h, i, j are not reachable from s, so
δ(s, h) = δ(s, i) = δ(s, j) = ∞.
(figure omitted)
Cycles
• Can shortest paths contain cycles?
• Negative-weight cycles: No!
  • The shortest path is not well defined.
• Positive-weight cycles: No!
  • By removing the cycle, we can get a shorter path.
• Zero-weight cycles:
  • There is no reason to use them; removing the cycle gives a path of the same
    weight, so we may assume shortest paths are cycle-free.
Single Source Shortest-Path:
The General Case (with negative edges)
• Bellman-Ford algorithm:
• Iteratively relax all edges |V| - 1 times.
• If there are no negative cycles, then:
  1) d(v) = δ(v)
  2) the triangle inequality holds: d(v) ≤ d(u) + w(u, v) for every edge (u, v)
• Running time?
• O(VE).
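A minimal Python sketch of this, assuming the graph is given as a list of (u, v, w)
edge triples (our representation); it returns False exactly when a further relaxation
is still possible, i.e., when a negative-weight cycle is reachable:

def bellman_ford(vertices, edges, source):
    # dist[v] starts at infinity, except for the source.
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    # Relax every edge |V| - 1 times.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return False, dist
    return True, dist

ok, dist = bellman_ford(["s", "a", "b"],
                        [("s", "a", 3), ("a", "b", -4), ("s", "b", 5)], "s")
print(ok, dist)  # True {'s': 0, 'a': 3, 'b': -1}, matching delta(s, b) = -1 above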
All-Pairs Shortest Paths
• Given a directed graph G = (V, E), weight function w : E → R, |V| = n.
• Assume no negative-weight cycles.
• Goal: create an n × n matrix of shortest-path distances δ(u, v).
• Could run BELLMAN-FORD once from each vertex:
• O(V^2 E), which is O(V^4) if the graph is dense (E = Θ(V^2)).
• If there are no negative-weight edges, could run Dijkstra's algorithm once from
each vertex:
• O(VE lg V) with a binary heap, which is O(V^3 lg V) if dense.
• We'll see how to do it in O(V^3) in all cases, with no fancy data structures.
All-Pairs Shortest Paths: Floyd-Warshall
Algorithm
• A dynamic programming approach.
• It uses the optimal substructure of shortest paths: any subpath of a
shortest path is a shortest path.
• Create a 3-dimensional table:
• Let d_ij^(k) be the shortest-path weight of any path from i to j where all
intermediate vertices are drawn from the set {1, 2, ..., k}.
• Ultimately, we would like to know the values of d_ij^(n).
Computing d_ij^(k)

• Base condition: d_ij^(0) = ?
• d_ij^(0) = w_ij.
• For k > 0:
• Let p = <v_i, . . . , v_j> be a shortest path from vertex i to vertex j with all
intermediate vertices in {1, 2, ..., k}.
• If k is not an intermediate vertex of p, then all intermediate vertices are in
{1, 2, ..., k-1}.
• If k is an intermediate vertex, then p is composed of 2 shortest subpaths whose
intermediate vertices are drawn from {1, 2, ..., k-1}.

Recursive Formulation for d_ij^(k)

d_ij^(k) = w_ij                                         if k = 0
d_ij^(k) = min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) )   if k ≥ 1

Algorithm
(pseudocode figure omitted; see the sketch below)
• Running time = ?
• O(n^3).
• Memory required = ?
• O(n^2) (if we drop the superscripts and keep a single 2-D table).
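In place of the omitted pseudocode, here is a minimal Python sketch of the
O(n^2)-memory version, assuming an adjacency matrix with float('inf') for missing
edges (our representation):

def floyd_warshall(w):
    # w is an n x n matrix; w[i][j] is the edge weight, inf if no edge.
    n = len(w)
    d = [row[:] for row in w]          # d starts as d^(0) = w
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # Either avoid k, or go i -> k -> j.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
w = [[0, 3, INF],
     [INF, 0, -4],
     [2, INF, 0]]
print(floyd_warshall(w))  # [[0, 3, -1], [-2, 0, -4], [2, 5, 0]]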
Example
(worked-example figures for Steps 1 through 5 omitted; the table d is updated
once for each intermediate vertex k)
All-Pairs Shortest Paths: Johnson's Algorithm
• Idea: If the graph is sparse (|E| << |V|^2), it pays to run Dijkstra's algorithm
once from each vertex.
• O(VE log V) using a binary heap, O(V^2 log V + VE) using a Fibonacci heap.
• But Dijkstra's algorithm does not handle negative edges.
• Johnson's Algorithm: reweight the edges to form an equivalent graph with
non-negative edges.
• Floyd-Warshall still has advantages:
• very simple implementation
• no fancy data structures
• small constants.
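For reference, the standard reweighting (not spelled out on the slide) assigns each
vertex a potential h(v), computed by one Bellman-Ford run from an added source,
and replaces each edge weight by

w'(u, v) = w(u, v) + h(u) - h(v) ≥ 0

This preserves shortest paths, because every path from u to v changes by the same
amount, h(u) - h(v).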
Video references:
• https://www.youtube.com/watch?v=KQ9zlKZ5Rzc&t=27s
• https://www.youtube.com/watch?v=B06q2yjr-Cc


Assembly Line Scheduling
• A manufacturing problem: find the fastest way through a factory.
• The problem of assembly line scheduling can be described as follows:
• In an automobile factory, the automobiles are produced using assembly lines.
• The chassis (the base frame of a car or other wheeled vehicle) enters an
assembly line, and along this line there are various stations at which parts
are added.
• A finished auto then exits at the end of the line.
• The problem is to determine which stations to choose from line 1 and which
to choose from line 2 in order to minimize the total time through the factory
for one auto.
• The structure of the assembly lines along with the stations is as shown below.
Assembly Line Scheduling
• Find: the sequence of stations from lines 1 and 2 for which the
assembly line time is minimal.

• Brute force: enumerate all possible station subsets. Unfortunately,
there are 2^n subsets of a set of n elements, so this approach has a
prohibitive O(2^n) running time.
Assembly Line Scheduling
(assembly-line structure figure omitted; the notation is defined below)
Assembly Line Scheduling
Where,
ei  = entry time of the chassis onto line i
aij = assembly time at station j of line i
tij = time required to change the assembly line away from line i after station j
xi  = exit time of the chassis from line i

• There is no cost required to stay on the same line, but if it is required to change
the assembly line, then the time required is tij.
• Using the dynamic programming approach, we have to determine which stations
to choose from line 1 and from line 2 so that the auto gets produced in the
minimum amount of time.
• This problem can be solved by applying the steps of dynamic programming.
Assembly Line Scheduling
• Step 1: Characterize the structure of an optimal solution.
The goal of this problem is to compute the fastest assembly time. Hence we
need to know the fastest time from entry through S1,n and through S2,n for
assembly line 1 and assembly line 2 respectively.
Then we have to consider the two exit points X1 and X2.

• Step 2: Recursively define the value of an optimal solution.

In this step we compute the fastest possible time to get through station Sij. This
fastest possible time is denoted by fi[j], where i is either 1 or 2 and 1 ≤ j ≤ n.
Assembly Line Scheduling
• The fastest possible times can be obtained using the formulas:
f1[j] = min { f1[j-1] + a1j , f2[j-1] + t2,(j-1) + a1j } , j ≥ 2
f2[j] = min { f1[j-1] + t1,(j-1) + a2j , f2[j-1] + a2j } , j ≥ 2
f1[1] = e1 + a11
f2[1] = e2 + a21

• Let f* be the fastest time for the entire assembly. It can be computed as:

f* = min ( f1[n] + x1 , f2[n] + x2 )

• Step 3: Compute the fastest time for assembly using the above equations.
Assembly Line Scheduling Example
The example instance (reconstructed from the figure; the values agree with the
computations worked out below) is:

         j=1   j=2   j=3   j=4   j=5   j=6
a1j       2     8     9     3     4     1
a2j       6    11     2     2     7     3
t1j       3     1     2     1     3     -
t2j       3     4     1     1     3     -

e1 = 4, e2 = 2 (entry times);  x1 = 3, x2 = 7 (exit times)
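Before walking through the computation by hand, here is a minimal Python sketch
of Steps 2 and 3 on this instance (our function name; indices are 0-based in code):

def fastest_way(a, t, e, x):
    # a[i][j]: assembly time at station j of line i (0-based);
    # t[i][j]: transfer time away from line i after station j; e, x: entry/exit.
    n = len(a[0])
    f1, f2 = [e[0] + a[0][0]], [e[1] + a[1][0]]
    for j in range(1, n):
        f1.append(min(f1[j-1] + a[0][j], f2[j-1] + t[1][j-1] + a[0][j]))
        f2.append(min(f2[j-1] + a[1][j], f1[j-1] + t[0][j-1] + a[1][j]))
    return f1, f2, min(f1[-1] + x[0], f2[-1] + x[1])

a = [[2, 8, 9, 3, 4, 1], [6, 11, 2, 2, 7, 3]]
t = [[3, 1, 2, 1, 3], [3, 4, 1, 1, 3]]
f1, f2, fstar = fastest_way(a, t, e=[4, 2], x=[3, 7])
print(f1)     # [6, 14, 23, 21, 24, 25]
print(f2)     # [8, 19, 17, 19, 26, 29]
print(fstar)  # 28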
Assembly Line Scheduling Example
Using the formulas defined in Step 2, we can compute the fastest times for assembly.
• Step 1:
f1[1] = e1 + a11 = 4 + 2 = 6
f2[1] = e2 + a21 = 2 + 6 = 8

f1[j] = min { f1[j-1] + a1j , f2[j-1] + t2,(j-1) + a1j }
f1[2] = min { f1[2-1] + a12 , f2[2-1] + t2,(2-1) + a12 }
      = min { f1[1] + a12 , f2[1] + t21 + a12 }
      = min { 6 + 8 , 8 + 3 + 8 }
      = min { 14 , 19 }
f1[2] = 14
Assembly Line Scheduling Example
f2[j] = min { f1[j-1] + t1,(j-1) + a2j , f2[j-1] + a2j }
f2[2] = min { f1[2-1] + t1,(2-1) + a22 , f2[2-1] + a22 }
      = min { f1[1] + t11 + a22 , f2[1] + a22 }
      = min { 6 + 3 + 11 , 8 + 11 }
      = min { 20 , 19 }
f2[2] = 19

Hence we fill out the table entries as shown below:

         j=1   j=2   j=3   j=4   j=5   j=6
f1[j]     6    14
f2[j]     8    19
Assembly Line Scheduling Example
f1[j] = min { f1[j-1] + a1j , f2[j-1] + t2,(j-1) + a1j }
f1[3] = min { f1[3-1] + a13 , f2[3-1] + t2,(3-1) + a13 }
      = min { f1[2] + a13 , f2[2] + t22 + a13 }
      = min { 14 + 9 , 19 + 4 + 9 }
      = min { 23 , 32 }
f1[3] = 23

f2[j] = min { f1[j-1] + t1,(j-1) + a2j , f2[j-1] + a2j }
f2[3] = min { f1[3-1] + t1,(3-1) + a23 , f2[3-1] + a23 }
      = min { f1[2] + t12 + a23 , f2[2] + a23 }
      = min { 14 + 1 + 2 , 19 + 2 }
      = min { 17 , 21 }
f2[3] = 17
Assembly Line Scheduling Example
• Similarly, after computing all the values, the final table looks like this:

         j=1   j=2   j=3   j=4   j=5   j=6
f1[j]     6    14    23    21    24    25
f2[j]     8    19    17    19    26    29

• Now we can compute f* as:

f* = min ( f1[n] + x1 , f2[n] + x2 )
   = min ( f1[6] + x1 , f2[6] + x2 )
   = min ( 25 + 3 , 29 + 7 )
   = min ( 28 , 36 )
f* = 28
Assembly Line Scheduling Example
Step 4: Compute the fastest path from the computed information.
• For each i = 1 or i = 2, and for each j varying from 2 to n, we record li[j].
• li[j] is either 1 or 2, depending on which line's previous entry the minimum in
fi[j] came from.

For example, in this case
l1[2] = 1
because while computing f1[2] we obtained the minimum value from the f1[1] term
and not from the f2[1] term:
f1[2] = min { f1[2-1] + a12 , f2[2-1] + t2,(2-1) + a12 }
      = min { f1[1] + a12 , f2[1] + t21 + a12 }
      = min { 6 + 8 , 8 + 3 + 8 }
      = min { 14 , 19 }
f1[2] = 14
Assembly Line Scheduling Example
• Similarly, compute all the li[j] values for the given data:
l1[2] = 1, l2[2] = 2
l1[3] = 1, l2[3] = 1
l1[4] = 2, l2[4] = 2
l1[5] = 2, l2[5] = 2
l1[6] = 1, l2[6] = 2

l value table:
         j=2   j=3   j=4   j=5   j=6
l1[j]     1     1     2     2     1
l2[j]     2     1     2     2     2

As we get the f* value from f1[n] + x1, l* = 1,
i.e., the f* value is derived from f1[n]. Hence l* = 1.
Assembly Line Scheduling Example
• Now, to obtain the path which gives the fastest time in producing the auto, we
use the li[j] values.
• Let i = l* = 1, so we start by printing the message "line 1 station 6".
• Then:
For ( j = n ; j ≥ 2 ; j-- )
{
    i = li[j]
    print "line i station j - 1"
}
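In Python, this reconstruction step might look like the following sketch (the
function name is ours; l1 and l2 are taken from the table above, with unused
leading entries padded):

def print_stations(l1, l2, l_star, n):
    # l1[j], l2[j] (for j = 2..n) record which line the minimum came from.
    i = l_star
    print("line", i, "station", n)
    for j in range(n, 1, -1):           # j = n down to 2
        i = l1[j] if i == 1 else l2[j]
        print("line", i, "station", j - 1)

l1 = [None, None, 1, 1, 2, 2, 1]
l2 = [None, None, 2, 1, 2, 2, 2]
print_stations(l1, l2, l_star=1, n=6)
# Prints line 1 station 6 down to line 1 station 1, matching the trace below.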
Assembly Line Scheduling Example
• The first message printed is "line 1 station 6".
• For j = 6: i = l1[6] = 1. Hence the next message is "line 1 station 5".
• For j = 5: i = l1[5] = 2. Hence the next message is "line 2 station 4".
• For j = 4: i = l2[4] = 2. Hence the next message is "line 2 station 3".
• For j = 3: i = l2[3] = 1. Hence the next message is "line 1 station 2".
• For j = 2: i = l1[2] = 1. Hence the last message is "line 1 station 1".
• Thus the optimal path in assembly line scheduling is:
Line 1 station 6
Line 1 station 5
Line 2 station 4
Line 2 station 3
Line 1 station 2
Line 1 station 1
Optimal Path in Assembly Line Scheduling
(figure omitted: the chassis enters line 1, passes through line 1 stations 1 and 2,
transfers to line 2 for stations 3 and 4, transfers back to line 1 for stations 5 and 6,
and exits from line 1)
Assembly Line Scheduling Algorithm
Algorithm Fastest_time_computation ( a[][] , t[][] , e[] , x[] , n )
{
// Problem Description: This algorithm computes the fi[j] and li[j] values.
f1[1] <- e[1] + a[1][1]
f2[1] <- e[2] + a[2][1]
For ( j <- 2 to n ) do
{
    if ( f1[j-1] + a[1][j] ≤ f2[j-1] + t[2][j-1] + a[1][j] ) then
    {
        f1[j] <- f1[j-1] + a[1][j]
        l1[j] <- 1
    }
    else
    {
        f1[j] <- f2[j-1] + t[2][j-1] + a[1][j]
        l1[j] <- 2
    }
    if ( f2[j-1] + a[2][j] ≤ f1[j-1] + t[1][j-1] + a[2][j] ) then
    {
        f2[j] <- f2[j-1] + a[2][j]
        l2[j] <- 2
    }
    else
    {
        f2[j] <- f1[j-1] + t[1][j-1] + a[2][j]
        l2[j] <- 1
    }
} // end of for loop
If ( f1[n] + x[1] ≤ f2[n] + x[2] ) then
{
    f_star <- f1[n] + x[1]
    l_star <- 1
}
Else
{
    f_star <- f2[n] + x[2]
    l_star <- 2
}
} // end of the algorithm

Analysis:
• The basic operation in assembly line scheduling is the computation of the fi[j]
and li[j] values for all the stations.
• Since each of the n stations is processed once with constant work, the total
running time complexity is Θ(n).