UNIT-3

Greedy and Dynamic Programming


Greedy Programming
• Huffman coding using greedy approach
• Comparison of brute force and Huffman method of encoding
• Knapsack problem using greedy approach
• Tree traversals
• Minimum spanning tree - greedy
• Kruskal's algorithm - greedy
• Minimum spanning tree - Prim's algorithm
Optimization Problems
Problem with an objective function to either:
• Maximize some profit
• Minimize some cost
Optimization problems appear in many applications, for example:
• Maximize the number of jobs using a resource [Activity-Selection Problem]
• Encode the data in a file to minimize its size [Huffman Encoding Problem]
• Collect the maximum value of goods that fit in a given bucket [Knapsack Problem]
• Select the smallest-weight of edges to connect all nodes in a graph
[Minimum Spanning Tree]
Solving Optimization Problems
Two techniques for solving optimization problems:
• Greedy Algorithms (“Greedy Strategy”)
• Dynamic Programming
Space of optimization problems
• Greedy algorithms can solve some problems optimally
• Dynamic programming can solve more problems optimally (superset)
• We still care about Greedy Algorithms because for some problems:
• Dynamic programming is overkill (slow)
• Greedy algorithm is simpler and more efficient
Greedy Algorithms Main Concept
• Divide the problem into multiple steps (sub-problems)
• For each step take the best choice at the current moment (Local
optimal) (Greedy choice)
• A greedy algorithm always makes the choice that looks best at the
moment
• The hope: A locally optimal choice will lead to a globally optimal
solution
• For some problems, it works. For others, it does not
Greedy Algorithms Main Concept
• Constructs a solution to an optimization problem piece by piece through a sequence of choices that are:
1. feasible, i.e. satisfying the constraints
2. locally optimal (with respect to some neighborhood definition)
3. greedy (in terms of some measure)
Fractional Knapsack Problem
• Given n objects, each with a weight wi and a value vi, and a knapsack of total capacity W, the problem is to pack the knapsack with these objects so as to maximize the total value of the objects packed, without exceeding the knapsack's capacity.
• I.e., we can take a fraction xi (0 ≤ xi ≤ 1) of each item i such that

Σ xi · wi ≤ W

• The total benefit of the items taken is determined by the objective function

maximize Σ xi · vi
Fractional Knapsack Algorithm
Algorithm: Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)
// assumes the items are sorted by value-per-weight ratio: p[1]/w[1] ≥ p[2]/w[2] ≥ … ≥ p[n]/w[n]
for i = 1 to n
    do x[i] = 0
weight = 0
for i = 1 to n
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
        break
return x
Fractional Knapsack Example
• Let us consider that the capacity of the knapsack is W = 60 and that the list of provided items is shown in the following table.
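Since the item table itself is not reproduced above, the following sketch uses assumed weights and values purely for illustration; it implements the greedy algorithm given earlier, sorting the items by value-per-weight ratio before filling the knapsack.

import java.util.Arrays;
import java.util.Comparator;

public class FractionalKnapsack {
    // Returns the maximum total value obtainable with capacity W,
    // taking a fraction of at most one item.
    static double maxValue(double[] w, double[] p, double W) {
        Integer[] idx = new Integer[w.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Greedy choice: consider items in decreasing value-per-weight order
        Arrays.sort(idx, Comparator.comparingDouble(i -> -p[i] / w[i]));
        double weight = 0, value = 0;
        for (int i : idx) {
            if (weight + w[i] <= W) {              // the whole item fits
                weight += w[i];
                value += p[i];
            } else {                               // take the fraction that fills the sack
                value += p[i] * (W - weight) / w[i];
                break;
            }
        }
        return value;
    }

    public static void main(String[] args) {
        // Assumed data: capacity W = 60, items with (weight, value) below
        double[] w = {40, 10, 20, 24};
        double[] p = {280, 100, 120, 120};
        System.out.println(maxValue(w, p, 60));    // prints 440.0
    }
}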
Huffman Coding:
Greedy Approach
Encoding and Compression of Data
• Fax Machines
• ASCII
• Variations on ASCII
• min number of bits needed
• cost of savings
• patterns
• modifications
Purpose of Huffman Coding
• Proposed by Dr. David A. Huffman in 1952
• “A Method for the Construction of Minimum
Redundancy Codes”
• Applicable to many forms of data transmission
• Our example: text files
The Basic Algorithm
• Huffman coding is a form of statistical coding
• Not all characters occur with the same frequency!
• Yet all characters are allocated the same amount of
space
• 1 char = 1 byte, be it e or x
The Basic Algorithm
• Any savings in tailoring codes to frequency of
character?
• Code word lengths are no longer fixed like ASCII.
• Code word lengths vary and will be shorter for the
more frequently used characters.
The (Real) Basic Algorithm
1. Scan text to be compressed and tally
occurrence of all characters.
2. Sort or prioritize characters based on number of
occurrences in text.
3. Build Huffman code tree based on
prioritized list.
4. Perform a traversal of tree to determine all code words.
5. Scan text again and create new file using the
Huffman codes.
Building a Tree
Scan the original text

• Consider the following short text:

Eerie eyes seen near lake.

• Count up the occurrences of all characters in the text


Building a Tree
Scan the original text
Eerie eyes seen near lake.
• What characters are present?

E  e  r  i  space  y  s  n  a  l  k  .
Building a Tree
Scan the original text
Eerie eyes seen near lake.
• What is the frequency of each character in the text?

Char   Freq.
E      1
e      8
r      2
i      1
space  4
y      1
s      2
n      2
a      2
l      1
k      1
.      1
Building a Tree
Prioritize characters
• Create binary tree nodes with character and
frequency of each character
• Place nodes in a priority queue
• The lower the occurrence, the higher the priority in the
queue
Building a Tree
Prioritize characters
• Uses binary tree nodes

public class HuffNode
{
    public char myChar;              // the character this leaf represents
    public int myFrequency;          // its count in the text
    public HuffNode myLeft, myRight;
}

PriorityQueue<HuffNode> myQueue;     // min heap, ordered by myFrequency
Building a Tree
• The queue after inserting all nodes

E i y l k . r s n a sp e
1 1 1 1 1 1 2 2 2 2 4 8

• Null Pointers are not shown


Building a Tree
• While priority queue contains two or more nodes
• Create new node
• Dequeue node and make it left subtree
• Dequeue next node and make it right subtree
• Frequency of new node equals sum of frequency of left and right
children
• Enqueue new node back into queue
Building a Tree
(The original slides show the queue and the partial trees after every merge step; the merge sequence they depict is summarized below.)

• E(1) + i(1) → 2
• y(1) + l(1) → 2
• k(1) + .(1) → 2
• r(2) + s(2) → 4
• n(2) + a(2) → 4
• (E i)(2) + (y l)(2) → 4
• (k .)(2) + sp(4) → 6
• (r s)(4) + (n a)(4) → 8
• (E i y l)(4) + (k . sp)(6) → 10
• e(8) + (r s n a)(8) → 16
• 10 + 16 → 26

What is happening to the characters with a low number of occurrences? They sink toward the bottom of the tree, so they will receive the longest code words.

After enqueueing the final node there is only one node left in the priority queue. Dequeue the single node left in the queue: this tree contains the new code words for each character. The frequency of the root node should equal the number of characters in the text:

Eerie eyes seen near lake.  →  26 characters


Encoding the File
Traverse Tree for Codes

• Perform a traversal of the tree to obtain the new code words
• Going left is a 0, going right is a 1
• A code word is only completed when a leaf node is reached
Encoding the File
Traverse Tree for Codes

Char    Code
E       0000
i       0001
y       0010
l       0011
k       0100
.       0101
space   011
e       10
r       1100
s       1101
n       1110
a       1111
Encoding the File

• Rescan the text and encode the file using the new code words:

Eerie eyes seen near lake.

000010110000011001110001010110101111011010111001111101011111100011001111110100100101

• Why is there no need for a separator character? Because the code is prefix-free: no code word is a prefix of any other, so each code word ends unambiguously at a leaf.
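Decoding shows why no separator is needed: a minimal sketch, assuming the HuffNode tree built above, where internal nodes have both children set and leaves have none.

// Walk the tree from the root: 0 = go left, 1 = go right;
// emit a character at each leaf, then restart at the root.
static String decode(HuffNode root, String bits) {
    StringBuilder out = new StringBuilder();
    HuffNode node = root;
    for (char b : bits.toCharArray()) {
        node = (b == '0') ? node.myLeft : node.myRight;
        if (node.myLeft == null && node.myRight == null) {  // leaf: code word complete
            out.append(node.myChar);
            node = root;
        }
    }
    return out.toString();
}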
Encoding the File
Results

• Have we made things any better?
• 84 bits to encode the text
• ASCII would take 8 * 26 = 208 bits
• If a modified fixed-length code of 4 bits per character were used instead, the total would be 4 * 26 = 104 bits; the savings over that are not as great, but the Huffman code is still the shortest.
Algorithm for creating the Huffman Tree-

• Step 1- Create a leaf node for each character and build a min heap using all the nodes (The
frequency value is used to compare two nodes in min heap)

• Step 2- Repeat Steps 3 to 5 while heap has more than one node

• Step 3- Extract two nodes, say x and y, with minimum frequency from the heap

• Step 4- Create a new internal node z with x as its left child and y as its right child. Also
frequency(z)= frequency(x)+frequency(y)

• Step 5- Add z to min heap

• Step 6- Last node in the heap is the root of Huffman tree
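A compact sketch of Steps 1-6, reusing the HuffNode class from earlier with Java's PriorityQueue as the min heap (taking a character-frequency map as input is an assumption made for illustration).

import java.util.Map;
import java.util.PriorityQueue;

static HuffNode buildHuffmanTree(Map<Character, Integer> freq) {
    // Step 1: one leaf per character, kept in a min heap ordered by frequency
    PriorityQueue<HuffNode> heap =
        new PriorityQueue<>((a, b) -> a.myFrequency - b.myFrequency);
    for (Map.Entry<Character, Integer> e : freq.entrySet()) {
        HuffNode leaf = new HuffNode();
        leaf.myChar = e.getKey();
        leaf.myFrequency = e.getValue();
        heap.add(leaf);
    }
    // Step 2: repeat while the heap has more than one node
    while (heap.size() > 1) {
        HuffNode x = heap.poll();        // Step 3: extract the two minimum nodes
        HuffNode y = heap.poll();
        HuffNode z = new HuffNode();     // Step 4: new internal node
        z.myLeft = x;
        z.myRight = y;
        z.myFrequency = x.myFrequency + y.myFrequency;
        heap.add(z);                     // Step 5: add z back to the heap
    }
    return heap.poll();                  // Step 6: the last node is the root
}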


Tree Traversal
• The process of visiting each node in a tree
• 4 different standard traversals:
  • preorder
  • inorder
  • postorder
  • level order (also called breadth-first)
Exercise Answers

• Preorder traversal sequence:


F, B, A, D, C, E, G, I, H (root, left, right)
• Inorder traversal sequence:
A, B, C, D, E, F, G, H, I (left, root, right)
• Postorder traversal sequence:
A, C, E, D, B, H, I, G, F (left, right, root)
• Level-order traversal sequence:
F, B, G, A, D, I, C, E, H
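Sketches of the four traversals for the HuffNode binary-tree type used earlier; visit stands in for whatever per-node processing is wanted.

import java.util.ArrayDeque;
import java.util.Queue;

static void visit(HuffNode n) { System.out.print(n.myChar + " "); }

static void preorder(HuffNode n) {          // root, left, right
    if (n == null) return;
    visit(n); preorder(n.myLeft); preorder(n.myRight);
}

static void inorder(HuffNode n) {           // left, root, right
    if (n == null) return;
    inorder(n.myLeft); visit(n); inorder(n.myRight);
}

static void postorder(HuffNode n) {         // left, right, root
    if (n == null) return;
    postorder(n.myLeft); postorder(n.myRight); visit(n);
}

static void levelOrder(HuffNode root) {     // breadth-first, using a FIFO queue
    Queue<HuffNode> q = new ArrayDeque<>();
    if (root != null) q.add(root);
    while (!q.isEmpty()) {
        HuffNode n = q.poll();
        visit(n);
        if (n.myLeft != null) q.add(n.myLeft);
        if (n.myRight != null) q.add(n.myRight);
    }
}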
Minimum Spanning Trees
• The spanning tree of a graph G is a tree that includes all the vertices of G. In graph traversal we have seen that the DFS and BFS traversals each yield such a tree:
• the DFS spanning tree and the BFS spanning tree
• The minimum spanning tree problem concerns weighted graphs: find a spanning tree such that the sum of the weights of all edges in the tree is minimum.
• Two efficient methods for finding a minimum spanning tree are
• Kruskal's algorithm
• Prim's algorithm
Kruskal’s algorithm
• List all the edges of the graph G in increasing order of weight.
• Select the smallest edge from the list and add it to the spanning tree, if including this edge does not create a cycle.
• If the selected edge forms a cycle, remove it from the list.
• Repeat steps 2-3 until the tree contains n-1 edges or the list is empty.
• If the list is empty and the tree contains fewer than n-1 edges, no spanning tree is possible.
Kruskal’s algorithm
Edges   Weight
v2-v4   1
v3-v6   1
v1-v4   2
v2-v5   2
v1-v2   4
v1-v3   5
v6-v7   6
v4-v7   7
v3-v4   8
v5-v7   9
Kruskal’s algorithm
Edges   Weight   Selection
v2-v4   1        accept
v3-v6   1        accept
v1-v4   2        accept
v2-v5   2        accept
v1-v2   4        reject
v1-v3   5        accept
v6-v7   6        accept
v4-v7   7
v3-v4   8
v5-v7   9

MST weight = 1 + 1 + 2 + 2 + 5 + 6 = 17
The Kruskal Algorithm
// input: a graph G with n nodes and m edges
// output: E: a MST for G

1. EG[1..m] ← sort the m edges of G in increasing weight order
2. E ← {}      // the edges in the MST
3. i ← 1       // counter for EG
4. while |E| < n - 1 do
       if adding EG[i] to E does not create a cycle then
           E ← E ∪ {EG[i]}
       i ← i + 1
5. return E

Is this algorithm greedy? Yes
Complexity: O(|E| log2 |E|)
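A runnable sketch of the algorithm above. The cycle test uses a disjoint-set (union-find) structure, the usual implementation choice; the edge list encodes the v1..v7 example as vertices 0..6.

import java.util.Arrays;

public class Kruskal {
    static int[] parent;                             // disjoint-set forest
    static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

    // edges[i] = {u, v, weight}; n = number of vertices. Returns the MST weight.
    static int mst(int n, int[][] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        Arrays.sort(edges, (a, b) -> a[2] - b[2]);   // step 1: sort by weight
        int total = 0, taken = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {                          // accept: no cycle is created
                parent[ru] = rv;                     // merge the two components
                total += e[2];
                if (++taken == n - 1) break;         // tree complete
            }                                        // else reject: edge would form a cycle
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {                            // the example above, v1..v7 as 0..6
            {1, 3, 1}, {2, 5, 1}, {0, 3, 2}, {1, 4, 2}, {0, 1, 4},
            {0, 2, 5}, {5, 6, 6}, {3, 6, 7}, {2, 3, 8}, {4, 6, 9}
        };
        System.out.println(mst(7, edges));           // prints 17
    }
}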
Prim’s Algorithm

• According to Prim's algorithm, a minimum spanning tree grows in successive stages. The algorithm finds a new vertex to add to the tree by choosing the edge <vi, vj> of smallest weight among all edges such that vi is in the tree and vj is not yet in the tree. Prim's algorithm can easily be implemented using the adjacency-matrix representation of a graph. Let us now illustrate this method of finding a minimum spanning tree.
Prim’s Algorithm
Step 1: We start with v1 and pick the smallest entry; thus v4 is the nearest neighbour to v1

Vertex  Dist  Known  Parent
V1      0     T      0
V2      4     F      V1
V3      5     F      V1
V4      2     F      V1
V5      -     F      -
V6      -     F      -
V7      -     F      -
Prim’s Algorithm
Step 2

V1 0 T 0

V2 1 F V4

V3 5 F V1

V4 2 T V1

V5 - F -

V6 - F -

V7 7 F V4
Prim’s Algorithm

Step 3

V1 0 T 0

V2 1 T V4

V3 5 F V1

V4 2 T V1

V5 2 F V2

V6 - F -

V7 7 F V4
Prim’s Algorithm
Step 4
V1 0 T 0

V2 1 T V4

V3 5 F V1

V4 2 T V1

V5 2 T V2

V6 - F -

V7 7 F V4
Prim’s Algorithm

Step 5:

V1 0 T 0

V2 1 T V4

V3 5 T V1

V4 2 T V1

V5 2 T V2

V6 1 F V3

V7 7 F V4
Prim’s Algorithm
Step 6

V1 0 T 0

V2 1 T V4

V3 5 T V1

V4 2 T V1

V5 2 T V2

V6 1 T V3

V7 6 F V6
The Prim Algorithm
// input: a graph G
// output: E: a MST for G

1. Select a starting node, v
2. T ← {v}     // the nodes in the MST
3. E ← {}      // the edges in the MST
4. while not all nodes of G are in T do
       4.1 choose the node v' in G − T such that there is a v in T where
           weight(v, v') is the minimum in {weight(u, w) : w in G − T and u in T}
       4.2 T ← T ∪ {v'}
       4.3 E ← E ∪ {(v, v')}
5. return E

Complexity: O((|E| + |V|) log2 |E|)
- In class we show O(|E||V|)
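A sketch of Prim's algorithm over an adjacency matrix, mirroring the Dist/Known/Parent tables above; a zero entry means no edge, and this simple version runs in O(|V|^2).

static int primMST(int[][] g) {              // g[i][j] = edge weight, 0 if no edge
    int n = g.length;
    int[] dist = new int[n];                 // cheapest known edge linking i to the tree
    int[] parent = new int[n];               // the tree neighbour giving that edge
    boolean[] known = new boolean[n];        // true once i is in the tree
    java.util.Arrays.fill(dist, Integer.MAX_VALUE);
    dist[0] = 0;                             // start from vertex 0 (v1)
    parent[0] = -1;
    int total = 0;
    for (int step = 0; step < n; step++) {
        int v = -1;                          // pick the unknown vertex nearest the tree
        for (int i = 0; i < n; i++)
            if (!known[i] && (v == -1 || dist[i] < dist[v])) v = i;
        known[v] = true;
        total += dist[v];
        for (int u = 0; u < n; u++)          // update neighbours of the new tree vertex
            if (!known[u] && g[v][u] != 0 && g[v][u] < dist[u]) {
                dist[u] = g[v][u];
                parent[u] = v;
            }
    }
    return total;                            // sum of the MST edge weights
}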
Dynamic Programming
• Optimization
• If a problem has only one correct solution, then optimization is not required
• For example, there is only one sorted sequence containing a given set of numbers.
• Optimization problems have many solutions.
• We want to compute an optimal solution, e.g. one with minimal cost or maximal gain.
• There could be many solutions having the optimal value.
• Dynamic programming is a very effective technique.
• The development of a dynamic programming algorithm can be broken into a sequence of steps, described next.
Why Dynamic Programming?
• Dynamic programming, like divide and conquer method, solves problems by
combining the solutions to sub-problems.
• Divide and conquer algorithms:
• Partition the problem into independent sub-problem
• Solve the sub-problem recursively
• Combine their solutions to solve the original problem
• In contrast, dynamic programming is applicable when the sub-problems are not independent.
• Dynamic programming is typically applied to optimization problems.
Dynamic programming
• Dynamic programming is a way of improving on inefficient divide-and-conquer
algorithms.
• By “inefficient”, we mean that the same recursive call is made over and over.
• If the same subproblem is solved several times, we can use a table to store the result the first time it is computed and thus never have to recompute it again.
• Dynamic programming is applicable when the subproblems are dependent, that
is, when subproblems share subsubproblems.
• “Programming” refers to a tabular method
Difference between DP and Divide-and-Conquer
• Using Divide-and-Conquer to solve these problems is inefficient
because the same common subproblems have to be solved many
times.
• DP will solve each of them once and their answers are stored in a
table for future use.
Elements of Dynamic Programming (DP)
• DP is used to solve problems with the following characteristics:
• Simple sub-problems
• We should be able to break the original problem to smaller sub-problems that
have the same structure
• Optimal substructure of the problems
• The optimal solution to the problem contains within it optimal solutions to its sub-problems.
• Overlapping sub-problems
• there exist some places where we solve the same sub-problem more than once.
Dynamic Programming
• 0/1 knapsack problem
• Matrix chain multiplication using dynamic programming
• Longest common subsequence using dynamic programming
• Optimal binary search tree (OBST) using dynamic programming
Knapsack 0-1 problem
• So now we must re-work the way we build upon previous sub-problems…
• Let B[k, w] represent the maximum total value of a subset Sk with weight w.
• Our goal is to find B[n, W], where n is the total number of items and W is the
maximal weight the knapsack can carry.

• So our recursive formula for subproblems:


B[k, w] = B[k - 1,w], if wk > w
= max { B[k - 1,w], B[k - 1,w - wk] + vk}, otherwise

• In English, this means that the best subset of Sk that has total weight w is:
1) The best subset of Sk-1 that has total weight w, or
2) The best subset of Sk-1 that has total weight w-wk plus the item k
Knapsack 0-1 Problem –
Recursive Formula

• The best subset of Sk that has the total weight w, either


contains item k or not.

• First case: wk > w


• Item k can’t be part of the solution! If it was the total weight would
be > w, which is unacceptable.

• Second case: wk ≤ w
• Then the item k can be in the solution, and we choose the case with
greater value.
Knapsack 0-1 Algorithm
for w = 0 to W { // Initialize 1st row to 0’s
B[0,w] = 0
}
for i = 1 to n { // Initialize 1st column to 0’s
B[i,0] = 0
}
for i = 1 to n {
for w = 0 to W {
if wi <= w { //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
}
else B[i,w] = B[i-1,w] // wi > w
}
}
Knapsack 0-1 Problem

• Let’s run our algorithm on the following data:


• n = 4 (# of elements)
• W = 5 (max weight)
• Elements (weight, value):
(2,3), (3,4), (4,5), (5,6)
Knapsack 0-1 Example
Items: 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6);  W = 5

// Initialize the base cases
for w = 0 to W
    B[0,w] = 0
for i = 1 to n
    B[i,0] = 0

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0
2    0
3    0
4    0

Each remaining cell is then filled row by row with:

if wi <= w   // item i can be in the solution
    if vi + B[i-1,w-wi] > B[i-1,w]
        B[i,w] = vi + B[i-1,w-wi]
    else
        B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w]   // wi > w

Row i=1 (w1 = 2, v1 = 3): for w = 1 the item does not fit (w - w1 = -1), so B[1,1] = 0; for w = 2..5 it fits and 3 + B[0,w-2] = 3 > B[0,w] = 0, so B[1,w] = 3.

Row i=2 (w2 = 3, v2 = 4): B[2,1] = 0 and B[2,2] = 3 (item 2 does not fit); B[2,3] = max(3, 4 + 0) = 4; B[2,4] = max(3, 4 + 0) = 4; B[2,5] = max(3, 4 + B[1,2]) = 4 + 3 = 7.

Row i=3 (w3 = 4, v3 = 5): B[3,1..3] copy row 2 (0, 3, 4); B[3,4] = max(4, 5 + 0) = 5; B[3,5] = max(7, 5 + B[2,1]) = 7.

Row i=4 (w4 = 5, v4 = 6): B[4,1..4] copy row 3; B[4,5] = max(7, 6 + 0) = 7.

i/w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

We're DONE!!
The max possible value that can be carried in this knapsack is $7
Knapsack 0-1 Algorithm
• This algorithm only finds the max possible value that can be carried in
the knapsack
• The value in B[n,W]

• To know the items that make this maximum value, we need to trace
back through the table.
Knapsack 0-1 Algorithm
Finding the Items
• Let i = n and k = W
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1 // Assume the ith item is not in the knapsack
// Could it be in the optimally packed knapsack?
Knapsack 0-1 Algorithm
Finding the Items

Items: 1: (2,3)  2: (3,4)  3: (4,5)  4: (5,6)

i = n = 4, k = W = 5:  B[4,5] = 7 = B[3,5], so item 4 is not in the knapsack; i = 3
i = 3, k = 5:  B[3,5] = 7 = B[2,5], so item 3 is not in the knapsack; i = 2
i = 2, k = 5:  B[2,5] = 7 ≠ B[1,5] = 3, so item 2 is in the knapsack; i = 1, k = 5 - w2 = 2
i = 1, k = 2:  B[1,2] = 3 ≠ B[0,2] = 0, so item 1 is in the knapsack; i = 0, k = 2 - w1 = 0

k = 0, so we're DONE!

The optimal knapsack should contain:
Item 1 and Item 2
Knapsack 0-1 Problem – Run Time

for w = 0 to W
    B[0,w] = 0          O(W)

for i = 1 to n
    B[i,0] = 0          O(n)

for i = 1 to n          repeat n times
    for w = 0 to W      O(W)
        < the rest of the code >

What is the running time of this algorithm?
O(n*W) – of course, W can be mighty big
What is an analogy in the world of sorting?

Remember that the brute-force algorithm takes O(2^n)
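The table fill and the trace-back combined into one runnable sketch; the arrays are 1-indexed to match B[i, w] above.

public class Knapsack01 {
    public static void main(String[] args) {
        int n = 4, W = 5;
        int[] w = {0, 2, 3, 4, 5};               // w[i], v[i] for items 1..n
        int[] v = {0, 3, 4, 5, 6};
        int[][] B = new int[n + 1][W + 1];       // row 0 and column 0 stay 0 (base cases)
        for (int i = 1; i <= n; i++)
            for (int cap = 0; cap <= W; cap++)
                if (w[i] <= cap)                 // item i can be in the solution
                    B[i][cap] = Math.max(B[i - 1][cap], v[i] + B[i - 1][cap - w[i]]);
                else                             // item i does not fit
                    B[i][cap] = B[i - 1][cap];
        System.out.println("max value = " + B[n][W]);   // prints 7

        // Trace back through the table to find the chosen items
        for (int i = n, k = W; i > 0 && k > 0; i--)
            if (B[i][k] != B[i - 1][k]) {        // item i is in the knapsack
                System.out.println("item " + i);
                k -= w[i];
            }
    }
}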


Matrix Chain Multiplication

• Given : a chain of matrices {A 1,A2,…,An}.


• Once all pairs of matrices are parenthesized, they can be multiplied by using the
standard algorithm as a sub-routine.
• A product of matrices is fully parenthesized if it is either a single matrix or the
product of two fully parenthesized matrix products, surrounded by parentheses .
[Note: since matrix multiplication is associative, all parenthesizations yield the same product.]
Matrix Chain Multiplication cont.
• For example, if the chain of matrices is {A, B, C, D}, the product A, B,
C, D can be fully parenthesized in 5 distinct ways:
(A ( B ( C D ))),
(A (( B C ) D )),
((A B ) ( C D )),
((A ( B C )) D),
((( A B ) C ) D ).
• The way the chain is parenthesized can have a dramatic impact on the
cost of evaluating the product.
Matrix Chain Multiplication Optimal
Parenthesization
• Example: A[30][35], B[35][15], C[15][5]
  minimum cost of A*B*C:
  A*(B*C) = 35*15*5 + 30*35*5 = 2,625 + 5,250 = 7,875
  (A*B)*C = 30*35*15 + 30*15*5 = 15,750 + 2,250 = 18,000
• How to optimize:
  • Brute force – look at every possible way to parenthesize: Ω(4^n / n^(3/2))
  • Dynamic programming – time complexity of O(n^3) and space complexity of Θ(n^2).
Matrix Chain Multiplication Structure of Optimal Parenthesization

• For n matrices, let Ai..j be the result of AiAi+1…Aj
• An optimal parenthesization of A1A2…An splits the product between Ak and Ak+1, where 1 ≤ k < n.
• Example: k = 4  (A1A2A3A4)(A5A6)

Total cost of A1..6 = cost of A1..4 plus cost of A5..6 plus the cost of multiplying these two matrices together.
Matrix Chain Multiplication Overlapping Sub-
Problems

• Overlapping sub-problems helps in reducing the running


time considerably.
• Create a table M of minimum Costs
• Create a table S that records index k for each optimal sub-problem
• Fill table M in a manner that corresponds to solving the
parenthesization problem on matrix chains of increasing length.
• Compute cost for chains of length 1 (this is 0)
• Compute costs for chains of length 2
  A1..2, A2..3, A3..4, …, An-1..n
• Compute cost for the chain of length n
  A1..n

Each level relies on smaller sub-strings


Matrix Chain Multiplication
• (i) m[i, j] = 0 if i = j
• (ii) for i < j, m[i, j] = the minimum over all split points k (i ≤ k < j) of: the minimum number of scalar multiplications required to multiply Ai…Ak, plus the minimum number required to multiply Ak+1…Aj, plus the cost of multiplying the two resultant matrices:

m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + P[i-1] x P[k] x P[j] }
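A sketch of this recurrence as a bottom-up table fill; P holds the chain's dimensions, so matrix Ai is P[i-1] x P[i].

static int matrixChainOrder(int[] P) {
    int n = P.length - 1;                        // number of matrices in the chain
    int[][] m = new int[n + 1][n + 1];           // m[i][j] = min cost of Ai..Aj (0 when i == j)
    for (int len = 2; len <= n; len++)           // chain length
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) {        // split between Ak and Ak+1
                int cost = m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    return m[1][n];
}
// Example: matrixChainOrder(new int[]{30, 35, 15, 5}) returns 7875, i.e. A*(B*C).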


LONGEST COMMON SUBSEQUENCE USING DP
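For reference, the standard dynamic program for this problem: for sequences X[1..m] and Y[1..n], let c[i, j] be the length of a longest common subsequence of X[1..i] and Y[1..j]. Then c[i, j] = 0 if i = 0 or j = 0; c[i, j] = c[i-1, j-1] + 1 if X[i] = Y[j]; and c[i, j] = max(c[i-1, j], c[i, j-1]) otherwise. A minimal bottom-up sketch:

static int lcsLength(String x, String y) {
    int m = x.length(), n = y.length();
    int[][] c = new int[m + 1][n + 1];           // c[i][j] = LCS length of x[0..i), y[0..j)
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (x.charAt(i - 1) == y.charAt(j - 1))
                c[i][j] = c[i - 1][j - 1] + 1;   // matching characters extend the LCS
            else
                c[i][j] = Math.max(c[i - 1][j], c[i][j - 1]);
    return c[m][n];
}
// Example: lcsLength("ABCBDAB", "BDCABA") returns 4 (one LCS is "BCBA").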
OPTIMAL BINARY SEARCH TREE USING DP
• Given a sorted array key[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for key[i], construct a binary search tree over all keys such that the total cost of all the searches is as small as possible.
Let us first define the cost of a BST. The cost of a BST node is the level
of that node multiplied by its frequency. The level of the root is 1.
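A sketch of the O(n^3) dynamic program implied by this definition; cost[i][j] is the minimum total search cost of a BST built from keys i..j, and the frequency data in the comment is illustrative only.

static int optimalBST(int[] freq) {
    int n = freq.length;
    int[][] cost = new int[n][n];                // cost[i][j]: best BST over keys i..j
    int[][] fsum = new int[n][n];                // fsum[i][j] = freq[i] + ... + freq[j]
    for (int i = 0; i < n; i++) {
        cost[i][i] = freq[i];                    // a single key sits at level 1
        fsum[i][i] = freq[i];
    }
    for (int len = 2; len <= n; len++)
        for (int i = 0; i + len - 1 < n; i++) {
            int j = i + len - 1;
            fsum[i][j] = fsum[i][j - 1] + freq[j];
            cost[i][j] = Integer.MAX_VALUE;
            for (int r = i; r <= j; r++) {       // try each key as the root
                int left = (r > i) ? cost[i][r - 1] : 0;
                int right = (r < j) ? cost[r + 1][j] : 0;
                // both subtrees sit one level below the root, adding fsum[i][j] once
                cost[i][j] = Math.min(cost[i][j], left + right + fsum[i][j]);
            }
        }
    return cost[0][n - 1];
}
// Example with assumed data: optimalBST(new int[]{34, 8, 50}) returns 142.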
OPTIMAL BINARY SEARCH TREE USING DP - EXAMPLE
(The worked example appears as a sequence of figures in the original slides.)
