Lecture 5 Greedy Algorithm

The document discusses dynamic programming and greedy algorithms. It begins by introducing dynamic programming as an approach for solving optimization problems that involves breaking the problem down into subproblems. It then discusses greedy algorithms, which make locally optimal choices at each step in the hope of reaching a globally optimal solution. As an example, it analyzes the activity selection problem and shows that a greedy algorithm can optimally solve this problem by always selecting the activity that finishes earliest. It proves that this greedy approach is optimal by demonstrating it has the optimal substructure and greedy choice properties.


Advanced Algorithms

Dynamic Programming

Dr. Muhammad Safyan

Department of Computer Science
Government College University, Lahore
Today’s Agenda

Greedy Algorithm
Optimal Solutions

Dynamic Programming
Algorithms for optimization problems typically go through a
sequence of steps, with a set of choices at each step. For many
optimization problems, using dynamic programming to determine
the best choices is overkill; simpler, more efficient algorithms will
do.
Greedy Algorithm:
A greedy algorithm always makes the choice that looks best at
the moment. That is, it makes a locally optimal choice in the hope
that this choice will lead to a globally optimal solution.
Branch and Bound?
Approaches to Solve a Problem

• Greedy algorithms do not always yield optimal solutions, but for many problems they do.
• First, we study a simple but nontrivial problem, the activity-selection problem, for which a greedy algorithm efficiently computes an optimal solution.
• We shall arrive at the greedy algorithm by first considering a dynamic-programming approach and then showing that we can always make greedy choices to arrive at an optimal solution.
Approaches to Solve a Problem

• Greedy is a strategy, like divide and conquer and dynamic programming.

Problem
• P: A → B
• Feasible solution: one that satisfies the constraints.
• Optimal solution: the feasible solution that gives the maximum or minimum value.
General Format

Algorithm Greedy(a, n)
{
    solution = ∅;
    for i = 1 to n do
    {
        x = select(a);                 // pick the candidate that looks best now
        if feasible(x) then
            solution = solution + x;   // keep x only if it remains feasible
    }
    return solution;
}
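The template above is pseudocode. As a rough Python sketch of the same skeleton (the select and feasible parameters are placeholders for whatever the concrete problem supplies; they are illustrative, not from the slides):

def greedy(candidates, select, feasible):
    # Generic greedy skeleton: repeatedly pick the candidate that looks
    # best right now and keep it only if the partial solution stays valid.
    solution = []
    remaining = list(candidates)
    while remaining:
        x = select(remaining)          # locally optimal choice
        remaining.remove(x)
        if feasible(solution, x):      # does x keep the solution feasible?
            solution.append(x)
    return solution

For activity selection, for example, select would return the remaining activity with the earliest finish time, and feasible would check that it starts no earlier than the finish time of the last selected activity.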
An Activity-Selection Problem

• Scheduling several competing activities that require exclusive use of a common resource, e.g. use of a lecture theatre.
• Goal: select a maximum-size set of mutually compatible activities.
• Suppose we have a set S = {a1, a2, …, an} of n proposed activities that wish to use a resource, such as a lecture hall, which can serve only one activity at a time.
• Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap.
Activity-Selection
Formally:
Given a set S of n activities, with
    si = start time of activity i
    fi = finish time of activity i
find a maximum-size subset A of mutually compatible activities.

(Figure: activities 1–6 drawn on a timeline, showing which of them overlap.)

• Assume that f1 ≤ f2 ≤ … ≤ fn.
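As a small illustration (the function and the (start, finish) pair representation are mine, not the slide's), the compatibility condition can be checked directly:

def compatible(a, b):
    # Activities are half-open intervals [start, finish); they are
    # compatible when one ends no later than the other begins.
    return a[1] <= b[0] or b[1] <= a[0]

# e.g. compatible((1, 4), (4, 7)) is True, compatible((1, 5), (4, 7)) is False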
An Activity-Selection Problem

We assume that the activities are sorted in monotonically increasing order of finish time.

Consider the following set S of activities (the table of start and finish times is shown on the slide).
Dynamic Programming: Activity-Selection Problem
Several steps are involved:
• Dynamic programming considers several choices when determining which subproblems to use in an optimal solution.
• Then we will observe that we need to consider only one choice: the greedy choice.
• When we make the greedy choice, only one subproblem remains.
• Based on these observations, we shall develop a recursive greedy algorithm to solve the activity-selection problem.
• We shall complete the process of developing a greedy solution by converting the recursive algorithm to an iterative one.
Why Is It Greedy?

It is greedy in the sense that it leaves as much opportunity as possible for the remaining activities to be scheduled.
The greedy choice is the one that maximizes the amount of unscheduled time remaining.
Why Is This Algorithm Optimal?
• We will show that this algorithm has the following properties:
• The problem has the optimal-substructure property.
• The algorithm satisfies the greedy-choice property.
• Thus, it is optimal.
Optimal Substructure: Dynamic Programming
• Let Sij be the set of activities that start after activity ai finishes and that finish before activity aj starts.
• The goal is to find a maximum set of mutually compatible activities in Sij.
• Let Aij be such a maximum set, and suppose it includes some activity ak.
• Including ak in an optimal solution leaves two subproblems:
• Sik: finding mutually compatible activities that start after ai finishes and finish before ak starts.
• Skj: finding mutually compatible activities that start after activity ak finishes and finish before activity aj starts.
Optimal substructure:
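Writing c[i, j] for the size of a maximum set of mutually compatible activities in Sij, the discussion above leads to the standard recurrence, as given in CLRS:

    c[i, j] = 0                                              if Sij is empty
    c[i, j] = max { c[i, k] + c[k, j] + 1 : ak in Sij }      otherwise

The greedy insight is that this maximum never needs to be computed: picking the activity in Sij with the earliest finish time is always a safe choice, so only one subproblem remains.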
Recursive Greedy Algorithm
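A minimal Python sketch of the standard recursive selector, following CLRS (the 1-based list layout with a dummy activity at index 0 is an assumption of this sketch, not something stated on the slide):

def recursive_activity_selector(s, f, k, n):
    # s[i], f[i]: start and finish times of activity i, 1-based and
    # sorted by finish time; index 0 is a dummy activity with f[0] = 0.
    # k is the index of the most recently selected activity.
    m = k + 1
    while m <= n and s[m] < f[k]:      # skip activities that start before a_k finishes
        m += 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

The initial call is recursive_activity_selector(s, f, 0, n), so the first real activity considered is a1, the one with the earliest finish time.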
Iterative Greedy Algorithm
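A minimal Python sketch of the iterative version, under the same assumptions as the recursive sketch above:

def greedy_activity_selector(s, f):
    # s, f: 1-based lists sorted by finish time; index 0 is a dummy activity.
    n = len(s) - 1
    selected = [1]                     # the activity that finishes first is always chosen
    k = 1                              # index of the most recently selected activity
    for m in range(2, n + 1):
        if s[m] >= f[k]:               # a_m starts after a_k finishes, so it is compatible
            selected.append(m)
            k = m
    return selected

Once the activities have been sorted by finish time, this loop runs in O(n) time.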
Greedy-Choice Property

Show that there is an optimal solution that begins with a greedy choice (with activity 1, which has the earliest finish time).
Suppose A ⊆ S is an optimal solution.
Order the activities in A by finish time; let the first activity in A be k.
If k = 1, the schedule A begins with a greedy choice.
If k ≠ 1, show that there is an optimal solution B to S that begins with the greedy choice, activity 1:
Let B = (A – {k}) ∪ {1}.
f1 ≤ fk ⇒ the activities in B are disjoint (compatible).
B has the same number of activities as A.
Thus, B is optimal.
Optimal Substructures
Once the greedy choice of activity 1 is made, the problem reduces to finding an optimal solution for the activity-selection problem over those activities in S that are compatible with activity 1.
Optimal Substructure
If A is optimal to S, then A' = A – {1} is optimal to S' = {i ∈ S : si ≥ f1}.
Why?
If we could find a solution B' to S' with more activities than A', adding activity 1 to B' would yield a solution B to S with more activities than A, contradicting the optimality of A.
After each greedy choice is made, we are left with an optimization problem of the same form as the original problem.
By induction on the number of choices made, making the greedy choice at every step produces an optimal solution.
Elements of Greedy Strategy

A greedy algorithm makes a sequence of choices; at each step, the choice that seems best at the moment is chosen.
It does NOT always produce an optimal solution.
Two ingredients are exhibited by most problems that lend themselves to a greedy strategy:
Greedy-choice property
Optimal substructure
Greedy-Choice Property

A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
Make whatever choice seems best at the moment and then solve the subproblem arising after the choice is made.
The choice made by a greedy algorithm may depend on choices so far, but it cannot depend on any future choices or on the solutions to subproblems.
Of course, we must prove that a greedy choice at each step yields a globally optimal solution.
Optimal Substructures

A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
If an optimal solution A to S begins with activity 1, then A' = A – {1} is optimal to S' = {i ∈ S : si ≥ f1}.
Knapsack Problem

• One wants to pack n items in a piece of luggage.
• The ith item is worth vi dollars and weighs wi pounds.
• Maximize the value without exceeding W pounds.
• vi, wi, and W are integers.
• 0-1 knapsack: each item is either taken or not taken.
• Fractional knapsack: fractions of items can be taken.
• Both exhibit the optimal-substructure property:
• 0-1: if item j is removed from an optimal packing, the remaining packing is an optimal packing of weight at most W - wj chosen from the other n - 1 items.
• Fractional: if w pounds of item j are removed from an optimal packing, the remaining packing is an optimal packing of weight at most W - w that can be taken from the other n - 1 items plus wj - w pounds of item j.
Greedy Algorithm for the Fractional Knapsack Problem

The fractional knapsack problem is solvable by the greedy strategy (a Python sketch follows this list):
• Compute the value per pound vi/wi for each item.
• Obeying the greedy strategy, take as much as possible of the item with the greatest value per pound.
• If the supply of that item is exhausted and there is still room, take as much as possible of the item with the next greatest value per pound, and so forth until there is no more room.
• Running time: O(n lg n), since we need to sort the items by value per pound.
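A minimal Python sketch of this strategy (the function name and the (value, weight) pair representation are illustrative, not from the slides):

def fractional_knapsack(items, W):
    # items: list of (value, weight) pairs; W: knapsack capacity in pounds.
    # Sort by value per pound, greatest first, then take greedily.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if W <= 0:
            break
        take = min(weight, W)          # take as much of this item as fits
        total_value += value * (take / weight)
        W -= take
    return total_value

The sort dominates the running time, giving the O(n lg n) bound noted above.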
Greedy Algorithm?
Correctness?
