Greedy Algorithm:
Knapsack Problem:
Formally, the problem can be stated as: maximize p1x1 + p2x2 + ... + pnxn subject to w1x1 + w2x2 + ... + wnxn <= W, where pi and wi are the profit and weight of item i, W is the knapsack capacity, and xi is the fraction of item i taken (0 <= xi <= 1 in the fractional version; xi is 0 or 1 in the 0/1 version).
Knapsack Problem-Example:
Types of Knapsack Problem:
1. 0/1 Knapsack Problem: each item is taken entirely or not at all (xi = 0 or 1).
2. Fractional Knapsack Problem: a fraction of an item may be taken (0 <= xi <= 1).
Fractional Knapsack- Example Problem:
Item     A     B     C     D
Profit   280   100   120   120
Weight   40    10    20    24
Pi/Wi    7     10    6     5
Arranging the above table in descending order of Pi/Wi:
Item     B         A         C         D
Profit   100 (P1)  280 (P2)  120 (P3)  120 (P4)
Weight   10 (W1)   40 (W2)   20 (W3)   24 (W4)
Pi/Wi    10        7         6         5
Consider the knapsack capacity W=60
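The greedy procedure for this instance can be sketched in Python (the function name is illustrative, not from the notes):

```python
# Greedy fractional knapsack: take items in descending profit/weight order,
# splitting the last item if it does not fit entirely.
def fractional_knapsack(items, capacity):
    # items: list of (name, profit, weight)
    items = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    total = 0.0
    for name, profit, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the part that fits
        total += profit * (take / weight)
        capacity -= take
    return total

# Data from the table above, with knapsack capacity W = 60.
items = [("A", 280, 40), ("B", 100, 10), ("C", 120, 20), ("D", 120, 24)]
print(fractional_knapsack(items, 60))  # B and A whole, then half of C: 100 + 280 + 60 = 440.0
```

Items B and A fill weight 50; the remaining capacity of 10 takes half of C, so the maximum profit is 440.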
Conditions:
To complete a job, one has to process the job on a machine for one unit of time.
Only one machine is available for processing jobs.
A feasible solution for this problem is a subset J of jobs such that each job in this subset can be
completed by its deadline.
An optimal solution is a feasible solution with maximum value.
Algorithm for Job Scheduling:
Example1:
Let the number of jobs n = 4, with
(p1, p2, p3, p4) = (100, 10, 15, 27)
(d1, d2, d3, d4) = (2, 1, 2, 1)
A job with deadline 1 must be done in the first time slot; a job with deadline 2 may be done in the first or second slot. Since the maximum deadline is 2, at most two jobs can be completed (one per unit time slot), and no parallel execution of jobs is allowed.
Feasible Solution    Processing Sequence    Value    Explanation
(1, 2) 2,1 110 2’s deadline <1’s deadline
(1, 3) 1,3 or 3,1 115 1’s deadline = 3’s deadline
(1, 4)    4, 1    127    4's deadline < 1's deadline (Maximum Profit)
(2, 3) 2, 3 25 2’s deadline < 3’s deadline
(2, 4)    Not feasible    -    Both have deadline = 1; scheduling both would require parallel execution
(3, 4) 4, 3 42 4’s deadline <3’s deadline
(1) 1 100
(2) 2 10
(3) 3 15
(4) 4 27
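The greedy job-sequencing algorithm can be sketched as follows (a sketch: jobs are considered in descending profit order and placed in the latest free slot on or before their deadline):

```python
def job_sequencing(profits, deadlines):
    # Greedy: consider jobs in descending profit order; put each job in the
    # latest free slot on or before its deadline (slots 1..max deadline).
    n = len(profits)
    order = sorted(range(n), key=lambda j: profits[j], reverse=True)
    slots = [None] * (max(deadlines) + 1)   # slots[1..max_d]; index 0 unused
    total = 0
    for j in order:
        for t in range(deadlines[j], 0, -1):
            if slots[t] is None:
                slots[t] = j + 1            # store the 1-based job number
                total += profits[j]
                break
    return total, [j for j in slots[1:] if j is not None]

profit, schedule = job_sequencing([100, 10, 15, 27], [2, 1, 2, 1])
print(profit, schedule)  # 127 [4, 1] -- matching the (1, 4) row above
```

Job 1 (profit 100) takes slot 2 and job 4 (profit 27) takes slot 1, reproducing the optimal value 127 from the table.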
Example2:
Multistage graphs:
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k >= 2 disjoint sets Vi, every edge goes from a vertex in Vi to a vertex in Vi+1, and |V1| = |Vk| = 1 (the source s and the sink t). The multistage graph problem is to find a minimum-cost path from s to t.
There are two approaches to solving multistage graph problems:
1. Forward approach.
2. Backward approach.
Multistage graphs- Forward Approach- Example:
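The example figure is not reproduced in this printout, so here is a sketch of the forward approach on an assumed small 4-stage graph (vertex names and edge costs are illustrative):

```python
# Forward approach: cost[v] = length of a shortest path from v to the sink,
# computed stage by stage from the last stage back toward the source.
edges = {  # adjacency: vertex -> list of (successor, edge cost)
    "s": [("a", 1), ("b", 2)],
    "a": [("c", 2), ("d", 4)],
    "b": [("c", 3), ("d", 1)],
    "c": [("t", 4)],
    "d": [("t", 2)],
    "t": [],
}
stages = [["s"], ["a", "b"], ["c", "d"], ["t"]]

cost = {"t": 0}
decision = {}
for stage in reversed(stages[:-1]):         # process stages k-1, ..., 1
    for v in stage:
        cost[v], decision[v] = min((c + cost[w], w) for w, c in edges[v])

# Recover the shortest path by following the recorded decisions.
path, v = ["s"], "s"
while v != "t":
    v = decision[v]
    path.append(v)
print(cost["s"], path)  # 5 ['s', 'b', 'd', 't']
```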
Multistage graphs- Backward Approach:
Example:
In Dynamic Programming, an optimal sequence of decisions is obtained by making an explicit appeal to the Principle of Optimality.
Principle of Optimality states that an optimal sequence of decisions has the property that whatever the
initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with
regard to the state resulting from the first decision.
Steps in Dynamic Programming:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
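As a generic illustration of these four steps (this example is not from the notes), consider computing the minimum number of coins that make a given amount:

```python
# Step 1: an optimal way to make amount n uses some coin c plus an optimal
#         way to make n - c.
# Step 2: best[n] = 1 + min(best[n - c] over usable coins c), best[0] = 0.
import math

def min_coins(coins, amount):
    best = [0] + [math.inf] * amount
    choice = [None] * (amount + 1)
    # Step 3: compute the values bottom-up.
    for n in range(1, amount + 1):
        for c in coins:
            if c <= n and best[n - c] + 1 < best[n]:
                best[n] = best[n - c] + 1
                choice[n] = c
    # Step 4: construct an optimal solution from the computed information.
    used, n = [], amount
    while n > 0:
        used.append(choice[n])
        n -= choice[n]
    return best[amount], used

print(min_coins([1, 3, 4], 6))  # (2, [3, 3])
```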
Difference between Greedy Method & Dynamic Programming:
All-pairs shortest paths:
Example:
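The example graph is not reproduced in this printout; the following sketch applies the Floyd-Warshall all-pairs algorithm to a commonly used 3-vertex instance:

```python
# All-pairs shortest paths (Floyd-Warshall): A^k[i][j] is the length of a
# shortest i-to-j path using only intermediate vertices from {0, ..., k}.
INF = float("inf")

def all_pairs_shortest_paths(cost):
    n = len(cost)
    a = [row[:] for row in cost]            # A^0 is the cost matrix itself
    for k in range(n):                      # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if a[i][k] + a[k][j] < a[i][j]:
                    a[i][j] = a[i][k] + a[k][j]
    return a

cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
print(all_pairs_shortest_paths(cost))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```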
Single-Source Shortest Paths:
Graphs can be used to represent the highway structure of a state or country with vertices representing
cities and edges representing sections of highway.
The edges can then be assigned weights which may be either the distance between the two cities
connected by the edge or the average time to drive along that section of highway.
Single-Source Shortest Paths- Algorithm:
Example1:
In this example, the shortest path from the source (vertex 1) to the destination (vertex 7) has length 42.
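The 7-vertex example figure is not reproduced here; as a sketch, the dynamic-programming (Bellman-Ford) recurrence for single-source shortest paths with general edge weights can be run on an assumed small graph:

```python
# dist^k[v] = min(dist^(k-1)[v], min over edges (u, v) of dist^(k-1)[u] + w):
# after n-1 relaxation passes, dist[v] is the shortest-path length from the
# source (assuming no negative cycles).
INF = float("inf")

def bellman_ford(n, edges, source):
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                  # shortest paths use at most n-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Assumed graph: edges (u, v, weight), vertices 0..3; note the negative edge.
edges = [(0, 1, 6), (0, 2, 7), (1, 2, 8), (1, 3, 5), (2, 3, -3)]
print(bellman_ford(4, edges, 0))  # [0, 6, 7, 4]
```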
Example2:
The traveling sales person problem:
Let G = (V, E) be a directed graph with edge costs Cij.
The variable Cij is defined such that
Cij > 0 for all i and j, and
Cij = ∞ if (i, j) ∉ E.
Let |V| = n and assume that n > 1.
A tour of G is a directed simple cycle that includes every vertex in V.
The cost of a tour is the sum of the costs of the edges on the tour.
The traveling salesperson problem is to find a tour of minimum cost.
Notations:
g(i, S) = length of a shortest path starting at vertex i, going through all vertices in S exactly once, and terminating at vertex 1.
g(1, V - {1}) = length of an optimal salesperson tour.
The Principle of Optimality gives the recurrence
g(i, S) = min over j in S of { Cij + g(j, S - {j}) },
and hence
g(1, V - {1}) = min over 2 <= k <= n of { C1k + g(k, V - {1, k}) }.
Example:
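A sketch of the g(i, S) recurrence in Python, using a commonly used 4-city cost matrix (the notes' example figure is not reproduced here):

```python
# Dynamic-programming (Held-Karp) evaluation of g(i, S). Vertices are
# numbered 1..n; C[i][j] holds the cost of edge (i+1, j+1).
from functools import lru_cache

C = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
n = len(C)

@lru_cache(maxsize=None)
def g(i, S):
    # g(i, S): shortest path from vertex i through every vertex of the
    # frozenset S exactly once, ending at vertex 1.
    if not S:
        return C[i - 1][0]                  # direct edge back to vertex 1
    return min(C[i - 1][j - 1] + g(j, S - {j}) for j in S)

tour_length = g(1, frozenset(range(2, n + 1)))
print(tour_length)  # 35  (tour 1 -> 2 -> 4 -> 3 -> 1)
```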
Backtracking- General Method:
The name "backtrack" was first coined by D. H. Lehmer in the 1950s.
Backtracking is a method of determining a correct solution to a problem by systematically examining the available paths.
If a particular path leads to an unsuccessful solution, the algorithm returns to the previous decision point in order to find a correct solution.
In many applications of the backtrack method, the desired solution is expressible as an n-tuple (x1, x2, ..., xn), where xi is chosen from some finite set Si. Often the problem to be solved calls for finding one vector that maximizes (or minimizes, or satisfies) a criterion function P(x1, x2, ..., xn).
A brute-force algorithm considers all candidate solutions when searching for an optimal one. If the solution space has size m, a backtracking algorithm has the ability to yield the same answer with far fewer than m trials.
Many of the problems we solve using backtracking require that all solutions satisfy a complex set of constraints.
Two types of Constraints
1. Explicit Constraints are rules that restrict each xi to take on values only from a given set.
   E.g.: xi >= 0, i.e. Si = {all non-negative real numbers}
   xi = 0 or 1, i.e. Si = {0, 1}
2. Implicit Constraints are rules that determine which of the tuples in the solution space satisfy the criterion function. Thus implicit constraints describe the way in which the xi must relate to each other.
Recursive Backtracking Algorithm:
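A generic recursive backtracking skeleton along these lines might look as follows (function and parameter names are illustrative, not from the notes):

```python
# x[0..k-1] holds the partial solution; candidates(x, k) yields values for
# x[k] allowed by the explicit constraints, and feasible(x, k) checks the
# implicit constraints on the extended tuple.
def backtrack(x, k, n, candidates, feasible, solutions):
    if k == n:                      # a complete n-tuple: record it
        solutions.append(tuple(x))
        return
    for value in candidates(x, k):
        x.append(value)
        if feasible(x, k):          # prune subtrees that violate constraints
            backtrack(x, k + 1, n, candidates, feasible, solutions)
        x.pop()                     # undo the choice (backtrack)

# Tiny demonstration: all 3-bit tuples with no two consecutive 1s.
sols = []
backtrack([], 0, 3,
          candidates=lambda x, k: (0, 1),
          feasible=lambda x, k: not (k > 0 and x[k - 1] == 1 and x[k] == 1),
          solutions=sols)
print(sols)  # five tuples; (1, 1, 0) is pruned, for example
```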
Applications of Backtracking:
Backtracking method is applied to solve various problems like:
1. N Queens Problem
2. Sum of Subsets Problem
3. Graph Coloring
4. Hamiltonian Cycles
5. Knapsack Problem
N Queens Problem (8 Queens Problem)
The N Queens problem means:
1. Place N queens on an N x N chess board.
2. No two queens are placed in the same row, the same column, or on the same diagonal.
3. Hence no two queens attack each other.
4- Queens Problem –state space tree:
N-Queens Problem- algorithm1: Placing a new queen in kth row & ith column.
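A sketch of the Place test and the backtracking search in the x[k]-column notation described above (the exact pseudocode from the notes is not reproduced here):

```python
# x[j] is the column of the queen in row j. A queen can go in row k,
# column i iff no earlier queen shares that column or a diagonal
# (a diagonal clash means equal row and column distances).
def place(x, k, i):
    for j in range(k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n):
    solutions, x = [], [0] * n
    def try_row(k):
        for i in range(1, n + 1):           # columns are numbered 1..n
            if place(x, k, i):
                x[k] = i
                if k == n - 1:
                    solutions.append(x[:])
                else:
                    try_row(k + 1)
    try_row(0)
    return solutions

print(n_queens(4))       # [[2, 4, 1, 3], [3, 1, 4, 2]]
print(len(n_queens(8)))  # 92
```

The 4-queens instance has exactly two solutions, and the 8-queens instance has 92.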
8-Queens Problem solution:
Sum of Subsets Problem-Algorithm
Sum of Subsets Problem-Example
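As the worked example is not reproduced in this printout, here is a sketch of the sum-of-subsets backtracking search on a commonly used instance (w = {5, 10, 12, 13, 15, 18}, m = 30):

```python
# Weights are considered in nondecreasing order. A node is pruned when the
# current sum plus all remaining weight cannot reach m, or when adding the
# next (smallest remaining) weight would already exceed m.
def sum_of_subsets(w, m):
    w = sorted(w)
    solutions = []
    def search(k, chosen, s, remaining):
        if s == m:
            solutions.append(chosen[:])
            return
        if k == len(w) or s + remaining < m or s + w[k] > m:
            return                          # bound: no solution below this node
        chosen.append(w[k])                 # include w[k]
        search(k + 1, chosen, s + w[k], remaining - w[k])
        chosen.pop()                        # exclude w[k]
        search(k + 1, chosen, s, remaining - w[k])
    search(0, [], 0, sum(w))
    return solutions

print(sum_of_subsets([5, 10, 12, 13, 15, 18], 30))
# [[5, 10, 15], [5, 12, 13], [12, 18]]
```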
3.2.3 Graph coloring:
Let G be a graph and m be a positive integer.
The problem is to determine whether the nodes of G can be colored so that no two adjacent nodes have the same color, while using at most m colors; the smallest such m is called the chromatic number of G.
If d is the maximum degree of a graph G, then G can be colored with d + 1 colors.
(The degree of a node is the number of edges connected to it.)
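A sketch of the m-coloring backtracking search (the notes' algorithm is not reproduced here, and the example graphs below are illustrative):

```python
# Backtracking m-coloring: assign colors 1..m to vertices one at a time,
# rejecting any color already used by a colored neighbor.
def m_coloring(adj, m):
    n = len(adj)
    color = [0] * n                         # 0 means "not yet colored"
    solutions = []
    def color_vertex(k):
        for c in range(1, m + 1):
            if all(color[j] != c for j in adj[k]):
                color[k] = c
                if k == n - 1:
                    solutions.append(color[:])
                else:
                    color_vertex(k + 1)
                color[k] = 0                # undo and try the next color
    color_vertex(0)
    return solutions

# A 4-cycle (square) is 2-colorable; a triangle needs 3 colors.
square = [[1, 3], [0, 2], [1, 3], [0, 2]]
triangle = [[1, 2], [0, 2], [0, 1]]
print(len(m_coloring(square, 2)))    # 2
print(len(m_coloring(triangle, 2)))  # 0
print(len(m_coloring(triangle, 3)))  # 6
```

The triangle has maximum degree d = 2 and is indeed colorable with d + 1 = 3 colors, matching the bound stated above.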
Graph coloring- m coloring algorithm
Graph coloring- generating color algorithm