Analysis and Design of Algorithms

The document covers the design and analysis of algorithms with a focus on graph theory, including definitions of graphs, types of graphs (directed and undirected), and key concepts such as paths, cycles, and graph representations. It details algorithms for graph traversal like Depth-First Search (DFS) and Breadth-First Search (BFS), as well as Dijkstra's algorithm for finding shortest paths in weighted graphs. Additionally, it discusses complexity classes such as P, NP, and NP-complete problems.


CSC 311 DESIGN AND ANALYSIS OF ALGORITHMS

SESSION TOPICS
1)Graphs:
➔ Graph algorithms.

➔ Depth-first search.

➔ Breadth-first search;

➔ Directed Graphs;

➔ Shortest paths;

➔ Minimum Spanning Trees;

➔ Network flow graphs;

2)P, NP, and NP-complete problem classes.

Weeks-10,11,12 1
GRAPHS
A graph can be viewed as a collection of points in the plane
called “vertices” or “nodes,” some of them connected by line
segments called “edges” or “arcs.”

A graph G = <V , E> is a pair of two sets:


a finite nonempty set V of items called vertices
a set E of pairs of vertices called edges.
Unordered edge: edge (u, v) is the same as edge (v, u).
Adjacent vertices: u and v are adjacent if they are connected
by the undirected edge (u, v).
Incidence:
vertices u and v are endpoints of the edge (u, v)
we say that u and v are incident to this edge;
also say that the edge (u, v) is incident to its endpoints u
and v.
Undirected graph G: every edge in it is unordered.
GRAPHS
Directed graphs (digraphs):
Edge (u, v) is not the same as edge (v, u): the edge (u, v) is
directed from the vertex u, called the edge’s tail, to the
vertex v, called the edge’s head.

The edge (u, v) leaves u and enters v.

For a digraph, every edge is directed.

GRAPHS
It is normally convenient to label vertices of a graph or a digraph
with letters, integer numbers, or, if an application calls for it,
character strings. The graph depicted below has six vertices and
seven undirected edges: (A)
V = {a, b, c, d, e, f },
E = {(a, c), (a, d), (b, c), (b, f ), (c, e), (d, e), (e, f )}.
The digraph depicted has six vertices and eight directed
edges(B): V = {a, b, c, d, e, f },
E = {(a, c), (b, c), (b, f ), (c, e), (d, a), (d, e), (e, c), (e, f )}.

B:DIRECTED(DIGRAPH)
A: UNDIRECTED Weeks-10,11,12 4
GRAPHS
We disallow multiple edges between the same vertices of
an undirected graph
The number of edges |E| possible in an undirected graph with |V|
vertices and no loops:
0 ≤ |E| ≤ |V |(|V | − 1)/2.

The largest number of edges occurs in a graph with an edge
connecting each of its |V| vertices with all |V| − 1 other vertices.
Complete graph: a graph in which every pair of vertices is
connected by an edge, denoted by K|V|.
Dense graph: one with relatively few possible edges missing

Sparse graph: one with few edges relative to the number of its
vertices.
Denseness or sparseness may influence how to represent the
graph and the running time of an algorithm being designed.
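As a quick check of the bound above, a complete graph attains it; the helper below is an illustrative sketch, not part of the slides:

```python
def max_edges(n):
    """Maximum number of edges in a simple undirected graph on n vertices
    (no loops, no multiple edges): each vertex can be joined to the n - 1
    others, and each edge is counted twice in that product."""
    return n * (n - 1) // 2

# The complete graph K4 connects every pair of its 4 vertices:
k4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(len(k4_edges), max_edges(4))
```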
GRAPHS
Graph Representations
Two ways are used:
the adjacency matrix
adjacency lists.
The adjacency matrix: a graph with n vertices is an n × n boolean
matrix with one row and one column for each of the graph’s
vertices, in which the element in the ith row and the j th column
is equal to 1 if there is an edge from the i th vertex to the jth
vertex, and equal to 0 if there is no such edge. For an undirected
graph, the adjacency matrix is always symmetric, i.e., A[i, j] = A[j, i]
for every 0 ≤ i, j ≤ n − 1.
The adjacency lists: of a graph or a digraph is a collection of
linked lists, one for each vertex, that contain all the vertices
adjacent to the list’s vertex (i.e., all the vertices connected to it
by an edge). Usually, such lists start with a header identifying a
vertex for which the list is compiled.
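Both representations can be built directly from the vertex and edge sets of the six-vertex undirected graph listed earlier; the construction below is a Python sketch with illustrative names:

```python
V = ['a', 'b', 'c', 'd', 'e', 'f']
E = [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'f'),
     ('c', 'e'), ('d', 'e'), ('e', 'f')]

idx = {v: i for i, v in enumerate(V)}   # vertex name -> row/column index

# Adjacency matrix: n x n boolean matrix, symmetric for an undirected graph
A = [[0] * len(V) for _ in V]
for u, v in E:
    A[idx[u]][idx[v]] = 1
    A[idx[v]][idx[u]] = 1               # A[i][j] = A[j][i]

# Adjacency lists: one list per vertex holding all vertices adjacent to it
adj = {v: [] for v in V}
for u, v in E:
    adj[u].append(v)
    adj[v].append(u)

print(A[idx['a']])   # row of vertex a
print(adj['c'])      # neighbors of vertex c
```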
GRAPHS
Weighted graph (or weighted digraph): is a graph (or di-
graph) with numbers assigned to its edges.
These numbers are called weights or costs (benefits).
Relevance in real life: real-world applications such as finding
shortest paths in transportation or communication networks,
or the traveling salesman problem.

A weighted graph is represented by an adjacency matrix in which
element A[i, j] is replaced by the weight of the edge from the
ith to the jth vertex; if there is no such edge, the symbol ∞
is used instead.

Such a matrix is called the weight matrix or cost matrix.
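A weight (cost) matrix can be sketched the same way, with Python's float('inf') standing in for the ∞ symbol; the three-vertex graph and its weights below are made up for illustration:

```python
INF = float('inf')                      # stands for "no such edge"

V = ['a', 'b', 'c']
weighted_edges = [('a', 'b', 5), ('b', 'c', 2)]   # hypothetical weights

idx = {v: i for i, v in enumerate(V)}
W = [[INF] * len(V) for _ in V]
for u, v, w in weighted_edges:
    W[idx[u]][idx[v]] = w               # undirected: store the weight both ways
    W[idx[v]][idx[u]] = w

print(W)
```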

GRAPHS
A weighted graph, its weight matrix, and its adjacency lists.
GRAPHS: PATHS AND CYCLES
Path: a path from vertex u to vertex v of a graph G can be
defined as a sequence of adjacent (connected by an edge)
vertices that starts with u and ends with v.
Simple Path: all vertices of a path are distinct.
Length of a path: is the total number of vertices in the vertex
sequence defining the path minus 1, which is the same as the
number of edges in the path.
A directed path: is a sequence of vertices in which every
consecutive pair of the vertices is connected by an edge
directed from the vertex listed first to the vertex listed next.
Connected graph: for every pair of its vertices u and v there
is a path from u to v.
Cycle: is a path of a positive length that starts and ends at
the same vertex and does not traverse the same edge more
than once. For example, f, h, i, g, f is a cycle in a graph containing these vertices.
Acyclic graph: has no cycles
GRAPHS: PATHS AND CYCLES
A Hamiltonian cycle of a directed graph G = (V, E) is a simple
cycle that contains each vertex in V .
Determining whether a directed graph has a Hamiltonian cycle
is NP-complete.
A cut (S, V - S) of an undirected graph G = (V, E) is a partition of
V.
A cut in a connected graph is a subset E’ of edges such
that G \ E’ is not connected. Here, G \ E’ is a short-hand for (V,
E \ E’ ). If S is a set of nodes with ∅ ≠ S ≠ V , the set of edges
with exactly one endpoint in S forms a cut.
Weight of a cut is the number of edges crossing the cut.
A cut respects a set A of edges if no edge in A crosses the
cut.
An edge is a light edge crossing a cut if its weight is the
minimum of any edge crossing the cut.
More generally, we say that an edge is a light edge satisfying a
given property if its weight is the minimum of any edge
satisfying the property.
GRAPHS: PATHS AND CYCLES
A clique in an undirected graph G = <V, E> is a subset V′ ⊆ V of
vertices, each pair of which is connected by an edge in E.

In other words, a clique is a complete subgraph of G. The size


of a clique is the number of vertices it contains.

The clique problem is the optimization problem of finding a
clique of maximum size in the graph.

GRAPHS
Graph algorithms: Depth-first search.
A graph’s traversal begins at an arbitrary vertex by marking
it as visited.
Each iteration: the algorithm proceeds to an unvisited vertex
that is adjacent to the one it is currently in.
If there are several such vertices, a tie can be resolved
arbitrarily (e.g., by taking the largest, the smallest, or the
alphabetically first vertex).
This process continues until a dead end, a vertex with no
adjacent unvisited vertices, is encountered.
At a dead end, the algorithm backs up one edge to the vertex
it came from and tries to continue visiting unvisited vertices
from there. The algorithm eventually halts after backing up
to the starting vertex, with the latter being a dead end.
By then, all the vertices in the same connected component
as the starting vertex have been visited.
If unvisited vertices still remain, the depth-first search must
be restarted at any one of them.
GRAPHS
Graph algorithms: Depth-first search.

A graph; its traversal’s stack (the first number indicates the order in which a vertex is visited, i.e., pushed onto the stack; the second one indicates the order in which it becomes a dead end, i.e., is popped off the stack); and the DFS forest with tree and back edges shown with solid and dashed lines, respectively.
GRAPHS
Graph algorithms: Depth-first search.
Efficiency
It is efficient in that it takes just the time proportional to the
size of the data structure used for representing the graph in
question.
Thus, for the adjacency matrix representation, the traversal
time is in Θ(|V|²);
for the adjacency list representation, it is in Θ(|V| + |E|), where
|V| and |E| are the number of the graph’s vertices and edges,
respectively.

In the implementation it is convenient to use a stack

GRAPHS
Graph algorithms: Depth-first search.
DFS(G)
//Implements a depth-first search traversal of a given graph
//Input: Graph G = <V, E>
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
    if v is marked with 0 then dfs(v)
//----------------------------------------------------------------
dfs(v)
//visits recursively all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are encountered
//via global variable count
count ← count + 1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0 then dfs(w)
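The pseudocode can be mirrored in runnable Python; the adjacency-list graph used here is the six-vertex undirected example from the earlier slide:

```python
def dfs(graph):
    """Mark the vertices of `graph` (dict: vertex -> list of neighbors)
    with consecutive integers in the order DFS first encounters them."""
    order = {v: 0 for v in graph}       # 0 marks a vertex as "unvisited"
    count = 0

    def visit(v):
        nonlocal count
        count += 1
        order[v] = count
        for w in graph[v]:
            if order[w] == 0:
                visit(w)

    for v in graph:                     # restart DFS in every component
        if order[v] == 0:
            visit(v)
    return order

g = {'a': ['c', 'd'], 'b': ['c', 'f'], 'c': ['a', 'b', 'e'],
     'd': ['a', 'e'], 'e': ['c', 'd', 'f'], 'f': ['b', 'e']}
print(dfs(g))
```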

GRAPHS
Graph algorithms: Breadth-first search
● Proceeds by visiting first all the vertices that are adjacent
to a starting vertex
● Then visit all unvisited vertices two edges apart from it,
and so on, until all the vertices in the same connected
component as the starting vertex are visited.
● If there still remain unvisited vertices, the algorithm has to
be restarted at an arbitrary vertex of another connected
component of the graph.
● It is convenient to use a queue to trace the operation of
breadth-first search.
● The queue is initialized with the traversal’s starting vertex,
which is marked as visited.
● On each iteration, the algorithm identifies all unvisited
vertices that are adjacent to the front vertex, marks them as
visited, and adds them to the queue; after that, the front
vertex is removed from the queue.
GRAPHS
Graph algorithms: Breadth-first search

A graph; its traversal queue, with numbers indicating the order in which the vertices are visited, i.e., added to (and removed from) the queue; and the BFS forest with tree and cross edges shown with solid and dotted lines, respectively.
GRAPHS
Graph algorithms: Breadth-first search
BFS(G)
//Implements a breadth-first search traversal of a given graph
//Input: Graph G = <V, E>
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are visited by the BFS traversal
mark each vertex in V with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
    if v is marked with 0 then bfs(v)
//------------------------------------------------------------
bfs(v)
//visits all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are visited
//via global variable count
count ← count + 1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count ← count + 1; mark w with count
            add w to the queue
    remove the front vertex from the queue
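A runnable Python version of the pseudocode, with collections.deque serving as the traversal queue; the example graph is the same six-vertex one used for DFS:

```python
from collections import deque

def bfs(graph):
    """Mark the vertices of `graph` (dict: vertex -> list of neighbors)
    with consecutive integers in the order BFS visits them."""
    order = {v: 0 for v in graph}       # 0 marks a vertex as "unvisited"
    count = 0
    for s in graph:                     # restart BFS in every component
        if order[s] == 0:
            count += 1
            order[s] = count
            queue = deque([s])
            while queue:
                front = queue.popleft()
                for w in graph[front]:
                    if order[w] == 0:   # mark neighbors as they are enqueued
                        count += 1
                        order[w] = count
                        queue.append(w)
    return order

g = {'a': ['c', 'd'], 'b': ['c', 'f'], 'c': ['a', 'b', 'e'],
     'd': ['a', 'e'], 'e': ['c', 'd', 'f'], 'f': ['b', 'e']}
print(bfs(g))
```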
GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
The single-source shortest-paths problem:
For a given vertex called the source in a weighted connected graph, find
shortest paths to all its other vertices.

We are not interested here in a single shortest path that starts at the
source and visits all the other vertices. That would be a version of the
traveling salesman problem.

The single-source shortest-paths problem asks for a family of paths,


each leading from the source to a different vertex in the graph, though
some paths may, of course, have edges in common.

It has many practical applications such as transportation planning and


packet routing in communication networks, including the Internet;
finding shortest paths in social networks, speech recognition, document
formatting, robotics, compilers, airline crew scheduling; entertainment:
pathfinding in video games and finding best solutions to puzzles.
GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
The single-source shortest-paths problem:
● This problem could also be solved using Floyd’s algorithm
but we consider Dijkstra’s algorithm that is applicable to
undirected and directed graphs with nonnegative weights
only.
● Dijkstra’s algorithm finds the shortest paths to a graph’s
vertices in order of their distance from a given source.
● First, it finds the shortest path from the source to a vertex
nearest to it, then to a second nearest, and so on.
● In general, before its ith iteration commences, the algorithm
has already identified the shortest paths to i − 1 other
vertices nearest to the source. These vertices, the source,
and the edges of the shortest paths leading to them from
the source form a subtree Ti of the given graph.
GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
The single-source shortest-paths problem:
● Edge weights are nonnegative, so the next vertex nearest to the
source can be found among the vertices adjacent to the vertices
of Ti.
● The set of vertices adjacent to the vertices in Ti can be referred
to as “fringe vertices”; they are candidates from which Dijkstra’s
algorithm selects the next vertex nearest to the source.
● To identify the ith nearest vertex, the algorithm computes, for
every fringe vertex u, the sum of the distance to the nearest tree
vertex v (given by the weight of the edge (v, u)) and the length dv
of the shortest path from the source to v (previously determined
by the algorithm), and then selects the vertex with the smallest
such sum.
● Comparing the lengths of such special paths is the central
insight of Dijkstra’s algorithm.
GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
The single-source shortest-paths problem:
● Label each vertex with two labels:
● The numeric label d indicates the length of the shortest
path from the source to this vertex found by the algorithm
so far; when a vertex is added to the tree, d indicates the
length of the shortest path from the source to that vertex.
● The other label indicates the name of the next-to-last vertex
on such a path, i.e., the parent of the vertex in the tree
being constructed.
● With such labeling, finding the next nearest vertex u*
becomes a simple task of finding a fringe vertex with the
smallest d value.

GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
The single-source shortest-paths problem:
Ties are broken arbitrarily.
After we have identified a vertex u* to be added to the tree,
we need to perform two operations:
1) Move u* from the fringe to the set of tree vertices.
2) For each remaining fringe vertex u that is connected
to u* by an edge of weight w(u*, u) such that du* +
w(u*, u) < du, update the labels of u to u* and du* +
w(u*, u), respectively.

GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm

Idea of Dijkstra’s algorithm: the subtree of the shortest paths already found is shown in bold. The next vertex nearest to the source v0, u*, is selected by comparing the lengths of the subtree’s paths increased by the distances to vertices adjacent to the subtree’s vertices.
GRAPHS
Directed Graphs:Shortest Paths
Dijkstra’s Algorithm
Declare all nodes unscanned and initialize d and parent
while there is an unscanned node with tentative distance < +∞ do
    u := the unscanned node with minimal tentative distance
    relax all edges (u, v) out of u and declare u scanned
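The scanning loop above can be sketched in Python with a binary heap as the priority queue of tentative distances; the five-vertex weighted graph is a made-up example with nonnegative weights:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances and parent labels from `source`.
    `graph`: dict mapping vertex -> list of (neighbor, weight) pairs,
    all weights nonnegative."""
    dist = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]                  # (tentative distance, vertex)
    scanned = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in scanned:
            continue                    # stale queue entry; u already scanned
        scanned.add(u)
        for v, w in graph[u]:           # relax all edges (u, v) out of u
            if d + w < dist[v]:
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, parent

g = {'a': [('b', 3), ('d', 7)],
     'b': [('a', 3), ('c', 4), ('d', 2)],
     'c': [('b', 4), ('d', 5), ('e', 6)],
     'd': [('a', 7), ('b', 2), ('c', 5), ('e', 4)],
     'e': [('c', 6), ('d', 4)]}
dist, parent = dijkstra(g, 'a')
print(dist)
```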

GRAPHS
Directed Graphs:Minimum Spanning Trees
Consider a connected undirected graph G = (V, E) with real
edge costs c: E → R+ .
A minimum spanning tree (MST) of G is defined by:
a set T ⊆ E of edges such that the graph (V, T) is a tree, and for which
c(T) := ∑e∈T c(e) is minimized.

For example, the nodes can be islands, the edges are


possible ferry connections, and the costs are the costs of
opening a connection.

GRAPHS
Directed Graphs:Minimum Spanning Trees
Minimum spanning trees (MSTs) are perhaps the simplest
variant of an important family of problems known as network
design problems.
MSTs show up in many seemingly unrelated problems such as
clustering, finding paths that minimize the maximum edge cost
used, or finding approximations for harder problems.

The Jarník-Prim algorithm grows an MST starting from a single node.
Kruskal’s algorithm grows many trees in unrelated parts of the
graph and merges them into larger and larger trees.

GRAPHS
Directed Graphs:Minimum Spanning Trees
The Jarník-Prim (JP) algorithm for MSTs
● Starting from an (arbitrary) source node s, the JP algorithm
grows a minimum spanning tree by adding one node after the other.
● At any iteration, S is the set of nodes already added to the
tree and the cut E′ is the set of edges with exactly one
endpoint in S.
● A minimum-cost edge leaving S is added to the tree in
every iteration.
● The main challenge is to find this edge efficiently. To this
end, the algorithm maintains the shortest connection
between any node v ∈ V \ S and S in a priority queue Q.
● The smallest element in Q gives the desired edge. When a
node is added to S, its incident edges are checked to see
whether they yield improved connections to nodes in V \ S.
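The steps above can be sketched in Python, with the priority queue Q holding candidate connections into the tree; the four-node weighted graph is illustrative:

```python
import heapq

def jarnik_prim(graph, s):
    """Grow an MST from start node s. `graph`: dict mapping node ->
    list of (neighbor, cost) pairs; undirected, so every edge appears
    in both adjacency lists."""
    in_tree = {s}                       # the set S of tree nodes
    mst_edges = []
    pq = [(c, s, v) for v, c in graph[s]]   # (cost, tree node, outside node)
    heapq.heapify(pq)
    while pq and len(in_tree) < len(graph):
        c, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue                    # stale entry: v has joined S already
        in_tree.add(v)
        mst_edges.append((u, v, c))
        for w, cw in graph[v]:          # check incident edges of the new node
            if w not in in_tree:
                heapq.heappush(pq, (cw, v, w))
    return mst_edges

g = {'a': [('b', 1), ('c', 4)],
     'b': [('a', 1), ('c', 2), ('d', 6)],
     'c': [('a', 4), ('b', 2), ('d', 3)],
     'd': [('b', 6), ('c', 3)]}
print(jarnik_prim(g, 'a'))
```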
GRAPHS
Directed Graphs:Minimum Spanning Trees
The Jarník-Prim (JP) algorithm for MSTs
● When node u is added to S and an incident edge e = (u, v)
is inspected, the algorithm needs to know whether v ∈ S. A
bit-vector could be used to encode this information.
● When all edge costs are positive, we may reuse the d-array
to encode this information.
● For any node v, d[v] = 0 encodes v ∈ S and d[v] > 0
encodes v ∉ S.
● This saves space and a comparison in the innermost loop.
● Observe that c(e) < d[v] is only true if d[v] > 0, i.e., v ∉ S,
and e is then an improved connection for v to S.
GRAPHS
Directed Graphs:Minimum Spanning Trees
The Jarník-Prim (JP) algorithm for MSTs

A sequence of cuts (dotted lines) corresponding to an execution of the Jarník-Prim algorithm with starting node a. The edges (a, c), (c, b), and (b, d) are added to the MST.
GRAPHS
Directed Graphs:Minimum Spanning Trees
The Jarník-Prim (JP) algorithm for MSTs

Application of Prim’s algorithm. The parenthesized labels of a vertex in the middle column indicate the nearest tree vertex and edge weight; selected vertices and edges are shown in bold.
GRAPHS
Directed Graphs:Minimum Spanning Trees
KRUSKAL’S ALGORITHM
● This is an alternative MST algorithm.
● It needs no sophisticated graph representation; it already
works when the graph is represented by its list of edges.
● For sparse graphs with m = O(n), its running time is
competitive with the JP algorithm.
● It scans over the edges of G in order of increasing cost and
maintains a partial MST T; T is empty initially.
● The algorithm maintains the invariant that T can be
extended to an MST.
● When an edge e is considered, it is discarded or added to T.
● The decision depends on the cycle or cut property: the
endpoints of e either belong to the same connected
component of (V, T) or not.
GRAPHS
Directed Graphs:Minimum Spanning Trees
Kruskal’s Algorithm
It is essential that edges are considered in order of
increasing cost. If the endpoints of e belong to the same
connected component of (V, T), adding e would close a cycle
on which e is a heaviest edge, so e can be discarded by the
cycle property. Otherwise, e can be added to T by the cut
property; in either case the invariant is maintained.
Kruskal’s algorithm can be implemented very efficiently so
that the main cost factor is sorting the edges. This takes
time O(m log m) if we use an efficient comparison-based
sorting algorithm.
The constant factor involved is rather small, so for m = O(n)
it may be possible to do better than the O(m + n log n) JP
algorithm.
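Kruskal’s scan can be sketched in Python with a small union-find structure deciding whether an edge’s endpoints already lie in the same connected component of (V, T); the edge list is a made-up example:

```python
def kruskal(n, edges):
    """MST by Kruskal's algorithm. `n`: number of nodes labeled 0..n-1;
    `edges`: list of (cost, u, v) triples."""
    parent = list(range(n))             # union-find forest

    def find(x):                        # root of x, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for c, u, v in sorted(edges):       # scan edges in order of increasing cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: add by cut property
            parent[ru] = rv
            mst.append((c, u, v))
        # else: same component, so discard e by the cycle property
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 2), (6, 1, 3)]
print(kruskal(4, edges))
```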
GRAPHS
Directed Graphs:Minimum Spanning Trees
Kruskal’s Algorithm

Kruskal’s algorithm first proves that (b, d) and (b, c) are MST edges using the cut property. Then (c, d) is excluded because it is the heaviest edge on the cycle ⟨b, c, d⟩, and, finally, (a, b) completes the MST.
GRAPHS
Directed Graphs:Minimum Spanning Trees
Kruskal’s Algorithm

Application of Kruskal’s algorithm. Selected edges are shown in bold.
GRAPHS
Directed Graphs:Minimum Spanning Trees
Kruskal’s Algorithm

Shaded edges belong to the forest A being grown. Edges are considered in sorted order by weight. An arrow points to the edge under consideration at each step of the algorithm; if the edge joins two distinct trees in the forest, it is added and the two trees are merged.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS
We look at the important problem of maximizing the flow of a
material through a transportation network (pipeline system,
communication system, electrical distribution system, and
so on).
Flow Network (or network)
We represent the transportation network by a connected
weighted digraph with n vertices numbered from 1 to n and a
set of edges E, with the following properties:
1)It contains exactly one vertex with no entering edges; this
vertex is called the source and assumed to be numbered 1.
2)It contains exactly one vertex with no leaving edges; this
vertex is called the sink and assumed to be numbered n.
3)The weight uij of each directed edge (i, j ) is a positive
integer, called the edge capacity. (This number represents
the upper bound on the amount of the material that can be
sent from i to j through a link represented by this edge.)
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

A network graph. The vertex numbers are vertex “names”; the edge numbers are edge capacities.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE FLOW-CONSERVATION REQUIREMENT

● It is assumed that the source and the sink are the only
source and destination of the material, respectively.
● All the other vertices can serve only as points where a flow
can be redirected without consuming or adding any amount
of the material.
● In other words, the total amount of the material entering
an intermediate vertex must be equal to the total amount
of the material leaving the vertex.

Let xij denote the amount sent through edge (i, j). Then for any
intermediate vertex i, the flow-conservation requirement can be
expressed by the following equality constraint (total outflow from
the source = total inflow into the sink = value of the flow):
∑j: (j,i)∈E xji = ∑j: (i,j)∈E xij   for i = 2, 3, . . . , n − 1.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


This is stated formally as an optimization problem:

maximize v = ∑j: (1,j)∈E x1j
subject to
∑j: (j,i)∈E xji − ∑j: (i,j)∈E xij = 0   for i = 2, 3, . . . , n − 1,
0 ≤ xij ≤ uij   for every edge (i, j) ∈ E.

This linear programming problem can be solved by the
simplex method or by another algorithm for general linear
programming problems. However, the special structure of the
problem can be exploited to design faster algorithms. In
particular, it is quite natural to employ the iterative-improvement idea.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


FORD-FULKERSON METHOD (AUGMENTING-PATH METHOD)
● We start with the zero flow (i.e., set xij = 0 for every
edge (i, j) in the network).
● Then, on each iteration, we try to find a path from
source to sink along which some additional flow can be
sent. Such a path is called flow augmenting.
● If a flow-augmenting path is found, we adjust the flow
along the edges of this path to get a flow of an increased
value and try to find an augmenting path for the new flow.
● If no flow-augmenting path can be found, we conclude
that the current flow is optimal.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


FORD-FULKERSON METHOD (AUGMENTING-PATH METHOD)
To find a flow-augmenting path for a flow x, we need to
consider paths from source to sink in the underlying
undirected graph in which any two consecutive vertices i, j
are either:
i. (forward edges): connected by a directed edge from i to j
with some positive unused capacity rij = uij − xij (so that
we can increase the flow through that edge by up to rij
units), or
ii. (backward edges): connected by a directed edge from j
to i with some positive flow xji (so that we can decrease
the flow through that edge by up to xji units).
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


FORD-FULKERSON METHOD (AUGMENTING-PATH METHOD)

Illustration of the augmenting-path method. Flow-augmenting paths are shown in bold. The flow amounts and edge capacities are indicated by the numbers before and after the slash, respectively.
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


FORD-FULKERSON METHOD (AUGMENTING-PATH METHOD)
ALGORITHM ShortestAugmentingPath(G)
//Implements the shortest-augmenting-path algorithm
//Input: A network with single source 1, single sink n, and
//positive integer capacities uij on its edges (i, j)
//Output: A maximum flow x
assign xij = 0 to every edge (i, j) in the network
label the source with ∞, − and add the source to the empty queue Q
while not Empty(Q) do
    i ← Front(Q); Dequeue(Q)
    for every edge from i to j do //forward edges
        if j is unlabeled
            rij ← uij − xij
            if rij > 0
                lj ← min{li, rij}; label j with lj, i+
                Enqueue(Q, j)
    for every edge from j to i do //backward edges
        if j is unlabeled
            if xji > 0
                lj ← min{li, xji}; label j with lj, i−
                Enqueue(Q, j)
DIRECTED GRAPHS: NETWORK FLOW GRAPHS

THE MAXIMUM FLOW PROBLEM


FORD-FULKERSON METHOD (AUGMENTING-PATH METHOD)
ALGORITHM ShortestAugmentingPath(G) (continued)

if the sink has been labeled
    //augment along the augmenting path found
    j ← n //start at the sink and move backwards using second labels
    while j ≠ 1 //the source hasn’t been reached
        if the second label of vertex j is i+
            xij ← xij + ln
        else //the second label of vertex j is i−
            xji ← xji − ln
        j ← i; i ← the vertex indicated by i’s second label
    erase all vertex labels except the ones of the source
    reinitialize Q with the source
return x //the current flow is maximum
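The method can be sketched in runnable Python using BFS to find a shortest augmenting path on each iteration (the Edmonds-Karp realization of Ford-Fulkerson); the capacity map below is a made-up four-vertex network with source 1 and sink 4:

```python
from collections import deque

def max_flow(capacity, n, source, sink):
    """Ford-Fulkerson with shortest (BFS) augmenting paths.
    `capacity`: dict (i, j) -> positive integer capacity; vertices 1..n."""
    flow = {e: 0 for e in capacity}
    adj = {v: set() for v in range(1, n + 1)}
    for i, j in capacity:
        adj[i].add(j)                   # forward edge direction
        adj[j].add(i)                   # backward direction, for canceling flow

    def residual(i, j):
        # unused forward capacity plus cancellable backward flow
        r = 0
        if (i, j) in capacity:
            r += capacity[i, j] - flow[i, j]
        if (j, i) in capacity:
            r += flow[j, i]
        return r

    value = 0
    while True:
        # BFS for a shortest flow-augmenting path in the residual network
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            i = queue.popleft()
            for j in adj[i]:
                if j not in parent and residual(i, j) > 0:
                    parent[j] = i
                    queue.append(j)
        if sink not in parent:
            return value, flow          # no augmenting path: flow is maximum
        path, j = [], sink              # recover the path sink -> source
        while parent[j] is not None:
            path.append((parent[j], j))
            j = parent[j]
        delta = min(residual(i, j) for i, j in path)   # bottleneck capacity
        for i, j in path:               # augment: use forward capacity first,
            push = 0                    # then cancel backward flow if needed
            if (i, j) in capacity:
                push = min(delta, capacity[i, j] - flow[i, j])
                flow[i, j] += push
            if push < delta:
                flow[j, i] -= delta - push
        value += delta

caps = {(1, 2): 3, (1, 3): 2, (2, 3): 1, (2, 4): 2, (3, 4): 3}
best, final = max_flow(caps, 4, 1, 4)
print(best)
```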
GENERAL PROBLEM CLASSES
POLYNOMIAL (P) PROBLEM CLASS
A decision problem is a problem that can be posed as a yes-
no question of the input values.
An example of a decision problem is deciding whether a
given natural number is prime.
Another is the problem "given two numbers x and y, does x
evenly divide y?".
The answer is either 'yes' or 'no' depending upon the values
of x and y.

An algorithm is said to be of polynomial time if its running
time is upper-bounded by a polynomial expression in the
size of the input for the algorithm, i.e., T(n) = O(n^k) for some
positive constant k.

GENERAL PROBLEM CLASSES
POLYNOMIAL (P) PROBLEM CLASS
Problems for which a deterministic polynomial time
algorithm exists belong to the complexity class P, which is
central in the field of computational complexity theory.
Polynomial time can be said to be "tractable", "feasible",
"efficient", or "fast".
Some examples of polynomial time algorithms:
● Selection sort algorithm on n integers performs in O(n²);
● Basic arithmetic operations (addition, subtraction, multiplication,
division, and comparison);
● Maximum matchings in graphs;
● Computing the product and the greatest common divisor of two integers;
● Sorting a list; searching for a pattern in a text string; checking
connectivity and acyclicity of a graph;
● Finding a minimum spanning tree;
● Finding shortest paths in a weighted graph.
GENERAL PROBLEM CLASSES
POLYNOMIAL (P) PROBLEM CLASS
Class P is a class of decision problems that can be solved in
polynomial time by (deterministic) algorithms. This class of
problems is called polynomial.

An algorithm A runs in polynomial time or is a polynomial


time algorithm if there is a polynomial p(n) such that its
execution time on inputs of size n is O(p(n)).

A problem can be solved in polynomial time if there is a


polynomial time algorithm solving it. We equate efficiently
solvable with polynomial time solvable.

GENERAL PROBLEM CLASSES
POLYNOMIAL (P) PROBLEM CLASS
Not every problem can be solved in polynomial time.
Problems with exponentially large output cannot be solved
in polynomial time, only in exponential time; examples are
generating all subsets of a given set or finding all the
permutations of n distinct items. There are many important
problems that are not decision problems in their most natural
formulation but can be reduced to a series of decision
problems that are easier to study. An example is finding the
minimum number of colors needed to color the vertices of a
graph so that no two adjacent vertices are colored the same color.
Some decision problems cannot be solved at all by any algorithm.
Such problems are called undecidable, as opposed to decidable
problems that can be solved by an algorithm. An example is the
halting problem: given a computer program and an input to it,
determine whether the program will halt on that input or continue
working indefinitely on it.
GENERAL PROBLEM CLASSES
POLYNOMIAL (P) PROBLEM CLASS
More problems that cannot be solved in polynomial time
● Hamiltonian circuit problem: Determine whether a given graph has a Hamiltonian circuit, a path that starts and ends at the same vertex and passes through all the other vertices exactly once.
● Traveling salesman problem: Find the shortest tour through n cities with known positive integer distances between them (find the shortest Hamiltonian circuit in a complete graph with positive integer weights).
● Knapsack problem: Find the most valuable subset of n items of given positive integer weights and values that fit into a knapsack of a given positive integer capacity.
● Partition problem: Given n positive integers, determine whether it is possible to partition them into two disjoint subsets with the same sum.
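The partition problem above can be decided by exhaustive search; a minimal sketch (the name `can_partition` is illustrative), which runs in exponential time because it may try every subset:

```python
from itertools import combinations

def can_partition(nums):
    """Decide whether nums can be split into two disjoint subsets with
    the same sum. Brute force: tries every subset, O(2**n) in the worst case."""
    total = sum(nums)
    if total % 2:                 # an odd total can never split evenly
        return False
    target = total // 2
    for k in range(len(nums) + 1):
        for subset in combinations(nums, k):
            if sum(subset) == target:
                return True
    return False
```

For example, [1, 3, 4, 5, 7] splits into {3, 7} and {1, 4, 5}, both summing to 10, while [1, 2, 5] admits no equal split.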
More problems for which no polynomial-time algorithm is known
● Bin-packing problem: Given n items whose sizes are positive rational numbers not larger than 1, put them into the smallest number of bins of size 1.
● Graph-coloring problem: For a given graph, find its chromatic number, which is the smallest number of colors that need to be assigned to the graph's vertices so that no two adjacent vertices are assigned the same color.
● Integer linear programming problem: Find the maximum (or minimum) value of a linear function of several integer-valued variables subject to a finite set of constraints in the form of linear equalities and inequalities.

GENERAL PROBLEM CLASSES
NON DETERMINISTIC POLYNOMIAL (NP) PROBLEM CLASS

A nondeterministic algorithm solves a decision problem if and only if for every yes instance of the problem it returns yes on some execution. It must be capable of "guessing" a solution at least once and of verifying its validity, and it must never return yes on a no instance.
A nondeterministic algorithm is a two-stage procedure that takes as its input an instance I of a decision problem and does the following:
Nondeterministic ("guessing") stage: an arbitrary string S is generated that can be thought of as a candidate solution to the given instance I (but may be complete gibberish as well).
Deterministic ("verification") stage: a deterministic algorithm takes both I and S as its input and outputs yes if S represents a solution to instance I; if S is not a solution, this stage either returns no or does not halt at all.
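The verification stage can be sketched concretely for the Hamiltonian circuit problem: the "guessed" string S is a sequence of vertices, and the deterministic check below runs in polynomial time (a hedged illustration; function and parameter names are assumptions, not from the slides).

```python
def verify_hamiltonian_circuit(n, edges, candidate):
    """Verification stage: given an instance (graph on vertices 0..n-1 with
    the given undirected edges) and a candidate vertex sequence, decide in
    polynomial time whether the candidate is a Hamiltonian circuit."""
    edge_set = {frozenset(e) for e in edges}
    # The candidate must visit every vertex exactly once.
    if sorted(candidate) != list(range(n)):
        return False
    # Close the tour and check that every consecutive pair is an edge.
    closed = list(candidate) + [candidate[0]]
    return all(frozenset((u, v)) in edge_set
               for u, v in zip(closed, closed[1:]))
```

On the 4-cycle graph with edges (0,1), (1,2), (2,3), (3,0), the candidate [0, 1, 2, 3] verifies, while [0, 2, 1, 3] is rejected because 0-2 is not an edge.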
A nondeterministic algorithm is said to be nondeterministic polynomial if the time efficiency of its verification stage is polynomial.

Class NP is the class of decision problems that can be solved by nondeterministic polynomial algorithms. This class of problems is called nondeterministic polynomial.

Most decision problems are in NP. This class includes all the problems in P: P ⊆ NP.
Examples: the Hamiltonian circuit problem, the partition problem, and the decision versions of the traveling salesman, knapsack, and graph-coloring problems, among many hundreds of other difficult combinatorial optimization problems.
Non-example: the halting problem.
GENERAL PROBLEM CLASSES
NP-COMPLETE PROBLEM CLASS
An NP-complete problem is a problem in NP that is as difficult as any other problem in this class because, by definition, any other problem in NP can be reduced to it in polynomial time.
A decision problem D1 is said to be polynomially reducible to a decision problem D2 if there exists a function t that transforms instances of D1 to instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no instances of D2;
2. t is computable by a polynomial-time algorithm.

This definition immediately implies that if a problem D1 is polynomially reducible to some problem D2 that can be solved in polynomial time, then problem D1 can also be solved in polynomial time.
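One standard transformation t that fits this definition is the reduction of the Hamiltonian circuit problem to the decision version of the traveling salesman problem; a sketch (assuming the usual weight-1/weight-2 construction; names are illustrative):

```python
def hc_to_tsp(n, edges):
    """Polynomial-time transformation t: a Hamiltonian circuit instance
    (graph on vertices 0..n-1) becomes a TSP decision instance, i.e. a
    complete weighted graph plus a bound. Original edges get weight 1,
    non-edges weight 2, so the graph has a Hamiltonian circuit if and
    only if the TSP instance has a tour of total length <= n."""
    edge_set = {frozenset(e) for e in edges}
    weights = {}
    for u in range(n):
        for v in range(u + 1, n):
            weights[(u, v)] = 1 if frozenset((u, v)) in edge_set else 2
    bound = n
    return weights, bound
```

The transformation examines each of the O(n^2) vertex pairs once, so it is clearly computable in polynomial time.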
A decision problem D is said to be NP-complete if:
1) it belongs to class NP;
2) every problem in NP is polynomially reducible to D.

The fact that closely related decision problems are polynomially reducible to each other is not very surprising. Example: the Hamiltonian circuit problem is polynomially reducible to the decision version of the traveling salesman problem (verifying this is left as an exercise).

NP-complete problems include: Hamiltonian circuit; traveling salesman; partition; bin packing; and graph coloring.
Figure: the notion of an NP-complete problem. Polynomial-time reductions of NP problems to an NP-complete problem are shown by arrows.
Figure: proving NP-completeness by reduction.

NP-completeness implies that if there exists a deterministic polynomial-time algorithm for just one NP-complete problem, then every problem in NP can be solved in polynomial time by a deterministic algorithm, and hence P = NP.

But is P = NP? (an exercise to ponder)
GENERAL PROBLEM CLASSES
SATISFIABILITY
Boolean Satisfiability (SAT) Problem:
Given a boolean expression in conjunctive form, decide
whether it has a satisfying assignment. A boolean expression
in conjunctive form (CF) is a conjunction C1 ∧ C2 ∧ . . . ∧ Ck
of clauses.
A clause is a disjunction l1 ∨ l2 ∨ . . . ∨ lh of literals, and a literal is a variable or a negated variable. So x1 ∨ ¬x3 ∨ ¬x9 is a clause.

GENERAL PROBLEM CLASSES
SATISFIABILITY: CNF = CONJUNCTIVE NORMAL FORM
2-CNF satisfiability vs. 3-CNF satisfiability:
● A boolean formula contains variables whose values are 0 or 1; boolean connectives such as ∧ (AND), ∨ (OR), and ¬ (NOT); and parentheses. A boolean formula is satisfiable if there exists some assignment of the values 0 and 1 to its variables that causes it to evaluate to 1.
● A boolean formula is in k-conjunctive normal form, or k-CNF, if it is the AND of clauses of ORs of exactly k variables or their negations. For example, the boolean formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3) is in 2-CNF. It has the satisfying assignment x1 = 1, x2 = 0, x3 = 1.
● We can determine in polynomial time whether a 2-CNF formula is satisfiable; however, determining whether a 3-CNF formula is satisfiable is NP-complete.
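A brute-force satisfiability test (exponential, as expected for the general problem) can check the 2-CNF example above; a sketch with literals encoded as signed integers, +i for xi and -i for ¬xi (the encoding and the name `is_satisfiable` are illustrative):

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT test: try all 2**n_vars truth assignments.
    A clause is a list of literals; literal +i means x_i, -i means NOT x_i."""
    for assignment in product([False, True], repeat=n_vars):
        # The formula holds if every clause contains some true literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 v ~x2) ^ (~x1 v x3) ^ (~x2 v ~x3) from the slide above:
formula = [[1, -2], [-1, 3], [-2, -3]]
```

Running `is_satisfiable(formula, 3)` confirms the formula is satisfiable, consistent with the assignment x1 = 1, x2 = 0, x3 = 1.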

3-CNF satisfiability example (three literals per clause):
(x1 ∨ x2 ∨ y1) ∧ (x3 ∨ y2 ∨ ¬y1) ∧ … ∧ (xn-2 ∨ yn-3 ∨ ¬yn-4) ∧ (xn-1 ∨ xn ∨ ¬yn-3)

Look for values of x1, x2, …, xn and y1, y2, …, yn that make the expression true. This is an NP problem (in fact, NP-complete).

Problems to which 3-CNF satisfiability is polynomially reducible are also NP-complete; these include:
Clique problem: given an undirected graph and an integer k, decide whether the graph contains a complete subgraph (a clique) on k nodes;
Knapsack problem;
Traveling salesman problem;
Graph coloring.
Exercise: find out more about these problems.
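The clique problem admits an obvious brute-force decision procedure; a sketch (exponential in general, since it tries every k-subset of vertices; names are illustrative):

```python
from itertools import combinations

def has_clique(n, edges, k):
    """Decide whether the graph on vertices 0..n-1 with the given undirected
    edges contains a clique of size k. Brute force over all C(n, k) vertex
    subsets; each subset is a clique iff all its pairs are edges."""
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset((u, v)) in edge_set
                   for u, v in combinations(subset, 2))
               for subset in combinations(range(n), k))
```

A triangle (edges 0-1, 1-2, 0-2) contains a 3-clique; a path on three vertices does not.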

Figure: the reduction algorithm begins with an instance of 3-CNF-SAT. Let D = C1 ∧ C2 ∧ … ∧ Ck be a boolean formula in 3-CNF with k clauses.
GENERAL PROBLEM CLASSES
NP-HARD PROBLEM CLASS
● A problem is NP-hard if an algorithm for solving it can be translated into one for solving any NP (nondeterministic polynomial time) problem. NP-hard therefore means "at least as hard as any NP problem," although it might, in fact, be harder.
● A problem is said to be NP-hard if everything in NP can be
transformed in polynomial time into it, and a problem is NP-
complete if it is both in NP and NP-hard.
● The NP-complete problems represent the hardest problems
in NP.
● There are no known polynomial-time algorithms for these
problems, and there are serious theoretical reasons to
believe that such algorithms do not exist.

Euler diagram for the P, NP, NP-complete, and NP-hard sets of problems. The left side is valid under the assumption that P ≠ NP, while the right side is valid under the assumption that P = NP (except that the empty language and its complement are never NP-complete, and in general, not every problem in P or NP is NP-complete).
CSC 311 DESIGN AND ANALYSIS OF ALGORITHMS
EXERCISES
(1) Define the terms: graph; vertex; incidence; directed graph; undirected graph; number of edges in a graph; complete graph; dense graph; sparse graph.
(2) Describe how to represent a graph. Write a program.
(3) Describe the adjacency matrix and the adjacency list.
(4) Describe a weighted graph and its relevance in real life.
(5) Describe the cost matrix.
(6) Describe paths and cycles in graphs.
(7) Describe a Hamiltonian cycle.
(8) Describe and implement the depth-first search algorithm.
(9) Describe and implement the breadth-first search algorithm.
(10) Describe and implement Dijkstra's algorithm for shortest paths.
(11) Describe the minimum spanning tree (MST) concept.
(12) Describe and implement Prim's algorithm for MST.
(13) Describe and implement Kruskal's algorithm for MST.
(14) Describe the notion of flow networks.
(15) Describe and implement the Ford-Fulkerson method.
(16) Discuss the P, NP, and NP-complete problem classes, giving examples of problems in each class.
(17) Discuss satisfiability problems.
CSC 311 DESIGN AND ANALYSIS OF ALGORITHMS
REFERENCES
1) Levitin, Anany (2012). Introduction to the Design & Analysis of Algorithms, 3rd ed. Pearson. ISBN-13: 978-0-13-231681-1, ISBN-10: 0-13-231681-1.
2) Kurt Mehlhorn and Peter Sanders (2007). Algorithms and Data Structures: The Basic Toolbox.
3) Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2009). Introduction to Algorithms, Third Edition.
4) Dasgupta, C. H. Papadimitriou, and U. V. Vazirani (2006). Algorithms.
5) Aho, Hopcroft, and Ullman. The Design and Analysis of Computer Algorithms.
6) Internet resources.