Algorithm Design Theory

Algorithm

An algorithm is the sequence of steps a program follows to solve a problem.

Analysis of Algorithm
Understanding the time and storage taken by a program is called analysis of algorithms.
The memory consumed is called the space complexity.
The time consumed is called the time complexity.

Omega Notation, Ω
Omega Notation refers to the best case: the least time a program can take (a lower bound).

Big O Notation, O
Big O Notation refers to the worst case: the maximum time a program can take (an upper bound).
Theta Notation, Θ
Theta Notation refers to the average time taken by a program; it lies between the Omega and Big O bounds.

Divide and Conquer technique


In the Divide and Conquer technique, we divide our problem into subproblems and solve them. After solving all the subproblems, we combine their solutions to get the answer.
Example: - Merge Sort, Binary Search
Algorithm:-
1. Break the problem into subproblems.
2. Solve each of the subproblems.
3. Combine the solutions of the subproblems to get the solution of the whole problem.
Greedy Technique
In the Greedy technique, we select the best available option at each step, ignoring future outcomes, to build the whole solution. The greedy method might not give the best result every time. A small sketch follows the steps below.
Algorithm:-
1. State the problem to be solved and the objective to be optimized.
2. Make the locally optimal choice at the current step.
3. Move to the state given by that choice and again make an optimal choice among the remaining options.
4. Repeat until the problem is solved.
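As a small illustration (my own example, not from the notes), here is a greedy sketch in Python for making change: at each step it takes the largest coin that still fits. The coin values are assumed, and greedy change-making is only optimal for "canonical" coin systems, which matches the warning above that greedy does not always give the best result.

def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedy change-making: always take the largest coin that still fits.
    Optimal for canonical coin systems, not for arbitrary ones."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)   # locally optimal choice
            amount -= coin
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]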

Dynamic Programming
Similar to Divide and Conquer, this technique also divides the problem into subproblems, solves them, and then combines their solutions to get the whole solution.
The difference between Dynamic Programming and the Divide and Conquer technique is that Dynamic Programming reuses the result of a subproblem if the same subproblem arises again. That way, it reduces the time complexity. A small sketch follows the steps below.
Algorithm:-
1. Break the problem into subproblems.
2. Solve each of the subproblems.
3. Use stored solutions of subproblems if they arise again.
4. Combine all the subproblems' solutions to get the whole solution.
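A minimal sketch of the reuse idea (my own example, not from the notes): computing Fibonacci numbers with memoization, so each subproblem is solved once and its stored result is reused whenever it arises again.

from functools import lru_cache

@lru_cache(maxsize=None)            # store (memoize) subproblem results
def fib(n):
    if n < 2:                       # base subproblems
        return n
    return fib(n - 1) + fib(n - 2)  # repeated subproblems reuse stored results

print(fib(40))  # 102334155, computed in linear time thanks to memoization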

Backtracking
In the backtracking technique, as we proceed towards the solution, if at any state we reach a dead end or a partial solution that cannot lead to a complete solution, we undo that step and jump back to the previous state. A small sketch follows the steps below.
Algorithm:-
1. Determine all the possible paths in the problem.
2. Select a path from the current state.
3. If the path leads towards a solution, move to that new state.
4. If the path does not lead to a solution, jump back to the previous state and ignore that path.
5. Repeat until a solution is found or all possible paths have been explored.
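A small illustrative sketch (my own example): backtracking search for a subset of numbers adding up to a target. Each recursive call takes a step; when a path cannot reach the target, the step is undone and another path is tried.

def subset_sum(nums, target, chosen=None, start=0):
    """Backtracking sketch: try including each number; undo (backtrack)
    when the current path cannot lead to the target."""
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)            # found a solution
    for i in range(start, len(nums)):
        if nums[i] > target:
            continue                   # dead end for this choice
        chosen.append(nums[i])         # take a step
        found = subset_sum(nums, target - nums[i], chosen, i + 1)
        if found is not None:
            return found
        chosen.pop()                   # undo the step (backtrack)
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [3, 4, 2]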

Branch and Bound, B&B


Branch and Bound is similar to Backtracking: we determine all the possible ways to solve the problem and search the paths to find a solution. B&B is generally used for problems where greedy and dynamic programming techniques do not work.
The difference between B&B and Backtracking is that Backtracking stops as soon as it finds a solution and does not search the remaining paths, whereas B&B keeps searching, pruning only the paths whose bound cannot beat the best solution found, to check whether there is a better solution. A small sketch follows the steps below.
Algorithm:-
1. Determine the problem and identify all the possible paths to a solution.
2. Select a path from the current state.
3. If the path leads towards a solution, move to that new state.
4. If the path cannot lead to a better solution, jump back to the previous state and ignore that path.
5. Update the best solution if a better solution is found.
6. Repeat until all promising paths have been explored.
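A small illustrative sketch (my own example, with made-up item weights and values): branch and bound for the 0/1 knapsack problem. Unlike plain backtracking, it keeps the best value found so far and prunes any branch whose optimistic bound cannot beat it.

def knapsack_bnb(items, capacity):
    """Branch-and-bound sketch for 0/1 knapsack: explore include/exclude
    branches, keep the best value found, and prune branches whose
    optimistic bound (taking all remaining values) cannot beat it."""
    best = 0

    def explore(i, weight, value):
        nonlocal best
        if weight > capacity:
            return                       # infeasible branch: dead end
        best = max(best, value)          # update the best solution found so far
        if i == len(items):
            return
        optimistic = value + sum(v for _, v in items[i:])
        if optimistic <= best:
            return                       # bound: this branch cannot do better
        w, v = items[i]
        explore(i + 1, weight + w, value + v)   # branch: include item i
        explore(i + 1, weight, value)           # branch: exclude item i

    explore(0, 0, 0)
    return best

items = [(2, 3), (3, 4), (4, 5), (5, 8)]   # (weight, value) pairs, illustrative
print(knapsack_bnb(items, 5))              # 8 (take the weight-5 item)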
Selection Sort
In selection sort, we find the minimum value in the array and place it in the first position by swapping it with the first element. Then we find the minimum value in the rest of the array and repeat.
Time Complexity:-
• Best case: Ω(n^2)
• Average case: Θ(n^2)
• Worst case: O(n^2)
Algorithm:-
1. Assume the first element of the unsorted part is the minimum value.
2. Compare it with the remaining elements to check whether any element is less than the current minimum value.
3. If not, that position is already correct; proceed to step 6.
4. If yes, update the minimum value.
5. Swap the minimum value with the first element of the unsorted part.
6. Repeat with the rest of the array until the whole array is sorted.
Example:-
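A minimal Python sketch of the steps above (the input list is chosen arbitrarily for illustration):

def selection_sort(a):
    """In-place selection sort: repeatedly move the minimum of the
    unsorted part to the front."""
    n = len(a)
    for i in range(n):
        min_idx = i                      # assume first unsorted element is the minimum
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:        # found a smaller element
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # swap it into place
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]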

Insertion Sort
In insertion sort, we put each element in its correct place as soon as we select it. This is one of the simplest sorting techniques. We select an element and compare it with the elements before it to check whether any of them is greater. If a previous element is greater than the current element, we shift that part of the sorted list to the right and put the current element in the gap.
Time Complexity:-
• Best case: Ω(n)
• Average case: Θ(n^2)
• Worst case: O(n^2)
Algorithm:-
1. Assume the first element is already sorted and proceed to the next element.
2. Compare the element with the elements in the sorted part.
3. If any element is greater than the current element, shift those elements of the sorted part to the right and put the current element in its correct place.
4. If no element in the sorted part is greater than the current element, leave it where it is and continue.
5. Repeat until all elements are sorted.
Example:-
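A minimal Python sketch of the steps above (the input list is my own illustration):

def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix, inserting each new
    element into its correct position by shifting larger elements right."""
    for i in range(1, len(a)):
        key = a[i]                    # element to insert
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger sorted elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop the element into its place
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]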

Bubble Sort
In bubble sort, we compare each element with its next element, and if the next element is smaller, we swap them. One full pass moves the highest value to the right end, so we have to run up to n passes (where n is the number of elements in the array).
Time Complexity:-
• Best case: Ω(n)
• Average case: Θ(n^2)
• Worst case: O(n^2)
Algorithm:-
1. Select the first element and compare it with the next element.
2. If the next element is smaller than the first element, swap them.
3. If the next element is greater than the first element, continue.
4. Now compare the next element with its next element and repeat steps 2 & 3.
5. Repeat step 4 until the end of the array, so that the highest value reaches the rightmost place. Since the highest value is now in its final place, that position is sorted.
6. Now repeat the process on the remaining elements until the array is sorted.
Example:-
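A minimal Python sketch of the steps above (input chosen arbitrarily). The swapped flag stops early on an already sorted array, which is where the Ω(n) best case comes from.

def bubble_sort(a):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs so the
    largest remaining element 'bubbles' to the right end each pass."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # no swaps: already sorted (best case)
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]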
Merge Sort
Merge sort uses the divide and conquer technique. It divides the array into subarrays, sorts them, and then merges the sorted subarrays to get the original array in sorted order.
Time Complexity:-
• Best case: Ω(n log n)
• Average case: Θ(n log n)
• Worst case: O(n log n)
Algorithm:-
1. Divide the array into two subarrays.
2. If a subarray has more than one element, repeat step 1 on it.
3. If a subarray has only one element, it is already sorted; continue.
4. Repeat the process until all subarrays are divided.
5. Merge subarrays, repeatedly putting the smaller front element first and the greater element after.
6. Repeat step 5 until the whole array is sorted.
Example:-

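A minimal Python sketch of the divide and merge steps (input list chosen for illustration):

def merge_sort(a):
    """Merge sort: split until one element remains, then merge sorted halves."""
    if len(a) <= 1:
        return a                          # a single element is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])            # divide
    right = merge_sort(a[mid:])
    merged = []                           # conquer: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]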

Quick Sort
Quick sort also uses the Divide and Conquer technique. Similar to merge sort, it divides the array into subarrays, sorts them, and then combines them to get the solution. But instead of dividing the array at the middle, quick sort chooses an element as the pivot and places it in its correct position by comparing it with all elements, so that all smaller elements lie to its left and all greater elements lie to its right.
This creates two new subarrays: the elements smaller than the pivot and the elements greater than the pivot.
Finally, sort the subarrays recursively and join them.
Time Complexity:-
• Best case: Ω(n log n)
• Average case: Θ(n log n)
• Worst case: O(n^2)
Algorithm:
1. Choose an element as the pivot.
2. Compare it with all elements. Put smaller elements on its left side and larger ones on its right side.
3. Now create one subarray of the elements smaller than the pivot and another subarray of the elements greater than the pivot.
4. Repeat steps 2 and 3 on both subarrays (recursively) until each subarray has at most one element.
5. Join the subarrays, putting the elements in ascending order.
6. Repeat step 5 until all subarrays are joined and the original array is sorted.
Example:-
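A minimal Python sketch (my own illustration; it uses the first element as the pivot and builds new lists rather than partitioning in place):

def quick_sort(a):
    """Quick sort: pick a pivot, partition around it, recurse on both parts."""
    if len(a) <= 1:
        return a
    pivot = a[0]                               # pivot choice is arbitrary here
    smaller = [x for x in a[1:] if x <= pivot] # left partition
    greater = [x for x in a[1:] if x > pivot]  # right partition
    return quick_sort(smaller) + [pivot] + quick_sort(greater)

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]

Repeatedly picking a bad pivot (for example, the first element of an already sorted list) is what produces the O(n^2) worst case.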
Linear Search Algorithm
Linear search is the simplest searching technique. In it, we compare our target with every element of the array to check whether it exists.
Time Complexity:-
• Best case: Ω(1)
• Average case: Θ(n)
• Worst case: O(n)
Algorithm:-
1. Determine the element to be searched.
2. Compare it with the first element.
3. If found, return its position.
4. If not found, move to the next element.
5. Repeat steps 3 & 4 until the array ends.
6. If the target is not found in the array, return "not found".
Example:-
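A minimal Python sketch of the steps above (array and targets chosen for illustration):

def linear_search(a, target):
    """Compare the target with every element; return its index or 'not found'."""
    for i, value in enumerate(a):
        if value == target:
            return i            # found: return position
    return "not found"

print(linear_search([4, 2, 7, 1, 9], 7))   # 2
print(linear_search([4, 2, 7, 1, 9], 5))   # not found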

Binary Search Algorithm


Binary search follows the divide and conquer technique. This algorithm only works on a sorted list. In it, we compare the target with the middle value and check whether the target is greater or smaller. Then we discard the half that cannot contain the target and keep searching the half that may contain it.
Time Complexity:-
• Best case: Ω(1)
• Average case: Θ(log n)
• Worst case: O(log n)
Algorithm:-
1. Determine the target to search for.
2. If the target is equal to the middle value, the target is found.
3. If the target is smaller than the middle value, make the left half of the list the new list.
4. If the target is greater than the middle value, make the right half of the list the new list.
5. Repeat steps 2, 3 & 4 until the target is found or the list becomes empty.
6. If the target is not found in the array, return "not found".
Example:-
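A minimal Python sketch of the steps above (the sorted list and targets are my own illustration):

def binary_search(a, target):
    """Binary search on a sorted list: halve the search range each step."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid              # target found at the middle
        elif target < a[mid]:
            high = mid - 1          # keep only the left half
        else:
            low = mid + 1           # keep only the right half
    return "not found"

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # not found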
Standard Matrix Multiplication Algorithm
Standard matrix multiplication refers to the usual way of multiplying two matrices: we multiply each element of a row of the first matrix with the corresponding element of a column of the second matrix and add the products to get one element of the resulting matrix.
Time Complexity:- O(n^3)
Algorithm:-
1. Multiply each element of a row of the first matrix with the corresponding element of a column of the second matrix.
2. Sum the products to get one element of the result matrix.
3. Repeat until every element of the result matrix has been computed.
Example:-
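A minimal Python sketch of the triple loop (the two 2x2 matrices are my own illustration):

def matmul(A, B):
    """Standard O(n^3) multiplication: each result entry is the dot product
    of a row of A with a column of B."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]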

Divide and Conquer Algorithm for Matrix


The Divide and Conquer algorithm for matrix multiplication applies to two square matrices whose dimension is a power of 2, i.e. 2, 4, 8, 16, etc.
In this algorithm, we divide each matrix into 4 equal submatrices and multiply those submatrices with the corresponding submatrices of the other matrix. Then we combine the submatrix results to get the full result. Its time complexity is the same as the standard method; it is the starting point for Strassen's faster algorithm described next.
Time Complexity:- O(n^3)
NOTE: Its recurrence relation is
T(n) = 8T(n/2) + n^2
Algorithm:-
1. If both matrices have dimension 2x2, multiply them with the standard method.
2. If the matrices are larger than 2x2, divide each of them into 4 equal submatrices.
3. Apply steps 1 & 2 recursively, multiplying each submatrix of the first matrix with the corresponding submatrix of the second matrix.
4. Repeat the process until every submatrix product has been computed.
5. Combine the results of the submatrices to get the final result.
Example:-

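A compact Python sketch of the idea (my own; for simplicity it recurses down to 1x1 blocks instead of stopping at 2x2, which does not change the 8T(n/2) + n^2 recurrence):

def split(M):
    """Split an n x n matrix (n a power of 2) into four n/2 x n/2 quadrants."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def dc_matmul(A, B):
    """Divide-and-conquer multiplication: 8 recursive half-size products."""
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    top = [l + r for l, r in zip(add(dc_matmul(a, e), dc_matmul(b, g)),
                                 add(dc_matmul(a, f), dc_matmul(b, h)))]
    bottom = [l + r for l, r in zip(add(dc_matmul(c, e), dc_matmul(d, g)),
                                    add(dc_matmul(c, f), dc_matmul(d, h)))]
    return top + bottom

print(dc_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]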

Strassen’s Algorithm
Strassen's Algorithm shows that the product of two 2x2 matrices can be computed with only 7 multiplications and 18 additions or subtractions. It only works on square matrices whose dimension n is a power of 2, by applying the same scheme to 2x2 blocks.
The steps shown next can also be used as the algorithm for this method.
For two 2x2 matrices
A = [[a, b], [c, d]] and B = [[e, f], [g, h]]
we need to compute 7 products:


p1 = a(f – h)
p2 = (a + b)h
p3 = (c + d)e
p4 = d(g – e)
p5 = (a + d)(e + h)
p6 = (b – d)(g + h)
p7 = (a – c)(e + f)
Now arrange these products in the resulting matrix as follows:
c11 = p5 + p4 - p2 + p6
c12 = p1 + p2
c21 = p3 + p4
c22 = p1 + p5 - p3 - p7
After substituting the values and simplifying, we get the resulting matrix.
Since additions and subtractions are cheaper than the recursive multiplications they replace, Strassen's Algorithm has a lower time complexity than the plain Divide and Conquer technique.
Time Complexity:- O(n^2.81)
NOTE: Its recurrence relation is,
T(n) = 7T(n/2) + n^2
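A direct Python sketch of the 2x2 case, using the seven products and the arrangement listed above (the sample matrices are my own illustration):

def strassen_2x2(A, B):
    """Strassen's method for two 2x2 matrices: 7 products p1..p7, then the
    four result entries are combinations of those products."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]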
Graph
A graph (G) may be defined as a finite set of vertices (V) together with a set of edges (E).
G = (V, E)
Vertex: A node of the graph is called a vertex.
Edge: A line connecting two vertices is called an edge.
Example:-

There are two types of graphs:
• Undirected Graph
• Directed Graph
Undirected Graph
An undirected graph has plain line segments (edges without direction) connecting its nodes. We can travel either forward or backward along an edge in an undirected graph.
Example:-

Directed Graph
A directed graph has directed edges (arrows) connecting its vertices. We can travel along an edge only in its given direction.
Example:-
Adjacent vertices
Two vertices are said to be adjacent if there is an edge connecting them.

Example:-
Path
A path is defined as a sequence of distinct vertices in which each vertex is adjacent to the next.
Adjacency List
An adjacency list records, for each vertex, the vertices connected to it.
Example:-

Adjacency list of this graph is


adj[A] = {B, E, D}
adj[B] = {A, D, C}
adj[C] = {B, D}
adj[D] = {E, A, B, C}
adj[E] = {A, D}
Adjacency Matrix
An adjacency matrix is an n x n matrix, where n is the number of nodes/vertices. Each row and each column represents a node, and the entries represent the connection of that node (row) with the other nodes (columns).
If the column vertex is connected to the row vertex, we put 1 in that entry.
If the column vertex is not connected to the row vertex, we put 0 in that entry.
Example:-
Undirected Graph Matrix
No. of nodes = 5
Dimension of matrix = 5x5
For above graph, adjacency matrix is

Directed Graph Matrix


No. of nodes = 5
Dimension of matrix = 5x5
For above graph, adjacency matrix is

Graph Traversal Technique


Graph traversal means visiting the vertices of a graph by travelling along its edges in a particular order.
The two standard graph traversal techniques are:-
• Depth First Search (DFS)
• Breadth First Search (BFS)

Depth First Search


In Depth First Search (DFS), the primary focus is on going deeper into unsearched nodes until we reach a dead end. We select a node, then select just one unvisited node connected to it, move our focus to that new node, and do the same again. When we reach a dead end, we backtrack to the closest parent that still has an unsearched node. A small sketch follows the example below.
Time Complexity:- O(V + E)

Algorithm:-
1. Select a vertex from the graph and call it the current node.
2. Select an unvisited vertex connected to the current node and make it the new current node.
3. Repeat step 2 until we reach a dead end.
4. At a dead end, backtrack to the closest parent that has an unsearched node and make it the current node.
5. Repeat steps 2, 3 & 4 until there is no unsearched node.
Example:-

For the above graph, DFS tree is
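A minimal recursive sketch in Python (using the adjacency list of the A–E graph from earlier; the visiting order depends on the order of the neighbours in the list):

def dfs(adj, start, visited=None):
    """Depth-first search: go deep along one branch, backtrack at dead ends."""
    if visited is None:
        visited = []
    visited.append(start)                  # visit the current node
    for neighbour in adj[start]:
        if neighbour not in visited:       # go deeper into an unsearched node
            dfs(adj, neighbour, visited)
    return visited

graph = {"A": ["B", "E", "D"], "B": ["A", "D", "C"], "C": ["B", "D"],
         "D": ["E", "A", "B", "C"], "E": ["A", "D"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'E', 'C']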


Breadth First Search
In BFS, the primary focus is on visiting all the neighbours of the current node before moving on to a new node. We select a vertex and visit all of its neighbours, then select one of those neighbours and visit all of its unvisited neighbours, continuing until we reach a dead end. At a dead end, we move back to the closest node that still has an unsearched neighbour. A small sketch follows the example below.
Time Complexity:- O(V + E)
Algorithm:-
1. Select a vertex from the graph and call it the current node.
2. Visit all vertices connected to the current node.
3. Select one of the vertices connected to the current node and make it the current node.
4. Repeat steps 2 & 3 until we reach a dead end.
5. At a dead end, move back to the closest node that has an unsearched neighbour.
6. Repeat steps 2, 3, 4 & 5 until all vertices have been visited.
Example:-

For the above graph, BFS is
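A minimal queue-based sketch in Python (same A–E adjacency list as in the DFS sketch):

from collections import deque

def bfs(adj, start):
    """Breadth-first search: visit all neighbours of a node before going deeper."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()             # current node
        for neighbour in adj[node]:
            if neighbour not in visited:   # visit all unvisited neighbours first
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

graph = {"A": ["B", "E", "D"], "B": ["A", "D", "C"], "C": ["B", "D"],
         "D": ["E", "A", "B", "C"], "E": ["A", "D"]}
print(bfs(graph, "A"))  # ['A', 'B', 'E', 'D', 'C']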


Minimum Cost Spanning Tree and Shortest Path Algorithms
A minimum cost spanning tree is a tree connecting all the vertices of a graph whose total edge cost is the least among all possible spanning trees. (This is related to, but distinct from, the shortest path problem, which Dijkstra's Algorithm at the end of this section solves.)
Two algorithms for finding a minimum cost spanning tree are:-
• Kruskal's Algorithm
• Prim's Algorithm

Kruskal’s Algorithm
In Kruskal's Algorithm, the primary focus is on selecting the least weighted edge from the whole graph while avoiding cycles. We keep doing this until all the vertices are connected. A small sketch follows the example below.
Time Complexity:- O(E log E)
Algorithm:-
1. Select the least weighted edge in the graph.
2. Select the next least weighted edge from the remaining edges.
3. If an edge would form a cycle, ignore it.
4. Repeat steps 2 & 3 until all vertices are connected.
Example:-

For the above graph, we select the least weighted edges while avoiding cycles. Our output with Kruskal's Algorithm will then be
NOTE: the minimum cost spanning tree is drawn with violet lines; do not draw the non-violet lines.
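A small Python sketch (my own example graph and weights, since the figure is not reproduced here); it uses a simple union-find structure to detect cycles:

def kruskal(vertices, edges):
    """Kruskal's sketch: take edges in increasing weight, skip any edge that
    would form a cycle (checked with a simple union-find)."""
    parent = {v: v for v in vertices}

    def find(v):                      # find the representative of v's component
        while parent[v] != v:
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):     # least weighted edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, "A", "B"), (4, "A", "C"), (3, "B", "C"), (2, "B", "D"), (5, "C", "D")]
print(kruskal(["A", "B", "C", "D"], edges))
# ([('A', 'B', 1), ('B', 'D', 2), ('B', 'C', 3)], 6)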

Prim’s Algorithm
In Prim's Algorithm, the primary focus is on starting from one vertex and always selecting the least weighted edge that connects an already selected vertex to a new vertex. In this algorithm, we ignore the rest of the graph until the growing tree reaches every vertex. A small sketch follows the example below.
Time Complexity:- O(E log E)
Algorithm:-
1. Select a vertex from the graph.
2. Select the least weighted edge connecting an already selected vertex to an unselected vertex.
3. Add the vertex at the other end of that edge to the selected set.
4. If an edge would form a cycle, ignore it.
5. Repeat steps 2, 3 & 4 until all vertices are selected.
Example:-

For the above graph, we select the least weighted edges that connect only to already selected vertices.
The minimum cost spanning tree with Prim's Algorithm will be
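A small Python sketch (same assumed example graph as in the Kruskal sketch), keeping candidate edges in a min-heap:

import heapq

def prim(adj, start):
    """Prim's sketch: grow the tree from one vertex, always taking the
    cheapest edge that reaches a new vertex (edges kept in a min-heap)."""
    selected = {start}
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(selected) < len(adj):
        w, u, v = heapq.heappop(heap)     # least weighted reachable edge
        if v in selected:
            continue                      # would form a cycle: ignore
        selected.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, nw in adj[v]:
            if nxt not in selected:
                heapq.heappush(heap, (nw, v, nxt))
    return mst, total

adj = {"A": [("B", 1), ("C", 4)], "B": [("A", 1), ("C", 3), ("D", 2)],
       "C": [("A", 4), ("B", 3), ("D", 5)], "D": [("B", 2), ("C", 5)]}
print(prim(adj, "A"))  # ([('A', 'B', 1), ('B', 'D', 2), ('B', 'C', 3)], 6)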

Dijkstra’s Algorithm
Dijkstra's Algorithm searches for the least-weight route from a specific starting vertex to every other vertex. The algorithm finds the shortest path from a given source vertex to all other vertices in the graph; this problem is also called the single source shortest path problem. A small sketch follows the example below.
Time Complexity:- O(V^2)
Algorithm:-
Create a weight (distance) table, then:
1. Set the source vertex's value to 0 and all remaining vertices' values to infinity.
2. Pick the smallest unmarked value and mark that vertex.
3. Find the vertices directly connected to the marked vertex and update them all using the formula below.
Update Formula:-
new_destination_value = min(old_destination_value, marked_value + edge_weight)
Example:-
For the above graph, we make a matrix where the number of columns equals the number of vertices.
Let our starting node be A.
For the above graph, our matrix is

For the above matrix, our final output graph, representing the least-cost route from A to all vertices, is
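A small Python sketch (my own example graph and weights), using a min-heap instead of scanning a table, but applying the same update formula as above:

import heapq

def dijkstra(adj, source):
    """Dijkstra's sketch: distances start at 0 for the source and infinity
    elsewhere; repeatedly settle the closest unsettled vertex and relax
    its neighbours with min(old_value, settled_value + edge_weight)."""
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)        # smallest unsettled distance
        if u in settled:
            continue
        settled.add(u)
        for v, w in adj[u]:
            if d + w < dist[v]:           # the update formula from the notes
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)],
       "C": [("B", 2), ("D", 5)], "D": []}
print(dijkstra(adj, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}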
