Algorithm Design Theory
Analysis of Algorithm
Understanding the time and storage taken by a
program is called Analysis of Algorithms.
The space consumed is called Space Complexity;
the time consumed is called Time Complexity.
Omega Notation, Ω
Omega Notation refers to the best case/least time
taken by a program.
Big O Notation, O
Big O Notation refers to the worst case/maximum
time a program can take.
Theta Notation, Θ
Theta Notation refers to the average time taken by
a program. It lies somewhere between the Omega
and Big O bounds.
Dynamic Programming
Similar to Divide and Conquer, this technique also
divides the problem into subproblems, solves them,
and then combines their solutions to get the whole
solution. The difference between Dynamic
Programming and the Divide and Conquer technique
is that Dynamic Programming reuses the result of a
subproblem if the same subproblem arises again.
That way, it reduces the time complexity.
Algorithm:-
1. Break the problem into subproblems.
2. Solve each subproblem and store its solution.
3. Reuse the stored solution of a subproblem if
   the same subproblem arises again.
4. Combine all subproblem solutions to get the
   whole solution.
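A minimal Python sketch of this idea, using
memoized Fibonacci as the subproblem example
(the function name and the sample call are
illustrative assumptions, not from these notes):

def fib(n, memo=None):
    # Each subproblem fib(k) is solved once, stored,
    # and reused whenever the same subproblem
    # arises again.
    if memo is None:
        memo = {}
    if n in memo:              # reuse stored solution
        return memo[n]
    if n <= 1:                 # smallest subproblem
        return n
    # Break into subproblems, solve, then combine.
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55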
Backtracking
In the backtracking technique, as we proceed
towards the solution, if at any state we reach a
dead end or a partial solution that cannot lead to
the final solution, we undo that step and jump back
to the previous state.
Algorithm:-
1. Determine all the possible paths in the
   problem.
2. Select a path from the current state.
3. If the path leads towards a solution, move to
   that new state.
4. If the path does not lead to a solution, jump
   back to the previous state and ignore that
   path.
5. Repeat until all possible paths have been
   explored.
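A minimal Python sketch of backtracking, applied
here to a subset-sum problem (the problem choice,
names and sample data are illustrative
assumptions):

def subset_sum(nums, target, start=0, chosen=None):
    # Find a subset of nums that adds up to target.
    if chosen is None:
        chosen = []
    if target == 0:            # a complete solution was reached
        return list(chosen)
    for i in range(start, len(nums)):
        if nums[i] > target:   # this path cannot lead to a solution
            continue           # ignore that path
        chosen.append(nums[i])                 # try this path
        found = subset_sum(nums, target - nums[i], i + 1, chosen)
        if found is not None:
            return found
        chosen.pop()           # dead end: undo the step and go back
    return None                # no path from this state works

print(subset_sum([3, 7, 2, 8, 5], 10))  # [3, 7]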
Insertion Sort
In Insertion Sort, we put each element in its right
place as soon as we select it. This is the simplest
sorting technique. We select an element and
compare it with the elements before it. If any
previous element is greater than the current
element, we shift that part of the sorted list to the
right and put the current element in the gap.
Time Complexity:-
Best case: Ω(n)
Average case: Θ(n²)
Worst case: O(n²)
Algorithm:-
1. Assume the first element is already sorted
   and proceed to the next element.
2. Compare the element with the elements in
   the sorted list.
3. If any element is greater than the current
   element, shift those elements of the sorted
   list to the right and put the current element
   in its right place.
4. If no element in the sorted list is greater than
   the current element, continue.
5. Repeat until all elements are sorted.
Example:-
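A minimal Python sketch of the steps above (the
function name and sample array are illustrative):

def insertion_sort(arr):
    # The first element is treated as already sorted.
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        # Shift sorted elements greater than current
        # one place to the right.
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current   # put it in its right place
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]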
Bubble Sort
In Bubble Sort, we compare each element with its
next element, and if the next element is smaller
than the previous one, we swap them. Running one
pass puts the highest value at the right end, so we
have to run up to n passes (where n is the number
of elements in the array).
Time Complexity:-
Best case: Ω(n)
Average case: Θ(n²)
Worst case: O(n²)
Algorithm:-
1. Select the first element and compare it with
   the next element.
2. If the next element is smaller than the first
   element, swap them.
3. If the next element is greater than the first
   element, continue.
4. Now compare the next element with its next
   element and repeat steps 2 & 3.
5. Repeat step 4 until the array ends, so that the
   highest value comes to the rightmost place.
   Since the highest value is in its right place, it
   is sorted.
6. Now repeat the process until the array is
   sorted.
Example:-
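A minimal Python sketch of these passes (the
function name and sample array are illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):           # one pass per element
        for j in range(n - 1 - i):   # the largest value bubbles right
            if arr[j] > arr[j + 1]:  # next element is smaller: swap
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

print(bubble_sort([6, 3, 8, 2, 5]))  # [2, 3, 5, 6, 8]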
Merge Sort
Merge Sort uses the Divide and Conquer technique.
It divides the array into subarrays, sorts them, and
then combines all the sorted subarrays to get the
original array in sorted order.
Time Complexity:-
Best case: Ω(n log n)
Average case: Θ(n log n)
Worst case: O(n log n)
Algorithm:-
1. Divide the array into two subarrays.
2. If a subarray has more than one element,
   repeat step 1 on it.
3. If a subarray has one element, continue.
4. Repeat the process until all subarrays are
   divided.
5. Merge the subarrays, putting the smaller
   element first and the greater element after.
6. Repeat step 5 until the whole array is sorted.
Example:-
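A minimal Python sketch of the divide and merge
steps (the function name and sample array are
illustrative):

def merge_sort(arr):
    if len(arr) <= 1:            # one element: already sorted
        return arr
    mid = len(arr) // 2          # divide into two subarrays
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []                  # merge: smaller element first
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))
# [3, 9, 10, 27, 38, 43, 82]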
Quick Sort
Quick Sort also uses the Divide and Conquer
technique. Similar to Merge Sort, it divides the
array into subarrays, sorts them, and then combines
them to get the solution. But instead of dividing
from the middle, Quick Sort chooses an element as
the pivot and places it in its correct position by
comparing it with all elements, so that all smaller
elements are on its left and all greater elements
are on its right.
This gives two new subarrays: the elements smaller
than the pivot and the elements greater than the
pivot.
At last, the subarrays are sorted and merged.
Time Complexity:-
Best case: Ω(n log n)
Average case: Θ(n log n)
Worst case: O(n²)
Algorithm:
1. Choose an element as the pivot.
2. Compare it with all elements. Put smaller
   elements on its left side and larger elements
   on its right side.
3. Now create a subarray of the elements
   smaller than the pivot and another subarray
   of the elements greater than the pivot.
4. Repeat steps 2 and 3 on both subarrays
   (recursively) until the subarrays have one
   element.
5. Merge the subarrays, putting the elements in
   ascending order.
6. Repeat step 5 until all subarrays are merged
   and the original array is sorted.
Example:-
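A minimal Python sketch that picks the first element
as the pivot (the pivot choice, names and sample
array are illustrative assumptions):

def quick_sort(arr):
    if len(arr) <= 1:       # one element: already sorted
        return arr
    pivot = arr[0]          # choose an element as pivot
    smaller = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    # Sort both subarrays recursively, then combine
    # them around the pivot in ascending order.
    return quick_sort(smaller) + [pivot] + quick_sort(greater)

print(quick_sort([9, 4, 7, 1, 8, 3]))  # [1, 3, 4, 7, 8, 9]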
Linear Search Algorithm
Linear Search is the simplest searching technique,
using plain common logic. In this technique, we
compare our target with every element of the array
to check whether it exists.
Time Complexity:-
Best case: Ω(1)
Average case: Θ(n)
Worst case: O(n)
Algorithm:-
1. Determine the element to be searched.
2. Compare it to the first element.
3. If found, return its position.
4. If not found, jump to the next element.
5. Repeat steps 3 & 4 until the array ends.
6. If the target is not found in the array, return
   “not found”.
Example:-
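A minimal Python sketch of this scan (the function
name and sample array are illustrative):

def linear_search(arr, target):
    # Compare the target with every element in turn.
    for index, value in enumerate(arr):
        if value == target:
            return index       # return its position
    return "not found"         # target is not in the array

print(linear_search([12, 5, 9, 21, 7], 21))  # 3
print(linear_search([12, 5, 9, 21, 7], 4))   # not found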
Standard Matrix Multiplication Algorithm
Standard Matrix Multiplication refers to the normal
way of multiplying two matrices, where we multiply
each element of the first row of the first matrix
with the corresponding element of the first column
of the second matrix and add the products to get
one element of the resulting matrix.
Time Complexity:- O(n³)
Algorithm:-
1. Multiply each element of a row of the first
   matrix with the corresponding element of the
   corresponding column of the second matrix.
2. Sum the products to get one element of the
   result matrix.
3. Repeat until we get every element of the
   result matrix.
Example:-
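A minimal Python sketch of the triple loop over
rows, columns and the inner dimension (the
function name and sample matrices are illustrative):

def matrix_multiply(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Multiply row i of A with column j of B
            # and sum the products.
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]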
Strassen’s Algorithm
Strassen’s Algorithm shows that 2x2 matrix
multiplication is possible with only 7 multiplications
and 18 additions or subtractions. It only works on
square matrices whose dimension n is a power of 2.
The steps shown next can also be used as the
algorithm for this method.
For two matrices of dimension 2x2:
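A minimal Python sketch of one standard form of
the seven Strassen products for two 2x2 matrices A
and B (the variable names are illustrative):

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven products: only 7 multiplications.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Combine them using the remaining additions
    # and subtractions (18 in total).
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return [[c11, c12], [c21, c22]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]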
Directed Graph
In a Directed Graph, a directed edge (a line
segment with a direction) connects two vertices.
We can only travel in the direction of the edge in
this graph.
Example:-
Adjacent vertices
Two vertices are said to be adjacent if there is an
edge connecting them.
Example:-
Path
A path is defined as a sequence of distinct vertices
in which each vertex is adjacent to the next.
Adjacency List
Adjacency List is a way to show, for each vertex,
which vertices are connected to it.
Example:-
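A minimal Python sketch of an adjacency list for a
small assumed graph (the vertices and edges are
illustrative):

# For each vertex, the list of vertices connected to it.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
print(graph["A"])  # vertices adjacent to A: ['B', 'C']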
Depth First Search (DFS)
Algorithm:-
1. Select a vertex from the graph and name it
   the current node.
2. Select an unsearched vertex adjacent to the
   current node and name that new node the
   current node.
3. Repeat step 2 until we reach a dead end.
4. At a dead end, backtrack to the closest parent
   having an unsearched node and name it the
   current node.
5. Repeat steps 2, 3 & 4 until there is no
   unsearched node.
Example:-
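A minimal Python sketch of this traversal on an
adjacency list (the recursive form and the example
graph are illustrative assumptions):

def dfs(graph, start, visited=None):
    # Go as deep as possible along unsearched
    # neighbours; the recursion unwinds back to the
    # closest parent with an unsearched node.
    if visited is None:
        visited = []
    visited.append(start)
    for neighbour in graph[start]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"],
         "C": ["A", "D"], "D": ["B", "C"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']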
Kruskal’s Algorithm
In Kruskal’s Algorithm, our primary focus is on
selecting the least weighted edge from the whole
graph while avoiding cycles. We do this until all the
vertices are connected.
Time Complexity:- O(E log E)
Algorithm:-
1. Select the least weighted edge from the
   graph.
2. Select the next least weighted edge from the
   remaining edges.
3. If a cycle occurs, ignore that edge.
4. Repeat steps 2 & 3 until all vertices are
   connected.
Example:-
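A minimal Python sketch of this greedy edge
selection, using a simple union-find structure to
detect cycles (the example edge list is an
illustrative assumption):

def kruskal(vertices, edges):
    # edges: list of (weight, u, v) tuples.
    parent = {v: v for v in vertices}

    def find(v):            # root of v's component
        while parent[v] != v:
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):   # least weighted edge first
        ru, rv = find(u), find(v)
        if ru != rv:                     # no cycle: keep the edge
            parent[ru] = rv
            mst.append((u, v, weight))
    return mst

edges = [(1, "A", "B"), (4, "A", "C"), (3, "B", "C"),
         (2, "C", "D"), (5, "B", "D")]
print(kruskal(["A", "B", "C", "D"], edges))
# [('A', 'B', 1), ('C', 'D', 2), ('B', 'C', 3)]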
Prim’s Algorithm
In Prim’s Algorithm, our primary focus is on first
selecting a vertex from the graph and then only
selecting the least weighted edge connected to the
already selected vertices. In this algorithm, we
ignore the rest of the graph until it is reached from
a selected vertex.
Time Complexity:- O(E log E)
Algorithm:-
1. Select a vertex from the graph.
2. Select the least weighted edge connected to
   the already selected vertices.
3. Now select the vertex at the other end of
   that edge.
4. If any cycle occurs, ignore that edge.
5. Repeat steps 2, 3 & 4 until we get all vertices.
Example:-
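A minimal Python sketch using a heap of edges
leaving the selected vertices (the weighted
adjacency list used as the example is an illustrative
assumption):

import heapq

def prim(graph, start):
    # graph: {vertex: [(weight, neighbour), ...]}
    selected = {start}
    edges = list(graph[start])
    heapq.heapify(edges)
    mst = []
    while edges and len(selected) < len(graph):
        weight, v = heapq.heappop(edges)   # least weighted edge
        if v in selected:                  # would form a cycle: ignore
            continue
        selected.add(v)
        mst.append((v, weight))
        for e in graph[v]:
            heapq.heappush(edges, e)
    return mst

graph = {
    "A": [(1, "B"), (4, "C")],
    "B": [(1, "A"), (3, "C"), (5, "D")],
    "C": [(4, "A"), (3, "B"), (2, "D")],
    "D": [(5, "B"), (2, "C")],
}
print(prim(graph, "A"))  # [('B', 1), ('C', 3), ('D', 2)]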
Dijkstra’s Algorithm
Dijkstra’s Algorithm searches for the least weight
route from a specific starting vertex to every other
vertex. In other words, it finds the shortest path
from a given source vertex to all other vertices in
the graph. This problem is also called the single
source shortest path problem.
Time Complexity:- O(V²)
Algorithm:-
Create a weight matrix table, then:
1. Set 0 for the source vertex & infinity for the
   remaining vertices.
2. Pick the smallest unmarked value & mark
   that vertex.
3. Find the vertices which are directly connected
   to the marked vertex & update them all.
Update Formula:-
new_destination_value =
min(old_destination_value, marked_value +
edge_weight)
Example:-
For the graph in this example, we have to make a
matrix where the number of columns equals the
number of vertices.
Let our starting node be A.
For the above graph, our matrix is
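A minimal Python sketch of this table update on a
small assumed graph with starting node A (the
adjacency structure and weights are illustrative):

def dijkstra(graph, source):
    # graph: {vertex: {neighbour: edge_weight}}
    dist = {v: float("inf") for v in graph}   # infinity everywhere
    dist[source] = 0                          # 0 for the source vertex
    unmarked = set(graph)
    while unmarked:
        u = min(unmarked, key=lambda v: dist[v])  # smallest unmarked value
        unmarked.remove(u)                        # mark that vertex
        for neighbour, weight in graph[u].items():
            # new value = min(old value, marked value + edge weight)
            dist[neighbour] = min(dist[neighbour], dist[u] + weight)
    return dist

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 5}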