Game Remix Algorithm
Complexity
In this chapter, let us discuss the time complexity of
algorithms and the factors that influence it.
The time complexity of an algorithm is, in general, the number
of times each statement in the code executes as a function of
the input size; it is not the actual wall-clock execution time
of the algorithm. It is influenced by various factors such as
the input size, the methods used, and the procedure followed.
An algorithm is considered most efficient when it produces its
output in the minimum possible time.
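For instance, the following minimal C sketch (the function name and the sample input are illustrative) shows how the statement counts grow with the input size n −

#include <stdio.h>

// Illustrative sketch: the loop body runs n times, so the statement
// count, and hence the time complexity, grows linearly, i.e. O(n).
int sum(int arr[], int n) {
   int total = 0;               // executes once
   for (int i = 0; i < n; i++)  // the loop condition is checked n + 1 times
      total += arr[i];          // executes n times
   return total;                // executes once
}

int main() {
   int arr[] = { 1, 2, 3, 4, 5 };
   printf("%d\n", sum(arr, 5)); // prints 15
   return 0;
}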
The most common way to find the time complexity of an
algorithm is to express the algorithm as a recurrence
relation. Let us look into this further below.
Solving Recurrence Relations
A recurrence relation is an equation (or an inequality) that
defines a function in terms of its own value on smaller
inputs. Such relations are solved based on mathematical
induction: in each case, a condition allows the problem to be
broken into smaller pieces that satisfy the same equation
with smaller input values.
These recurrence relations can be solved using multiple
methods; they are −
Substitution Method
Recurrence Tree Method
Iteration Method
Master Theorem
Substitution Method
The substitution method is a trial and error method: values
that we think could be the solution to the relation are
substituted into it, and we check whether the equation holds.
If it holds, the solution is found; otherwise, another value
is tried.
Procedure
The steps to solve recurrences using the substitution
method are −
Guess the form of the solution based on trial and error.
Use Mathematical Induction to prove the solution is
correct for all the cases.
Example
Let us look at an example of solving a recurrence using the
substitution method −
T(n) = 2T(n/2) + n
Here, we guess that the time complexity of the equation
is O(n log n). By the induction hypothesis, assume that
T(n/2) ≤ c(n/2) log(n/2) for some constant c ≥ 1.
Substituting this into the given equation, we need to prove
that T(n) is less than or equal to cn log n −
T(n) ≤ 2 · c(n/2) log(n/2) + n
= cn log n − cn log 2 + n
= cn log n − cn + n
≤ cn log n
Since the inequality holds, T(n) = O(n log n) and the guess
is correct.
As another example, consider the recurrence of the binary
search algorithm,
T(n) = T(n/2) + 1
Expanding the relation k times gives T(n) = T(n/2^k) + k. The
expansion stops when the subproblem reaches the base case,
that is, when
T(n/2^k) = T(1)
n = 2^k
Applying logarithm on both sides of the equation,
log n = log 2^k
k = log₂ n
Therefore, the time complexity of the binary search algorithm
is O(log n).
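To see where this recurrence comes from, here is a minimal C sketch of binary search (the function name and sample array are illustrative); each iteration halves the search range, which is exactly what T(n) = T(n/2) + 1 captures −

#include <stdio.h>

// Minimal sketch of binary search on a sorted array: every iteration
// halves the search range, matching the recurrence T(n) = T(n/2) + 1.
int binary_search(int arr[], int n, int key) {
   int low = 0, high = n - 1;
   while (low <= high) {
      int mid = low + (high - low) / 2;
      if (arr[mid] == key)
         return mid;           // key found at index mid
      else if (arr[mid] < key)
         low = mid + 1;        // continue in the right half
      else
         high = mid - 1;       // continue in the left half
   }
   return -1;                  // key not present
}

int main() {
   int arr[] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44 };
   printf("%d\n", binary_search(arr, 10, 31)); // prints 5
   return 0;
}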
Master’s Method
Master's method, or Master's theorem, is applied to
decreasing or dividing recurrence relations to find the time
complexity. It uses a set of formulae to deduce the time
complexity of an algorithm.
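For instance, for a dividing recurrence of the standard form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1, the result is obtained by comparing f(n) against n^(log_b a) −
If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1, then T(n) = Θ(f(n)).
Applying this to the recurrence T(n) = 2T(n/2) + n above gives a = 2, b = 2 and n^(log₂ 2) = n = Θ(f(n)); the second case applies, so T(n) = Θ(n log n), agreeing with the substitution method.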
Divide and Conquer
In the divide and conquer approach, the problem at hand is
divided into smaller sub-problems, and each sub-problem is
solved independently. If we keep dividing the sub-problems
into even smaller sub-problems, we eventually reach a stage
where no more division is possible. Those smallest
sub-problems are solved directly, since they take the least
time to compute. The solutions of all the sub-problems are
finally merged to obtain the solution of the original problem.
Analysis
Let us consider the running time of merge sort as T(n).
Hence,
T(n) = c, if n ≤ 1
T(n) = 2T(n/2) + dn, otherwise
where c and d are constants.
Expanding the recurrence i times,
T(n) = 2^i T(n/2^i) + i · d · n
As i = log n,
T(n) = 2^(log n) T(n/2^(log n)) + (log n) · d · n
= c · n + d · n · log n
Therefore, T(n) = O(n log n).
Example
Following is the implementation of this operation in C −
#include <stdio.h>

#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11];   // temporary array; must hold all max + 1 elements

// Merge the two sorted halves a[low..mid] and a[mid+1..high]
void merging(int low, int mid, int high) {
   int l1, l2, i;

   for (l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
      if (a[l1] <= a[l2])
         b[i] = a[l1++];
      else
         b[i] = a[l2++];
   }
   while (l1 <= mid)
      b[i++] = a[l1++];
   while (l2 <= high)
      b[i++] = a[l2++];
   for (i = low; i <= high; i++)
      a[i] = b[i];
}

// Recursively split the array, then merge the sorted halves
void sort(int low, int high) {
   int mid;

   if (low < high) {
      mid = (low + high) / 2;
      sort(low, mid);
      sort(mid + 1, high);
      merging(low, mid, high);
   }
}

int main() {
   int i;

   printf("Array before sorting\n");
   for (i = 0; i <= max; i++)
      printf("%d ", a[i]);
   sort(0, max);
   printf("\nArray after sorting\n");
   for (i = 0; i <= max; i++)
      printf("%d ", a[i]);
   return 0;
}
Output
Array before sorting
10 14 19 26 27 31 33 35 42 44 0
Array after sorting
0 10 14 19 26 27 31 33 35 42 44
Kruskal's Algorithm
The steps involved in constructing a minimum spanning tree
using Kruskal's algorithm are −
Sort all the edges in the graph in ascending order of their
costs and store them in an edge[] array.
Construct the forest of the graph on a plane with all the
vertices in it.
Select the least cost edge from the edge[] array and
add it into the forest of the graph. Mark the vertices
visited by adding them into the visited[] array.
Repeat the previous step until all the vertices are
visited, without any cycles forming in the graph.
When all the vertices are visited, the minimum
spanning tree is formed.
Calculate the minimum cost of the output spanning
tree formed.
Example
Construct a minimum spanning tree using Kruskal's
algorithm for the graph described by the edge list below −
Solution
As the first step, sort all the edges in the given graph in
ascending order and store the values in an array.

Edge   B→D   A→B   C→F   F→E   B→C   G→F   A→G   C→D   D→E   C→G
Cost    5     6     9    10    11    12    15    17    22    25
From the list of sorted edge costs, select the least cost edge
and add it to the forest in the output graph.
B→D=5
Minimum cost = 5
Visited array, v = {B, D}
Similarly, the next least cost edge is A → B = 6; so we add it
to the output graph.
Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}
The next least cost edge is C → F = 9; add it to the output
graph.
Minimum Cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}
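Similarly, all the remaining edges are processed until all seven vertices are connected. The following compact C sketch carries out these steps on the edge list from the table above; the vertex numbering (A = 0 through G = 6), the union-find helpers, and the array sizes are illustrative assumptions −

#include <stdio.h>
#include <stdlib.h>

// Edge of an undirected weighted graph
typedef struct { int u, v, cost; } Edge;

int parent[7];                  // union-find parent for vertices A..G (0..6)

// Find the set representative, with path compression
int find(int x) {
   if (parent[x] != x)
      parent[x] = find(parent[x]);
   return parent[x];
}

// Compare edges by cost, for qsort
int cmp(const void *a, const void *b) {
   return ((const Edge *)a)->cost - ((const Edge *)b)->cost;
}

int main() {
   // Edge list from the table above, with A = 0, B = 1, ..., G = 6
   Edge edges[] = {
      {1, 3, 5},  {0, 1, 6},  {2, 5, 9},  {5, 4, 10}, {1, 2, 11},
      {6, 5, 12}, {0, 6, 15}, {2, 3, 17}, {3, 4, 22}, {2, 6, 25}
   };
   int m = sizeof(edges) / sizeof(edges[0]);
   int n = 7, picked = 0, total = 0, i;

   for (i = 0; i < n; i++)      // every vertex starts in its own set
      parent[i] = i;

   qsort(edges, m, sizeof(Edge), cmp); // sort the edges in ascending order

   for (i = 0; i < m && picked < n - 1; i++) {
      int ru = find(edges[i].u), rv = find(edges[i].v);
      if (ru != rv) {           // the edge joins two different trees: no cycle
         parent[ru] = rv;       // merge the two trees
         total += edges[i].cost;
         picked++;
         printf("%c->%c = %d\n", 'A' + edges[i].u, 'A' + edges[i].v, edges[i].cost);
      }
   }
   printf("Minimum cost = %d\n", total);
   return 0;
}

Run on this edge list, the sketch selects B→D, A→B, C→F, F→E, B→C and G→F, for a minimum cost of 53.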
Heap Sort
Heap sort builds the input elements into a heap and then pops
the root repeatedly until the heap is empty; all the elements
are popped using the same process. Every time an element is
popped, it is added at the beginning of the output array,
since the heap data structure formed is a max-heap. But if the
heapify method converts the binary tree to a min-heap, the
popped elements are added at the end of the output array
instead.
For example, popping every element of a heap built from the
values 3, 8, 9, 10, 12, 14, 18 and 23 produces the final
sorted list,
3 8 9 10 12 14 18 23
Implementation
The logic applied in the implementation of heap sort is as
follows: first, the heap data structure is built based on the
max-heap property, where each parent node must have a greater
value than its child nodes. Then the root node is popped from
the heap, and the next maximum node in the heap is shifted to
the root. The process continues iteratively until the heap is
empty.
Following is the implementation of heap sort in C −
#include <stdio.h>

void heapify(int heap[], int n);

// Build a max-heap by sifting each element up towards the root
void build_maxheap(int heap[], int n) {
   int i, c, r, t;

   for (i = 1; i < n; i++) {
      c = i;
      do {
         r = (c - 1) / 2;                  // parent of node c
         if (heap[r] < heap[c]) {          // to create a MAX heap array
            t = heap[r];
            heap[r] = heap[c];
            heap[c] = t;
         }
         c = r;
      } while (c != 0);
   }
   printf("Heap array: ");
   for (i = 0; i < n; i++)
      printf("%d ", heap[i]);
   heapify(heap, n);
}

// Pop the maximum repeatedly and re-heapify the remaining elements
void heapify(int heap[], int n) {
   int j, c, root, temp;

   for (j = n - 1; j >= 0; j--) {
      temp = heap[0];
      heap[0] = heap[j];                   // swap max element with rightmost leaf element
      heap[j] = temp;

      root = 0;
      do {
         c = 2 * root + 1;                 // left child of root element
         if (c < j - 1 && heap[c] < heap[c + 1])
            c++;                           // pick the larger of the two children
         if (c < j && heap[root] < heap[c]) { // again rearrange into a max-heap
            temp = heap[root];
            heap[root] = heap[c];
            heap[c] = temp;
         }
         root = c;
      } while (c < j);
   }
   printf("\nThe sorted array is: ");
   for (j = 0; j < n; j++)
      printf("%d ", heap[j]);
}

int main() {
   // Sample input: the example values above, in arbitrary order
   int heap[] = { 10, 8, 3, 12, 9, 14, 18, 23 };
   build_maxheap(heap, sizeof(heap) / sizeof(heap[0]));
   return 0;
}