
Design and Analysis - Time Complexity
In this chapter, let us discuss the time complexity of
algorithms and the factors that influence it.
The time complexity of an algorithm is, in general, the amount
of time it takes as a function of the input size, measured by
counting the basic operations (statement executions) it
performs. It is not the actual execution time of the algorithm,
which depends on the machine; rather, it is influenced by
factors such as the input size, the methods used and the
procedure followed. An algorithm is said to be most efficient
when it produces the output in the minimum possible time.
The most common way to find the time complexity of an
algorithm is to express it as a recurrence relation. Let us look
into it further below.
Solving Recurrence Relations
A recurrence relation is an equation (or an inequality) that
defines a function in terms of its own values on smaller
inputs. Such relations are proved correct using mathematical
induction; in both recursion and induction, a base condition
allows the problem to be broken into smaller pieces that
satisfy the same equation with smaller input values.
These recurrence relations can be solved using multiple
methods; they are −

Substitution Method


Recurrence Tree Method


Iteration Method


Master Theorem

Substitution Method
The substitution method is a trial and error method; where
the values that we might think could be the solution to the
relation are substituted and check whether the equation is
valid. If it is valid, the solution is found. Otherwise, another
value is checked.
Procedure
The steps to solve recurrences using the substitution
method are −

Guess the form of the solution based on trial and error.


Use Mathematical Induction to prove the solution is
correct for all the cases.

Example
Let us look into an example to solve a recurrence using the
substitution method,
T(n) = 2T(n/2) + n
Here, we assume that the time complexity for the equation
is O(nlogn). So according the mathematical induction
phenomenon, the time complexity for T(n/2) will
be O(n/2logn/2); substitute the value into the given
equation, and we need to prove that T(n) must be greater
than or equal to nlogn.
≤ 2n/2Log(n/2) + n
= nLogn – nLog2 + n
= nLogn – n + n
≤ nLogn

Recurrence Tree Method


In the recurrence tree method, we draw a recurrence tree
until the problem cannot be divided into smaller subproblems
any further. Then we calculate the time taken at each level of
the recurrence tree.
Procedure

Draw the recurrence tree for the given recurrence relation.


Calculate the time complexity at every level and sum
them up to find the total time complexity.

Example
Consider the binary search algorithm and construct a
recursion tree for it −

Since the algorithm follows the divide and conquer technique,
the recursion tree is drawn until it reaches the smallest input
level T(n/2^k).

At the smallest level,

T(n/2^k) = T(1)
n = 2^k

Applying logarithm on both sides of the equation,

log n = log 2^k
k = log2 n
Therefore, the time complexity of a binary search algorithm
is O(log n).
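To make the recurrence concrete, here is a minimal recursive binary search sketch in C (the function name and the sample array are illustrative assumptions, not taken from the original text). Each call halves the search range, which yields the recurrence T(n) = T(n/2) + c.

#include <stdio.h>

/* Recursive binary search on a sorted array.
   Returns the index of key in arr[low..high], or -1 if not found. */
int binary_search(int arr[], int low, int high, int key) {
    if (low > high)
        return -1;                    /* base case: empty range */
    int mid = low + (high - low) / 2; /* split the range in half */
    if (arr[mid] == key)
        return mid;
    if (key < arr[mid])
        return binary_search(arr, low, mid - 1, key);  /* recurse on the left half: T(n/2) */
    return binary_search(arr, mid + 1, high, key);     /* recurse on the right half: T(n/2) */
}

int main() {
    int arr[] = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Index of 33: %d\n", binary_search(arr, 0, n - 1, 33));
    return 0;
}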
Master’s Method
The Master’s method (or Master’s theorem) is applied to
decreasing or dividing recurrence relations to find the time
complexity. It uses a set of formulae to deduce the time
complexity of an algorithm.
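As a quick reference (this is the standard statement of the theorem, not derived in the original text), the dividing form applies to recurrences of the shape

T(n) = a·T(n/b) + f(n),   where a ≥ 1 and b > 1.

Comparing f(n) with n^(log_b a):

If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).

If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).

If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

For example, the merge sort recurrence T(n) = 2T(n/2) + n has a = 2, b = 2 and n^(log_2 2) = n, so the second case gives T(n) = Θ(n log n).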
Design and Analysis - Divide and Conquer
Using the divide and conquer approach, the problem at hand
is divided into smaller sub-problems, and each sub-problem is
then solved independently. When we keep dividing the sub-
problems into even smaller sub-problems, we eventually
reach a stage where no more division is possible. Those
smallest sub-problems are solved directly, since at that size
they take little time to compute. The solutions of all the
sub-problems are finally merged to obtain the solution of the
original problem.

Broadly, we can understand the divide-and-conquer approach
in a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-
problems. Sub-problems should represent a part of the
original problem. This step generally takes a recursive
approach to divide the problem until no sub-problem is
further divisible. At this stage, sub-problems become atomic
in size but still represent some part of the actual problem.
Conquer/Solve
This step receives a large number of smaller sub-problems to
be solved. Generally, at this level, the sub-problems are small
enough to be considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage
recursively combines them until they form the solution of
the original problem. This algorithmic approach works
recursively, and the conquer and merge steps work so closely
together that they appear as one.
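As a minimal illustration of these three steps (the function name find_max and the sample array are illustrative, not from the original text), the following C sketch finds the maximum of an array by dividing the range in half, conquering each half recursively, and combining the two results:

#include <stdio.h>

/* Divide and conquer maximum of arr[low..high]. */
int find_max(int arr[], int low, int high) {
    if (low == high)                /* conquer: a single element is its own maximum */
        return arr[low];
    int mid = (low + high) / 2;     /* divide: split the range in half */
    int left_max  = find_max(arr, low, mid);
    int right_max = find_max(arr, mid + 1, high);
    return left_max > right_max ? left_max : right_max;   /* combine the two answers */
}

int main() {
    int arr[] = {27, 10, 14, 42, 33};
    printf("Maximum = %d\n", find_max(arr, 0, 4));
    return 0;
}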
Arrays as Input
There are various ways in which algorithms can take input so
that they can be solved using the divide and conquer
technique. Arrays are one of them. In algorithms that require
the input to be in the form of a list, like various sorting
algorithms, array data structures are most commonly used.
In the input for a sorting algorithm below, the array input is
divided into sub-problems until they cannot be divided
further.
Then, the sub-problems are sorted (the conquer step) and
merged to form the solution of the original array (the combine
step).
Since arrays are indexed, linear data structures, sorting
algorithms most commonly use arrays to receive input.
Linked Lists as Input
Another data structure that can be used to take input for
divide and conquer algorithms is a linked list (for example,
merge sort using linked lists). Like arrays, linked lists are
also linear data structures that store data sequentially.
Consider the merge sort algorithm on a linked list; following
the very popular tortoise and hare (slow/fast pointer)
technique, the list is divided until it cannot be divided further.
Then, the nodes in the list are sorted (conquered). These
nodes are then combined (or merged) recursively until the
final solution is achieved; a sketch of the splitting step is
shown below.
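A minimal sketch of the tortoise-and-hare split step in C (the struct and function names are illustrative assumptions, not from the original text): the slow pointer advances one node per step and the fast pointer two, so when the fast pointer reaches the end of the list, the slow pointer marks the middle.

#include <stddef.h>

struct node {
    int data;
    struct node *next;
};

/* Split the list at its middle using slow/fast pointers.
   After the call, *front is the first half and *back the second half. */
void split_list(struct node *head, struct node **front, struct node **back) {
    if (head == NULL || head->next == NULL) {  /* lists of length 0 or 1 cannot be split */
        *front = head;
        *back = NULL;
        return;
    }
    struct node *slow = head;        /* tortoise: one step at a time */
    struct node *fast = head->next;  /* hare: two steps at a time */
    while (fast != NULL) {
        fast = fast->next;
        if (fast != NULL) {
            slow = slow->next;
            fast = fast->next;
        }
    }
    *front = head;
    *back = slow->next;              /* second half starts after the middle node */
    slow->next = NULL;               /* terminate the first half */
}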

Various searching algorithms can also be performed on linked
list data structures, with a slightly different technique, since
linked lists are not indexed; they must be traversed using the
pointers stored in the nodes of the list.
Pros and Cons of the Divide and Conquer Approach
The divide and conquer approach supports parallelism
because the sub-problems are independent. Hence, an
algorithm designed using this technique can run on a
multiprocessor system or on different machines
simultaneously.
On the other hand, most algorithms in this approach are
designed using recursion, so memory usage is high: a function
call stack is used, where the state of each recursive call needs
to be stored.
Examples of Divide and Conquer
Approach
The following computer algorithms are based on divide-and-
conquer programming approach −

Merge Sort


Quick Sort


Binary Search


Strassen's Matrix Multiplication


Closest pair (points)


Karatsuba

Merge sort is a sorting technique based on the divide and
conquer technique. With a worst-case time complexity of
Ο(n log n), it is one of the most widely used sorting
algorithms.
Merge sort first divides the array into equal halves and then
combines them in a sorted manner.
How Merge Sort Works?
To understand merge sort, we take an unsorted array as the
following −
We know that merge sort first divides the whole array
iteratively into equal halves until atomic values are reached.
We see here that an array of 8 items is divided into two
arrays of size 4.

This does not change the sequence of appearance of items
in the original. Now we divide these two arrays into halves.
We further divide these arrays until we achieve atomic values
which can no longer be divided.

Now, we combine them in exactly the same manner as they
were broken down.
We first compare the elements of each pair of lists and then
combine them into another list in a sorted manner. We see
that 14 and 33 are already in sorted positions. We compare 27
and 10, and in the target list of 2 values we put 10 first,
followed by 27. We change the order of 19 and 35, whereas
42 and 44 are placed sequentially.

In the next iteration of the combining phase, we compare
lists of two data values and merge them into lists of four data
values, placing all of them in sorted order.
After the final merging, the list becomes sorted and is
considered the final solution.

Merge Sort Algorithm
Merge sort keeps on dividing the list into equal halves until it
can no longer be divided. By definition, if there is only one
element in the list, it is considered sorted. Then, merge sort
combines the smaller sorted lists, keeping the new list sorted
too.
Step 1 − If there is only one element in the list, consider it
already sorted, so return.
Step 2 − Divide the list recursively into two halves until it
can no longer be divided.
Step 3 − Merge the smaller lists into a new list in sorted
order.
Pseudocode
We shall now see the pseudocode for the merge sort
functions. As the algorithm points out, there are two main
functions − divide (mergesort) and merge.
Merge sort works with recursion, and we shall see our
implementation in the same way. Here n denotes the number
of elements in the array a.
procedure mergesort( var a as array )
   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2 - 1]
   var l2 as array = a[n/2] ... a[n - 1]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )
end procedure
procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c
end procedure
Example
In the following example, we have shown the Merge-Sort
algorithm step by step. In every iteration, the array is divided
into two sub-arrays until each sub-array contains only one
element. When these sub-arrays cannot be divided further,
merge operations are performed.

Analysis
Let us consider the running time of Merge-Sort as T(n).
Hence,

T(n) = c                     if n ≤ 1
T(n) = 2·T(n/2) + d·n        otherwise

where c and d are constants.

Therefore, unrolling this recurrence relation for i levels,

T(n) = 2^i · T(n/2^i) + i·d·n

As i = log n,

T(n) = 2^(log n) · T(n/2^(log n)) + (log n)·d·n
     = c·n + d·n·log n

Therefore, T(n) = O(n log n).
Example
Following is the implementation of this operation in C −
C
#include <stdio.h>
#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11];   /* temporary array used while merging */

/* Merge the sorted halves a[low..mid] and a[mid+1..high] */
void merging(int low, int mid, int high) {
   int l1, l2, i;

   for(l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
      if(a[l1] <= a[l2])
         b[i] = a[l1++];
      else
         b[i] = a[l2++];
   }

   while(l1 <= mid)
      b[i++] = a[l1++];

   while(l2 <= high)
      b[i++] = a[l2++];

   /* copy the merged result back into the original array */
   for(i = low; i <= high; i++)
      a[i] = b[i];
}

/* Recursively divide a[low..high] and merge the sorted halves */
void sort(int low, int high) {
   int mid;

   if(low < high) {
      mid = (low + high) / 2;
      sort(low, mid);
      sort(mid + 1, high);
      merging(low, mid, high);
   }
}

int main() {
   int i;

   printf("Array before sorting\n");
   for(i = 0; i <= max; i++)
      printf("%d ", a[i]);

   sort(0, max);

   printf("\nArray after sorting\n");
   for(i = 0; i <= max; i++)
      printf("%d ", a[i]);
   return 0;
}
Output
Array before sorting
10 14 19 26 27 31 33 35 42 44 0
Array after sorting
0 10 14 19 26 27 31 33 35 42 44

Kruskal’s Minimal Spanning Tree
Kruskal’s minimal spanning tree algorithm is one of the
efficient methods to find the minimum spanning tree of a
graph. A minimum spanning tree is a subgraph that connects
all the vertices present in the main graph with the fewest
possible edges and the minimum cost (the sum of the weights
assigned to the edges).
The algorithm starts from a forest – defined as a subgraph
containing only the vertices of the main graph – and then
adds the least-cost edges one by one until the minimum
spanning tree is created, without forming any cycles in the
graph.
Kruskal’s algorithm is easier to implement than Prim’s
algorithm, but has higher complexity.
Kruskal’s Algorithm
The input taken by Kruskal’s algorithm is the graph G {V, E},
where V is the set of vertices and E is the set of edges; the
minimum spanning tree of graph G is obtained as the output.
Algorithm

Sort all the edges in the graph in ascending order and
store them in an array edge[].

Construct the forest of the graph on a plane with all the
vertices in it.

Select the least-cost edge from the edge[] array and
add it into the forest of the graph, provided it does not
form a cycle. Mark the vertices visited by adding them
into the visited[] array.

Repeat steps 2 and 3 until all the vertices are visited,
without any cycles forming in the graph.

When all the vertices are visited, the minimum
spanning tree is formed.

Calculate the minimum cost of the output spanning
tree formed.

(The cycle check in step 3 is usually implemented with a
union-find structure; a minimal sketch is given after this list.)
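The following is a minimal union-find sketch in C for the cycle check (the function names and the MAX_VERTICES constant are illustrative assumptions; the example program later in this chapter uses a simpler variant). find_set follows parent links up to the representative of a set, and union_sets links two representatives, so an edge whose endpoints already share a representative would form a cycle.

#define MAX_VERTICES 100

int parent[MAX_VERTICES];          /* parent[i] == i means i represents its own set */

void make_sets(int n) {
    for (int i = 0; i < n; i++)
        parent[i] = i;             /* every vertex starts in its own set */
}

int find_set(int i) {
    while (parent[i] != i)         /* walk up to the set representative */
        i = parent[i];
    return i;
}

/* Returns 1 if the edge (u, v) joins two different sets (safe to add),
   0 if u and v are already connected (adding the edge would form a cycle). */
int union_sets(int u, int v) {
    int ru = find_set(u), rv = find_set(v);
    if (ru == rv)
        return 0;
    parent[rv] = ru;
    return 1;
}

In Kruskal's main loop, an edge is accepted only when union_sets returns 1.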

Examples
Construct a minimum spanning tree using Kruskal’s
algorithm for the graph given below −

Solution
As the first step, sort all the edges in the given graph in
ascending order and store the values in an array.

Edge  B→D  A→B  C→F  F→E  B→C  G→F  A→G  C→D  D→E  C→G
Cost  5    6    9    10   11   12   15   17   22   25

Then, construct a forest of the given graph on a single plane.

From the list of sorted edge costs, select the least-cost edge
and add it onto the forest in the output graph.
B→D=5
Minimum cost = 5
Visited array, v = {B, D}
Similarly, the next least cost edge is B → A = 6; so we add it
onto the output graph.
Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}
The next least cost edge is C → F = 9; add it onto the output
graph.
Minimum Cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}

The next edge to be added onto the output graph is
F → E = 10.
Minimum Cost = 5 + 6 + 9 + 10 = 30
Visited array, v = {B, D, A, C, F, E}
The next edge from the least cost array is B → C = 11, hence
we add it in the output graph.
Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}
The last edge from the least cost array to be added in the
output graph is F → G = 12.
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the
given graph with cost = 53.
Example
The following program implements Kruskal’s minimum
spanning tree algorithm; it takes the cost adjacency matrix
as the input and prints the edges of the spanning tree as the
output, along with the minimum cost.
C
#include <stdio.h>
#include <stdlib.h>

const int inf = 999999;
int k, a, b, u, v, n, ne = 1;
int mincost = 0;
int cost[3][3] = {{0, 10, 20}, {12, 0, 15}, {16, 18, 0}};
int p[9] = {0};

int applyfind(int i) {
   while(p[i] != 0)
      i = p[i];
   return i;
}

int applyunion(int i, int j) {
   if(i != j) {
      p[j] = i;
      return 1;
   }
   return 0;
}

int main() {
   n = 3;
   int i, j;

   /* treat missing edges (cost 0) as infinite cost */
   for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
         if (cost[i][j] == 0) {
            cost[i][j] = inf;
         }
      }
   }

   printf("Minimum Cost Spanning Tree: \n");
   while(ne < n) {
      int min_val = inf;

      /* pick the least-cost remaining edge */
      for(i = 0; i < n; i++) {
         for(j = 0; j < n; j++) {
            if(cost[i][j] < min_val) {
               min_val = cost[i][j];
               a = u = i;
               b = v = j;
            }
         }
      }

      /* add the edge only if it connects two different components */
      u = applyfind(u);
      v = applyfind(v);
      if(applyunion(u, v) != 0) {
         printf("%d -> %d\n", a, b);
         mincost += min_val;
      }
      cost[a][b] = cost[b][a] = 999;
      ne++;
   }
   printf("Minimum cost = %d", mincost);
   return 0;
}
Output
Minimum Cost Spanning Tree:
0 -> 1
1 -> 2
Minimum cost = 25

Design and Analysis - Heap Sort
Heap Sort is an efficient sorting technique based on
the heap data structure.

The heap is a nearly-complete binary tree in which every
parent node is either greater than or equal to its children
(a max-heap) or less than or equal to its children (a min-heap).
A heap whose root node holds the minimum value is called a
min-heap, and one whose root node holds the maximum value
is called a max-heap. The elements in the input data of the
heap sort algorithm are processed using these two variants.
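Since a heap is usually stored in an array, the parent and child relationships reduce to index arithmetic. The helper functions below are an illustrative sketch (the names and 0-based indexing are assumptions, not from the original text); for instance, the children of the root at index 0 are stored at indices 1 and 2.

/* 0-based array representation of a binary heap */
int parent(int i)      { return (i - 1) / 2; }
int left_child(int i)  { return 2 * i + 1; }
int right_child(int i) { return 2 * i + 2; }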
The heap sort algorithm performs two main operations in this
procedure −

Builds a heap H from the input data using the
heapify method (explained further in this chapter),
based on the order of sorting – ascending or
descending.

Deletes the root element of the heap and repeats the
process until all the input elements are processed.

Heap Sort Algorithm
The heap sort algorithm heavily depends upon the heapify
method of the binary tree. So what is this heapify method?
Heapify Method
The heapify method of a binary tree converts the tree
into a heap data structure. This method uses a recursive
approach to heapify all the nodes of the binary tree.
Note − The binary tree must be a complete binary tree, that
is, every level is completely filled except possibly the last,
which is filled from the left.
The complete binary tree is converted into either a max-heap
or a min-heap by applying the heapify method; a minimal
sketch of the max-heapify step is given below.
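A minimal max-heapify (sift-down) sketch in C, assuming a 0-based array of size n (the function and parameter names are illustrative, not taken from the original text): the node at index i is swapped with its larger child until the subtree rooted at i satisfies the max-heap property.

/* Restore the max-heap property for the subtree rooted at index i,
   assuming both child subtrees are already max-heaps. */
void max_heapify(int heap[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && heap[left] > heap[largest])
        largest = left;
    if (right < n && heap[right] > heap[largest])
        largest = right;

    if (largest != i) {
        int tmp = heap[i];             /* swap the node with its larger child */
        heap[i] = heap[largest];
        heap[largest] = tmp;
        max_heapify(heap, n, largest); /* continue sifting down */
    }
}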
Heap Sort Algorithm
As described in the algorithm below, the sorting algorithm
first constructs the heap ADT by calling the Build-Max-Heap
algorithm, then repeatedly swaps the root element with the
last leaf node and removes it from the heap. The heapify
method is then applied to rearrange the remaining elements
accordingly.
Algorithm: Heapsort(A)
BUILD-MAX-HEAP(A)
for i = A.length downto 2
exchange A[1] with A[i]
A.heap-size = A.heap-size - 1
MAX-HEAPIFY(A, 1)
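As a rough C translation of this pseudocode (an illustrative sketch using 0-based indexing and the max_heapify helper sketched earlier in this chapter, not the full program given in the Implementation section below):

/* Build a max-heap by fixing every internal node, bottom-up */
void build_max_heap(int heap[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(heap, n, i);
}

void heap_sort(int heap[], int n) {
    build_max_heap(heap, n);
    for (int i = n - 1; i >= 1; i--) {
        int tmp = heap[0];          /* exchange A[1] with A[i]: move the maximum to the end */
        heap[0] = heap[i];
        heap[i] = tmp;
        max_heapify(heap, i, 0);    /* the heap shrinks by one; restore the property at the root */
    }
}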
Analysis
The heap sort algorithm combines the better attributes of two
other sorting algorithms: insertion sort and merge sort.
As with insertion sort, only a constant number of array
elements are stored outside the input array at any time, i.e.
heap sort sorts in place.
The time complexity of the heap sort algorithm is O(n log n),
similar to merge sort.
Example
Let us look at an example array to understand the sort
algorithm better −
12 3 9 14 10 18 8 23
Building a heap using the BUILD-MAX-HEAP algorithm from
the input array −
Rearrange the obtained binary tree by exchanging the nodes
such that a heap data structure is formed.
The Heapsort Algorithm
Applying the heapify method, remove the root node from the
heap and replace it with the next immediate maximum
valued child of the root.
The root node is 23, so 23 is popped and 18 is made the
next root because it is the next maximum node in the heap.

Now, 18 is popped after 23 which is replaced by 14.

The current root 14 is popped from the heap and is replaced
by 12.
12 is popped and replaced with 10.

Similarly, all the other elements are popped using the same
process.
Every time an element is popped, it is added at the
beginning of the output array, since the heap data structure
formed is a max-heap. But if the heapify method converts
the binary tree into a min-heap, the popped elements are
added at the end of the output array.
The final sorted list is,
3 8 9 10 12 14 18 23
Implementation
The logic applied in the implementation of heap sort is as
follows: first, the heap data structure is built based on the
max-heap property, where parent nodes must have greater
values than their child nodes. Then the root node is popped
from the heap, and the next maximum node in the heap is
shifted to the root. The process continues iteratively until
the heap is empty.
Below, we show the heap sort implementation in C.
C
#include <stdio.h>

void heapify(int[], int);

/* Build a max-heap by sifting each new element up towards the root */
void build_maxheap(int heap[], int n) {
   int i, c, r, t;

   for (i = 1; i < n; i++) {
      c = i;
      do {
         r = (c - 1) / 2;              /* parent of c */
         if (heap[r] < heap[c]) {      /* to create a MAX heap array */
            t = heap[r];
            heap[r] = heap[c];
            heap[c] = t;
         }
         c = r;
      } while (c != 0);
   }

   printf("Heap array: ");
   for (i = 0; i < n; i++)
      printf("%d ", heap[i]);

   heapify(heap, n);
}

/* Repeatedly move the current maximum to the end and restore the heap */
void heapify(int heap[], int n) {
   int i, j, c, root, temp;

   for (j = n - 1; j >= 0; j--) {
      temp = heap[0];
      heap[0] = heap[j];               /* swap max element with rightmost leaf element */
      heap[j] = temp;

      root = 0;
      do {
         c = 2 * root + 1;             /* left child of root element */
         if (c < j - 1 && heap[c] < heap[c + 1])
            c++;                       /* pick the larger of the two children */
         if (c < j && heap[root] < heap[c]) {   /* again rearrange to max heap array */
            temp = heap[root];
            heap[root] = heap[c];
            heap[c] = temp;
         }
         root = c;
      } while (c < j);
   }

   printf("\nThe sorted array is: ");
   for (i = 0; i < n; i++)
      printf("%d ", heap[i]);
}

int main() {
   int n = 5;
   int heap[10] = {2, 3, 1, 0, 4};     /* initialize the array */
   build_maxheap(heap, n);
   return 0;
}
Output
Heap array: 4 3 1 0 2
The sorted array is: 0 1 2 3 4
