Algorithm Complexity

While analyzing an algorithm, we mostly consider time complexity and space complexity. The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the input.

Similarly, the space complexity of an algorithm quantifies the amount of space or memory taken by the algorithm to run as a function of the length of the input.

Time and space complexity depend on lots of things like the hardware, the operating system, the processors, etc. However, we don't consider any of these factors while analyzing the algorithm. We will only consider the execution time of an algorithm.

Let's start with a simple example. Suppose you are given an array A and an integer x, and you have to find whether x exists in array A.

A simple solution to this problem is to traverse the whole array A and check whether any element is equal to x.

for i : 1 to length of A
    if A[i] is equal to x
        return TRUE
return FALSE
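
For concreteness, the same search can also be written in C++; this is a minimal sketch, and the function name linearSearch is our own choice for illustration:

#include <vector>

// Returns true if x occurs anywhere in A, false otherwise.
// Worst case: x is absent, so the loop inspects all N elements.
bool linearSearch(const std::vector<int>& A, int x) {
    for (int i = 0; i < (int)A.size(); i++) {
        if (A[i] == x) {
            return true;
        }
    }
    return false;
}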

Each of the operations on a computer takes approximately constant time. Let each operation take c time. The number of lines of code executed actually depends on the value of x. During the analysis of an algorithm, we will mostly consider the worst-case scenario, i.e., when x is not present in the array A. In the worst case, the if condition will run N times, where N is the length of the array A. So in the worst case, the total execution time will be (N∗c + c): N∗c for the if condition and c for the return statement (ignoring some operations like the assignment of i).

As we can see, the total time depends on the length of the array A. If the length of the array increases, the time of execution will also increase.

Order of growth is how the time of execution depends on the length of the input. In the above example, we can clearly see that the time of execution depends linearly on the length of the array. Order of growth will help us compute the running time with ease. We will ignore the lower order terms, since the lower order terms are relatively insignificant for large inputs. We use different notations to describe the limiting behavior of a function.

Common Functions for Big-O:

1. O(1) : Constant
2. O(log n) : Logarithmic
3. O(n) : Linear
4. O(n log n) : Log-linear
5. O(n^2) : Quadratic / Polynomial
6. O(n^3) : Cubic / Polynomial
7. O(2^n) : Exponential
8. O(n!) : Factorial

The Meaning of Asymptotic Notation:

Asymptotic notations are languages that allow us to analyze an algorithm’s running time by identifying its behavior as the input size for the algorithm increases. This is also known as an algorithm’s growth rate. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.

They address questions such as:

 Does the algorithm suddenly become incredibly slow when the input size grows?
 Does it mostly maintain its quick run time as the input size increases?

Asymptotic notation gives us the ability to answer these questions.

To summarize, asymptotic notations are the expressions that are used to represent the complexity of an algorithm. There are three types of analysis that we perform on a particular algorithm.

Best Case: In which we analyse the performance of an algorithm for the input for which the algorithm takes the least time or space.

Worst Case: In which we analyse the performance of an algorithm for the input for which the algorithm takes the longest time or space.

Average Case: In which we analyse the performance of an algorithm for the input for which the algorithm takes time or space that lies between the best and worst case.

O-notation:
To denote asymptotic upper bound, we use O-notation. For a given
function g(n), we denote by O(g(n)) (pronounced “big-oh of g of n”) as the
set of functions:
O(g(n))= { f(n) : there exist positive constants c and n0 such that
0≤f(n)≤c∗g(n) for all n≥n0 }
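
For instance, f(n) = 3∗n + 2 = O(n), since 3∗n + 2 ≤ 4∗n for all n ≥ 2 (choosing c = 4 and n0 = 2).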

The Big-O asymptotic notation gives us the upper bound idea, mathematically described below:

f(n) = O(g(n)) if there exist a positive integer n0 and a positive constant c such that f(n) ≤ c∗g(n) ∀ n ≥ n0.

The general step-wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate all but the highest-order terms.
4. Remove all the constant factors.

This gives the least upper bound, or the tightest bound, of the algorithm.

Some of the useful properties of Big-O notation analysis are as follows:

Constant Multiplication:
If f(n) = c∗g(n), then O(f(n)) = O(g(n)); where c is a nonzero constant.

Polynomial Function:
If f(n) = a0 + a1∗n + a2∗n^2 + …… + am∗n^m, then O(f(n)) = O(n^m).

Summation Function:
If f(n) = f1(n) + f2(n) + ……. + fm(n) and fi(n) ≤ fi+1(n) ∀ i = 1, 2, ……., m, then O(f(n)) = O(max(f1(n), f2(n), ….., fm(n))).

Logarithmic Function:
If f(n) = log_a(n) and g(n) = log_b(n), then O(f(n)) = O(g(n)); all logarithmic functions grow in the same manner in terms of Big-O.
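
For example, by the summation and polynomial rules, O(n^2 + n∗logn + n) = O(n^2).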

Ω-notation:
To denote asymptotic lower bound, we use Ω-notation. For a given
function g(n), we denote by Ω(g(n)) (pronounced “big-omega of g of n”) as
the set of functions:

Ω(g(n))= { f(n) : there exist positive constants c and n0 such that


0≤c∗g(n)≤f(n) for all n≥n0 }
It gives the tightest lower bound.

Θ-notation:
To denote asymptotic tight bound, we use Θ-notation. For a given
function g(n), we denote by Θ(g(n)) (pronounced “big-theta of g of n”) as the
set of functions:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
0 ≤ c1∗g(n) ≤ f(n) ≤ c2∗g(n) for all n ≥ n0 }
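
For instance, (1/2)∗n^2 − 3∗n = Θ(n^2): the definition is satisfied with c1 = 1/14, c2 = 1/2 and n0 = 7.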

Diagrams from Donald Knuth’s Book on Algorithms.

Time complexity notations


While analysing an algorithm, we mostly consider O-notation because it gives us an upper limit of the execution time, i.e., the execution time in the worst case.

To compute O-notation we will ignore the lower order terms, since the lower
order terms are relatively insignificant for large input.
Let f(N) = 2∗N^2 + 3∗N + 5
O(f(N)) = O(2∗N^2 + 3∗N + 5) = O(N^2)

Let's consider some examples:

1. int count = 0;
   for (int i = 0; i < N; i++)
       for (int j = 0; j < i; j++)
           count++;

Let's see how many times count++ will run.

When i = 0, it will run 0 times.
When i = 1, it will run 1 time.
When i = 2, it will run 2 times, and so on.

The total number of times count++ will run is 0 + 1 + 2 + ... + (N−1) = N∗(N−1)/2. So the time complexity will be O(N^2).
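
One quick way to convince yourself is to run the loop and compare the counter with the closed form. Below is a small C++ check (the value N = 1000 is an arbitrary choice for the experiment):

#include <cassert>

int main() {
    const int N = 1000;   // arbitrary input size for the check
    long long count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < i; j++)
            count++;
    // count must equal 0 + 1 + ... + (N-1) = N*(N-1)/2
    assert(count == 1LL * N * (N - 1) / 2);
    return 0;
}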

2. int count = 0;
   for (int i = N; i > 0; i /= 2)
       for (int j = 0; j < i; j++)
           count++;
This is a tricky case. At first look, it seems like the complexity is O(N∗logN): N for j's loop and logN for i's loop. But that's wrong. Let's see why.

Think about how many times count++ will run.

When i = N, it will run N times.
When i = N/2, it will run N/2 times.
When i = N/4, it will run N/4 times, and so on.

The total number of times count++ will run is N + N/2 + N/4 + ... + 1 ≈ 2∗N. So the time complexity will be O(N).
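
A similar check works here as well; this sketch simply verifies that the counter stays below 2∗N (N = 1000000 is again an arbitrary choice):

#include <cassert>

int main() {
    const int N = 1000000;   // arbitrary input size for the check
    long long count = 0;
    for (int i = N; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    // N + N/2 + N/4 + ... + 1 is always below 2*N
    assert(count < 2LL * N);
    return 0;
}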

The table below is to help you understand the growth of several common
time complexities, and thus help you judge if your algorithm is fast enough
(assuming the algorithm is correct).
Length of Input (N)    Worst Accepted Algorithm
≤ [10..11]             O(N!), O(N^6)
≤ [15..18]             O(2^N ∗ N^2)
≤ [18..22]             O(2^N ∗ N)
≤ 100                  O(N^4)
≤ 400                  O(N^3)
≤ 2K                   O(N^2 ∗ logN)
≤ 10K                  O(N^2)
≤ 1M                   O(N ∗ logN)
≤ 100M                 O(N), O(logN), O(1)

Runtime Analysis of Algorithms

In general cases, one measures and compares the worst-case theoretical running time complexities of algorithms for the performance analysis.

The fastest possible running time for any algorithm is O(1), commonly
referred to as Constant Running Time. In this case, the algorithm always
takes the same amount of time to execute, regardless of the input size. This is
the ideal runtime for an algorithm, but it’s rarely achievable.

In actual cases, the performance (runtime) of an algorithm depends on n, that is, the size of the input or the number of operations required for each input item.

The algorithms can be classified as follows from the best-to-worst performance (running time complexity):

A logarithmic algorithm – O(logn)
Runtime grows logarithmically in proportion to n.

A linear algorithm – O(n)
Runtime grows directly in proportion to n.

A superlinear algorithm – O(nlogn)
Runtime grows in proportion to nlogn, slightly faster than linear.

A polynomial algorithm – O(n^c)
Runtime grows more quickly than all of the above, as a polynomial function of n.

An exponential algorithm – O(c^n)
Runtime grows even faster than a polynomial algorithm based on n.

A factorial algorithm – O(n!)
Runtime grows the fastest and quickly becomes unusable for even small values of n.

Where n is the input size and c is a positive constant.


Algorithmic Examples of Runtime Analysis:
Some examples of all these types of algorithms (in worst-case scenarios) are mentioned below:

Logarithmic algorithm – O(logn) – Binary Search.
Linear algorithm – O(n) – Linear Search.
Superlinear algorithm – O(nlogn) – Heap Sort, Merge Sort.
Polynomial algorithm – O(n^c) – Strassen's Matrix Multiplication, Bubble Sort, Selection Sort, Insertion Sort, Bucket Sort.
Exponential algorithm – O(c^n) – Tower of Hanoi.
Factorial algorithm – O(n!) – Determinant Expansion by Minors, Brute-force search algorithm for the Traveling Salesman Problem.
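
As an illustration of the logarithmic case above, here is a minimal iterative binary search sketch in C++ (it assumes the input vector is already sorted; the function name binarySearch is our own choice):

#include <vector>

// Returns the index of x in the sorted vector a, or -1 if x is absent.
// Each iteration halves the remaining range, so the loop runs O(logn) times.
int binarySearch(const std::vector<int>& a, int x) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == x)
            return mid;
        else if (a[mid] < x)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}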

Mathematical Examples of Runtime Analysis:

The performances (runtimes) of different orders of algorithms separate rapidly as n (the input size) gets larger. Consider the mathematical example:

If n = 10:               If n = 20:
log(10) = 1;             log(20) = 2.996;
10 = 10;                 20 = 20;
10∗log(10) = 10;         20∗log(20) = 59.9;
10^2 = 100;              20^2 = 400;
2^10 = 1024;             2^20 = 1048576;
10! = 3628800;           20! = 2.432902e+18

Memory Footprint Analysis of Algorithms


For performance analysis of an algorithm, runtime measurement is not the only relevant metric; we also need to consider the amount of memory used by the program. This is referred to as the Memory Footprint of the algorithm, also known as Space Complexity.

Space complexity is a measure of the space required by an algorithm to run to completion. It compares the worst-case theoretical space complexities of algorithms for the performance analysis.
It basically depends on two major aspects described below:
 Firstly, the implementation of the program is responsible for memory usage. For example, we can generally assume that a recursive implementation reserves more memory than the corresponding iterative implementation of a particular problem, as the sketch after this list illustrates.
 The other one is n, the input size or the amount of storage required for each item. For example, a simple algorithm with a large input size can consume more memory than a complex algorithm with a smaller input size.
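
As a rough illustration of the first point, consider computing the sum 1 + 2 + ... + n; this is only a sketch, but the recursive version keeps about n stack frames alive, while the iterative version uses a constant amount of extra space:

// Recursive version: each pending call occupies a stack frame,
// so the extra memory grows linearly with n.
long long sumRecursive(int n) {
    if (n == 0) return 0;
    return n + sumRecursive(n - 1);
}

// Iterative version: only a couple of local variables,
// so the extra memory is constant regardless of n.
long long sumIterative(int n) {
    long long total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}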

Algorithmic Examples of Memory Footprint Analysis: The algorithms, with examples, are classified from the best-to-worst performance (space complexity) based on worst-case scenarios, as mentioned below:
Ideal algorithm - O(1) - Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Shell Sort.
Logarithmic algorithm - O(log n) - Quick Sort (average case).
Linear algorithm - O(n) - Merge Sort.
O(n+k) algorithm - Radix Sort (where k is the range of the keys).

Space-Time Trade-off and Efficiency


There is usually a trade-off between optimal memory use and runtime performance.

In general, for an algorithm, space efficiency and time efficiency lie at two opposite ends, and each point in between them has a certain time and space efficiency. So, the more time efficiency you have, the less space efficiency you have, and vice versa.

For example, the Merge Sort algorithm is exceedingly fast but requires a lot of space for its operations. On the other hand, Bubble Sort is exceedingly slow but requires the minimum space.

At the end of this topic, we can conclude that finding an algorithm that runs in less time and also requires less memory space can make a huge difference in how well the algorithm performs.
Amortized Analysis

Amortized time is a way to express the time complexity when an algorithm occasionally has a very bad time complexity, besides the time complexity that occurs most of the time.

This analysis is used when the occasional operation is very slow, but most of the operations that execute very frequently are faster. In data structures we need amortized analysis for Hash Tables, Disjoint Sets, etc.

In a hash table, most of the time the searching time complexity is O(1), but sometimes it requires O(n) operations. When we want to search or insert an element in a hash table, in most cases it takes constant time, but when a collision occurs, it needs O(n) operations for collision resolution.
Aggregate Method
The aggregate method is used to find the total cost. If we want to add a bunch of data, then we need to find the amortized cost by this formula.
For a sequence of n operations, the amortized cost is:
Amortized cost = T(n) / n, where T(n) is the total cost of the n operations.

Example on Amortized Analysis


For a dynamic array, items can be appended in O(1) time as long as there is spare capacity. But when the array is full, the insertion cannot be performed in constant time; in that case, the array first doubles its size and then inserts the element.

For the dynamic array, let ci = the cost of the i-th insertion.
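
Below is a rough C++ sketch of the doubling strategy; the class name DynArray and the cost model (one unit of cost per element written) are our own assumptions for illustration, not part of the original text:

#include <cstdio>

// Minimal dynamic array that doubles its capacity when full.
// "cost" counts element writes: 1 for the append itself plus
// size copies whenever a resize happens.
struct DynArray {
    int *data = nullptr;
    int size = 0, capacity = 0;
    long long cost = 0;

    void push_back(int value) {
        if (size == capacity) {
            int newCap = (capacity == 0) ? 1 : 2 * capacity;
            int *newData = new int[newCap];
            for (int i = 0; i < size; i++) {  // copy old elements
                newData[i] = data[i];
                cost++;
            }
            delete[] data;
            data = newData;
            capacity = newCap;
        }
        data[size++] = value;
        cost++;
    }
    ~DynArray() { delete[] data; }
};

int main() {
    DynArray a;
    const int n = 1000000;
    for (int i = 0; i < n; i++) a.push_back(i);
    // Total cost stays below 3*n, so the amortized cost per insertion is O(1).
    std::printf("total cost = %lld, cost per op = %.2f\n", a.cost, (double)a.cost / n);
    return 0;
}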

Refer to the link below for the complexity of programs / code:

https://adrianmejia.com/most-popular-algorithms-time-complexity-every-programmer-should-know-free-online-tutorial-course/
