
Unit 1-DAA

Dept. of CSIT, GITA


Dr. Parimal Kumar Giri
Definition: An algorithm is a step-by-step process to solve a problem, where each step
indicates an intermediate task. An algorithm contains a finite number of steps that lead to the
solution of the problem.
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a
certain order to get the desired output. Algorithms are generally created independent of
underlying languages, i.e. an algorithm can be implemented in more than one programming
language.
From the data structure point of view, the following are some important categories of algorithms −
 Search − Algorithm to search for an item in a data structure.
 Sort − Algorithm to sort items in a certain order.
 Insert − Algorithm to insert an item in a data structure.
 Update − Algorithm to update an existing item in a data structure.
 Delete − Algorithm to delete an existing item from a data structure.

Properties / Characteristics of an Algorithm

An algorithm has the following basic properties:

 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs that match the desired output.
 Finiteness − An algorithm must terminate after a finite number of steps.
 Definiteness − Each step of an algorithm must be stated clearly and unambiguously.
 Effectiveness − Each step of an algorithm must be basic enough to be converted into a
programming-language statement.
 Feasibility − An algorithm should be feasible with the available resources.
 Independence − An algorithm should have step-by-step directions that are independent of
any programming code.
 Generality − An algorithm should be general: it works on all valid sets of inputs and provides
the required output. In other words, it is not restricted to a single input value.

How to Write an Algorithm?


There are no well-defined standards for writing algorithms. Rather, writing is problem and
resource dependent. Algorithms are never written to support a particular programming code.
All programming languages share basic code constructs like loops (do, for, while), flow control
(if-else), etc., and these common constructs can be used to write an algorithm.

We usually write algorithms in a step-by-step manner, but that is not always the case. Algorithm
writing is a process that is carried out after the problem domain is well defined. That is, we
should know the problem domain for which we are designing a solution.
Categories of Algorithm:
Based on the different types of steps in an Algorithm, it can be divided into three categories,
namely
 Sequence
 Selection and
 Iteration

Sequence: The steps described in an algorithm are performed successively one by one without
skipping any step. The sequence of steps defined in an algorithm should be simple and easy to
understand. Each instruction of such an algorithm is executed, because no selection procedure or
conditional branching exists in a sequence algorithm.
Example:
// adding two numbers
Step 1: start
Step 2: read a,b
Step 3: Sum=a+b
Step 4: write Sum
Step 5: stop
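As a quick illustration, here is the same sequence algorithm written in C (a minimal sketch; the
variable names simply follow the steps above):

#include <stdio.h>

int main(void) {
    int a, b, sum;
    scanf("%d %d", &a, &b);   /* Step 2: read a, b */
    sum = a + b;              /* Step 3: Sum = a + b */
    printf("%d\n", sum);      /* Step 4: write Sum */
    return 0;                 /* Step 5: stop */
}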
Selection: The sequence type of algorithm is not sufficient to solve problems that involve
decisions and conditions. In order to solve a problem that involves decision making or option
selection, we go for the selection type of algorithm. The general format of a selection
statement is as shown below:
if(condition)
Statement-1;
else
Statement-2;
The above syntax specifies that if the condition is true, Statement-1 will be executed; otherwise
Statement-2 will be executed. If an operation turns out to be unsuccessful, the sequence of the
algorithm should be changed/corrected in such a way that the system re-executes until the
operation is successful. A small C example is sketched below.
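For instance, a minimal C sketch of a selection algorithm that reports whether a number is even
or odd (the even/odd check is our illustrative choice, not taken from the notes):

#include <stdio.h>

int main(void) {
    int n;
    scanf("%d", &n);
    if (n % 2 == 0)            /* condition */
        printf("even\n");      /* Statement-1 */
    else
        printf("odd\n");       /* Statement-2 */
    return 0;
}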

Iteration: Iteration-type algorithms are used to solve problems that involve the repetition of
statements. In this type of algorithm, a particular set of statements is repeated ‘n’ number of
times.

Example 1: (sum of the digits of a number n)

Step 1 : start
Step 2 : read n
Step 3 : s = 0
Step 4 : repeat step 5 while n > 0
Step 5 : (a) r = n mod 10
(b) s = s + r
(c) n = n / 10
Step 6 : write s
Step 7 : stop
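The same iteration, written as a C loop (a minimal sketch; it assumes n is a non-negative
integer):

#include <stdio.h>

int main(void) {
    int n, r, s = 0;            /* Step 3: s = 0 */
    scanf("%d", &n);
    while (n > 0) {             /* Step 4: repeat while n > 0 */
        r = n % 10;             /* r = n mod 10: last digit */
        s = s + r;
        n = n / 10;             /* integer division drops the last digit */
    }
    printf("%d\n", s);          /* Step 6: write s */
    return 0;
}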
Example 2:

Write an algorithm for the roots of a quadratic equation.

// Roots of a quadratic equation

Step 1 : start
Step 2 : read a, b, c
Step 3 : if (a = 0) then go to step 4 else go to step 5
Step 4 : write “Given equation is a linear equation” and go to step 11
Step 5 : d = (b * b) - (4 * a * c)
Step 6 : if (d > 0) then go to step 7 else go to step 8
Step 7 : write “Roots are real and distinct” and go to step 11
Step 8 : if (d = 0) then go to step 9 else go to step 10
Step 9 : write “Roots are real and equal” and go to step 11
Step 10 : write “Roots are imaginary”
Step 11 : stop
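A minimal C translation of the same decision chain (a sketch; like the steps above it only
classifies the roots, it does not compute them):

#include <stdio.h>

int main(void) {
    double a, b, c, d;
    scanf("%lf %lf %lf", &a, &b, &c);
    if (a == 0) {
        printf("Given equation is a linear equation\n");
    } else {
        d = b * b - 4 * a * c;          /* discriminant */
        if (d > 0)
            printf("Roots are real and distinct\n");
        else if (d == 0)
            printf("Roots are real and equal\n");
        else
            printf("Roots are imaginary\n");
    }
    return 0;
}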
Example 3: Design an algorithm to add two numbers and display the result.

Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Example 4. Write an algorithm to find the largest among three different numbers entered by user
Step 1: Start
Step 2: Declare variables a,b and c.
Step 3: Read variables a,b and c.
Step 4: if a > b
            if a > c
                Display a is the largest number.
            else
                Display c is the largest number.
        else
            if b > c
                Display b is the largest number.
            else
                Display c is the largest number.
Step 5: Stop
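In C, the nested selection looks like this (a minimal sketch; it assumes the three numbers are
distinct, as the example states):

#include <stdio.h>

int main(void) {
    int a, b, c;
    scanf("%d %d %d", &a, &b, &c);
    if (a > b) {
        if (a > c)
            printf("a is the largest number\n");
        else
            printf("c is the largest number\n");
    } else {
        if (b > c)
            printf("b is the largest number\n");
        else
            printf("c is the largest number\n");
    }
    return 0;
}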
Example 5.Write an algorithm to find the factorial of a number entered by user.
Step 1: Start
Step 2: Declare variables n,factorial and i.
Step 3: Initialize variables
factorial←1
i←1
Step 4: Read value of n
Step 5: Repeat steps 5.1 and 5.2 while i ≤ n
5.1: factorial←factorial*i
5.2: i←i+1
Step 6: Display factorial
Step 7: Stop
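The corresponding C loop (a minimal sketch; unsigned long long keeps exact results only up to
20!, so larger n would overflow):

#include <stdio.h>

int main(void) {
    int n, i = 1;
    unsigned long long factorial = 1;   /* Step 3: initialize */
    scanf("%d", &n);
    while (i <= n) {                    /* Step 5: repeat while i <= n */
        factorial = factorial * i;      /* 5.1 */
        i = i + 1;                      /* 5.2 */
    }
    printf("%llu\n", factorial);        /* Step 6: display factorial */
    return 0;
}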
Example 6. Write an algorithm to find the simple interest for a given principal, time and rate of interest.
Step 1: Start
Step 2: Read P, T, R.
Step 3: Calculate S = (P * T * R) / 100
Step 4: Print S
Step 5: Stop
Writing step numbers is optional.
We design an algorithm to get a solution to a given problem. A problem can be solved in more
than one way.

Hence, many solution algorithms can be derived for a given problem. The next step is to analyze
those proposed solution algorithms and implement the most suitable one.

Performance Analysis of an Algorithm:

The efficiency of an algorithm can be measured by the following metrics:

i. Time Complexity and

ii. Space Complexity.

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation. They are the following −
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming that all other factors, for example, processor speed,
are constant and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected
algorithm is implemented using a programming language and then executed on a target
computer machine. In this analysis, actual statistics like running time and space required
are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or
running time of various operations involved. The running time of an operation can be defined as
the number of computer instructions executed per operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
algorithm X are the two main factors, which decide the efficiency of X.
 Time Factor − Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space required by
the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by
the algorithm in terms of n as the size of input data.

Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the
following two components −
 A fixed part, that is, the space required to store certain data and variables that are
independent of the size of the problem. For example, simple variables and constants used,
program size, etc.
 A variable part, that is, the space required by variables whose size depends on the size of the
problem. For example, dynamic memory allocation, recursion stack space, etc.

Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I)
is the variable part of the algorithm, which depends on instance characteristic I. Following is a
simple example that tries to explain the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables (A, B, and C) and one constant (10). Hence S(P) = 3 + 1. Now, the
actual space depends on the data types of the given variables and constants, and is multiplied accordingly.
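To make the fixed and variable parts concrete, here is a minimal C sketch (the function names
are our own): summing an array iteratively needs only a fixed number of extra variables, i.e.
O(1) auxiliary space, while summing it recursively consumes one stack frame per element, i.e.
O(n) auxiliary space.

#include <stdio.h>

/* Fixed part only: i and total, regardless of n  ->  O(1) extra space */
int sum_iterative(const int a[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Variable part: one stack frame per element  ->  O(n) extra space */
int sum_recursive(const int a[], int n) {
    if (n == 0)
        return 0;
    return a[n - 1] + sum_recursive(a, n - 1);
}

int main(void) {
    int a[] = {1, 2, 3, 4, 5};
    printf("%d %d\n", sum_iterative(a, 5), sum_recursive(a, 5));   /* 15 15 */
    return 0;
}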

Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run
to completion. Time requirements can be defined as a numerical function T(n), where T(n) can
be measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that
T(n) grows linearly as the input size increases.
Execution Time Cases
There are three cases which are usually used to compare the execution time of various data
structure operations in a relative manner.
 Worst Case − This is the scenario where a particular data structure operation takes the
maximum time it can take. If an operation's worst-case time is ƒ(n), then this operation
will not take more than ƒ(n) time, where ƒ(n) represents a function of n.
 Average Case − This is the scenario depicting the average execution time of an operation
of a data structure. If an operation takes ƒ(n) time on average, then m operations will
take mƒ(n) time.
 Best Case − This is the scenario depicting the least possible execution time of an operation
of a data structure. If an operation's best-case time is ƒ(n), then the actual operation may
take longer, but never less than ƒ(n).

Asymptotic Analysis
Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular value or a limiting value. For example: In
bubble sort, when the input array is already sorted, the time taken by the algorithm is linear i.e.
the best case. Using asymptotic analysis, we can very well conclude the best case, average case,
and worst case scenario of an algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to
work in a constant time. Other than the "input" all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units
of computation. For example, the running time of one operation may be computed as f(n) = n
while, for another operation, it is computed as g(n) = n². This means the running time of the
first operation will increase linearly with the increase in n, and the running time of the second
operation will increase quadratically as n grows. Similarly, the running times of both operations
will be nearly the same if n is significantly small.

Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation
Big Oh Notation, Ο (Upper Bound)
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst-case time complexity, or the longest amount of time an algorithm can
possibly take to complete.

For a function g(n),

Ο(g(n)) = { f(n) : there exist constants c > 0 and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0 }
Example: Find the upper bound of the running time of the linear function f(n) = 6n + 3.
To find an upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0
0 ≤ f(n) ≤ c × g(n)
0 ≤ 6n + 3 ≤ c × g(n)
0 ≤ 6n + 3 ≤ 6n + 3n, for all n ≥ 1 (there are infinitely many such possibilities)
0 ≤ 6n + 3 ≤ 9n. So, c = 9, g(n) = n, n0 = 1

There can be multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n) for c = 9, n0 = 1
f(n) = O(g(n)) = O(n) for c = 7, n0 = 3 (for n ≥ 3, f(n) ≤ c × g(n) holds true)
and so on.
Tabular Approach
0 ≤ 6n + 3 ≤ c × g(n)
0 ≤ 6n + 3 ≤ 7n
Now, manually find the proper n0 such that f(n) ≤ c × g(n):

n    f(n) = 6n + 3    c × g(n) = 7n
1    9                7
2    15               14
3    21               21
4    27               28
5    33               35

From the table, for n ≥ 3, f(n) ≤ c × g(n) holds true. So, c = 7, g(n) = n and n0 = 3. There can be
multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n) for c = 9, n0 = 1
f(n) = O(g(n)) = O(n) for c = 7, n0 = 3
and so on.
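This tabular check is easy to mechanize. Below is a minimal C sketch (our own illustration, not
part of the notes) that reproduces the table for c = 7 and g(n) = n and marks where
f(n) ≤ c × g(n) starts to hold:

#include <stdio.h>

int main(void) {
    const int c = 7;                      /* candidate constant     */
    for (int n = 1; n <= 5; n++) {        /* reproduce the table    */
        int f = 6 * n + 3;                /* f(n) = 6n + 3          */
        int cg = c * n;                   /* c * g(n) with g(n) = n */
        printf("n=%d  f(n)=%d  c*g(n)=%d  %s\n",
               n, f, cg, f <= cg ? "f <= c*g" : "f > c*g");
    }
    return 0;
}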
Example: Find the upper bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
To find an upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0
0 ≤ f(n) ≤ c × g(n)
0 ≤ 3n² + 2n + 4 ≤ c × g(n)
0 ≤ 3n² + 2n + 4 ≤ 3n² + 2n² + 4n², for all n ≥ 1:
0 ≤ 3n² + 2n + 4 ≤ 9n²

So, c = 9, g(n) = n² and n0 = 1.
Tabular approach:
0 ≤ 3n² + 2n + 4 ≤ c × g(n)
0 ≤ 3n² + 2n + 4 ≤ 4n²
Now, manually find the proper n0 such that f(n) ≤ c × g(n):

n    f(n) = 3n² + 2n + 4    c × g(n) = 4n²
1    9                      4
2    20                     16
3    37                     36
4    60                     64
5    89                     100

From the table, for n ≥ 4, f(n) ≤ c × g(n) holds true. So, c = 4, g(n) = n² and n0 = 4. There can be
multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n²) for c = 9, n0 = 1
f(n) = O(g(n)) = O(n²) for c = 4, n0 = 4
and so on.
Example: Find the upper bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
To find an upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0
0 ≤ f(n) ≤ c × g(n)
0 ≤ 2n³ + 4n + 5 ≤ c × g(n)
0 ≤ 2n³ + 4n + 5 ≤ 2n³ + 4n³ + 5n³, for all n ≥ 1
0 ≤ 2n³ + 4n + 5 ≤ 11n³
So, c = 11, g(n) = n³ and n0 = 1.
Tabular approach
0 ≤ 2n³ + 4n + 5 ≤ c × g(n)
0 ≤ 2n³ + 4n + 5 ≤ 3n³

Now, manually find the proper n0 such that f(n) ≤ c × g(n):

n    f(n) = 2n³ + 4n + 5    c × g(n) = 3n³
1    11                     3
2    29                     24
3    71                     81
4    149                    192

From the table, for n ≥ 3, f(n) ≤ c × g(n) holds true. So, c = 3, g(n) = n³ and n0 = 3. There can be
multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n³) for c = 11, n0 = 1
f(n) = O(g(n)) = O(n³) for c = 3, n0 = 3 and so on.

Omega Notation, Ω (Lower Bound)

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best-case time complexity, or the least amount of time an algorithm can possibly
take to complete.

For a function g(n),

Ω(g(n)) = { f(n) : there exist constants c > 0 and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0 }
Examples on Lower Bound Asymptotic Notation

Example: Find the lower bound of the running time of the constant function f(n) = 23.
To find a lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 23
0 ≤ 23 × 1 ≤ 23 → true
0 ≤ 12 × 1 ≤ 23 → true
0 ≤ 5 × 1 ≤ 23 → true
All three inequalities above are true, and there exist infinitely many such inequalities.
So c = 23, c = 12, c = 5 and g(n) = 1. Any value of c that is less than or equal to 23 satisfies
the above inequality, so all such values of c are possible. Function f(n) is constant, so it does not
depend on problem size n. Hence n0 = 1.
f(n) = Ω(g(n)) = Ω(1) for c = 23, n0 = 1
f(n) = Ω(g(n)) = Ω(1) for c = 12, n0 = 1 and so on.
Example: Find the lower bound of the running time of the linear function f(n) = 6n + 3.

To find a lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 6n + 3
0 ≤ 6n ≤ 6n + 3 → true, for all n ≥ 1
0 ≤ 5n ≤ 6n + 3 → true, for all n ≥ 1
Both inequalities above are true, and there exist infinitely many such inequalities. So,

f(n) = Ω(g(n)) = Ω(n) for c = 6, n0 = 1

f(n) = Ω(g(n)) = Ω(n) for c = 5, n0 = 1
and so on.

Example: Find the lower bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
To find a lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 3n² + 2n + 4
0 ≤ 3n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
0 ≤ n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
Both inequalities above are true, and there exist infinitely many such inequalities.

So, f(n) = Ω(g(n)) = Ω(n²) for c = 3, n0 = 1

f(n) = Ω(g(n)) = Ω(n²) for c = 1, n0 = 1
and so on.

Example: Find the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.

To find a lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 2n³ + 4n + 5
0 ≤ 2n³ ≤ 2n³ + 4n + 5 → true, for all n ≥ 1
0 ≤ n³ ≤ 2n³ + 4n + 5 → true, for all n ≥ 1
Both inequalities above are true, and there exist infinitely many such inequalities.

So, f(n) = Ω(g(n)) = Ω(n³) for c = 2, n0 = 1

f(n) = Ω(g(n)) = Ω(n³) for c = 1, n0 = 1
and so on.

Theta Notation, θ (Tight Bound)

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. It is represented as follows −

θ(g(n)) = { f(n) : f(n) = Ο(g(n)) and f(n) = Ω(g(n)) }
Equivalently, f(n) = θ(g(n)) if there exist constants c1, c2 > 0 and n0 such that
0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n) for all n ≥ n0.
Examples on Tight Bound Asymptotic Notation:
Example: Find the tight bound of the running time of the constant function f(n) = 23.
To find a tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 23 ≤ c2 × g(n)
0 ≤ 22 × 1 ≤ 23 ≤ 24 × 1 → true for all n ≥ 1
0 ≤ 10 × 1 ≤ 23 ≤ 50 × 1 → true for all n ≥ 1
Both inequalities above are true, and there exist infinitely many such inequalities.
So, (c1, c2) = (22, 24) and g(n) = 1, for all n ≥ 1
(c1, c2) = (10, 50) and g(n) = 1, for all n ≥ 1
f(n) = Θ(g(n)) = Θ(1) for c1 = 22, c2 = 24, n0 = 1
f(n) = Θ(g(n)) = Θ(1) for c1 = 10, c2 = 50, n0 = 1
and so on.
Example: Find the tight bound of the running time of the linear function f(n) = 6n + 3.
To find a tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 6n + 3 ≤ c2 × g(n)
0 ≤ 5n ≤ 6n + 3 ≤ 9n, for all n ≥ 1
The above inequality is true, and there exist infinitely many such inequalities.
So, f(n) = Θ(g(n)) = Θ(n) for c1 = 5, c2 = 9, n0 = 1
Example: Find the tight bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
To find a tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 3n² + 2n + 4 ≤ c2 × g(n)
0 ≤ 3n² ≤ 3n² + 2n + 4 ≤ 9n², for all n ≥ 1
The above inequality is true, and there exist infinitely many such inequalities. So,
f(n) = Θ(g(n)) = Θ(n²) for c1 = 3, c2 = 9, n0 = 1
Example: Find the tight bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
To find a tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 2n³ + 4n + 5 ≤ c2 × g(n)
0 ≤ 2n³ ≤ 2n³ + 4n + 5 ≤ 11n³, for all n ≥ 1
The above inequality is true, and there exist infinitely many such inequalities. So,
f(n) = Θ(g(n)) = Θ(n³) for c1 = 2, c2 = 11, n0 = 1
General Problems
Example: Show that: (i) 3n + 2 = Θ(n) (ii) 6 × 2ⁿ + n² = Θ(2ⁿ)
(i) 3n + 2 = Θ(n)
To prove the above statement, we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 3n + 2 ≤ c2 × g(n)
0 ≤ 2n ≤ 3n + 2 ≤ 5n, for all n ≥ 1
So, f(n) = Θ(g(n)) = Θ(n) for c1 = 2, c2 = 5, n0 = 1
(ii) 6 × 2ⁿ + n² = Θ(2ⁿ)
To prove the above statement, we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1 × g(n) ≤ 6 × 2ⁿ + n² ≤ c2 × g(n)
0 ≤ 6 × 2ⁿ ≤ 6 × 2ⁿ + n² ≤ 7 × 2ⁿ, for all n ≥ 4 (the upper bound needs n² ≤ 2ⁿ, which holds from n = 4 onward)
So, f(n) = Θ(g(n)) = Θ(2ⁿ) for c1 = 6, c2 = 7, n0 = 4
Example: Let f(n) and g(n) be asymptotically positive functions. Prove or disprove the following:
f(n) + g(n) = Θ(min(f(n), g(n))).
Example: Prove that (n + a)ᵇ = Θ(nᵇ), b > 0.
To prove the statement, we must find positive constants c1, c2 and n0 such that
0 ≤ c1 × nᵇ ≤ (n + a)ᵇ ≤ c2 × nᵇ, for all n ≥ n0.
Here, a is a constant, so for sufficiently large n (n ≥ |a|):
n + a ≤ n + |a| ≤ n + n = 2n
For still larger n (n ≥ 2|a|), we have |a| ≤ n/2, so
n + a ≥ n − |a| ≥ n/2
Therefore, when n ≥ 2|a|,
0 ≤ n/2 ≤ n + a ≤ 2n
Given that b is a positive constant, raising the above inequality to the power b does not change
the relation:
0 ≤ (n/2)ᵇ ≤ (n + a)ᵇ ≤ (2n)ᵇ
0 ≤ (1/2)ᵇ × nᵇ ≤ (n + a)ᵇ ≤ 2ᵇ × nᵇ
So the constants c1 = (1/2)ᵇ, c2 = 2ᵇ and n0 = 2|a| prove the given statement.
Example: Is 2ⁿ⁺¹ = Ο(2ⁿ)? Explain.
To prove the given statement, we must find constants c and n0 such that 0 ≤ 2ⁿ⁺¹ ≤ c × 2ⁿ for all n ≥
n0.
2ⁿ⁺¹ = 2 × 2ⁿ for all n, so the statement is satisfied with c = 2 and n0 = 1. Hence, yes, 2ⁿ⁺¹ = Ο(2ⁿ).
Example: Find the big theta and big omega notation of f(n) = 14 * 7 + 83.
1. Big omega notation:
We have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 14 * 7 + 83
0 ≤ c × g(n) ≤ 181, for all n ≥ 1
0 ≤ 181 × 1 ≤ 181
For c = 181, g(n) = 1 and n0 = 1:
f(n) = Ω(g(n)) = Ω(1) for c = 181, n0 = 1
2. Big theta notation:
We have to find c1, c2 and n0 such that
0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n) for all n ≥ n0
0 ≤ c1 × g(n) ≤ 181 ≤ c2 × g(n)
0 ≤ 180 × 1 ≤ 181 ≤ 182 × 1, for all n ≥ 1
f(n) = Θ(g(n)) = Θ(1), for c1 = 180, c2 = 182, n0 = 1

Common Asymptotic Notations


Following is a list of some common asymptotic notations −

constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
n log n − Ο(n log n)
quadratic − Ο(n²)
cubic − Ο(n³)
polynomial − Ο(nᵏ), k a constant ≥ 1
exponential − 2^Ο(n)

Time-Space Trade-Off in Algorithms
A trade-off is a situation where one thing increases and another decreases. It is a way to
solve a problem:
 either in less time, by using more space, or
 in very little space, by spending more time.
The best algorithm is one that solves a problem using less space in memory and also takes less
time to generate the output. But in general, it is not always possible to achieve both of these
conditions at the same time.
Types of Space-Time Trade-off
 Compressed or Uncompressed data
 Re-rendering or Stored images
 Smaller code or Loop unrolling
 Lookup tables or Recalculation
Compressed or Uncompressed data: A space-time trade-off can be applied to the problem
of data storage. If data stored is uncompressed, it takes more space but less time. But if the
data is stored compressed, it takes less space but more time to run the decompression
algorithm. There are many instances where it is possible to directly work with compressed
data.
Re-rendering or Stored images: In this case, storing only the source data and re-rendering the
image each time takes less space but more time, whereas storing the rendered image in a cache
takes more space in memory but is faster than re-rendering.
Smaller code or Loop Unrolling: Smaller code occupies less space in memory but it requires
high computation time that is required for jumping back to the beginning of the loop at the end
of each iteration. Loop unrolling can optimize execution speed at the cost of increased binary
size. It occupies more space in memory but requires less computation time.
Lookup tables or Recalculation: In a lookup table, an implementation can include the entire
table which reduces computing time but increases the amount of memory needed. It can
recalculate i.e., compute table entries as needed, increasing computing time but reducing
memory requirements.

For example: In mathematical terms, the sequence Fₙ of the Fibonacci numbers is defined by
the recurrence relation:
Fₙ = Fₙ₋₁ + Fₙ₋₂,
where F₀ = 0 and F₁ = 1.
A simple solution is to find the nth Fibonacci term using recursion, as sketched below.
Time Complexity: O(2ⁿ)
Auxiliary Space: O(1) (ignoring the recursion stack)
Explanation: The time complexity of this implementation is exponential due to repeated
calculation of the same subproblems. The auxiliary space used is minimal. But our goal is to
reduce the time complexity of the approach, even if that requires extra space.
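A minimal C sketch of the naive recursion (the function name fib is our own):

#include <stdio.h>

/* Naive recursion: recomputes the same subproblems, O(2^n) time */
long long fib(int n) {
    if (n <= 1)
        return n;                    /* F(0) = 0, F(1) = 1 */
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("%lld\n", fib(10));       /* prints 55 */
    return 0;
}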

Efficient Approach: To optimize the above approach, the idea is to use Dynamic Programming
to reduce the complexity by memoizing the overlapping subproblems, as sketched below.

Time Complexity: O(n)
Auxiliary Space: O(n)
Explanation: The time complexity of this implementation is linear, because auxiliary space is
used to store the states of the overlapping subproblems so that they can be reused when
required.
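A minimal C sketch of the memoized version (the memo array is our own choice; it uses 0 as the
"not yet computed" sentinel, which works because F(n) > 0 for n ≥ 2):

#include <stdio.h>

#define MAXN 91                 /* fib(92) would overflow long long */
long long memo[MAXN];           /* globals start zeroed: 0 = not computed */

/* Memoized recursion: each subproblem solved once, O(n) time, O(n) space */
long long fib_memo(int n) {
    if (n <= 1)
        return n;
    if (memo[n] != 0)           /* already computed, reuse the stored state */
        return memo[n];
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib_memo(50));   /* prints 12586269025 */
    return 0;
}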
Sub Algorithms
A sub-algorithm is a block of instructions that is executed when it is called from some other
point of the algorithm.
A sub-algorithm is an independent component of an algorithm and, for this reason, is defined
separately from the main algorithm. The purpose of a sub-algorithm is to perform some
computation when required, under the control of the main algorithm. This computation may be
performed on zero or more parameters passed by the calling routine.

A sub-algorithm is just an algorithm that is used as a part of another algorithm.


One of the most important strategies for solving problems is “divide and conquer”: breaking a
problem down into smaller sub-problems that are easier to solve. You can do that multiple
times until the sub-problems become trivial.
If a sub-problem you solve is something that is useful on its own, then you have a sub-
algorithm that deserves the name.

 Write an algorithm for finding the factorial of a number n.
Algorithm FACTORIAL(n)
// Description: Find factorial of a given number
// Input: Number n whose factorial is to be computed
// Output : Factorial of n = n x (n – 1) x … x 2 x 1

if (n ≤ 1) then
return 1
else
return n * FACTORIAL(n – 1)
end
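A minimal C version of this recursive sub-algorithm (contrast it with the iterative loop of
Example 5; the function name factorial is our own):

#include <stdio.h>

/* Recursive factorial: n! = n * (n-1)!, with 0! = 1! = 1 */
unsigned long long factorial(int n) {
    if (n <= 1)
        return 1;
    return (unsigned long long)n * factorial(n - 1);
}

int main(void) {
    printf("%llu\n", factorial(5));   /* prints 120 */
    return 0;
}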

 Write an algorithm to perform matrix multiplication


Algorithm MATRIX_MULTIPLICATION(A, B)
// Description: Perform matrix multiplication of two matrices.
// Input : Two matrices A and B of size n x n.
// Output : Resultant matrix C containing the multiplication of A and B.

for i ← 1 to n do
    for j ← 1 to n do
        C[i][j] ← 0
        for k ← 1 to n do
            C[i][j] ← C[i][j] + A[i][k] * B[k][j]
        end
    end
end
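A minimal C translation of the pseudocode, using 0-based indexing and a small fixed size
(N = 2 and the sample matrices are our own test data, not from the notes):

#include <stdio.h>

#define N 2

/* C = A x B for square N x N matrices: the classic O(n^3) triple loop */
void matrix_multiplication(const int A[N][N], const int B[N][N], int C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void) {
    int A[N][N] = {{1, 2}, {3, 4}};
    int B[N][N] = {{5, 6}, {7, 8}};
    int C[N][N];
    matrix_multiplication(A, B, C);
    for (int i = 0; i < N; i++)
        printf("%d %d\n", C[i][0], C[i][1]);   /* prints 19 22 / 43 50 */
    return 0;
}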
