Algorithms DAA
Sequence: The steps described in the algorithm are performed successively, one by one, without
skipping any step. The sequence of steps defined in an algorithm should be simple and easy to
understand. Every instruction of such an algorithm is executed exactly once, because no selection
or conditional branching exists in a sequence algorithm.
Example:
// adding two numbers
Step 1: start
Step 2: read a,b
Step 3: Sum=a+b
Step 4: write Sum
Step 5: stop
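The steps above can be sketched as a short program; sample values stand in for the read step:

```python
# Sequence algorithm: every statement runs exactly once, top to bottom.
a = 5          # Step 2: read a (sample value)
b = 7          # Step 2: read b (sample value)
Sum = a + b    # Step 3: Sum = a + b
print(Sum)     # Step 4: write Sum -> 12
```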
Selection: Sequence algorithms alone are not sufficient to solve problems that involve decisions
and conditions. To solve a problem that requires decision making or choosing between options, we
use a Selection type of algorithm. The general format of a selection statement is shown below:
if (condition)
    Statement-1;
else
    Statement-2;
This syntax specifies that if the condition is true, Statement-1 is executed; otherwise
Statement-2 is executed. If the operation is unsuccessful, the sequence of the algorithm should
be changed/corrected in such a way that the system re-executes it until the operation is
successful.
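A minimal sketch of the selection construct; the value of `n` is a sample standing in for user input:

```python
# Selection: exactly one of the two branches executes.
n = -4                        # sample value standing in for user input
if n >= 0:
    result = "non-negative"   # Statement-1
else:
    result = "negative"       # Statement-2
print(result)
```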
Example 1 (sum of the digits of a number, using repetition together with a condition):
Step 1 : start
Step 2 : read n
Step 3 : s = 0
Step 4 : repeat step 5 while n > 0
Step 5 : (a) r = n mod 10
         (b) s = s + r
         (c) n = n / 10 (integer division)
Step 6 : write s
Step 7 : stop
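The digit-sum steps translate directly to a loop; note that the accumulator `s` must be initialised before the loop and that the division by 10 is integer division:

```python
def digit_sum(n):
    """Sum of the decimal digits of n, following Example 1."""
    s = 0                # accumulator, initialised before the loop
    while n > 0:         # repeat while n > 0
        r = n % 10       # (a) r = n mod 10
        s = s + r        # (b) s = s + r
        n = n // 10      # (c) n = n / 10 (integer division)
    return s

print(digit_sum(1234))   # -> 10
```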
Example 2 (nature of the roots of the quadratic equation ax² + bx + c = 0):
Step 1 : start
Step 2 : read a, b, c
Step 3 : if (a = 0) then go to step 4 else go to step 5
Step 4 : Write "Given equation is a linear equation" and go to step 11
Step 5 : d = (b * b) - (4 * a * c)
Step 6 : if (d > 0) then go to step 7 else go to step 8
Step 7 : Write "Roots are real and distinct" and go to step 11
Step 8 : if (d = 0) then go to step 9 else go to step 10
Step 9 : Write "Roots are real and equal" and go to step 11
Step 10: Write "Roots are imaginary"
Step 11: stop
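The same decision structure can be sketched as a function that returns the classification instead of writing it:

```python
def root_nature(a, b, c):
    """Classify the roots of a*x^2 + b*x + c = 0, following Example 2."""
    if a == 0:
        return "Given equation is a linear equation"
    d = b * b - 4 * a * c            # discriminant (step 5)
    if d > 0:
        return "Roots are real and distinct"
    elif d == 0:
        return "Roots are real and equal"
    else:
        return "Roots are imaginary"

print(root_nature(1, -3, 2))   # d = 1 > 0 -> real and distinct
```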
Example 3: Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Example 4. Write an algorithm to find the largest among three different numbers entered by user
Step 1: Start
Step 2: Declare variables a,b and c.
Step 3: Read variables a,b and c.
Step 4: if a > b
            if a > c
                Display a is the largest number.
            else
                Display c is the largest number.
        else
            if b > c
                Display b is the largest number.
            else
                Display c is the largest number.
Step 5: Stop
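A sketch of the nested comparisons, assuming the three numbers are distinct as the example states:

```python
def largest(a, b, c):
    """Largest of three distinct numbers, mirroring the nested ifs of Example 4."""
    if a > b:
        if a > c:
            return a
        else:
            return c
    else:
        if b > c:
            return b
        else:
            return c

print(largest(3, 9, 5))   # -> 9
```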
Hence, many solution algorithms can be derived for a given problem. The next step is to analyze
the proposed solution algorithms and implement the most suitable one.
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation. They are the following −
A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming that all other factors, for example processor speed,
are constant and have no effect on the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected
algorithm is implemented in a programming language and executed on a target
machine. In this analysis, actual statistics such as running time and space required
are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or
running time of various operations involved. The running time of an operation can be defined as
the number of computer instructions executed per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
algorithm X are the two main factors, which decide the efficiency of X.
Time Factor − Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
Space Factor − Space is measured by counting the maximum memory space required by
the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by
the algorithm in terms of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the
following two components −
A fixed part, that is, the space required to store certain data and variables that are
independent of the size of the problem. For example, simple variables and constants used,
program size, etc.
A variable part, that is, the space required by variables whose size depends on the size of
the problem. For example, dynamic memory allocation, recursion stack space, etc.
Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run
to completion. Time requirements can be defined as a numerical function T(n), where T(n) can
be measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that
T(n) grows linearly as the input size increases.
Execution Time Cases
There are three cases which are usually used to compare the execution times of various data
structure operations in a relative manner.
Worst Case − This is the scenario where a particular data structure operation takes the
maximum time it can take. If an operation's worst-case time is ƒ(n), then this operation
will not take more than ƒ(n) time, where ƒ(n) represents a function of n.
Average Case − This is the scenario depicting the average execution time of an operation
of a data structure. If an operation takes ƒ(n) time on average, then m such operations
will take mƒ(n) time in total.
Best Case − This is the scenario depicting the least possible execution time of an operation
of a data structure. If an operation's best-case time is ƒ(n), then the actual operation
will never take less than ƒ(n) time.
Asymptotic Analysis
Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular value or a limiting value. For example: In
bubble sort, when the input array is already sorted, the time taken by the algorithm is linear i.e.
the best case. Using asymptotic analysis, we can very well conclude the best case, average case,
and worst case scenario of an algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to
work in a constant time. Other than the "input" all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units
of computation. For example, the running time of one operation may be computed as f(n) and that
of another operation as g(n²); the first grows linearly as n increases, while the second grows
quadratically.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.
Ο Notation
Ω Notation
θ Notation
Big Oh Notation, Ο (Upper Bound)
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.
Example: Find the upper bound of the running time of the linear function f(n) = 6n + 3.
Tabular approach (with c = 7):
n    f(n) = 6n + 3    c·g(n) = 7n
1    9                7
2    15               14
3    21               21
4    27               28
5    33               35
From the table, for n ≥ 3, f(n) ≤ c × g(n) holds true. So c = 7, g(n) = n and n0 = 3. There can
be multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n) for c = 9, n0 = 1
f(n) = O(g(n)) = O(n) for c = 7, n0 = 3
and so on.
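A pair (c, n0) from such a table can be checked numerically; this sketch verifies 6n + 3 ≤ 7n for every n ≥ 3 over a sample range, and that the bound fails just below n0:

```python
# Verify f(n) <= c*g(n) for all n >= n0, with f(n) = 6n + 3 and g(n) = n.
f = lambda n: 6 * n + 3
c, n0 = 7, 3
assert all(f(n) <= c * n for n in range(n0, 1000))
# The bound fails below n0: f(2) = 15 > 7*2 = 14.
assert f(2) > c * 2
print("f(n) = 6n + 3 is O(n) with c = 7, n0 = 3")
```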
Example: Find the upper bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
To find the upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0:
0 ≤ f(n) ≤ c × g(n)
0 ≤ 3n² + 2n + 4 ≤ c × g(n)
0 ≤ 3n² + 2n + 4 ≤ 3n² + 2n² + 4n², for all n ≥ 1
0 ≤ 3n² + 2n + 4 ≤ 9n²
So, c = 9, g(n) = n² and n0 = 1.
Tabular approach (with c = 4):
n    f(n) = 3n² + 2n + 4    c·g(n) = 4n²
1    9                      4
2    20                     16
3    37                     36
4    60                     64
5    89                     100
From the table, for n ≥ 4, f(n) ≤ c × g(n) holds true. So c = 4, g(n) = n² and n0 = 4. There can
be multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n²) for c = 9, n0 = 1
f(n) = O(g(n)) = O(n²) for c = 4, n0 = 4
and so on.
Example: Find the upper bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
To find the upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c × g(n) for all n ≥ n0:
0 ≤ f(n) ≤ c × g(n)
0 ≤ 2n³ + 4n + 5 ≤ c × g(n)
0 ≤ 2n³ + 4n + 5 ≤ 2n³ + 4n³ + 5n³, for all n ≥ 1
0 ≤ 2n³ + 4n + 5 ≤ 11n³
So, c = 11, g(n) = n³ and n0 = 1.
Tabular approach (with c = 3):
n    f(n) = 2n³ + 4n + 5    c·g(n) = 3n³
1    11                     3
2    29                     24
3    71                     81
4    149                    192
From the table, for n ≥ 3, f(n) ≤ c × g(n) holds true. So c = 3, g(n) = n³ and n0 = 3. There can
be multiple such pairs (c, n0):
f(n) = O(g(n)) = O(n³) for c = 11, n0 = 1
f(n) = O(g(n)) = O(n³) for c = 3, n0 = 3
and so on.
Big Omega Notation, Ω (Lower Bound)
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best case time complexity or the minimum amount of time an algorithm can possibly
take to complete.
Example: Find the lower bound of the running time of the linear function f(n) = 6n + 3.
To find the lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0:
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 6n + 3
0 ≤ 6n ≤ 6n + 3 → true, for all n ≥ 1
0 ≤ 5n ≤ 6n + 3 → true, for all n ≥ 1
Both inequalities hold, and infinitely many such inequalities exist. So,
f(n) = Ω(g(n)) = Ω(n) for c = 6, n0 = 1
f(n) = Ω(g(n)) = Ω(n) for c = 5, n0 = 1
and so on.
Example: Find the lower bound of the running time of the quadratic function f(n) = 3n² + 2n + 4.
To find the lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0:
0 ≤ c × g(n) ≤ f(n)
0 ≤ c × g(n) ≤ 3n² + 2n + 4
0 ≤ 3n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
0 ≤ n² ≤ 3n² + 2n + 4 → true, for all n ≥ 1
Both inequalities hold, and infinitely many such inequalities exist. So,
f(n) = Ω(g(n)) = Ω(n²) for c = 3, n0 = 1
f(n) = Ω(g(n)) = Ω(n²) for c = 1, n0 = 1
and so on.
Example: Find the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5.
To find the lower bound of f(n), we have to find c and n0 such that 0 ≤ c × g(n) ≤ f(n) for all n ≥ n0:
0 ≤ 2n³ ≤ 2n³ + 4n + 5 → true, for all n ≥ 1
So, f(n) = Ω(g(n)) = Ω(n³) for c = 2, n0 = 1.
Theta Notation, θ (Tight Bound)
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time:
θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }
Examples on Tight Bound Asymptotic Notation:
Example: Find tight bound of running time of constant function f(n) = 23.
To find tight bound of f(n), we have to find c1, c2 and n0 such that, 0 ≤ c1× g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0
0 ≤ c1× g(n) ≤ 23 ≤ c2 × g(n)
0 ≤ 22 ×1 ≤ 23 ≤ 24 × 1, → true for all n ≥ 1
0 ≤ 10 ×1 ≤ 23 ≤ 50 × 1, → true for all n ≥ 1
Both inequalities hold, and infinitely many such inequalities exist.
So, (c1, c2) = (22, 24) and g(n) = 1, for all n ≥ 1
(c1, c2) = (10, 50) and g(n) = 1, for all n ≥ 1
f(n) = Θ (g (n)) = Θ (1) for c1 = 22, c2 = 24, n0 = 1
f(n) = Θ (g (n)) = Θ (1) for c1 = 10, c2 = 50, n0 = 1
and so on.
Example: Find the tight bound of the running time of the linear function f(n) = 6n + 3.
To find the tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1 × g(n) ≤ f(n) ≤ c2 × g(n)
for all n ≥ n0:
0 ≤ 6n ≤ 6n + 3 ≤ 9n → true, for all n ≥ 1 (since 6n + 3 ≤ 9n is equivalent to 3 ≤ 3n)
So, c1 = 6, c2 = 9, g(n) = n and n0 = 1, giving f(n) = Θ(g(n)) = Θ(n).
Common Asymptotic Notations
Following is a list of some common asymptotic notations:
constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
quadratic − Ο(n²)
cubic − Ο(n³)
polynomial − Ο(n^k)
exponential − 2^Ο(n)
For example: In mathematical terms, the sequence F(n) of the Fibonacci numbers is defined by
the recurrence relation:
F(n) = F(n − 1) + F(n − 2),
where F(0) = 0 and F(1) = 1.
A simple solution is to compute the Nth Fibonacci term using recursion.
Time Complexity: O(2^N)
Auxiliary Space: O(1) (ignoring the recursion call stack, which grows to O(N))
Explanation: The time complexity of this implementation is exponential because the same
subproblems are calculated again and again. The auxiliary space used is minimal, but our goal is
to reduce the time complexity of the approach even if it requires extra space.
Efficient Approach: To optimize the above approach, the idea is to use Dynamic
Programming to reduce the complexity by memoizing the overlapping subproblems.
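A sketch of both approaches; the memoized version uses Python's `functools.lru_cache` as one possible way to cache overlapping subproblems:

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: O(2^n) time because the same subproblems repeat."""
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized recursion: each subproblem is solved once, O(n) time."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))   # -> 55 55
```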
Example: Recursive algorithm for computing the factorial of n:
FACTORIAL(n)
    if (n ≤ 1) then
        return 1
    else
        return n * FACTORIAL(n − 1)
    end
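The pseudocode above maps directly to Python; the base case covers n = 0 as well, so that 0! = 1:

```python
def factorial(n):
    """Recursive factorial for a non-negative integer n."""
    if n <= 1:          # base case (covers n = 0 and n = 1)
        return 1
    return n * factorial(n - 1)

print(factorial(5))     # -> 120
```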
Example: Pseudocode for multiplying two n × n matrices A and B, storing the result in C:
for i ← 1 to n do
    for j ← 1 to n do
        C[i][j] ← 0
        for k ← 1 to n do
            C[i][j] ← C[i][j] + A[i][k] * B[k][j]
        end
    end
end
The triple loop performs n³ scalar multiplications, so its running time is Θ(n³).
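The same triple loop in Python, using 0-based indexing and lists of lists for the matrices:

```python
def mat_mul(A, B):
    """Multiply two n x n matrices with the classic triple loop: Theta(n^3) time."""
    n = len(A)
    C = [[0] * n for _ in range(n)]   # C[i][j] initialised to 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))    # -> [[19, 22], [43, 50]]
```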