Unit 1 Introduction to Algorithm
A problem is solved by designing an algorithm, and the algorithm is then implemented as a program.
Properties of an Algorithm
An algorithm must satisfy the following criteria:
• Input: Each algorithm should have zero or more inputs. The range of inputs for which the algorithm works should be specified.
• Output: The algorithm should produce correct results. At least one output has to be produced.
• Definiteness: Each instruction should be clear and unambiguous.
• Effectiveness: The instructions should be simple and should transform the given input to the desired output.
• Finiteness: The algorithm must terminate after a finite number of instructions.
Difference between Algorithm and Program
Algorithm:
• An algorithm is finite.
• An algorithm is written in natural language or an algorithmic (pseudocode) notation.
Program:
• A program need not be finite.
• A program is written in a specific programming language.
Analysis of Algorithms
The main purpose of algorithm analysis is to design the most efficient algorithm. The efficiency of an algorithm depends on two factors:
• Space efficiency
• Time efficiency
Space efficiency
The space efficiency of an algorithm is the amount of memory required to run the program completely and efficiently. When efficiency is measured with respect to space (memory required), the term space complexity is used. The space complexity of an algorithm depends on the following factors:
a. Program Space: the space required for storing the machine code generated by the compiler or assembler.
b. Data Space: the space required for storing constants, variables, etc.
c. Stack Space: the space required for storing return addresses along with the parameters passed to functions, local variables, etc.
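As an illustration of stack space, consider the hedged C sketch below (an example added here, not part of the original notes): every invocation of the recursive function pushes a return address, its parameter, and its local variable onto the call stack, so the peak stack space grows with the recursion depth.

#include <stdio.h>

/* Each recursive call pushes a return address, the parameter n,
 * and the local variable `half` onto the call stack, so the peak
 * stack space is proportional to the depth of the recursion. */
int depth(int n) {
    int half = n / 2;           /* local variable: occupies stack space */
    if (n <= 1)
        return 0;               /* base case: recursion stops */
    return 1 + depth(half);     /* recursive call: one more stack frame */
}

int main(void) {
    printf("depth(1024) = %d\n", depth(1024));  /* prints 10: ten nested calls */
    return 0;
}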
Time efficiency
The Time efficiency of an algorithm is measured purely on how fast a given algorithm is
executed. Since the efficiency of an algorithm is measured using time, the word time
complexity is often associated with an algorithm. The Time efficiency of an algorithm
depends on various factors that are shown below:
a. Speed of the computer
b. Choice of the programming language
c. Compiler used
d. Choice of the algorithm
e. Number (Size) of inputs/outputs
Since we have no control over the speed of the computer, the choice of the programming language, or the compiler used, we consider only two factors:
• Choice of the algorithm
• Number (Size) of inputs/outputs
Note: Many algorithms use n as the parameter to find the order of growth. The parameter n may indicate the number of inputs or the size of the input. Most of the time, the value of n is directly proportional to the size of the data to be processed. This is because almost all algorithms run longer on larger inputs.
Basic Operation
Definition: The operation that contributes most towards the running time of the algorithm is called the basic operation. Equivalently, it is the statement that executes the maximum number of times in a function. The number of times the basic operation is executed depends on the size of the input, and it is the most time-consuming operation in the algorithm. For example:
• A statement present in the innermost loop of the algorithm.
• The addition operation while adding two matrices, since it is present in the innermost loop.
• The multiplication operation in matrix multiplication, since it is present in the innermost loop.
The time efficiency is analyzed by determining the number of times the basic operation is executed. The running time T(n) is given below:
T(n) ≈ b * C(n)
where
• T(n) is the running time of the algorithm,
• n is the size of the input,
• b is the execution time of the basic operation, and
• C(n) is the number of times the basic operation is executed.
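To make this concrete, here is a small hedged C sketch (an illustrative example added here, not part of the original notes) that counts the basic operation while summing an array: the addition inside the loop executes C(n) = n times, so T(n) ≈ b * n.

#include <stdio.h>

int main(void) {
    int a[] = {20, 40, 25, 55, 30};
    int n = 5, sum = 0, count = 0;

    for (int i = 0; i < n; i++) {
        sum += a[i];   /* basic operation: executed once per element */
        count++;       /* how many times it ran: C(n) = n */
    }
    printf("sum = %d, basic operation executed %d times\n", sum, count);
    return 0;
}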
Order of Growth
We expect algorithms to work fast for all values of n. Some algorithms execute fast for smaller values of n but, as n increases, they tend to become very slow. So the behavior of some algorithms changes as the value of n increases, and this changing behavior can be analyzed by considering the highest order of n. The order of growth is normally determined for larger values of n for the following reasons:
• The behavior of an algorithm changes as the value of n increases.
• In real-time applications we normally encounter large values of n.
Measuring the performance of an algorithm in relation to the input size n is called the order of growth. In other words, comparing growth rates amounts to finding which function approaches infinity faster. If an algorithm is fast for smaller values of n but slow when n is large, we cannot say that it is a good algorithm; this behavior is what the order of growth captures.
Some computing functions are quite common in the analysis of algorithms and, written from smallest to largest order of growth, they are 1, log n, n, n log n, n², n³, 2ⁿ, and n!. These functions have a strong significance in terms of their relative values. For example, comparing n and n!, the latter grows very fast for small variations in n. To understand the growth of these functions further, their values can be tabulated for typical increasing values of n.
Big-O Notation
Let f(n) be the time complexity of an algorithm. The function f(n) is said to be O(g(n)) [read as big-oh of g of n], denoted by f(n) ∈ O(g(n)), if and only if there exist a positive constant c and a non-negative integer n0 satisfying the constraint
f(n) ≤ c*g(n) for all n ≥ n0.
Here, c*g(n) is the upper bound. The upper bound on f(n) indicates that the function f(n) will not consume more than the specified time c*g(n); i.e., the running time of f(n) may be equal to c*g(n), but it will never be worse than the upper bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then there must exist constants C > 0 and n0 ≥ 1 such that f(n) ≤ C*g(n) for all n ≥ n0.
f(n) ≤ C g(n)
⇒ 3n + 2 ≤ C n
The above condition is TRUE for C = 4 and all n ≥ 2.
By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
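The choice of constants can be verified with one line of algebra (a short check added here for clarity):

3n + 2 ≤ 3n + n = 4n for all n ≥ 2,

so f(n) ≤ 4·g(n) whenever n ≥ 2; i.e., C = 4 and n0 = 2 witness 3n + 2 ∈ O(n).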
Big-Omega Notation
Let f(n) be the time complexity of an algorithm. The function f(n) is said to be Ω(g(n)) [read as big-omega of g of n], denoted by
f(n) ∈ Ω(g(n))
or
f(n) = Ω(g(n)),
if and only if there exist a positive constant c and a non-negative integer n0 satisfying the constraint
f(n) ≥ c*g(n) for all n ≥ n0.
So, if we draw the graphs of f(n) and c*g(n) versus n, the graph of f(n) lies above the graph of c*g(n) for sufficiently large values of n.
This notation gives the lower bound on a function f(n) within a constant factor. The lower bound on f(n) indicates that the function f(n) will consume at least the specified time c*g(n); i.e., the algorithm's running time is never less than c*g(n). In general, the lower bound implies that the algorithm cannot take less than this time.
Note: f(n) ≥ c*g(n) indicates that g(n) is a lower bound and the running time of the algorithm is always at least c*g(n). So, big-omega notation is used for finding the best-case time efficiency.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then there must exist constants C > 0 and n0 ≥ 1 such that f(n) ≥ C*g(n) for all n ≥ n0.
f(n) ≥ C g(n)
⇒ 3n + 2 ≥ C n
The above condition is TRUE for C = 1 and all n ≥ 1.
By using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)
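Again, the constants are easy to verify (a short check added here; the closing Θ remark is a standard consequence, not stated in the original):

3n + 2 ≥ n for all n ≥ 1,

so C = 1 and n0 = 1 witness 3n + 2 ∈ Ω(n). Since 3n + 2 is both O(n) and Ω(n), it is also Θ(n).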
Basis for Comparison: Recursion versus Iteration
Condition:
• Recursion: If the function does not converge to some condition called the base case, it leads to infinite recursion.
• Iteration: If the control condition in the iteration statement never becomes false, it leads to infinite iteration.
Infinite Repetition:
• Recursion: Infinite recursion can crash the system.
• Iteration: An infinite loop uses CPU cycles repeatedly.
Stack:
• Recursion: The stack is used to store a new set of local variables and parameters each time the function is called.
• Iteration: Does not use the stack.
Size of Code:
• Recursion: Recursion reduces the size of the code.
• Iteration: Iteration makes the code longer.
Below are detailed examples to illustrate the difference between the two:
1. Time Complexity: Finding the Time complexity of Recursion is more difficult than
that of Iteration.
• Recursion: The time complexity of recursion can be found by expressing the value of the nth recursive call in terms of the previous calls. Expressing the nth case in terms of the base case and solving gives us the time complexity of the recursive equation. Please see Solving Recurrences for more details.
• Iteration: Time complexity of iteration can be found by finding the number of
cycles being repeated inside the loop.
2. Usage: Usage of either of these techniques is a trade-off between time complexity and size of code. If time complexity is the point of focus and the number of recursive calls would be large, it is better to use iteration. However, if time complexity is not an issue and shortness of code is, recursion is the way to go (a C sketch contrasting the two follows after this list).
• Recursion: Recursion involves calling the same function again and hence has a very small code length. However, as we saw in the analysis, the time complexity of recursion can become exponential when there are a considerable number of recursive calls. Hence, recursion trades shorter code for potentially higher time complexity.
• Iteration: Iteration is the repetition of a block of code. This involves a larger code size, but the time complexity is generally lower than it is for recursion.
3. Overhead: Recursion has a large amount of overhead compared to iteration.
• Recursion: Recursion has the overhead of repeated function calls; due to the repetitive calling of the same function, the running time increases manifold.
• Iteration: Iteration does not involve any such overhead.
4. Infinite Repetition: Infinite repetition in recursion can crash the system, whereas an infinite loop merely consumes CPU cycles repeatedly.
• Recursion: Infinite recursive calls may occur due to a mistake in specifying the base condition; if it never becomes false, the function keeps calling itself, which may lead to a system crash.
• Iteration: Infinite iteration due to a mistake in the iterator assignment, increment, or terminating condition leads to an infinite loop, which may or may not lead to system errors but will surely stop the program from making any further progress.
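As a concrete illustration of this trade-off, the hedged C sketch below (an added example, not from the original notes) computes Fibonacci numbers both ways: the recursive version is shorter but runs in exponential time, while the iterative version is longer but linear.

#include <stdio.h>

/* Recursive: very short, but O(2^n) time because each call
 * spawns two further calls. */
long fib_rec(int n) {
    if (n < 2) return n;                 /* base case */
    return fib_rec(n - 1) + fib_rec(n - 2);
}

/* Iterative: more code, but O(n) time and O(1) extra space. */
long fib_iter(int n) {
    long prev = 0, curr = 1;
    if (n < 2) return n;
    for (int i = 2; i <= n; i++) {       /* one pass, no extra stack frames */
        long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}

int main(void) {
    printf("fib_rec(10)  = %ld\n", fib_rec(10));   /* prints 55 */
    printf("fib_iter(10) = %ld\n", fib_iter(10));  /* prints 55 */
    return 0;
}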
Mathematical Analysis of Non-Recursive Algorithms
The general plan for analyzing non-recursive algorithms is as follows:
• Based on the size of input, determine the number of parameters to be considered.
• Identify the basic operation in the algorithm.
• Check whether the number of times the basic operation is executed depends only on the
size of the input. If the basic operation to be executed depends on some other conditions,
then it is necessary to obtain the worst case, best case, average case separately.
• Obtain the total number of times the basic operations are executed.
• Simplify using standard formulas and obtain the order of growth.
Design an algorithm to find the largest of n numbers and obtain its complexity.
Design: Consider an array a consisting of the 5 elements 20, 40, 25, 55, and 30. Here n = 5 represents the number of elements in the array, so the parameters are a and n. The array is shown below:
a[0] = 20
a[1] = 40
a[2] = 25
a[3] = 55
a[4] = 30
Assume the element at position pos is the largest so far; whenever a[i] > a[pos], update pos ← i. The index i ranges over
i ← 1 to 4
i.e., i ← 1 to 5-1, and in general,
i ← 1 to n-1, where n is the number of elements in the array.
Now, the complete code can be written as below:
pos ← 0
for i ← 1 to n-1 do
    if (a[i] > a[pos])
        pos ← i
end for
Analysis
Step 1: The parameter to be considered is n, which is a measure of the input's size.
Step 2: The basic operation is the comparison "a[i] > a[pos]" in the for loop.
Step 3: Obtain the total number of times the basic operation is executed:
f(n) = ∑_{i=1}^{n-1} 1        (upper bound = n-1, lower bound = 1)
     = (n-1) - 1 + 1          (result = upper bound - lower bound + 1)
     = n - 1
i.e., f(n) = n - 1 ≈ n        (neglecting lower-order terms and constants)
Step 4: Express f(n) in asymptotic notation:
f(n) ∈ O(n)
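The pseudocode above translates directly into C; the following hedged sketch (an added example) finds the largest of the five numbers used in the design step.

#include <stdio.h>

int main(void) {
    int a[] = {20, 40, 25, 55, 30};
    int n = 5;
    int pos = 0;                        /* assume a[0] is the largest so far */

    for (int i = 1; i <= n - 1; i++)    /* basic operation runs n-1 times */
        if (a[i] > a[pos])
            pos = i;                    /* remember position of new largest */

    printf("largest = %d at a[%d]\n", a[pos], pos);  /* largest = 55 at a[3] */
    return 0;
}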
Matrix Multiplication
Design: The two matrices a and b can be multiplied and the result stored in matrix c, for each value of i and j, as shown below:
c[i][j] = ∑_{k=0}^{n-1} a[i][k] * b[k][j]   for i = 0 to n-1 and j = 0 to n-1
The above mathematical formula can be converted into pseudo code as shown below:
Algorithm:
Algorithm Multiplication(a[], b[], c[], n)
// Purpose: Multiply two matrices a and b of size n×n
// Inputs:
//   n: the size of the arrays
//   a: first matrix of size n×n
//   b: second matrix of size n×n
// Output:
//   c: resultant matrix in which the product of the two matrices is stored
for i ← 0 to n-1
    for j ← 0 to n-1
        sum ← 0
        for k ← 0 to n-1
            sum ← sum + a[i][k] * b[k][j]
        end for
        c[i][j] ← sum
    end for
end for
Analysis
The time complexity in the best case and worst case remains the same.
Step 1: The parameter to be considered is n, which is a measure of the input's size.
Step 2: The basic operation is the multiplication statement in the innermost for loop, i.e., the statement "sum ← sum + a[i][k] * b[k][j]".
Step 3: The number of multiplications depends on the value of n only and not on any other factors. So, the total number of times the multiplication statement is executed can be obtained as shown below:
for i ← 0 to n-1
    for j ← 0 to n-1
        sum ← 0
        for k ← 0 to n-1
            sum ← sum + a[i][k] * b[k][j]
f(n) = ∑_{i=0}^{n-1} ∑_{j=0}^{n-1} ∑_{k=0}^{n-1} 1      (upper limit = n-1, lower limit = 0)
     = ∑_{i=0}^{n-1} ∑_{j=0}^{n-1} (n-1-0+1)            (result = upper limit - lower limit + 1)
     = ∑_{i=0}^{n-1} ∑_{j=0}^{n-1} n
     = ∑_{i=0}^{n-1} n(n-1-0+1)
     = ∑_{i=0}^{n-1} n·n
     = n² ∑_{i=0}^{n-1} 1
     = n²(n-1-0+1)
     = n³
So, the time complexity is given by
f(n) ∈ Θ(n³)
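The pseudocode above maps directly onto C; here is a hedged sketch (an added example, using a fixed size N = 2 for brevity) of the n×n matrix multiplication just analyzed.

#include <stdio.h>
#define N 2

int main(void) {
    int a[N][N] = {{1, 2}, {3, 4}};
    int b[N][N] = {{5, 6}, {7, 8}};
    int c[N][N];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int sum = 0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];   /* basic operation: runs N³ times */
            c[i][j] = sum;
        }

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", c[i][j]);         /* prints: 19 22 / 43 50 */
        printf("\n");
    }
    return 0;
}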
Mathematical Analysis of Recursive Algorithms
Consider the recursive computation of the factorial of n. Each call performs one multiplication and then calls itself on n-1, giving the recurrence relation
t(n) = 1 + t(n-1) for n > 0, with t(0) = 1 --------------- (2)
This recurrence relation can be solved using repeated substitution as shown below:
t(n) = 1 + t(n-1)            // from equation (2)
     = 1 + 1 + t(n-2)        // t(n-1) = 1 + t(n-2), replacing n by n-1 in equation (2)
     = 2 + t(n-2)
     = 2 + 1 + t(n-3)        // t(n-2) = 1 + t(n-3), replacing n by n-2 in equation (2)
     = 3 + t(n-3)
     = 4 + t(n-4)
     …………….
     = i + t(n-i)
Finally, to reach the initial condition t(0), let i = n:
t(n) = n + t(n-n)
     = n + t(0)
     = n + 1
     ≈ n
So, the time complexity of the factorial of n is given by t(n) ∈ Θ(n).
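For reference, here is a hedged C sketch (an added example) of the recursive factorial whose recurrence was just solved.

#include <stdio.h>

/* One multiplication per call plus a recursive call on n-1
 * gives the recurrence t(n) = 1 + t(n-1), t(0) = 1, so t(n) ∈ Θ(n). */
long factorial(int n) {
    if (n == 0)
        return 1;                   /* base case: 0! = 1 */
    return n * factorial(n - 1);    /* basic operation: one multiplication */
}

int main(void) {
    printf("5! = %ld\n", factorial(5));   /* prints 120 */
    return 0;
}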
BRUTE FORCE
Brute force is a straightforward approach to solving a problem, usually based directly on the problem's statement and the definitions of the concepts involved. Generally it involves iterating through all possible solutions until a valid one is found. Although it may sound unintelligent, in many cases brute force is the best way to go, as we can rely on the computer's speed to solve the problem for us.
The brute force approach is a guaranteed way to find the correct solution by listing all the possible candidate solutions for the problem. It is a generic method and not limited to any specific domain of problems. The brute force method is ideal for solving small and simple problems.
Brute Force Algorithm: This is the most basic and simplest type of algorithm. A brute force algorithm is the straightforward approach to a problem, i.e., the first approach that comes to mind on seeing the problem.
Disadvantages
i. Rarely yields efficient algorithms.
ii. Some brute force algorithms are unacceptably slow: for example, bubble sort.
iii. Not as constructive/creative as other design techniques such as divide and conquer.
Example 1: SELECTION SORT
In this sorting method, first find the smallest element in the list and exchange it with the first element of the list. Then find the second smallest element and exchange it with the second element, and so on. Finally, all the elements are arranged in ascending order. Since the next least item is repeatedly selected and exchanged appropriately until the elements are sorted, this technique is called selection sort.
Let us see how the elements 45, 20, 40, 5, 15 can be sorted using selection sort:
Given items After pass 1 After pass 2 After pass 3 After pass 4
A[0] = 45 5 5 5 5
A[1] = 20 20 15 15 15
A[2] = 40 40 40 20 20
A[3] = 5 45 45 45 40
A[4] = 15 15 20 40 45
Pass 1: The 1st smallest is 5; exchange it with the 1st item.
Pass 2: The 2nd smallest is 15; exchange it with the 2nd item.
Pass 3: The 3rd smallest is 20; exchange it with the 3rd item.
Pass 4: The 4th smallest is 40; exchange it with the 4th item. All elements are now sorted.
Design: The smallest element from the ith position onwards can be obtained using the following code.
for i ← 0 to n-2 do
    pos ← i                     // assume the ith element is the smallest
    for j ← i+1 to n-1 do       // find the position of the smallest item
        if (a[j] < a[pos]) pos ← j
    end for
    temp ← a[pos]               // exchange the ith item with the smallest element
    a[pos] ← a[i]
    a[i] ← temp
end for
Time complexity of Selection Sort
Step 1: The parameter to be considered is n, which represents the size of the input.
Step 2: The basic operation is the comparison statement "a[j] < a[pos]" in the innermost for loop.
Step 3: The number of comparisons depends on the value of n, i.e., on the number of times the two for loops are executed.
for i ← 0 to n-2 do
pos ← i
for j ← i+1 to n-1 do
if (a[j]<a[pos]) pos ← j;
f(n) = ∑_{i=0}^{n-2} ∑_{j=i+1}^{n-1} 1
     = ∑_{i=0}^{n-2} [(n-1) - (i+1) + 1]
     = ∑_{i=0}^{n-2} (n-1-i)
     = (n-1) + (n-2) + … + 1
     = n(n-1)/2
So, the time complexity of selection sort is f(n) ∈ O(n²). A C version of the pseudocode is sketched below.
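The following hedged C sketch (an added example) implements the selection sort just analyzed on the sample data 45, 20, 40, 5, 15.

#include <stdio.h>

int main(void) {
    int a[] = {45, 20, 40, 5, 15};
    int n = 5;

    for (int i = 0; i <= n - 2; i++) {   /* n-1 passes */
        int pos = i;                     /* assume a[i] is the smallest */
        for (int j = i + 1; j <= n - 1; j++)
            if (a[j] < a[pos])           /* basic operation: comparison */
                pos = j;
        int temp = a[pos];               /* exchange a[i] with the smallest */
        a[pos] = a[i];
        a[i] = temp;
    }

    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);             /* prints: 5 15 20 40 45 */
    printf("\n");
    return 0;
}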
Example 2: LINEAR SEARCH
Consider searching for the element 15 in an array of 5 elements.
• The linear search algorithm compares the element 15 with all the elements of the array one by one.
• It continues searching until either the element 15 is found or all the elements have been searched.
Step-01:
• It compares element 15 with the 1st element.
• Since they are not equal, the required element is not found.
• So, it moves to the next element.
Step-02:
• It compares element 15 with the 2nd element, 87.
• Since 15 ≠ 87, the required element is not found.
• So, it moves to the next element.
Step-03:
• It compares element 15 with the 3rd element, 53.
• Since 15 ≠ 53, the required element is not found.
• So, it moves to the next element.
Step-04:
• It compares element 15 with the 4th element, 10.
• Since 15 ≠ 10, the required element is not found.
• So, it moves to the next element.
Step-05:
• It compares element 15 with the 5th element, 15.
• Since 15 = 15, the required element is found.
• Now it stops the comparison and returns index 4, at which element 15 is present.
/***************************************************
 * Program to search for an item using Linear Search
 ****************************************************/
#include <stdio.h>

int main(void) {
    int a[20], n, search, count = 0;    /* assumes n ≤ 20 */

    printf("Enter the number of elements: ");
    scanf("%d", &n);
    printf("Enter %d elements: ", n);
    for (int i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter the item to search for: ");
    scanf("%d", &search);

    for (int i = 0; i < n; i++) {
        if (a[i] == search) {
            printf("%d is present at location a[%d]\n", search, i);
            count++;
        }
    }
    if (count == 0)
        printf("%d is not present in the given list\n", search);
    else
        printf("%d is present %d times in the array\n", search, count);
    return 0;
}
ASSIGNMENT QUESTIONS UNIT 1:
1. What is an algorithm? Write characteristics of algorithms.
2. Discuss the criteria an algorithm must satisfy.
3. Explain the fundamentals of algorithmic problem solving.
4. Explain the analysis framework.
5. Discuss or Explain generation of prime numbers
6. Generate list of integers from 2 to n.
7. What is the efficiency of an algorithm? Explain space complexity and time complexity.
8. Discuss components that affect time complexity.
9. What is basic operation? Explain with an example.
10. Explain order of growth.
11. What is best, worst and average case efficiency of an algorithm?
12. What is asymptotic notation? Explain types of asymptotic notations briefly with graphical
representation.
13. Explain different time complexity types with examples.
14. Write the steps for mathematical analysis of non-recursive algorithm.
15. Explain mathematical analysis of non-recursive algorithm, with an example.
16. Write the steps for mathematical analysis of recursive algorithm.
17. Explain with an example mathematical analysis of recursive algorithm.
18. What is the brute force technique? Analyze the time complexity of selection sort using the brute force technique.
19. Analyze the time complexity of bubble sort.
20. Explain linear search with a suitable example.
21. Write the differences between time complexity and space complexity.
22. Write the advantages and disadvantages of algorithms.
23. Briefly describe real-world applications of the design and analysis of algorithms.