Unit 1: Introduction to Algorithms

Introduction to Design and Analysis of Algorithms

Design: A plan or blueprint for solving a particular problem.

Analysis: Analysis is the process of comparing an algorithm with other existing
algorithms and choosing the one most appropriate for the problem at hand.
Algorithm: An algorithm is defined as a finite sequence of unambiguous instructions
followed to accomplish a given task. It can also be defined as a step-by-step procedure
that solves a given problem in a finite number of steps by accepting a set of inputs and
producing the desired output. After producing the desired output, it must terminate. The
notion of an algorithm is pictorially represented as shown below:

Problem → Algorithm → Program

Input → Computer → Output

➢ The solution to a given problem is expressed in the form of an algorithm.
➢ The algorithm is converted into a program.
➢ The program, when executed, accepts the input and produces the desired output.

Properties of an Algorithm
An algorithm must satisfy the following criteria:
• Input: Each algorithm should have zero or more inputs. The range of inputs for
which the algorithm works should be specified.
• Output: The algorithm should produce correct results. At least one output has to be
produced.
• Definiteness: Each instruction should be clear and unambiguous.
• Effectiveness: The instructions should be simple and should transform the given
input to the desired output.
• Finiteness: The algorithm must terminate after a finite sequence of instructions.
Difference between Algorithm and Program

Algorithm                                           Program
An algorithm must be finite.                        A program need not be finite.
An algorithm is written using natural language      A program is written using a specific
or an algorithmic (pseudocode) language.            programming language.
Analysis of Algorithms

The main purpose of algorithm analysis is to design the most efficient algorithms. The
efficiency of an algorithm depends on two factors:

• Space efficiency
• Time efficiency
Space efficiency
The space efficiency of an algorithm is the amount of memory required to run the program
completely and efficiently. When efficiency is measured with respect to space (memory
required), the term space complexity is used. The space complexity of an algorithm
depends on the following factors:
a. Program Space: The space required for storing the machine code generated by
the compiler or assembler is called program space.
b. Data Space: The space required for storing constants, variables, etc., is called data
space.
c. Stack Space: The space required for storing the return address along with the parameters
that are passed to the function, local variables, etc., is called stack space.
Time efficiency
The time efficiency of an algorithm is a measure of how fast the algorithm executes. Since
this efficiency is measured using time, the term time complexity is often associated with an
algorithm. The time efficiency of an algorithm depends on various factors, shown below:
a. Speed of the computer
b. Choice of the programming language
c. Compiler used
d. Choice of the algorithm
e. Number (size) of inputs/outputs
Since we do not have any control over the speed of the computer or the choice of
programming language and compiler, we consider only two factors:
• Choice of the algorithm
• Number (size) of inputs/outputs
Note: Many algorithms use n as the parameter to find the order of growth. The parameter n
may indicate the number of inputs or the size of an input. Most of the time, the value of n is
directly proportional to the size of the data to be processed. This matters because almost all
algorithms run longer on larger inputs.
Basic Operation
Definition: The operation that contributes most towards the running time of the algorithm is
called the basic operation. Equivalently, the statement that executes the maximum number of
times in a function is the basic operation. The number of times the basic operation is executed
depends on the size of the input, and it is the most time-consuming operation in the algorithm.
For example:
• A statement present in the innermost loop of the algorithm.
• The addition operation while adding two matrices, since it is present in the innermost loop.
• The multiplication operation in matrix multiplication, since it is present in the innermost loop.
The time efficiency is analyzed by determining the number of times the basic operation is
executed. The running time T(n) is estimated as:

T(n) ≈ b * C(n)

Where
• T(n) is the running time of the algorithm
• n is the size of the input
• b is the execution time of the basic operation
• C(n) is the number of times the basic operation is executed
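
For instance, here is a minimal C sketch (the 3×3 sample matrices are arbitrary) that counts
how many times the basic operation executes while adding two matrices. The addition in the
innermost loop runs C(n) = n² times, so T(n) ≈ b * n²:

#include <stdio.h>

#define N 3                                      /* sample size; any n works */

int main(void)
{
    int a[N][N] = {{1,2,3},{4,5,6},{7,8,9}};     /* arbitrary sample data */
    int b[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
    int c[N][N];
    int count = 0;                               /* executions of the basic operation */

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            c[i][j] = a[i][j] + b[i][j];         /* basic operation: addition */
            count++;
        }
    }

    printf("Basic operation executed C(n) = %d times for n = %d (n*n = %d)\n",
           count, N, N * N);
    return 0;
}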

Order of Growth
We expect algorithms to work fast for all values of n. Some algorithms execute fast for
smaller values of n but, as the value of n increases, they tend to become very slow. So the
behavior of some algorithms changes as the value of n increases, and this changing behavior
can be analyzed by considering the highest order of n. The order of growth is normally
determined for larger values of n for the following reasons:
• The behavior of an algorithm changes as the value of n increases.
• In real applications we normally encounter large values of n.
Measuring the performance of an algorithm in relation to the input size n is called order of
growth. In other words, comparing growth rates means finding which function approaches
infinity faster. If an algorithm is fast for smaller values of n but slow when n is large, we
cannot call it a good algorithm. For understanding the order of growth, some common
computing functions are shown below.

The functions 1, log n, n, n log n, n², n³, 2ⁿ and n! (listed from smallest to largest order of
growth) are quite common in the analysis of algorithms. These functions have a strong
significance in terms of their relative values. For example, comparing n and n!, the latter
grows very fast for small variations in n.
To understand the growth of these common functions further, typical values for n are shown
in the table below (values are rounded; the last two columns grow astronomically fast).

Table: Growth of Common Time Functions

n       log₂n   n       n·log₂n     n²      n³      2ⁿ           n!
10      3.3     10¹     3.3×10¹     10²     10³     10³          3.6×10⁶
10²     6.6     10²     6.6×10²     10⁴     10⁶     1.3×10³⁰     9.3×10¹⁵⁷
10³     10      10³     1.0×10⁴     10⁶     10⁹     very large   very large
10⁴     13      10⁴     1.3×10⁵     10⁸     10¹²    very large   very large


Best-case, Worst-case and Average-case efficiencies
For some algorithms, the time complexity does not depend on the number of inputs
alone. For example, while searching for a specific item in an array of n elements using
linear search, we have the following three cases:
• The item we are searching for may be present in the very first location itself. In this
case only one comparison is made; this is the best case.
• The item may be present somewhere in the middle, which takes more time than the
previous case. On average, about half the elements are examined before the item is
found, so this situation is the average case.
• The item we are searching for may not be present in the array, requiring n
comparisons; the running time is more than in the previous two cases. This is the
worst case.
Worst-case efficiency
The efficiency of an algorithm for an input of size n for which the algorithm takes the
longest time to execute among all possible inputs is called the worst-case efficiency.
Best-case efficiency
The efficiency of an algorithm for an input of size n for which the algorithm takes the least
time to execute among all possible inputs is called the best-case efficiency.
Average-case efficiency
The efficiency of an algorithm averaged over all possible inputs of size n (under an assumed
probability distribution of the inputs) is called the average-case efficiency.
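
As a quick worked check of the average case: assuming the key is present and is equally
likely to occupy any of the n positions, a key in position 1 needs 1 comparison, a key in
position 2 needs 2 comparisons, and so on up to n. The average number of comparisons is
therefore

C_avg(n) = (1 + 2 + ... + n) / n = [n(n+1)/2] / n = (n+1)/2

i.e., on average about half the list is examined, which still grows linearly with n.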
ASYMPTOTIC NOTATIONS
Asymptotic notations evaluate the performance of an algorithm in terms of the input size.
The efficiency of an algorithm is normally expressed using asymptotic notations. The order
of growth can be expressed using two methods:
• Order of growth using asymptotic notations
• Order of growth using limits
The value of the function may increase or decrease as the value of n increases. Based on the
order of growth of n, the behavior of the function varies. Asymptotic notations are the
notations using which two algorithms can be compared with respect to efficiency, based
on the order of growth of the algorithm's basic operation.
Importance of Asymptotic Notations
1. They give simple characterizations of an algorithm's efficiency.
2. They allow the performances of various algorithms to be compared.
The types of asymptotic notations are shown below:
a. O (Big Oh)
b. Ω (Big Omega)
c. Θ (Big Theta)
Informal definitions of Asymptotic Notations
Big Oh (O): Assuming n indicates the size of the input and g(n) is a function, informally
O(g(n)) is defined as the set of functions with a smaller or the same order of growth as g(n)
as n goes to infinity.
Big Omega (Ω): Assuming n indicates the size of the input and g(n) is a function, informally
Ω(g(n)) is defined as the set of functions with a larger or the same order of growth as g(n) as
n goes to infinity.
Big Theta (Θ): Assuming n indicates the size of the input and g(n) is a function, informally
Θ(g(n)) is defined as the set of functions that have the same order of growth as g(n) as n goes
to infinity.

Formal definitions of asymptotic notations


Big Oh (O)
Big-Oh notation is used to define an upper bound of an algorithm in terms of time complexity.
That means Big-Oh notation indicates the maximum time required by an algorithm for all
input values, and is therefore commonly used to describe the worst case of an algorithm's
time complexity. Big-Oh notation can be defined as follows:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) <= c*g(n) for all n >= n0, where c > 0 and n0 >= 1, then we can
represent f(n) as O(g(n)).
Let f(n) be the time efficiency of an algorithm. The function f(n) is said to be O(g(n)) [read as
big-oh of g of n], denoted by
f(n) ∈ O(g(n)) or f(n) = O(g(n))
So, if we draw the graphs of f(n) and c*g(n) versus n, the graph of the function f(n) lies below
the graph of c*g(n) for sufficiently large values of n.

Here, c*g(n) is the upper bound. The upper bound on f(n) indicates that the function f(n) will
not consume more than the specified time c*g(n); i.e., the running time of f(n) may be equal
to c*g(n), but it will never be worse than this upper bound.

Example
Consider the following f(n) and g(n):
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= c*g(n) for some constant
c > 0 and n0 >= 1:
f(n) <= c*g(n)
⇒ 3n + 2 <= c*n
The above condition is TRUE for c = 4 and all n >= 2 (since 3n + 2 <= 4n whenever n >= 2).
By using Big-Oh notation we can represent the time complexity as follows:
3n + 2 = O(n)

Big Omega (Ω)

Big Omega (Ω) notation is used to define a lower bound of an algorithm in terms of time
complexity. That means Big-Omega notation indicates the minimum time required by an
algorithm for all input values, and is therefore commonly used to describe the best case of an
algorithm's time complexity. Big-Omega notation can be defined as follows:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) >= c*g(n) for all n >= n0, where c > 0 and n0 >= 1, then we can
represent f(n) as Ω(g(n)).

Let f(n) be the time complexity of an algorithm. The function f(n) is said to be Ω(g(n)) [read
as big-omega of g of n], which is denoted by
f(n) ∈ Ω(g(n))
or
f(n) = Ω(g(n))
if and only if there exist a positive constant c and a non-negative integer n0 satisfying the
constraint
f(n) >= c*g(n) for all n >= n0.
So, if we draw the graphs of f(n) and c*g(n) versus n, the graph of f(n) lies above the graph of
c*g(n) for sufficiently large values of n.

This notation gives a lower bound on a function f(n) within a constant factor. The lower
bound on f(n) indicates that the function f(n) will consume at least the specified time c*g(n);
i.e., the algorithm's running time is always at least c*g(n). In general, the lower bound
implies that the algorithm cannot perform better than this.
Note: f(n) >= c*g(n) indicates that g(n) is a lower bound and the running time of the
algorithm is always at least proportional to g(n). So, big-omega notation is used for
expressing best-case time efficiency.

Example
Consider the following f(n) and g(n):
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= c*g(n) for some constant
c > 0 and n0 >= 1:
f(n) >= c*g(n)
⇒ 3n + 2 >= c*n
The above condition is TRUE for c = 1 and all n >= 1.
By using Big-Omega notation we can represent the time complexity as follows:
3n + 2 = Ω(n)

Big Theta (Θ)

Big Theta (Θ) notation is used to define a tight bound of an algorithm in terms of time
complexity: it bounds the running time from above and below within constant factors, and
therefore characterizes the exact order of growth of an algorithm's time complexity.
Big-Theta notation can be defined as follows:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, where c1 > 0, c2 > 0 and
n0 >= 1, then we can represent f(n) as Θ(g(n)).
Let f(n) be the time complexity of an algorithm. The function f(n) is said to be Θ(g(n)) [read
as big-theta of g of n], which is denoted by
f(n) ∈ Θ(g(n))
or
f(n) = Θ(g(n))
if and only if there exist positive constants c1, c2 and a non-negative integer n0 satisfying the
constraint
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0
So, if we draw the graphs of f(n), c1*g(n) and c2*g(n) versus n, the graph of the function f(n)
lies above the graph of c1*g(n) and below the graph of c2*g(n) for sufficiently large values
of n.
This notation is used to denote both a lower and an upper bound on a function f(n) within a
constant factor. The upper bound on f(n) indicates that the function f(n) will not consume
more than the specified time c2*g(n). The lower bound on f(n) indicates that f(n) will, in the
best case, consume at least the specified time c1*g(n).
Example
Consider the following f(n) and g(n):
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then it must satisfy c1*g(n) <= f(n) <= c2*g(n) for
some constants c1 > 0, c2 > 0 and n0 >= 1:
c1*g(n) <= f(n) <= c2*g(n)
⇒ c1*n <= 3n + 2 <= c2*n
The above condition is TRUE for c1 = 1, c2 = 4 and all n >= 2.
By using Big-Theta notation we can represent the time complexity as follows:
3n + 2 = Θ(n)
Difference between Recursive and Non-recursive (Iterative) Algorithms

• Basic: In recursion, a statement in the body of a function calls the function itself.
  Iteration allows a set of instructions to be executed repeatedly.
• Format: In a recursive function, only the termination condition (base case) is
  specified. Iteration includes initialization, a condition, execution of the statements
  within the loop, and an update (increment/decrement) of the control variable.
• Termination: In recursion, a conditional statement in the body of the function forces
  the function to return without the recursive call being executed. In iteration, the
  statement is repeatedly executed until a certain condition is reached.
• Condition: If a recursive function does not converge to the base case, it leads to
  infinite recursion. If the control condition in an iteration statement never becomes
  false, it leads to infinite iteration.
• Infinite repetition: Infinite recursion can crash the system (stack overflow), whereas
  an infinite loop uses CPU cycles repeatedly.
• Applied to: Recursion is always applied to functions; iteration is applied to iteration
  statements, or "loops".
• Stack: Recursion uses the stack to store a new set of local variables and parameters
  each time the function is called; iteration does not use the stack.
• Overhead: Recursion has the overhead of repeated function calls; iteration has no
  such overhead.
• Speed: Recursion is slower in execution; iteration is faster.
• Size of code: Recursion reduces the size of the code; iteration makes the code longer.
Below are detailed points to illustrate the difference between the two:
1. Time Complexity: Finding the time complexity of recursion is more difficult than
that of iteration.
• Recursion: The time complexity of recursion can be found by expressing the cost
of the nth recursive call in terms of the previous calls, i.e., by setting up a
recurrence relation and solving it down to the base case (recurrence relations are
solved later in this unit).
• Iteration: The time complexity of iteration can be found by counting the number
of times the loop body is repeated.
2. Usage: The choice between these techniques is a trade-off between time complexity
and size of code. If time complexity is the point of focus and the number of recursive
calls would be large, it is better to use iteration. However, if time complexity is not an
issue and shortness of code is, recursion would be the way to go.
• Recursion: Recursion involves calling the same function again and hence has a
very small code length. However, the time complexity of recursion can become
exponential when there is a considerable number of recursive calls. Hence,
recursion gives shorter code but can have higher time complexity.
• Iteration: Iteration is repetition of a block of code. This involves a larger code
size, but the time complexity is generally lower than for recursion.
3. Overhead: Recursion has a large amount of overhead compared to iteration.
• Recursion: Recursion has the overhead of repeated function calls; the repetitive
calling of the same function increases the running time considerably.
• Iteration: Iteration does not involve any such overhead.
4. Infinite Repetition: Infinite recursion eventually crashes the program by exhausting
the call stack, whereas an infinite loop simply keeps running.
• Recursion: Infinite recursive calls may occur due to a mistake in specifying the
base condition; if the condition is never reached, the function keeps calling itself,
which may lead to a stack overflow and a crash.
• Iteration: Infinite iteration due to a mistake in the iterator assignment, increment,
or terminating condition leads to an infinite loop, which may or may not cause
system errors, but will surely prevent the program from proceeding further.
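
To make the contrast concrete, here is a minimal C sketch of the same computation, the
factorial of n, written both ways (the function names are chosen here for illustration). The
recursive version carries the function-call and stack overhead described above, while the
iterative version uses a plain loop:

#include <stdio.h>

/* Recursive version: base case n == 0, general case n * fact(n-1) */
long fact_recursive(int n)
{
    if (n == 0)
        return 1;                  /* base case: 0! = 1 */
    return n * fact_recursive(n - 1);
}

/* Iterative version: initialization, condition, body, update */
long fact_iterative(int n)
{
    long result = 1;
    for (int i = 1; i <= n; i++)
        result = result * i;       /* same multiplication, no call overhead */
    return result;
}

int main(void)
{
    printf("5! = %ld (recursive), %ld (iterative)\n",
           fact_recursive(5), fact_iterative(5));
    return 0;
}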
Mathematical Analysis of Non-Recursive Algorithms
The general plan for analyzing non-recursive algorithms is as follows:
• Based on the size of the input, determine the parameter(s) to be considered.
• Identify the basic operation in the algorithm.
• Check whether the number of times the basic operation is executed depends only on the
size of the input. If the count also depends on some other condition, then it is necessary
to consider the worst case, best case and average case separately.
• Obtain the total number of times the basic operation is executed.
• Simplify using standard formulas and obtain the order of growth.

Design an algorithm to find the largest of n numbers and obtain its complexity.
Design: Consider an array a consisting of the 5 elements 20, 40, 25, 55 and 30. Here n = 5
represents the number of elements in the array, so the parameters are a and n. The pictorial
representation is shown below:

a[0] = 20
a[1] = 40
a[2] = 25
a[3] = 55
a[4] = 30

Assume the largest element is initially at position pos = 0; then compare each remaining
element with a[pos] using the statement "if (a[i] > a[pos]) pos ← i", for
i ← 1 to 4, i.e.,
i ← 1 to 5-1
In general,
i ← 1 to n-1, where n is the number of elements in the array.
Now, the complete code can be written as below:
pos ← 0
for i ← 1 to n-1 do
if (a[i] > a[pos])
pos ← i
end for

Algorithm: Maximum (a[], n)

Purpose: Find the largest of n elements.
Input: n, the number of items present in the list; the list consisting of n elements.
Output: pos, the position of the largest element.
pos ← 0
for i ← 1 to n-1 do
if (a[i] > a[pos])
pos ← i;
end if;
end for;
return pos;
Time Complexity:
Step 1: The parameter to be considered is n, which represents the size of the input.
Step 2: The element comparison if (a[i] > a[pos]) is the basic operation.
Step 3: The total number of times the basic operation is executed is obtained as follows:
for i ← 1 to n-1 do
if (a[i] > a[pos]) pos ← i;

f(n) = ∑(i=1 to n-1) 1          Note: Upper limit = n-1, Lower limit = 1
     = (n-1) - 1 + 1            // Result = Upper limit - Lower limit + 1
     = n - 1
i.e., f(n) = n - 1 ≈ n          // by neglecting lower order terms and constants
Step 4: Express f(n) using asymptotic notation:
f(n) ∈ O(n)
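
A direct C translation of the Maximum pseudocode above, run on the sample array from the
design, might look as follows:

#include <stdio.h>

/* Return the position of the largest of the n elements of a[].
 * Direct translation of the Maximum pseudocode above. */
int maximum(int a[], int n)
{
    int pos = 0;                   /* assume the first element is the largest */
    for (int i = 1; i <= n - 1; i++) {
        if (a[i] > a[pos])         /* basic operation: executed n-1 times */
            pos = i;
    }
    return pos;
}

int main(void)
{
    int a[] = {20, 40, 25, 55, 30};
    int pos = maximum(a, 5);
    printf("Largest element %d found at position %d\n", a[pos], pos);
    return 0;
}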

Matrix Multiplication:
Design: Two matrices a and b can be multiplied and the result stored in matrix c. For each
value of i and j:

c[i][j] = ∑(k=0 to n-1) a[i][k] * b[k][j],  for i = 0 to n-1 and j = 0 to n-1

The above mathematical formula can be converted into pseudo code as shown below:
Algorithm:
Algorithm Multiplication (a[], b[], c[], n)
//Purpose: Multiply two matrices a and b of size n×n
//Inputs:
n: represents the size of the arrays
a: first matrix of size n×n
b: second matrix of size n×n
//Output:
c: resultant matrix in which the product of the two matrices is stored
for i ← 0 to n-1
for j ← 0 to n-1
sum ← 0
for k ← 0 to n-1
sum ← sum + a[i][k] * b[k][j]
end for
c[i][j] ← sum
end for
end for

Analysis
The time complexity in the best case and worst case remains the same.
Step 1: The parameter to be considered is n, which is a measure of the input's size.
Step 2: The basic operation is the multiplication statement in the innermost for loop, i.e., the
statement "sum ← sum + a[i][k] * b[k][j]".
Step 3: The number of multiplications depends only on the value of n and not on any other
factor. So, the total number of times the multiplication statement is executed is obtained as
shown below:
for i ← 0 to n-1
for j ← 0 to n-1
sum ← 0
for k ← 0 to n-1
sum ← sum + a[i][k] * b[k][j]

f(n) = ∑(i=0 to n-1) ∑(j=0 to n-1) ∑(k=0 to n-1) 1   (Upper limit = n-1, lower limit = 0)
     = ∑(i=0 to n-1) ∑(j=0 to n-1) (n-1-0+1)         (Result = Upper limit - Lower limit + 1)
     = ∑(i=0 to n-1) ∑(j=0 to n-1) n
     = ∑(i=0 to n-1) n(n-1-0+1)
     = ∑(i=0 to n-1) n²
     = n²(n-1-0+1)
     = n³
So, the time complexity is given by

f(n) ∈ Θ(n³)
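
A runnable C sketch of the multiplication algorithm above (the 2×2 sample matrices are
arbitrary):

#include <stdio.h>

#define N 2                                 /* sample size; arbitrary */

/* Multiply two n×n matrices a and b, storing the product in c. */
void multiply(int a[N][N], int b[N][N], int c[N][N], int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            int sum = 0;
            for (int k = 0; k < n; k++)
                sum = sum + a[i][k] * b[k][j];   /* basic operation: runs n³ times */
            c[i][j] = sum;
        }
    }
}

int main(void)
{
    int a[N][N] = {{1, 2}, {3, 4}};          /* arbitrary sample data */
    int b[N][N] = {{5, 6}, {7, 8}};
    int c[N][N];

    multiply(a, b, c, N);
    for (int i = 0; i < N; i++)
        printf("%d %d\n", c[i][0], c[i][1]); /* prints 19 22 / 43 50 */
    return 0;
}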

Mathematical Analysis of Recursive Algorithms

Recursion is a method of solving a problem where the solution depends on solutions to
smaller instances of the same problem. Thus, a recursive function is a function that calls
itself during execution. This enables the function to repeat itself several times to solve a
given problem.
The various types of recursion are shown below:
a) Direct Recursion: A recursive function that invokes itself is said to have direct
recursion. For example, the factorial function calls itself, and hence the function is
said to have direct recursion.

int fact(int n)
{
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);   /* direct recursion: fact calls itself */
}

b) Indirect Recursion: A function which contains a call to another function, which in
turn calls another function, and so on, eventually calling the first function again, is said
to have indirect recursion. It is very difficult to read, understand and find logical errors
in a function that has indirect recursion. For example, a function f1 that invokes f2,
which in turn invokes f3, which in turn invokes f1, has indirect recursion. This is shown
below:

void f1() { f2(); }
void f2() { f3(); }
void f3() { f1(); }
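
The stubs above never terminate. As a concrete terminating illustration of indirect recursion
(the is_even/is_odd pair is a classic textbook example, not taken from these notes), each
function calls the other while the parameter shrinks towards a base case:

#include <stdio.h>

int is_odd(int n);                 /* forward declaration */

/* is_even calls is_odd, which calls is_even back: indirect recursion */
int is_even(int n)
{
    if (n == 0) return 1;          /* base case */
    return is_odd(n - 1);
}

int is_odd(int n)
{
    if (n == 0) return 0;          /* base case */
    return is_even(n - 1);
}

int main(void)
{
    printf("7 is %s\n", is_even(7) ? "even" : "odd");
    return 0;
}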

Base case and General case

Base case: A base case is a special case whose solution can be obtained without using
recursion. This is also called the base/terminal condition. Each recursive function must have
a base case. A base case serves two purposes:
• It acts as the terminating condition.
• The recursive function obtains its solution from the base case once it is reached.
For example, in the factorial function, 0! = 1 is the base case (terminal condition).
General case: In any recursive function, the part of the function other than the base case is
called the general case. This portion of the code contains the logic required to reduce the size
(or instance) of the problem so as to move towards the base case or terminal condition. Hence,
each time the function is called, the size (or instance) of the problem is reduced.
For example, in the function fact, n*fact(n-1) is the general case. By decreasing the value of n
by 1, the function fact heads towards the base case.
So, the general rules we are supposed to follow while designing any recursive algorithm are
as follows:
• Determine the base case. Careful attention should be given here because, when the
base case is reached, the function must execute a return statement without a call to
the recursive function.
• Determine the general case. Careful attention is needed here too: each call must
reduce the size of the problem and move towards the base case.
• Combine the base case and general case into a function.

General plan to analyze the efficiency of recursive algorithms

• Identify the parameter(s) based on the size of the input.
• Identify the basic operation in the algorithm.
• Check whether the number of times the basic operation is executed can vary on
different inputs of the same size. If it varies, then the worst case, best case and
average case must be investigated separately.
• Set up a recurrence relation with an appropriate initial condition.
• Solve the recurrence relation, obtain the order of growth and express it using
asymptotic notations.

Example 1: Factorial of a number

Algorithm fact(n)
//Purpose: This function computes the factorial of n
//Input: n, a non-negative integer
//Output: factorial of n
if (n == 0)
return 1               // n! = 1 if n is 0
else
return n * fact(n-1)   // n! = n*(n-1)! otherwise
end if
Analysis:
The time efficiency of the algorithm to find the factorial of a number can be obtained as
follows:
Step 1: The parameter to be considered is n, which is a measure of the input's size.
Step 2: The basic operation is the multiplication statement.
Step 3: The total number of multiplications can be obtained using the recurrence relation
shown below:

T(n) = 1              if n = 0, i.e., T(0) = 1 --------------- (1)
T(n) = 1 + T(n-1)     otherwise ------------------------------ (2)

The above recurrence relation can be solved using repeated substitution as shown below:
T(n) = 1 + T(n-1)                // from equation (2)
     = 1 + 1 + T(n-2)            // T(n-1) = 1 + T(n-2), replacing n by n-1 in (2)
     = 2 + T(n-2)
     = 2 + 1 + T(n-3)            // T(n-2) = 1 + T(n-3), replacing n by n-2 in (2)
     = 3 + T(n-3)
     = 4 + T(n-4)
     .................
     = i + T(n-i)
Finally, to reach the initial condition T(0), let i = n:
T(n) = n + T(n-n)
     = n + T(0)
     = n + 1
     ≈ n
So, the time complexity of factorial of n is given by T(n) ∈ Θ(n).

BRUTE FORCE
Brute force is a straightforward approach to solving a problem, usually based directly on the
problem's statement and the definitions of the concepts involved. Generally it involves
iterating through all possible solutions until a valid one is found.
Although it may sound unintelligent, in many cases brute force is the best way to go, as we
can rely on the computer's speed to solve the problem for us.
The brute force approach is a guaranteed way to find the correct solution, by listing all
the possible candidate solutions for the problem. It is a generic method, not limited
to any specific domain of problems. The brute force method is ideal for solving small and
simple problems.
Brute Force Algorithm: This is the most basic and simplest type of algorithm. A brute
force algorithm is the straightforward approach to a problem, i.e., the first approach that
comes to our mind on seeing the problem. More technically, it is just like iterating through
every possibility available to solve the problem.
For example: Suppose there is a lock with a 4-digit PIN, each digit chosen from 0-9. Brute
force tries all possible combinations one by one, like 0000, 0001, 0002, 0003, and so on,
until the right PIN is found. In the worst case, it takes 10,000 tries to find the right
combination.
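
A minimal C sketch of this PIN search (the target value 5372 is an arbitrary stand-in for the
unknown PIN):

#include <stdio.h>

int main(void)
{
    int target = 5372;             /* arbitrary stand-in for the unknown PIN */
    int tries = 0;

    /* Try every candidate PIN from 0000 to 9999, one by one */
    for (int pin = 0; pin <= 9999; pin++) {
        tries++;
        if (pin == target) {
            printf("PIN found: %04d after %d tries\n", pin, tries);
            break;
        }
    }
    return 0;
}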
Advantages
i. This method is applicable to a wide variety of problems, such as finding the sum of n
numbers, computing powers, computing the GCD and so on.
ii. Simple and easy algorithms can be written, for example bubble sort, selection sort,
matrix multiplication, etc.
iii. It can be used to judge more efficient alternative approaches to solving a problem.

Disadvantages
i. It rarely yields efficient algorithms.
ii. Some brute force algorithms are unacceptably slow, for example bubble sort.
iii. It is not as constructive/creative as other design techniques such as divide and conquer.
Example 1: SELECTION SORT
In this sorting method, we first find the smallest element in the list and exchange it with the
first element of the list. We then find the second smallest element and exchange it with the
second element of the list, and so on. Finally, all the elements are arranged in ascending
order. Since the next least item is repeatedly selected and exchanged into place until the
elements are sorted, this technique is called selection sort.
Let us see how the elements 45, 20, 40, 5, 15 can be sorted using selection sort:

Given items   After pass 1   After pass 2   After pass 3   After pass 4
a[0] = 45     5              5              5              5
a[1] = 20     20             15             15             15
a[2] = 40     40             40             20             20
a[3] = 5      45             45             45             40
a[4] = 15     15             20             40             45

Pass 1: the 1st smallest is 5; exchange it with the 1st item.
Pass 2: the 2nd smallest is 15; exchange it with the 2nd item.
Pass 3: the 3rd smallest is 20; exchange it with the 3rd item.
Pass 4: the 4th smallest is 40; exchange it with the 4th item. All elements are now sorted.

Design: The smallest element from the ith position onwards can be found using the following
code:

pos ← i                 where i = 0, 1, 2, 3, ...
for j ← i+1 to n-1
if (a[j] < a[pos]) pos ← j
end for
After finding the position of the smallest element, it should be exchanged with the ith
position. The equivalent statements are shown below:
temp ← a[pos]
a[pos] ← a[i]
a[i] ← temp
Algorithm for Selection sort
Algorithm SelectionSort (a[], n)
//Purpose: Sort the given elements using selection sort
//Inputs:
n - the number of items present in the array
a - the items to be sorted, present in the array
//Outputs:
a - contains the sorted list

for i ← 0 to n-2 do
pos ← i // Assume the ith element is the smallest
for j ← i+1 to n-1 do // Find the position of the smallest item
if (a[j] < a[pos]) pos ← j;
end for
temp ← a[pos] // Exchange the ith item with the smallest element
a[pos] ← a[i]
a[i] ← temp
end for
Time complexity of Selection Sort
Step 1: The parameter to be considered is n, which represents the size of the input.
Step 2: The basic operation is the comparison "if (a[j] < a[pos])" in the innermost for loop.
Step 3: The number of comparisons depends on the value of n and the number of times the
two for loops are executed:
for i ← 0 to n-2 do
pos ← i
for j ← i+1 to n-1 do
if (a[j] < a[pos]) pos ← j;

f(n) = ∑(i=0 to n-2) ∑(j=i+1 to n-1) 1
     = ∑(i=0 to n-2) [(n-1) - (i+1) + 1]
     = ∑(i=0 to n-2) (n-1-i)
     = (n-1) + (n-2) + ... + 3 + 2 + 1
     = n(n-1)/2
     = n²/2 - n/2
     ≈ n²
So, the time complexity of selection sort is T(n) ∈ Θ(n²).
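
A runnable C version of the selection sort algorithm above, using the sample list from the
trace:

#include <stdio.h>

/* Sort the n elements of a[] in ascending order using selection sort. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i <= n - 2; i++) {
        int pos = i;               /* assume the ith element is the smallest */
        for (int j = i + 1; j <= n - 1; j++) {
            if (a[j] < a[pos])     /* basic operation */
                pos = j;
        }
        int temp = a[pos];         /* exchange the ith item with the smallest */
        a[pos] = a[i];
        a[i] = temp;
    }
}

int main(void)
{
    int a[] = {45, 20, 40, 5, 15};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);       /* prints: 5 15 20 40 45 */
    printf("\n");
    return 0;
}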

Linear Search Algorithm (Sequential Search Algorithm)

Linear search finds a given element in a list of elements with O(n) time complexity, where n
is the total number of elements in the list. The search process starts by comparing the search
element with the first element in the list. If they match, the result is "element found";
otherwise the search element is compared with the next element in the list. This repeats until
the search element has been compared with the last element in the list; if that last element
also doesn't match, the result is "element not found in the list". That is, the search element is
compared with the elements of the list one by one.

Linear search is implemented using the following steps:

• Step 1 - Read the search element from the user.
• Step 2 - Compare the search element with the first element in the list.
• Step 3 - If they match, then display "Given element is found!!!" and terminate
the function.
• Step 4 - If they do not match, then compare the search element with the next element
in the list.
• Step 5 - Repeat steps 3 and 4 until the search element has been compared with the last
element in the list.
• Step 6 - If the last element in the list also doesn't match, then display "Element is not
found!!!" and terminate the function.
Linear Search Example
Consider the following linear array:
92, 87, 53, 10, 15 (stored at indices 0 to 4)
Element 15 has to be searched in it using the linear search algorithm.

Now,
• Linear search compares element 15 with the elements of the array one by one.
• It continues searching until either the element 15 is found or all the elements have been
searched.

Linear Search works in the following steps:

Step-01:
• It compares element 15 with the 1st element, 92.
• Since 15 ≠ 92, the required element is not found.
• So, it moves to the next element.

Step-02:
• It compares element 15 with the 2nd element, 87.
• Since 15 ≠ 87, the required element is not found.
• So, it moves to the next element.
Step-03:
• It compares element 15 with the 3rd element, 53.
• Since 15 ≠ 53, the required element is not found.
• So, it moves to the next element.
Step-04:
• It compares element 15 with the 4th element, 10.
• Since 15 ≠ 10, the required element is not found.
• So, it moves to the next element.
Step-05:
• It compares element 15 with the 5th element, 15.
• Since 15 = 15, the required element is found.
• Now, it stops the comparison and returns index 4, at which element 15 is present.

/***************************************************
* Program to search for an item using Linear Search
****************************************************/

#include <stdio.h>

int main()
{
    int i, n, search, a[10], count = 0;

    printf("\nProgram to search for an item using Linear Search\n");
    printf("\nEnter the number of elements (at most 10): ");
    scanf("%d", &n);

    printf("\nEnter the elements of the list or array:\n");
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);
    }

    printf("\nEnter the number to be searched: ");
    scanf("%d", &search);

    /* Compare the search element with each element of the array */
    for (i = 0; i < n; i++)
    {
        if (a[i] == search)
        {
            printf("\n%d is present at location a[%d]\n", search, i);
            count++;
        }
    }

    if (count == 0)
    {
        printf("%d is not present in the given list\n", search);
    }
    else
    {
        printf("\n%d is present %d time(s) in the array\n", search, count);
    }
    return 0;
}
ASSIGNMENT QUESTIONS UNIT 1:
1. What is an algorithm? Write the characteristics of algorithms.
2. Discuss the criteria an algorithm must satisfy.
3. Explain the fundamentals of algorithmic problem solving.
4. Explain the analysis framework.
5. Discuss or explain the generation of prime numbers.
6. Generate a list of integers from 2 to n.
7. What is the efficiency of an algorithm? Explain space complexity and time complexity.
8. Discuss the components that affect time complexity.
9. What is a basic operation? Explain with an example.
10. Explain order of growth.
11. What are the best, worst and average case efficiencies of an algorithm?
12. What is asymptotic notation? Explain the types of asymptotic notations briefly with
graphical representation.
13. Explain different time complexity types with examples.
14. Write the steps for mathematical analysis of non-recursive algorithms.
15. Explain mathematical analysis of non-recursive algorithms, with an example.
16. Write the steps for mathematical analysis of recursive algorithms.
17. Explain with an example the mathematical analysis of recursive algorithms.
18. What is the brute force technique? Analyze the time complexity of selection sort
using the brute force technique.
19. Analyze the time complexity of bubble sort.
20. Explain linear search with a suitable example.
21. Write the differences between time complexity and space complexity.
22. Write the advantages and disadvantages of algorithms.
23. Briefly write about real-world applications of design and analysis of algorithms.
