
Unit 1: Algorithm Performance Analysis and Measurement

Dr. Jyoti Srivastava


Algorithm
“A formally defined procedure for performing some calculation”

• An algorithm provides a blueprint to write a program to solve a particular problem.

• An algorithm is a finite set of instructions or logic, written in order to accomplish a certain predefined task.

• An algorithm is a step-by-step procedure, which defines a set of instructions to be executed in a certain order to get the desired output.
Algorithm
• An algorithm is not the complete code or program; it is just the core logic (solution) of a problem.

• Algorithms are generally created independent of underlying languages, i.e. an algorithm can be implemented in more than one programming language.
Categories of Algorithms
From the data structure point of view:

• Search − Algorithm to search for an item in a data structure.

• Sort − Algorithm to sort items in a certain order.

• Insert − Algorithm to insert an item in a data structure.

• Update − Algorithm to update an existing item in a data structure.

• Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm
• Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and its inputs/outputs, should be clear and must lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined
inputs.
• Output − An algorithm should have 1 or more well-defined
outputs, and should match the desired output.
• Finiteness − Algorithms must terminate after a finite number
of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step
directions, which should be independent of any programming
code.
Key features of an Algorithm

• Any algorithm has a finite number of steps.

• An algorithm exhibits three key features:

1. Sequence
2. Decision
3. Repetition
Sequence
• Each step of an algorithm is executed in a specific order.

• Example:
Step 1: Input first number as A
Step 2: Input second number as B
Step 3: SET SUM = A + B
Step 4: PRINT SUM
Step 5: END
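As a concrete sketch of the sequence construct, the steps above translate almost one-to-one into C. The variable names A, B and SUM follow the pseudocode; the scanf/printf calls are just one possible rendering, not part of the original algorithm:

#include <stdio.h>

int main(void)
{
    int A, B, SUM;

    scanf("%d", &A);      /* Step 1: input first number as A  */
    scanf("%d", &B);      /* Step 2: input second number as B */
    SUM = A + B;          /* Step 3: SET SUM = A + B          */
    printf("%d\n", SUM);  /* Step 4: PRINT SUM                */
    return 0;             /* Step 5: END                      */
}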
Decision
• Decision statements are used when the outcome of a process depends on some condition.

• Example: if x = y, then print EQUAL. The general form of the IF construct is:

IF condition THEN process

IF condition
THEN process1
ELSE process2
Decision
• Example:
Step 1: Input first number as A
Step 2: Input second number as B
Step 3: IF A = B
            THEN PRINT “EQUAL”
        ELSE
            PRINT “NOT EQUAL”
Step 4: END
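A minimal C rendering of the decision example above, for illustration only (the variable names follow the pseudocode; the I/O calls are one way of realising the input/print steps):

#include <stdio.h>

int main(void)
{
    int A, B;

    scanf("%d", &A);              /* Step 1: input first number as A  */
    scanf("%d", &B);              /* Step 2: input second number as B */
    if (A == B)                   /* Step 3: IF A = B                 */
        printf("EQUAL\n");
    else
        printf("NOT EQUAL\n");
    return 0;                     /* Step 4: END                      */
}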
Repetition

• Repetition involves executing one or more steps a number of times.

• It can be implemented using while, do-while and for loops.

• These loops execute one or more steps as long as some condition is true.
Repetition
• Example:
Step 1: [INITIALIZE] SET I = 0, N = 10
Step 2: Repeat Steps 3 and 4 while I <= N
Step 3: PRINT I
Step 4: SET I = I + 1
Step 5: END
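The same repetition expressed as a small C program, assuming the intent is to print the values 0 through N:

#include <stdio.h>

int main(void)
{
    int I = 0, N = 10;        /* Step 1: [INITIALIZE] SET I = 0, N = 10 */

    while (I <= N)            /* Step 2: repeat while I <= N            */
    {
        printf("%d\n", I);    /* Step 3: PRINT I                        */
        I = I + 1;            /* Step 4: SET I = I + 1                  */
    }
    return 0;                 /* Step 5: END                            */
}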
Example#1
AIM: Write an algorithm to find the sum of the first N natural numbers.

Step 1: Input N (e.g., N = 4)
Step 2: SET I = 0, SUM = 0
Step 3: Repeat Step 4 while I <= N
Step 4: SET SUM = SUM + I
        SET I = I + 1
Step 5: PRINT SUM
Step 6: END
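A direct C translation of Example#1, for illustration, assuming the intent is to sum the integers 0 through N (for N = 4 it prints 10):

#include <stdio.h>

int main(void)
{
    int N, I, SUM;

    scanf("%d", &N);          /* Step 1: input N              */
    I = 0;                    /* Step 2: SET I = 0, SUM = 0   */
    SUM = 0;
    while (I <= N)            /* Step 3: repeat while I <= N  */
    {
        SUM = SUM + I;        /* Step 4: SET SUM = SUM + I    */
        I = I + 1;            /*         SET I = I + 1        */
    }
    printf("%d\n", SUM);      /* Step 5: PRINT SUM            */
    return 0;                 /* Step 6: END                  */
}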
Example#2
• Problem − Design an algorithm to add two numbers and
display the result.

step 1 − START
step 2 − declare three integers a, b & c
step 3 − define values of a & b
step 4 − add values of a & b
step 5 − store output of step 4 to c
step 6 − print c
step 7 − STOP
Alternate Way
step 1 − START ADD

step 2 − get values of a & b

step 3 − c ← a + b

step 4 − display c

step 5 − STOP

Writing step numbers is optional.


Algorithm
• Many solution algorithms can be derived for a given problem.
• Analyze the proposed algorithms and implement the most suitable solution.
Performance Analysis and Measurement
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages,
before implementation and after implementation. They are the
following −
• A Priori Analysis − This is a theoretical analysis of an algorithm.
Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no
effect on the implementation.

• A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target computer. In this analysis, actual statistics such as running time and space required are collected.
Algorithm Analysis
• We shall learn about a priori algorithm analysis.

• Algorithm analysis deals with the execution or running time of various operations involved.

• The running time of an operation can be defined as the number of computer instructions executed per operation.
Algorithm Complexity
• The analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them.

• The complexity of an algorithm, f(n), gives the running time and/or the storage space required by the algorithm in terms of n, the size of the input data.
Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and space used by algorithm X are the two main factors that decide the efficiency of X.

• Time complexity − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.

• Space complexity − Space is measured by counting the maximum memory space required by the algorithm.

An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space.
Space Complexity

• It is the amount of memory space required by the algorithm during the course of its execution.

• It must be taken seriously for multi-user systems and in situations where limited memory is available.
Space Complexity
The space required by an algorithm is equal to the sum of the following two components −

• A fixed part: space required to store certain data and variables that are independent of the size of the problem.
  For example, simple variables and constants used, program size, etc.

• A variable part: space required by variables whose size depends on the size of the problem.
  For example, dynamic memory allocation, recursion stack space, etc.
Space Complexity
Space complexity S(P) of any algorithm P is:

S(P) = C + S(I)

where C is the fixed part and S(I) is the variable part of the algorithm, which depends on instance characteristic I.
Example for Space Complexity
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables (A, B, and C) and one constant (10).

Hence S(P) = 1 + 3.

The actual space also depends on the data types of the given variables and constants, and it is multiplied accordingly.
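To make the fixed-part/variable-part distinction concrete, here is a small illustrative C sketch (not from the slides; the function names are mine): the iterative version uses a constant number of variables whatever n is, while the recursive version needs stack space proportional to n, so its variable part S(I) grows with the instance size.

#include <stdio.h>

/* Fixed space: i and total exist once, whatever n is  ->  S(P) = C */
long sum_iterative(int n)
{
    long total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}

/* Variable space: each call adds a stack frame, so the space used
 * grows linearly with n  ->  S(P) = C + S(I), with S(I) proportional to n */
long sum_recursive(int n)
{
    if (n == 0)
        return 0;
    return n + sum_recursive(n - 1);
}

int main(void)
{
    printf("%ld %ld\n", sum_iterative(100), sum_recursive(100)); /* 5050 5050 */
    return 0;
}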
Time Complexity

• The time complexity of an algorithm is basically the running time of a program, expressed as a function of the input size.
Best, worst and average case

• The best, worst and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively.

• In real-time computing, the worst-case execution time is often of particular concern, since it is important to know how much time might be needed in the worst case to guarantee that the algorithm will always finish on time.
Best-case performance of an algorithm

• The term best-case performance is used in computer science to describe the way an algorithm behaves under optimal conditions.

• For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list.

• This is the scenario depicting the least possible execution time of an operation of a data structure.
Worst-case performance of an algorithm
• This denotes the behavior of the algorithm with respect to the worst possible input instances.

• The worst-case running time of an algorithm is an upper bound on the running time for any input.

• This provides an assurance that the algorithm will never go beyond this time limit.

• This is the scenario where a particular data structure operation takes the maximum time it can take.

• If an operation's worst-case time is ƒ(n), then this operation will not take more than ƒ(n) time, where ƒ(n) represents a function of n.
Average-case performance of an algorithm
• The average-case running time of an algorithm is an estimate of the running time for an ‘average’ input.

• It specifies the expected behavior of the algorithm when the input is randomly drawn from a given distribution.

• This is the scenario depicting the average execution time of an operation of a data structure.
Time-Space Trade-off
• There can be more than one algorithm to solve a particular problem.

• One may require less memory space, while another may require less CPU time to execute.

• Hence, there exists a time-space trade-off among algorithms.

• So, if space is a big constraint, one might choose a program that takes less space at the cost of more CPU time.

• On the contrary, if time is a major constraint, one might choose a program that takes minimum time to execute at the cost of more space.
Calculating Time Complexity

statement;

Above we have a single statement. Its time complexity is constant: the running time of the statement does not change in relation to n.
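For instance (an illustrative sketch, not from the slides), each of the statements inside main below executes a fixed number of machine instructions, independent of how large n or the array is:

#include <stdio.h>

int main(void)
{
    int arr[1000];
    int n = 1000;

    /* O(1): each statement below runs in constant time,
     * no matter how large n or the array is. */
    arr[0] = 42;
    arr[n - 1] = 7;
    int x = arr[0] + arr[n - 1];

    printf("%d\n", x);   /* prints 49 */
    return 0;
}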
Calculating Time Complexity
• If a function is linear (without any loop or recursion), the running time of that algorithm can be given as the number of instructions it contains.

• The running time may vary because of loops or recursive functions.
Linear Loops

• To calculate the efficiency of a single loop, we first need to determine the number of times the loop will be executed.

for(i=0; i<100; i++)
    statement block;

• Here the loop factor is 100. Efficiency is directly proportional to the number of iterations. The general formula is:

f(n) = n
Linear Loops
for(i=0; i<100; i+=2)
    statement block;

Here, the number of iterations is just half the loop factor:

f(n) = n/2
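A quick way to convince yourself of both counts is to instrument the loops with counters (an illustrative sketch; n = 100 as in the slides, and the counter names are mine):

#include <stdio.h>

int main(void)
{
    int n = 100;
    int count1 = 0, count2 = 0;

    for (int i = 0; i < n; i++)      /* executes n times   */
        count1++;

    for (int i = 0; i < n; i += 2)   /* executes n/2 times */
        count2++;

    printf("f(n) = %d, f(n) = %d\n", count1, count2);  /* prints 100 and 50 */
    return 0;
}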
Nested Loop
• Loops that contain an inner loop.

• In order to analyze nested loops, we need to determine the number of iterations each loop completes.

Total no. of iterations = (no. of iterations in the inner loop) × (no. of iterations in the outer loop)
Quadratic Loop

• The number of iterations in the inner loop is equal to the number of iterations in the outer loop.

for(i=0; i<10; i++)
    for(j=0; j<10; j++)
        statement block;

• Generalized formula: f(n) = n².
Dependent Quadratic Loop

• The number of iterations in the inner loop depends on the outer loop.

for(i=0; i<10; i++)
    for(j=0; j<=i; j++)
        statement block;

• The inner loop will execute
  1 + 2 + . . . + 10 = 55 times

  f(n) = n(n+1)/2
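The closed form can be checked by counting the iterations directly (an illustrative sketch; for n = 10 the count is 10·11/2 = 55):

#include <stdio.h>

int main(void)
{
    int n = 10;
    int count = 0;

    for (int i = 0; i < n; i++)
        for (int j = 0; j <= i; j++)   /* inner bound depends on i        */
            count++;                   /* runs 1 + 2 + ... + n times total */

    printf("%d vs %d\n", count, n * (n + 1) / 2);   /* prints 55 vs 55 */
    return 0;
}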
Binary Search
• Input: an item (the target)
• Output: the index of that item in the array
• Best case: the target is found at the middle position on the first comparison.
• Worst case: the item does not exist in the array.

BinarySearch(list, target)
{
    low = 0; high = n - 1;
    while (low <= high)
    {
        mid = (low + high) / 2;
        if (target < list[mid])
            high = mid - 1;
        else if (target > list[mid])
            low = mid + 1;
        else
            return mid;
    }
    PRINT “item does not exist in the array”
}

Index:  0   1   2   3   4   5   6   7   8   9
Value: 10  14  16  20  25  30  32  35  36  40
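A runnable C version of the same search, using the ten-element array from the slide (the function and variable names are mine, chosen to mirror the pseudocode):

#include <stdio.h>

/* Returns the index of target in list[0..n-1], or -1 if it is absent. */
int binary_search(const int list[], int n, int target)
{
    int low = 0, high = n - 1;

    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (target < list[mid])
            high = mid - 1;          /* discard the upper half */
        else if (target > list[mid])
            low = mid + 1;           /* discard the lower half */
        else
            return mid;              /* found                  */
    }
    return -1;                       /* item does not exist in the array */
}

int main(void)
{
    int list[] = {10, 14, 16, 20, 25, 30, 32, 35, 36, 40};

    printf("%d\n", binary_search(list, 10, 35));  /* prints 7  */
    printf("%d\n", binary_search(list, 10, 13));  /* prints -1 */
    return 0;
}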
Logarithmic Time Complexity

• This is an algorithm that breaks a set of numbers into two halves to search for a particular value.

• It has logarithmic time complexity: the running time is proportional to the number of times N can be divided by 2, because the algorithm divides the working area in half with each iteration.

while (low <= high)
{
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;
}
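The "divided by 2" intuition can be demonstrated directly (an illustrative sketch, not from the slides): counting how many halvings it takes to shrink a working area of size N down to 1 gives roughly log₂ N steps.

#include <stdio.h>

int main(void)
{
    int N = 1024;
    int steps = 0;

    /* Halve the working area until only one element is left. */
    for (int size = N; size > 1; size /= 2)
        steps++;

    printf("N = %d needs %d halvings\n", N, steps);  /* 10, since 2^10 = 1024 */
    return 0;
}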
Algorithm Analysis
• In general, doing something with every item in one dimension is linear,

• doing something with every item in two dimensions is quadratic,

• and dividing the working area in half is logarithmic.
Asymptotic analysis of an algorithm
• Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time performance.

• Using asymptotic analysis, we can very well conclude the best-case, average-case, and worst-case scenarios of an algorithm.

• Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. Other than the "input", all other factors are considered constant.
Asymptotic analysis of an algorithm
• Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.

• For example, the running time of one operation may be computed as f(n) and that of another operation as g(n²).

• This means the running time of the first operation will increase linearly with the increase in n, while the running time of the second operation will increase quadratically as n increases.

• Similarly, the running times of the two operations will be nearly the same if n is significantly small.
Asymptotic Notations
• Asymptotic notations are used to describe the asymptotic running time of an algorithm.

• They are commonly used in performance analysis to characterize the complexity of an algorithm.

• The asymptotic notations commonly used to express the running-time complexity of an algorithm are:

  – Ο notation
  – Ω notation
  – θ notation
O-Notation (big O notation) (Upper Bound)
• The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.

• It measures the worst-case time complexity, or the longest amount of time an algorithm can possibly take to complete.

For a given function g(n), we denote by Ο(g(n)) the set of functions

Ο(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

g(n) is an asymptotic upper bound for f(n).
Example
Let f(n) = n² and g(n) = 2ⁿ

n    f(n) = n²    g(n) = 2ⁿ    Comparison
1    1            2            f(n) < g(n)
2    4            4            f(n) = g(n)
3    9            8            f(n) > g(n)
4    16           16           f(n) = g(n)
5    25           32           f(n) < g(n)
6    36           64           f(n) < g(n)
7    49           128          f(n) < g(n)

Here, for n ≥ 4, f(n) ≤ g(n), so n0 = 4.
Ω-Notation (Omega notation) (Lower Bound)
• The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.

• It measures the best-case time complexity, or the minimum amount of time an algorithm can possibly take to complete.

For a given function g(n), we denote by Ω(g(n)) the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

Ω notation provides an asymptotic lower bound.
Θ-Notation (Theta notation)

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.

For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
            0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

• Because Θ(g(n)) is a set, we can write f(n) ∈ Θ(g(n)) to indicate that f(n) is a member of Θ(g(n)).
• g(n) is an asymptotically tight bound for f(n).
Common Asymptotic Notations

constant        Ο(1)
logarithmic     Ο(log n)
linear          Ο(n)
n log n         Ο(n log n)
quadratic       Ο(n²)
cubic           Ο(n³)
polynomial      n^Ο(1)
exponential     2^Ο(n)
Expressing Time & Space Complexity
• Time and space complexity can be expressed using a function f(n), where n is the input size.

• This is required when:
1. We want to predict the rate of growth of complexity as the size of the problem increases.
2. Multiple algorithms are available and we need to find the most efficient one.

• The most widely used notation to express this function f(n) is Big-Oh notation.
• It provides an upper bound for the complexity, i.e., the worst-case complexity.
Big O Notation
Limitations of Big O Notation:
• Many algorithms are hard to analyze mathematically.

• Big O analysis only tells us how the algorithm's running time grows with the size of the problem, not how efficient it is in absolute terms.

• In Big O notation, O(n²) and O(100000n²) are equal, but in practice this constant factor may be a serious concern.
Analysis of Algorithm
• Linear search: best case, average case, worst case
• Sorting problems: best case, average case, worst case
Growth rate of functions

Some functions, ordered by growth rate    Common name
log n                                     Logarithmic
log² n                                    Poly-logarithmic
n                                         Linear
n log n                                   Linearithmic
n²                                        Quadratic
n³                                        Cubic
cⁿ                                        Exponential
Growth rate of functions

n       log n   log² n   n log n    n²          n³             2ⁿ
4       2       4        8          16          64             16
16      4       16       64         256         4096           65536
64      6       36       384        4096        262144         1.84 × 10¹⁹
256     8       64       2048       65536       16777216       1.15 × 10⁷⁷
1024    10      100      10240      1048576     1.07 × 10⁹     1.79 × 10³⁰⁸
4096    12      144      49152      16777216    6.87 × 10¹⁰    10¹²³³
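Most of the table can be reproduced with a few lines of C (an illustrative sketch; compile with -lm). Note that 2ⁿ exceeds the range of a double for n ≥ 1024, so the last two rows of that column print as inf rather than the exact values shown above.

#include <stdio.h>
#include <math.h>

int main(void)
{
    int sizes[] = {4, 16, 64, 256, 1024, 4096};

    printf("%8s %8s %10s %12s %14s %16s %14s\n",
           "n", "log n", "log^2 n", "n log n", "n^2", "n^3", "2^n");

    for (int i = 0; i < 6; i++)
    {
        double n  = sizes[i];
        double lg = log2(n);

        /* pow(2, n) overflows a double beyond n ~ 1023 and prints inf. */
        printf("%8.0f %8.0f %10.0f %12.0f %14.0f %16.0f %14.3e\n",
               n, lg, lg * lg, n * lg, n * n, n * n * n, pow(2.0, n));
    }
    return 0;
}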
Exercises
• Express the function n³/1000 − 100n² − 100n + 3 in terms of Θ notation.   Θ(n³)

• Express 20n³ + 10n log n + 5 in terms of O notation.   O(n³)

• Express 5n log n + 2n in terms of O notation.   O(n log n)
Exercises
• Find the O-notation for the following function.
a. F(n) = 5n³ + n² + 6n + 2

We need 0 ≤ f(n) ≤ c·g(n). Take g(n) = n³:
5n³ + n² + 6n + 2 ≤ c·n³
c ≥ 5 + 1/n + 6/n² + 2/n³
For n = 2: c ≥ 5 + 1/2 + 6/4 + 2/8 ≈ 7.25
For n = 4: c ≥ 5 + 1/4 + 6/16 + 2/64 ≈ 5.66

For c = 6 and n0 = 4, f(n) = O(n³).
Exercises (c = 6, n0 = 4)
• Find the O-notation for the following function.
a. F(n) = 5n³ + n² + 6n + 2

Take g(n) = n³ and find c, n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0:
5n³ + n² + 6n + 2 ≤ c·n³

n    F(n)    g(n)    c    c·g(n)    Compare
1    14      1       6    6         F(n) > c·g(n)
2    58      8       6    48        F(n) > c·g(n)
3    164     27      6    162       F(n) > c·g(n)
4    362     64      6    384       F(n) < c·g(n)
5    682     125     6    750       F(n) < c·g(n)
Exercises
• Find the O-notation for the following functions.
a. F(n) = 5n³ + n² + 6n + 2

   For c = 6 and n0 = 4, f(n) = O(n³).

b. F(n) = 4n³ + 2n + 3

   For c = 5 and n0 = 2, f(n) = O(n³).
Exercises
• Find the Ω-notation for the following functions.
a. F(n) = 27n² + 16n + 25

   27n² ≤ 27n² + 16n + 25 for all n ≥ 0

   By the definition of Ω notation, c·g(n) ≤ f(n).
   Hence c = 27, g(n) = n², and F(n) = Ω(n²).

b. F(n) = 5n³ + n² + 3n + 2
Exercises

• For the function f(n) = 27n² + 16n, find the Θ notation.

First, find the lower bound for f(n):

   27n² ≤ 27n² + 16n for all n ≥ 0

   Hence c1 = 27, g(n) = n².

Now, find the upper bound for f(n):

   27n² + 16n ≤ 31n² for all n ≥ 4

   Hence c2 = 31, g(n) = n².

F(n) = Θ(n²) with c1 = 27, c2 = 31, n0 = 4.
Exercises

Prove: (i) Is 2ⁿ⁺¹ = O(2ⁿ)?   (ii) Is 2²ⁿ = O(2ⁿ)?

i. To show that 2ⁿ⁺¹ = O(2ⁿ):

   Here f(n) = 2ⁿ⁺¹ and g(n) = 2ⁿ.
   We must find constants c, n0 > 0 such that
      0 ≤ 2ⁿ⁺¹ ≤ c · 2ⁿ for all n ≥ n0.
   Since 2ⁿ⁺¹ = 2 · 2ⁿ for all n,
   we can satisfy the definition with c = 2 and n0 = 1.
Exercises

Prove: (i) Is 2ⁿ⁺¹ = O(2ⁿ)?   (ii) Is 2²ⁿ = O(2ⁿ)?

ii. To show that 2²ⁿ ≠ O(2ⁿ):

   Assume there exist constants c, n0 > 0 such that
      0 ≤ 2²ⁿ ≤ c · 2ⁿ for all n ≥ n0.
   Then 2²ⁿ = 2ⁿ · 2ⁿ ≤ c · 2ⁿ ⇒ 2ⁿ ≤ c.
   But no constant c is greater than 2ⁿ for all n,
   so the assumption leads to a contradiction.
