
Chapter 1: Introduction

Performance analysis, space and time complexity; growth of functions; Big-Oh, Omega, Theta notation; mathematical background for algorithm analysis.
Complexity classes: definition of P, NP, NP-Hard, NP-Complete; analysis of selection sort and insertion sort.
Recurrences: the substitution method, the recursion tree method, the master method.
Algorithm:
“A procedure for solving a mathematical problem in a finite number of steps.”

Need for algorithms:


1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster, and
easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult
or impossible for humans to do manually.
Characteristics:
• Clear and Unambiguous: Each step should be clear and must lead to only one meaning.
• Well-Defined Inputs: The inputs should be well defined; an algorithm may or may not take input.
• Well-Defined Outputs: The algorithm must clearly define what output will be yielded, and the output should be well defined as well.
• Finiteness: The algorithm should terminate after a finite number of steps.
• Feasible: It must be simple, generic, and practical, such that it can be executed with the available resources.
• Language Independent: It must be just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.
Performance analysis:
• To estimate the performance of an algorithm is to analyze its
complexity.
• Complexity theory is the study of how complicated algorithms are.
• Any algorithm should have three key features:
• It should be correct.
• It should be understandable.
• It should be efficient.
Analyze an Algorithm:
• To analyze an algorithm, its efficiency must be checked and maintained.
1. A Priori (Before) Analysis:
• Checks the algorithm before its implementation.
• The algorithm is written in theoretical form.
• Factors such as processor speed are assumed constant and have no effect on the analysis.
• This analysis is independent of the type of hardware and the language of the compiler.
• It gives approximate answers for the complexity of the program.

2. A Posteriori (After) Analysis:
• Checks the algorithm after its implementation.
• The algorithm is implemented in a programming language.
• This analysis helps to get the actual and real analysis report about correctness, space required, time consumed, etc.
• That is, it is dependent on the language of the compiler and the type of hardware used.
Need for Analysis of Algorithms:
• Algorithms are often quite different from one another, though their objective is the same.
• For example, we know that a set of numbers can be sorted using different algorithms.
• The number of comparisons performed by one algorithm may vary from that of others for the same input. Hence, the time complexity of those algorithms may differ.
• At the same time, we need to calculate the memory space required by each algorithm.
“Analysis of algorithms is the process of analyzing the problem-solving capability of the algorithm in terms of the time and memory required.”
Types of algorithm analysis:
1. Worst-case 2. Best-case 3. Average case
The best case: It gives the lower bound of the performance.
• The data given as input is organized in a way that the algorithm will give its best performance.
The worst case: It gives the upper bound of the performance.
• It guarantees that the performance of the algorithm will never be worse than the numbers that come out of our analysis.
• Worst-case analysis is especially useful for estimating the performance when dealing with complex problems with larger datasets.
The average case:
• Not always accurate, as it needs to consider all the different combinations and possibilities of input to the algorithm, which is not always easy to do.
Average case = (total running time over all possible inputs) / (number of possible inputs)
A small sketch contrasting the three cases for linear search follows.
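As a minimal sketch (illustrative, not from the original slides; the function name is hypothetical), counting comparisons in linear search shows all three cases: target first (best), target last (worst), and the mean over all positions (average).

def linear_search_comparisons(arr, target):
    # Count how many comparisons linear search makes before stopping.
    comparisons = 0
    for value in arr:
        comparisons += 1
        if value == target:
            break
    return comparisons

data = list(range(1, 11))                      # [1, 2, ..., 10]
best = linear_search_comparisons(data, 1)      # best case: 1 comparison
worst = linear_search_comparisons(data, 10)    # worst case: n comparisons
# Average case: mean comparisons over every possible target position.
average = sum(linear_search_comparisons(data, t) for t in data) / len(data)
print(best, worst, average)                    # 1 10 5.5, i.e. (n + 1) / 2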
Algorithm complexity:
• An algorithm is defined as complex based on the amount of Space and Time it consumes.

• Complexity of an algorithm refers to the measure of the time that it will need to execute
and get the expected output, and the Space it will need to store all the data.
• Hence these two factors define the efficiency of an algorithm.

The two factors of Algorithm Complexity are:
Time Factor: Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
Space Factor: Space is measured by counting the maximum memory space required by the algorithm to run/execute.
1. Space Complexity:
• The space complexity of an algorithm refers to the amount of memory required by the algorithm to store the variables and get the result.
To calculate Space Complexity: Determined based on the following 2 components:
Fixed Part:
• This refers to the space that is always required by the algorithm.
• For example, input variables, output variables, program size, etc.
Variable Part:
• This refers to the space that can differ based on the implementation of the algorithm. For example, temporary variables, dynamic memory allocation, recursion stack space, etc.

Space complexity S(P) of any algorithm P is S(P) = C + SP(I),
where C is the fixed part and SP(I) is the variable part of the algorithm.
A sketch contrasting the two parts follows.
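A minimal sketch (illustrative, not from the slides) of the two components: the iterative sum needs only a fixed number of variables, while the recursive sum consumes recursion stack space proportional to n, i.e. a growing variable part.

def sum_iterative(n):
    # Fixed part only: total and i occupy constant space regardless of n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n):
    # Variable part: each recursive call adds a stack frame, so space is O(n).
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)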
2. Time Complexity:
• The time complexity of an algorithm refers to the amount of time required by the algorithm to execute and get the result.

Time Complexity is determined by the following 2 components:

Constant time part:
• Any instruction that is executed just once comes in this part.
• For example, input, output, if-else, switch, arithmetic operations, etc.

Variable time part:
• Any instruction that is executed more than once, say n times, comes in this part.
• For example, loops, recursion, etc.

Time complexity of any algorithm P is T(P) = C + TP(I),
where C is the constant time part and TP(I) is the variable part of the algorithm.
A sketch showing the two parts follows.
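A minimal sketch (illustrative, not from the slides): in the function below the setup and return statements form the constant time part C, while the loop body runs n times and forms the variable part TP(I), so T(n) = C + c·n.

def array_sum(arr):
    total = 0              # constant part: executes once
    for value in arr:      # variable part: body executes n times
        total += value
    return total           # constant part: executes once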
Rate of Growth:
“It is the rate at which the execution time of an algorithm increases as the input size increases.”
Asymptotic notation:
• Asymptotic notations are mathematical tools to represent running time complexity of an
algorithm based on the input size.
• It is commonly used in complexity analysis to describe how an algorithm performs as the
size of the input grows.
• Asymptotic Analysis for computer programs generally describes efficiency with machine-
independent variables.
Asymptotic notations:
1. Big O notation (Worst Case)
2. Theta notation (Average Case)
3. Omega notation (Best Case)
Used to:
• Describe the characteristics of a function in the limit.
• Describe the rate of growth of functions.
• Focus on what’s important by abstracting away low-order terms and constant factors.
1. Big O Notation:
• Represents the upper bound of the running time of an algorithm.
• It measures the time taken by the program to return the output as the input increases.
• It gives the worst-case time complexity of the program.
• Represented as O(N), where N represents the input size for the program.
Mathematical representation:
f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c*g(n) for all n ≥ n0.
f(n) and g(n) are non-negative functions.
Examples: Linear search: O(N), Binary search: O(log N), Bubble sort: O(N²), where N is the number of elements in the given array. Sketches of the two searches follow.
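Hedged sketches of the two searches named above (illustrative implementations, not from the slides): linear search may inspect all N elements, O(N); binary search halves the range each step, O(log N), assuming a sorted array.

def linear_search(arr, target):
    # O(N): may inspect every element once.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):
    # O(log N): the search range halves on every iteration; arr must be sorted.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1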
2. Omega Notation (Ω-Notation):
• Omega notation gives the best-case time complexity of the given algorithm.
• It measures the lower bound of the algorithm’s running time.
• Represented as Ω-notation.

• Mathematically,
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0.
f(n) and g(n) are non-negative functions.

• Examples of Omega notation:
N² + N = Ω(N²) and N² + log N = Ω(N²), while 4*N = Ω(N).
3. Theta Notation (Θ-Notation):
• The theta notation describes the average-case time complexity of an algorithm.
• It represents both the upper bound and the lower bound of an algorithm’s running time.
Mathematically, it means
f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
f(n) and g(n) are non-negative functions.
• This means that f(n) = Θ(g(n)) if there are positive constants n0, c1 and c2 such that, to the right of n0, f(n) always lies on or above c1*g(n) and on or below c2*g(n).

Examples of Theta notation:
• N² and 3*N² are both Θ(N²). A worked bound follows.
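As a worked bound (illustrative, not from the slides), the constants can be exhibited explicitly for f(n) = 3n² + 2n:

3n^2 \le 3n^2 + 2n \le 3n^2 + 2n^2 = 5n^2 \quad \text{for all } n \ge 1

So with c1 = 3, c2 = 5 and n0 = 1, we get 3n² + 2n = Θ(n²); the same c1 also witnesses 3n² + 2n = Ω(n²).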
Complexity Classes:
• Computational problems are divided into classes known as Complexity Classes.
• These classes help to group problems based on how much time and space they require to solve problems and verify the solutions.

Types of Complexity Classes:
• P Class
• NP Class
• Co-NP Class
• NP-hard
• NP-complete
P Class:
• The P in the P class stands for Polynomial Time.
• It is the collection of decision problems (problems with a “yes” or “no” answer) that can
be solved by a deterministic machine in polynomial time.
Features:
• The solution to P problems is easy to find.
• P is often a class of computational problems that are solvable and tractable.
• Tractable means that the problems can be solved in theory as well as in practice.
• But the problems that can be solved in theory but not in practice are known as
intractable.
Examples:
• Calculating the greatest common divisor (a sketch follows below).
• Merge Sort
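As a hedged sketch of the GCD example above (illustrative, not from the slides): Euclid’s algorithm runs in a number of steps polynomial in the length (number of digits) of its inputs, which places the problem in P.

def gcd(a, b):
    # Euclid's algorithm: the remainder shrinks quickly (the first argument at
    # least halves every two steps), so the iteration count is polynomial in
    # the number of digits of the inputs.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21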
Nondeterministic Polynomial-time (NP) class:
• It is the collection of decision problems that can be solved by a non-deterministic machine in polynomial time.
• A nondeterministic computer is one that can “guess” the right answer or solution.
• The solutions of the NP class are hard to find, since they are being solved by a non-deterministic machine, but the solutions are easy to verify.
• Thus, NP can also be thought of as the class of problems “whose solutions can be verified in polynomial time”; a verification sketch follows.
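As a hedged sketch of “easy to verify” (illustrative code, not from the slides; the graph representation is an assumption): given a certificate, here a proposed Hamiltonian cycle, checking it takes polynomial time even when finding one may not.

def is_hamiltonian_cycle(graph, cycle):
    # graph: dict mapping each vertex to a set of neighbours.
    # cycle: proposed vertex ordering; this check is polynomial time.
    if sorted(cycle) != sorted(graph):   # must visit every vertex exactly once
        return False
    n = len(cycle)
    return all(cycle[(i + 1) % n] in graph[cycle[i]] for i in range(n))

square = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_hamiltonian_cycle(square, [1, 2, 3, 4]))   # True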
NP-hard class:
• An NP-hard problem is at least as hard as the hardest problem in NP; it is the class of problems such that every problem in NP reduces to it.
Features:
• Not all NP-hard problems are in NP.
• It takes a long time to check them. This means that if a solution for an NP-hard problem is given, it may take a long time to check whether it is right or not.
• A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time reduction from L to A.
Examples:
• Halting problem.
• Quantified Boolean formulas.
• No-Hamiltonian-cycle problem.
NP-complete class:
• A problem is NP-complete if it is both in NP and NP-hard.
• NP-complete problems are the hard problems in NP.
Features:
• NP-complete problems are special as any problem in NP class can be transformed or
reduced into NP-complete problems in polynomial time.
• If one could solve an NP-complete problem in polynomial time, then one could also
solve any NP problem in polynomial time.
Example:
• Hamiltonian Cycle.
• Satisfiability.
• Vertex cover.
Complexity Class    Characteristic feature
P                   Easily solvable in polynomial time.
NP                  “Yes” answers can be checked in polynomial time.
Co-NP               “No” answers can be checked in polynomial time.
NP-hard             Not all NP-hard problems are in NP, and it takes a long time to check them.
NP-complete         A problem that is both in NP and NP-hard is NP-complete.


Selection sort:
• It is a comparison-based in-place sorting algorithm.
• It performs at most (n – 1) swaps on a list of size n.
• The running time is quadratic, making it unsuitable for long lists.
Working principle:
• The array is split into two sections, one sorted and one unsorted.
• The sorted section is initially empty, whereas the unsorted section includes the whole array.
• At every step, the algorithm finds the minimal element in the unsorted part and adds it to the end of the sorted one.
• When the unsorted section becomes empty, the process comes to a halt.
• At the end of the first pass, the minimum element sits at the first location; in the second pass, the second minimum element sits at the second location, and so on.
Example: Sort the letters of word “DESIGN” in alphabetical order using selection sort.
Algorithm SELECTION_SORT(A)
// A is an array of size n
for i ← 1 to n – 1 do
    min ← i
    for j ← i + 1 to n do
        if A[j] < A[min] then
            min ← j
        end if
    end for
    swap(A[i], A[min])
end for
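A runnable version of the pseudocode above (a Python sketch assuming 0-based indexing, not from the slides), applied to the letters of “DESIGN”; printing after each pass shows the sorted prefix growing:

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):             # find minimum of unsorted part
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # move it to the sorted boundary
        print("pass", i + 1, ":", "".join(a))
    return a

selection_sort(list("DESIGN"))   # final order: D E G I N S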
Complexity Analysis
• Selection sort iterates through two loops irrespective of the input data pattern.
• The running time of selection sort in best, average and worst case is O(n2).

Recurrence relations:
• Recurrence relations are used in algorithms to express the runtime or complexity of
recursive functions and data structures.

Finding the recurrence equation to derive the complexity:
• Let T(n) be the running time to solve a problem of size n.
• In each iteration, one element gets sorted and the problem size reduces by one.
• For each problem of size n, the inner loop iterates n times.
• So, the recurrence equation for selection sort can be written as:
T(n) = 1                 for n = 0
     = T(n – 1) + n      for n > 0     ---- (1)
From equation (1), T(n – 1) = T(n – 2) + (n – 1). Substituting into (1):
T(n) = T(n – 2) + (n – 1) + n          ---- (2)
Similarly, T(n – 2) = T(n – 3) + (n – 2). Substituting into (2):
T(n) = T(n – 3) + (n – 2) + (n – 1) + n
After k substitutions,
T(n) = T(n – k) + (n – k + 1) + (n – k + 2) + … + (n – 1) + n
When k reaches n,
T(n) = T(0) + 1 + 2 + 3 + … + (n – 1) + n
Since T(0) is a constant,
T(n) = 1 + 2 + 3 + … + n = n(n + 1)/2 = (n²/2) + (n/2)
T(n) = O(n²)
Insertion sort:
• Insertion sort is analogous to sorting cards by hand.
• One card is taken from the deck at a time and put into the appropriate spot in the hand.
• Upcoming cards are handled in the same manner.
• To insert a new card, all cards in hand with a value greater than the new card are moved to the right by one.
• The new card is put into the gap made on the right side by moving those k cards.
• Insertion sort is an in-place method and requires no additional memory.

Example: Sort the given set of numbers using insertion sort and also show the result of each pass: < 11, 7, 17, 3, 9, 29, 85, 9 >. A sketch that prints each pass follows.
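A hedged sketch (illustrative Python, not from the slides) that sorts the example input and prints the array after each pass:

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]                     # the "new card" to insert
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        print("pass", i, ":", a)
    return a

insertion_sort([11, 7, 17, 3, 9, 29, 85, 9])
# pass 1: [7, 11, 17, 3, 9, 29, 85, 9] ... final: [3, 7, 9, 9, 11, 17, 29, 85]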
Best case analysis:
• Let the size of the input array be n. The total time taken by the algorithm is the summation of the time taken by each of its instructions.
• The best case offers the lower bound of the algorithm’s running time.
• When data is already sorted, the best scenario for insertion sort happens.
• In this case, the condition in the while loop will never be satisfied, resulting in tj = 1.
Worst case analysis:
• The worst-case running time gives an upper bound on the running time for any input.
• The running time of the algorithm cannot get worse than its worst-case running time.
• The worst case for insertion sort occurs when data is sorted in reverse order.
• Then we must compare A[j] with each element of the sorted subarray A[1 … j – 1], so tj = j.
Average Case Analysis:
• Let us assume that tj = (j – 1)/2 to calculate the average case.
Therefore,
T(n) = C1*n + (C2 + C3)*(n – 1) + (C4/2)*((n – 1)*n/2) + ((C5 + C6)/2)*((n – 1)*n/2 – 1) + C8*(n – 1)

When simplified further, the dominating factor is n², giving T(n) = C*n², i.e., O(n²).


Master Theorem:

• The master method provides a way of solving recurrences of the form
T(n) = a·T(n/b) + f(n)
where a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.

• The above recurrence describes an algorithm that divides a problem of size n into a sub-problems, each of size n/b.

• The a sub-problems are solved recursively, each in time T(n/b).

• f(n) is the cost of dividing the problem and combining the results of the sub-problems.
Case 1:
If f(n) = O(n^(log_b a – e)) for some constant e > 0, then T(n) = Θ(n^(log_b a)).
For this case, f(n) must be polynomially smaller than n^(log_b a).
Case 2:
If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
This case applies only when the two functions are of the same size.
Case 3:
If f(n) = Ω(n^(log_b a + e)) for some constant e > 0, and a·f(n/b) ≤ C·f(n) for some constant C < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
For this case, f(n) must be polynomially larger than n^(log_b a) and must also satisfy the “regularity” condition a·f(n/b) ≤ C·f(n).

Note: In each of the 3 cases we compare the function f(n) with n^(log_b a), and the solution is determined by the larger of the two functions. A worked example follows.
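As a worked example (illustrative, not from the slides), apply the master method to merge sort’s recurrence:

T(n) = 2T(n/2) + n \quad\Rightarrow\quad a = 2,\ b = 2,\ n^{\log_b a} = n^{\log_2 2} = n

Since f(n) = n = Θ(n^(log_b a)), Case 2 applies, giving T(n) = Θ(n log n).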
Recursion Tree:
• Recursion Tree is another method for solving the recurrence relations.
• A recursion tree is a tree where each node represents the cost of a certain recursive sub-
problem.
• We sum up the values in each node to get the cost of the entire algorithm.
Steps to Solve Recurrence Relation:
Step 1: Draw a recursion tree based on the given recurrence relation.
Step 2: Determine-
Cost of each level
Total number of levels in the recursion tree
Number of nodes in the last level
Cost of the last level
Step 3: Add cost of all the levels of the recursion tree and simplify the expression so
obtained in terms of asymptotic notation.
• To find the cost function C(n) for merge sort, consider what happens when one splits a given array (or sub-array) into two halves, sorts them, and then merges the sorted halves back together again.
• One will ultimately need n array assignments to accomplish the final merge.
• The computation of the rest of the cost requires that we look at sorting the two halves.
• Each half is now n/2 long and will each require n/2 array assignments to merge back together once they themselves are broken in half and their halves sorted.
• This is a total of 2(n/2) = n array assignments.
• This happens again and again, each time picking up 4(n/4) = n array assignments, and then 8(n/8) = n array assignments, etc., until no more merges are necessary. Summing these level costs gives the total, as sketched below.
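A short worked sum consistent with the discussion above (assuming n is a power of 2): the recursion tree has log₂ n merge levels and each level costs n array assignments, so

C(n) = \underbrace{n + n + \cdots + n}_{\log_2 n \ \text{levels}} = n \log_2 n = \Theta(n \log n)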
