
BITS, PILANI – K. K. BIRLA GOA CAMPUS

Foundations of Data Structures and Algorithms (BITS F232)

Lecture No. 4
Insertion Sort - Analysis
We count the number of primitive operations executed by an algorithm.

• Best Case Analysis
The best case for insertion sort occurs when the list is already sorted.
In this case, insertion sort requires n-1 comparisons, i.e., O(n) complexity.

• Worst Case Analysis
For each value of i, what is the maximum number of key comparisons possible?
- Answer: i-1

• Thus, the total time in the worst case is
T(n) = 1 + 2 + 3 + … + (n-1)
     = n(n-1)/2
     = O(n²)
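For concreteness, here is a minimal C sketch of insertion sort (not on the original slide; the array name A and length n are assumptions chosen to match the selection sort snippet later in this lecture):

void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = A[i];              /* element to insert into the sorted prefix A[0..i-1] */
        int j = i - 1;
        while (j >= 0 && A[j] > key) /* these are the key comparisons counted above */
        {
            A[j + 1] = A[j];         /* shift larger elements one place right */
            j--;
        }
        A[j + 1] = key;
    }
}

On an already sorted list the while-condition fails immediately in every pass, which gives the n-1 comparisons of the best case.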
Advantages
• simple implementation
• efficient for (quite) small data sets
• adaptive, i.e., efficient for data sets that are already substantially sorted
• more efficient in practice than most other simple quadratic (i.e., O(n²)) algorithms, such as selection sort or bubble sort
• stable, i.e., does not change the relative order of elements with equal keys
• in-place, i.e., only requires a constant amount O(1) of additional memory space
• online, i.e., can sort a list as it receives it

Algorithm Analysis
What is a Good Algorithm?
Efficient:
• Running time
• Space used
How to measure efficiency?
Efficiency as a function of input size:
• The number of bits in an input number
• Number of data elements (numbers, points)
Asymptotic Notation
Asymptotic notation allows us to analyze an algorithm's running time by describing its behaviour as the input size increases. This is also known as the algorithm's growth rate.
We are often interested in an upper bound on the running time of an algorithm, i.e., its worst case.
The asymptotic notation for this is Big-O.
Big O
Formal mathematical definition of Big O:
Let T(n) and f(n) be two positive functions.
We write T(n) = O(f(n)), and say that T(n) has order f(n) or T(n) is of order f(n),
if there are positive constants M and n₀ such that T(n) ≤ M·f(n) for all n ≥ n₀.
Example
Suppose T(n) = 6n² + 8n + 10.
Then we know that for n ≥ 1:
6n² ≤ 6n²
8n ≤ 8n²
10 ≤ 10n²
Therefore T(n) ≤ 24n² for all n ≥ 1,
and so T(n) = O(n²).
In this case M = 24 and n₀ = 1.
Note: M and n₀ are not unique.
Can we find different M and n₀ for the above example? Yes: M = 10 and n₀ = 4 also work.
We would say that the running time of this algorithm grows as n².
Sloppy Notation
The notation T(n) ∈ O(f(n)) can be used even when f(n) grows much faster than T(n).
For example, if T(n) = 6n² + 8n + 10, we may write T(n) = O(n³).
This is indeed true, but not very useful.
Hierarchy of functions: log n < n < n² < n³ < 2ⁿ
Caution!
Beware of very large constant factors. An algorithm running in time 100000n is still O(n) but might be less efficient than one running in time 2n², which is O(n²); for instance, 2n² < 100000n for all n < 50000.
Example
How do you show that 10n³ ≠ O(n²)?
By contradiction:
Assume 10n³ = O(n²).
Then there are positive constants M and n₀ such that 10n³ ≤ M·n² for all n ≥ n₀.
Dividing both sides by M·n², we get 10n/M ≤ 1 for all n ≥ n₀.
This is certainly not true for n ≥ max{M, n₀}.
Therefore our assumption was wrong.
Comparing Algorithms
Comparison between algorithms can be done easily. Let's take an example to understand this.
Suppose we have two algorithms whose running times are given by the following expressions:
Expression 1: 20n² + 3n - 4
Expression 2: n³ + 100n - 2
As per asymptotic notation, we only need to worry about how each function grows as the input size n grows,
and that depends entirely on n² for Expression 1 and on n³ for Expression 2.
Hence, we can say that the running time represented by Expression 2 grows faster than the other, simply by looking at the highest-order term.
Good & Bad Solutions
When analysing algorithms we will often come across the following time complexities.

Good news:
O(1)
O(log n)
O(n)
O(n log n)
________________________
Bad news:
O(nᵏ), k ≥ 2
O(kⁿ), k ≥ 2
O(n!)
How to Determine Complexities
How to determine the running time of a piece of code?
• The answer is that it depends on what kinds of statements are used.

Sequence of statements
statement 1;
statement 2;
...
statement k;
The total time is found by adding the times for all statements:
total time = time(statement 1) + time(statement 2) + ... + time(statement k)
If each statement is "simple" (only involves basic operations) then the time for each statement is constant and the total time is also constant: O(1)
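As a hypothetical C illustration (the function and variable names are mine, not from the slides), a straight-line sequence of simple statements runs in constant time:

int midpoint(int a, int b)
{
    int lo = (a < b) ? a : b;     /* statement 1: constant time */
    int hi = (a < b) ? b : a;     /* statement 2: constant time */
    int mid = lo + (hi - lo) / 2; /* statement 3: constant time */
    return mid;                   /* total time: constant, O(1) */
}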
How to Determine Complexities
If-Then-Else
if (cond)
then block 1 (sequence of statements)
else
block 2 (sequence of statements)
end if;
Here, either block 1 will execute, or block 2 will execute.
Therefore, the worst-case time is the slower of the two possibilities: max(time(block 1), time(block 2)).
If block 1 takes O(1) and block 2 takes O(N), the if-then-else statement would be O(N).
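A small C sketch of the same idea (names are assumptions for illustration): the worst-case cost of the branch is the cost of its slower arm, here O(N).

int first_or_sum(const int A[], int n, int want_sum)
{
    if (!want_sum)
    {
        return A[0];                  /* block 1: O(1) */
    }
    else
    {
        int total = 0;
        for (int i = 0; i < n; i++)   /* block 2: O(N) */
            total += A[i];
        return total;
    }
}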
How to Determine Complexities
Loops
for I in 1 .. N loop
    sequence of statements
end loop;
• The loop executes N times, so the sequence of statements also executes N times.
• If we assume the statements are O(1), then the total time for the for loop is N * O(1), which is O(N) overall.
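The same pattern in C (a sketch with assumed names): N iterations of constant-time work give O(N) in total.

long array_sum(const int A[], int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)   /* loop body executes N times */
        total += A[i];            /* O(1) work per iteration */
    return total;                 /* total: N * O(1) = O(N) */
}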
How to Determine Complexities
Nested loops
for I in 1 .. N loop
    for J in 1 .. M loop
        sequence of statements
    end loop;
end loop;
• The outer loop executes N times.
• Every time the outer loop executes, the inner loop executes M times.
• As a result, the statements in the inner loop execute a total of N * M times.
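In C (again a sketch with assumed names), the inner statement runs N * M times, so the nested loops are O(N*M):

long dot_all_pairs(const int A[], int n, const int B[], int m)
{
    long total = 0;
    for (int i = 0; i < n; i++)           /* outer loop: N times */
        for (int j = 0; j < m; j++)       /* inner loop: M times per outer pass */
            total += (long)A[i] * B[j];   /* executes N * M times in total */
    return total;
}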
Big - Omega
The "big-Omega" – Ω
f(n) = Ω(g(n)) if there exist constants c and n₀ such that c·g(n) ≤ f(n) for all n ≥ n₀.
• Used to give a lower bound.
• Important to know how many computations are necessary to solve the problem.
• Equivalently, any solution to the problem has to perform that many computations.
• Generally not easy to prove.
Big - Theta
"The Big Theta" – θ
f(n) = θ(g(n)) if there exist constants c₁, c₂ and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
f(n) = θ(g(n)) if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).
• Asymptotically tight bound
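As a worked illustration (not on the original slide, but continuing the earlier example): for T(n) = 6n² + 8n + 10 we have 6n² ≤ T(n) ≤ 24n² for all n ≥ 1, so T(n) = θ(n²) with c₁ = 6, c₂ = 24 and n₀ = 1.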
Selection Sort
• Big Idea
Orders a list of values by repeatedly putting the smallest unplaced value into its final position.
Algorithm:
– Look through the list to find the smallest value.
– Swap it so that it is at index 0.
– Look through the list to find the second-smallest value.
– Swap it so that it is at index 1.
...
– Repeat until all values are in their proper places.
How Selection Sort Works
Example:
Consider the following list to be sorted in ascending order using selection sort: 8, 2, 14, 7, 6
Selection sort works as follows:
• Find the smallest element, 2.
• Swap it with the first element of the unordered list.
The new list looks like: [2 | 8, 14, 7, 6]  (sorted | unsorted)
This step partitions the list into two parts: the left part, which is sorted, and the right part, which is unsorted.
• Next, find the smallest element of the unsorted part, 6.
• Swap it with the first element of the unsorted part.
The new list looks like: [2, 6 | 14, 7, 8]  (sorted | unsorted)
Again this step partitions the list into a sorted left part and an unsorted right part.
• Similarly, it continues to sort the remaining elements.
• Finally, the sorted list in ascending order is: 2, 6, 7, 8, 14
Selection sort algorithm
The list is divided into two parts:
- the sorted part at the left end, and
- the unsorted part at the right end.
Initially, the sorted part is empty and the unsorted part is the entire list.
Finally, the sorted part is the entire list and the unsorted part is empty.
Selection Sort – Code Snippet

/* Here:
   i     = variable to traverse the array A
   index = variable to store the index of the minimum element
   j     = variable to traverse the unsorted sub-array
   temp  = temporary variable used for swapping */
for (i = 0; i < n-1; i++)
{
    index = i;
    for (j = i+1; j < n; j++)
    {
        if (A[j] < A[index])
            index = j;
    }
    temp = A[i];        /* swap A[i] and A[index] */
    A[i] = A[index];
    A[index] = temp;
}
• Invariant?
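For completeness, a minimal runnable version of the snippet (the function wrapper and driver are assumptions, not part of the slides):

#include <stdio.h>

void selection_sort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        /* invariant: A[0..i-1] holds the i smallest elements, in sorted order */
        int index = i;                 /* index of the minimum found so far */
        for (int j = i + 1; j < n; j++)
            if (A[j] < A[index])
                index = j;
        int temp = A[i];               /* swap A[i] and A[index] */
        A[i] = A[index];
        A[index] = temp;
    }
}

int main(void)
{
    int A[] = {8, 2, 14, 7, 6};        /* the example list from this lecture */
    int n = sizeof A / sizeof A[0];
    selection_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);           /* prints: 2 6 7 8 14 */
    printf("\n");
    return 0;
}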
Selection Sort - Analysis
• Analysis
Each step of the Selection Sort algorithm involves finding the minimum element of the unsorted list.
Initially, the size of the unsorted list is n: finding the minimum takes (n-1) comparisons.
After the 1st iteration, the size of the unsorted list is n-1: finding the minimum takes (n-2) comparisons.
After the 2nd iteration, the size of the unsorted list is n-2: finding the minimum takes (n-3) comparisons.
And so on.
Finally, the total number of comparisons
= (n-1) + (n-2) + … + 1 = n(n-1)/2
Therefore the time complexity = O(n²)
Selection Sort - Analysis
Analysis (Notes)
• The analysis does not depend on the particular instance.
• Therefore Best Case = Worst Case = O(n²).
• The running time does not depend on the amount of order in the sequence.
• Number of swap operations = O(n)
  (Check this for Insertion sort – Exercise)
• Selection sort is an in-place algorithm.
  It performs all computation in the original array and no other array is used.
  Hence, the space complexity is O(1).
Selection Sort - Variation
• The number of comparisons required can be reduced by considering elements in pairs and finding the minimum and maximum at the same time.
• Exercise
  Design an algorithm that incorporates the above idea & find the number of comparisons it makes.
