Lecture 2 - Growth of Functions and Recursion
Growth of Functions
When computing time complexities, we pay attention only to the size of the input given to
the algorithm, so it is essential to consider how an algorithm behaves as the input size grows.
Given a sample program that computes the difference between two integers:
difference(a, b)
{
    c = a - b    // 1 time unit for subtraction; 1 time unit for assignment
    return c     // 1 time unit for return
}
Given our hypothetical machine, if we run this program, the total time taken is
T_diff = 1 + 1 + 1 = 3 units
Therefore, irrespective of the size of the inputs, the time taken to execute this program is 3
time units. Because this value does not depend on the input size, we say that the algorithm
completes in constant time and its rate of growth is constant.
This can be represented using asymptotic notation. The asymptotic notation for the upper-bound
time complexity of this algorithm, known as Big-O (pronounced "Big-Oh"), is O(1). Since each
operation/step in the algorithm occurs once and takes 1 time unit, the total cost of 3 units,
or O(3), is represented as O(1) because constant factors are dropped.
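To mirror the C# listings used later in these notes, here is a minimal C# sketch of the same constant-time routine (the method name Difference is an assumption; the lecture only gives the pseudocode above):

// Performs the same three constant-cost operations regardless of the inputs, hence O(1).
public static int Difference(int a, int b)
{
    int c = a - b; // subtraction and assignment
    return c;      // return
}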
Let’s evaluate another example that calculates the sum of items/elements in an array:
sumOfArray(A[], N)
    sum = 0
    for i = 0 to N-1
        sum = sum + A[i]
    return sum
The following table analyzes the cost and the number of times each operation is executed:

Operation               Cost    Times executed
sum = 0                 1       1
for i = 0 to N-1        2       N + 1
sum = sum + A[i]        2       N
return sum              1       1
The time complexity for the above program can be computed as:
T_sum = 1 + 2 * (N + 1) + 2 * N + 1
      = 4N + 4
The running time of this program is proportional to the size of the array, making it a
linear running time: a linear function grows proportionally with the size of the input. We
ignore constants when denoting time complexity, so this running time is denoted by its
highest-order term as O(N).
For an algorithm that calculates the sum of elements in a matrix of size N by N, the
pseudocode would look like this:
sumOfMatrix(A[][], N)
    total = 0
    for i = 0 to N - 1
        for j = 0 to N - 1
            total = total + A[i][j]
    return total
T_sumOfMatrix = 1 + 2 * (N + 1) + 2 * (N + 1) * (N + 1) + 2 * N * N + 1
             = 4N^2 + 6N + 6
We ignore the lower-order terms and the constants. This leaves us with N^2, which is a
quadratic function, so the time complexity is denoted O(N^2).
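A C# sketch of the quadratic-time version (the method name SumOfMatrix is an assumption; the matrix is passed as a two-dimensional array):

// Runs in O(N^2): the innermost statement executes N * N times.
public static int SumOfMatrix(int[,] A, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            total = total + A[i, j];
        }
    }
    return total;
}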
The above functions can be plotted on a graph to compare their rates of growth.
Types of functions
1. Logarithmic: log n
2. Linear: n
3. Quadratic: n^2
4. Polynomial: n^z
5. Exponential: a^n
Asymptotic Notation
Given that f(n) is your algorithm's running time and g(n) is an arbitrary time complexity you
want to relate it to, f(n) is O(g(n)) if there exist real constants c (c > 0) and n0 such that
f(n) <= c * g(n) for every input size n (n > n0).
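As a worked check using the earlier array-sum cost T_sum = 4N + 4: choosing c = 5 and n0 = 4 gives 4N + 4 <= 5N for every N > 4, so T_sum is O(N).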
For example: g(n) = log n
1. Big O Notation (O):
a. Pronounced: "Big Oh"
b. Represents the upper bound of an algorithm's running time.
c. Describes the worst-case scenario.
d. It gives the maximum time an algorithm can take to complete based on the input size.
Example in C#:
public void ExampleAlgo(int n)
{
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            Console.WriteLine($"{i}, {j}"); // O(n^2)
        }
    }
}
Explanation:
e. The method ExampleAlgo has a nested loop. The outer loop runs n times,
and for each iteration of the outer loop, the inner loop also runs n times.
Hence, the total number of iterations is n * n = n^2, resulting in a time
complexity of O(n^2).
2. Big Ω Notation (Ω):
a. Pronounced: "Big Oh-mega"
b. Represents the lower bound of an algorithm's running time.
c. Describes the best-case scenario.
d. It gives the minimum time an algorithm can take to complete based on the
input size.
Example in C#:
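A minimal C# sketch consistent with the explanation below (the method name ExampleOmega is an assumption):

public void ExampleOmega(int n)
{
    // Best case: when n is 1 the method returns immediately, i.e. Ω(1).
    if (n == 1)
    {
        return;
    }
    // Otherwise a single loop runs n times, so the running time is at
    // least proportional to n, i.e. Ω(n).
    for (int i = 0; i < n; i++)
    {
        Console.WriteLine(i);
    }
}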
Explanation:
e. In the best-case scenario, if n is 1, the method returns immediately with a
time complexity of Ω(1). In other scenarios, the method has a single loop
running n times, resulting in a time complexity of Ω(n).
3. Big Θ Notation (Θ):
a. Pronounced: "Big Theta"
b. Represents the tight bound of an algorithm's running time.
c. Describes the average-case scenario.
d. It indicates that the running time grows at the same rate for the best,
average, and worst-case scenarios.
Example in C#:
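A minimal C# sketch matching the explanation below (the method name ExampleTheta is an assumption):

public void ExampleTheta(int n)
{
    for (int i = 0; i < n; i++)        // outer loop runs n times
    {
        for (int j = 1; j < n; j *= 2) // inner loop runs about log2(n) times
        {
            Console.WriteLine($"{i}, {j}");
        }
    }
}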
Explanation:
e. The outer loop runs n times, and within each iteration of the outer loop, the
inner loop runs log(n) times (since j doubles each time). Hence, the total
number of iterations is n * log(n), resulting in a time complexity of Θ(n log
n).
Recursion
Recursion is a technique where a function calls itself to solve smaller instances of the
same problem. It is a powerful tool for solving complex problems by breaking them down
into simpler sub-problems.
Example 1: Factorial Calculation
Problem: Calculate the factorial of a number n, denoted as n!, which is the product of all
positive integers less than or equal to n.
Recursive Algorithm:
factorial(n) = 1, if n = 0 or n = 1
factorial(n) = n * factorial(n - 1), if n > 1
Example in C#:
using System;
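// A minimal sketch of the recursive factorial described above; the class
// and method names below are assumptions.
class FactorialExample
{
    // Recursively computes n!, using the base case 0! = 1! = 1.
    static long Factorial(int n)
    {
        if (n == 0 || n == 1)
        {
            return 1;                    // base case
        }
        return n * Factorial(n - 1);     // recursive case
    }

    static void Main()
    {
        Console.WriteLine(Factorial(5)); // prints 120
    }
}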
Explanation:
• The Factorial method checks whether n is 0 or 1. If so, it returns 1 (the base case).
Otherwise, it calls itself with n - 1 and multiplies the result by n, continuing until the
base case is reached.
Example 2: Fibonacci Sequence
Problem: Generate the Fibonacci sequence, where each number is the sum of the two
preceding ones, starting from 0 and 1.
Recursive Algorithm:
fibonacci(0) = 0
fibonacci(1) = 1
fibonacci(n) = fibonacci(n - 1) + fibonacci(n - 2), if n > 1
Example in C#:
using System;
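// A minimal sketch of the recursive Fibonacci described above; the class
// and method names below are assumptions.
class FibonacciExample
{
    // Returns the nth Fibonacci number, using the base cases F(0) = 0 and F(1) = 1.
    static int Fibonacci(int n)
    {
        if (n == 0)
        {
            return 0;                               // base case
        }
        if (n == 1)
        {
            return 1;                               // base case
        }
        return Fibonacci(n - 1) + Fibonacci(n - 2); // recursive case
    }

    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.Write(Fibonacci(i) + " ");      // prints 0 1 1 2 3 5 8 13 21 34
        }
    }
}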
Explanation:
• The Fibonacci method checks if n is 0 or 1. If true, it returns 0 or 1 respectively.
Otherwise, it calls itself with n-1 and n-2, and returns their sum. This continues
until the base cases are reached.
A) Merge Sort
Merge Sort is a classic divide-and-conquer algorithm that is used for sorting an array or a
list. It works by recursively dividing the array into smaller subarrays, sorting each subarray,
and then merging the sorted subarrays to form the final sorted array.
Example:
Let's consider an array [38, 27, 43, 3, 9, 82, 10] to be sorted using merge sort.
1. Divide the array into halves repeatedly until single-element subarrays remain.
2. Merge the single elements back into sorted pairs:
[38, 27] and [43, 3] => [27, 38] and [3, 43]
[9, 82] and [10] => [9, 82] and [10] => [9, 10, 82]
3. Merge the sorted subarrays of the left half:
[27, 38] and [3, 43] => [3, 27, 38, 43]
4. Final combination:
[3, 27, 38, 43] and [9, 10, 82] => [3, 9, 10, 27, 38, 43, 82]
The final sorted array is [3, 9, 10, 27, 38, 43, 82].
Example in C#:
using System;

public class MergeSortExample // class name assumed; the original wrapper is not shown
{
    // Recursively divides the array and merges the sorted halves.
    public static void MergeSort(int[] array, int left, int right)
    {
        if (left >= right) return;           // base case: a single element
        int middle = (left + right) / 2;
        MergeSort(array, left, middle);      // sort the left half
        MergeSort(array, middle + 1, right); // sort the right half
        Merge(array, left, middle, right);   // merge the two sorted halves
    }

    // Merges the sorted ranges array[left..middle] and array[middle+1..right].
    public static void Merge(int[] array, int left, int middle, int right)
    {
        int n1 = middle - left + 1;
        int n2 = right - middle;
        int[] leftArray = new int[n1];
        int[] rightArray = new int[n2];
        Array.Copy(array, left, leftArray, 0, n1);
        Array.Copy(array, middle + 1, rightArray, 0, n2);
        int i = 0, j = 0;
        int k = left;
        while (i < n1 && j < n2)             // take the smaller front element each time
        {
            if (leftArray[i] <= rightArray[j])
            {
                array[k] = leftArray[i];
                i++;
            }
            else
            {
                array[k] = rightArray[j];
                j++;
            }
            k++;
        }
        while (i < n1) { array[k] = leftArray[i]; i++; k++; } // copy leftovers
        while (j < n2) { array[k] = rightArray[j]; j++; k++; }
    }

    public static void Main()
    {
        int[] array = { 38, 27, 43, 3, 9, 82, 10 };
        Console.WriteLine("Original array:");
        Console.WriteLine(string.Join(", ", array));
        MergeSort(array, 0, array.Length - 1);
        Console.WriteLine("Sorted array:");
        Console.WriteLine(string.Join(", ", array));
    }
}
Explanation:
• The MergeSort method recursively divides the array into two halves until the base
case (single element) is reached.
• The Merge method combines two sorted halves into a single sorted array.
• The Main method initializes an array, prints the original array, applies merge sort,
and then prints the sorted array.