
Lecture 2: Growth of Functions and Simplified Recursion

Growth of Functions

Complexity in algorithms refers to the amount of resources (such as time or memory) required to solve a problem or perform a task. The most common measure of complexity is time complexity, which refers to the amount of time an algorithm takes to produce a result as a function of the size of the input. Memory complexity refers to the amount of memory used by an algorithm. Algorithm designers strive to develop algorithms with the lowest possible time and memory complexities, since this makes them more efficient and scalable.

Understanding the growth of functions is crucial in analyzing the performance of algorithms. It helps in determining how the running time or space requirements of an algorithm increase as the size of the input increases. This concept is often expressed using asymptotic notation: Big O (pronounced "Big Oh"), Big Ω (Omega, pronounced "Big Oh-mega"), and Big Θ (Theta, pronounced "Big Theta") notations.

The Running Time of an Algorithm may depend on a number of factors:

• Whether the machine is a single-processor or multi-processor machine.
• The cost of each read/write operation to memory.
• The configuration of the machine: 32-bit or 64-bit architecture.
• The size of the input given to the algorithm.

When computing time complexity, we pay attention only to the size of the input given to the algorithm, so it is essential that algorithms are evaluated across different input sizes.

To compute the running time of an algorithm, we define a hypothetical computer with a single 32-bit processor that executes instructions sequentially. We further assume that this computer takes one time unit to complete an operation, which can be arithmetic, logical, an assignment, a return, etc.

Given a sample program that computes the difference between two integers:

difference(a, b)
{
    c = a - b   // 1 time unit for subtraction; 1 time unit for assignment
    return c    // 1 time unit for return
}
Given our hypothetical machine, if we run this program, the total time taken is

T_diff = 1 + 1 + 1 = 3 units

Therefore, irrespective of the size of the inputs, the time taken to execute this program is 3 time units. In other words, regardless of the input size, this program always completes in 3 time units: we say that the algorithm runs in constant time and its rate of growth is constant.

This can be represented using asymptotic notation. The upper-bound time complexity for this algorithm, denoted using Big-O (Big-Oh) notation, is O(1). Each operation in the algorithm occurs once and takes 1 time unit, and since constant factors are ignored in asymptotic notation, a complexity of O(3) simplifies to O(1).
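For comparison with the C# examples used later in this lecture, here is a minimal C# sketch of the same program (the method name Difference is a naming choice made here, not part of the original pseudocode):

public static int Difference(int a, int b)
{
    int c = a - b; // 1 time unit for subtraction, 1 for assignment
    return c;      // 1 time unit for return
}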

Let’s evaluate another example that calculates the sum of items/elements in an array:

sumOfArray(A[], N)
    sum = 0
    for i = 0 to N-1
        sum = sum + A[i]
    return sum

The following table analyzes the cost of each operation and the number of times it is executed:

Operation            Cost (units)                                     Times executed
sum = 0              1                                                1
for i = 0 to N-1     2 (1 for the assignment, 1 for incrementing i)   N + 1
sum = sum + A[i]     2 (1 for the addition, 1 for the assignment)     N
return sum           1                                                1

The time complexity for the above program can be computed as:

T_sum = 1 + 2 * (N + 1) + 2 * N + 1
      = 4N + 4

The running time of this program is proportional to the size of the array, making it a linear running time: a linear function grows in proportion to the size of the input. Since constants are ignored when denoting time complexity, this running time is written using its highest-order term as O(N).
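Rendered as a C# sketch (SumOfArray is a name chosen here), the single loop that makes the running time linear is easy to see:

public static int SumOfArray(int[] A, int N)
{
    int sum = 0;                // executed once
    for (int i = 0; i < N; i++) // loop header evaluated N + 1 times
    {
        sum = sum + A[i];       // executed N times
    }
    return sum;                 // executed once
}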

For an algorithm that calculates the sum of elements in a matrix of size N by N, the
pseudocode would look like this:

sumOfMatrix(A[][], N)
    total = 0
    for i = 0 to N-1
        for j = 0 to N-1
            total = total + A[i][j]
    return total

Operation                 Cost (units)                                     Times executed
total = 0                 1                                                1
for i = 0 to N-1          2 (1 for the assignment, 1 for incrementing i)   N + 1
for j = 0 to N-1          2 (1 for the assignment, 1 for incrementing j)   N * (N + 1)
total = total + A[i][j]   2 (1 for the addition, 1 for the assignment)     N * N
return total              1                                                1

The outer loop header executes N + 1 times when iterating over the rows. The inner loop header executes N + 1 times for each of the N outer iterations, i.e. N * (N + 1) times in total, and the loop body executes once per cell, N * N times. The total time taken:

T_sumOfMatrix = 1 + 2 * (N + 1) + 2 * N * (N + 1) + 2 * N * N + 1
              = 4N^2 + 4N + 4

We ignore lower-order terms and the constant. This leaves us with N^2, which is a quadratic function.
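An equivalent C# sketch (SumOfMatrix is a name chosen here) shows the nested loops responsible for the N^2 term:

public static int SumOfMatrix(int[,] A, int N)
{
    int total = 0;
    for (int i = 0; i < N; i++)      // outer header: N + 1 evaluations
    {
        for (int j = 0; j < N; j++)  // inner header: N * (N + 1) evaluations in total
        {
            total = total + A[i, j]; // body: N * N executions
        }
    }
    return total;
}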

These growth rates can be plotted on a graph of running time against input size to compare how quickly each one rises.

Types of functions

1. Logarithmic: log n
2. Linear: n
3. Quadratic: n^2
4. Polynomial: n^c, for some constant c
5. Exponential: a^n

Asymptotic Notation

1. Big O Notation (O):
   a. Pronounced: "Big Oh"
   b. Represents the upper bound of an algorithm's running time.
   c. Describes the worst-case scenario.
   d. It gives the maximum time an algorithm can take to complete based on the input size.

Given that f(n) is your algorithm's running time and g(n) is an arbitrary time complexity you want to relate it to, f(n) is O(g(n)) if there exist real constants c (c > 0) and n0 such that f(n) <= c * g(n) for every input size n (n > n0).

For example:

f(n) = 3log n + 100

g(n) = log n

The above example is interpreted as follows:

Is f(n) O(g(n))? That is, is 3 log n + 100 O(log n)?

We need 3 log n + 100 <= c * log n for some c > 0 and all n > n0. Choosing c = 103 and n0 = 2 (so that log n >= 1):

3 log n + 100 <= 3 log n + 100 log n <= 103 log n

This satisfies the definition, so f(n) is O(g(n)).
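As a quick numerical illustration (not a proof), the sketch below tabulates f(n) against c * g(n) using the constants chosen above; the helper name CheckBigO is hypothetical:

public static void CheckBigO()
{
    const double c = 103;
    // Compare f(n) = 3 log n + 100 with c * g(n) = c * log n for n >= n0 = 2
    for (int n = 2; n <= 1024; n *= 2)
    {
        double f = 3 * Math.Log2(n) + 100;
        double g = Math.Log2(n);
        Console.WriteLine($"n = {n}: f(n) = {f}, c * g(n) = {c * g}");
    }
}

Every printed row satisfies f(n) <= c * g(n), consistent with the definition.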

Example in C#:
public void ExampleAlgo(int n)
{
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n; j++)
{
Console.WriteLine($"{i}, {j}"); // O(n^2)
}
}
}

Explanation:

• The method ExampleAlgo has a nested loop. The outer loop runs n times, and for each iteration of the outer loop, the inner loop also runs n times. Hence, the total number of iterations is n * n = n^2, resulting in a time complexity of O(n^2).
2. Big Ω Notation (Ω):
a. Pronounced: "Big Oh-mega"
b. Represents the lower bound of an algorithm's running time.
c. Describes the best-case scenario.
d. It gives the minimum time an algorithm can take to complete based on the
input size.

Example in C#:

public void ExampleAlgo(int n)
{
    if (n == 1)
        return; // Ω(1)

    for (int i = 0; i < n; i++)
    {
        Console.WriteLine(i); // Ω(n)
    }
}

Explanation:
• In the best-case scenario, if n is 1, the method returns immediately with a time complexity of Ω(1). In other scenarios, the method has a single loop running n times, resulting in a time complexity of Ω(n).
3. Big Θ Notation (Θ):
   a. Pronounced: "Big Theta"
   b. Represents the tight bound of an algorithm's running time.
   c. Describes the case where the upper and lower bounds share the same rate of growth.
   d. It indicates that the running time grows at the same rate in the best, average, and worst cases.

Example in C#:

public void ExampleAlgo(int n)
{
    for (int i = 0; i < n; i++)
    {
        int j = 1;
        while (j < n)
        {
            Console.WriteLine($"{i}, {j}"); // Θ(n log n) overall
            j *= 2;
        }
    }
}

Explanation:

• The outer loop runs n times, and within each iteration of the outer loop, the inner loop runs log(n) times (since j doubles each time). Hence, the total number of iterations is n * log(n), resulting in a time complexity of Θ(n log n).

Simplified Examples of Recursion

Recursion is a technique where a function calls itself to solve smaller instances of the
same problem. It is a powerful tool for solving complex problems by breaking them down
into simpler sub-problems.
Example 1: Factorial Calculation

Problem: Calculate the factorial of a number n, denoted as n!, which is the product of all
positive integers less than or equal to n.

Recursive Algorithm:

1. Base Case: If n is 0, return 1 (0! = 1).
2. Recursive Case: Multiply n by the factorial of n-1.

Example in C#:

using System;

public class RecursionExample
{
    public static int Factorial(int n)
    {
        if (n == 0) // Base Case
            return 1;
        else
            return n * Factorial(n - 1); // Recursive Case
    }

    public static void Main()
    {
        int number = 5;
        int result = Factorial(number);
        Console.WriteLine($"Factorial of {number} is {result}");
    }
}

Explanation:

• The Factorial method checks if n is 0. If true, it returns 1. Otherwise, it calls itself with n-1 and multiplies the result by n. This continues until the base case is reached.
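For example, Factorial(5) unwinds to 5 * 4 * 3 * 2 * 1 = 120, so the Main method above prints "Factorial of 5 is 120".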
Example 2: Fibonacci Sequence

Problem: Generate the Fibonacci sequence, where each number is the sum of the two
preceding ones, starting from 0 and 1.

Recursive Algorithm:

1. Base Case: If n is 0, return 0. If n is 1, return 1.
2. Recursive Case: Return the sum of Fibonacci of n-1 and Fibonacci of n-2.

Example in C#:

using System;

public class RecursionExample
{
    public static int Fibonacci(int n)
    {
        if (n == 0) // Base Case 1
            return 0;
        else if (n == 1) // Base Case 2
            return 1;
        else
            return Fibonacci(n - 1) + Fibonacci(n - 2); // Recursive Case
    }

    public static void Main()
    {
        int number = 7;
        int result = Fibonacci(number);
        Console.WriteLine($"Fibonacci of {number} is {result}");
    }
}

Explanation:
• The Fibonacci method checks if n is 0 or 1. If true, it returns 0 or 1 respectively.
Otherwise, it calls itself with n-1 and n-2, and returns their sum. This continues
until the base cases are reached.
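For example, with number = 7 the sequence unfolds as 0, 1, 1, 2, 3, 5, 8, 13, so the Main method above prints "Fibonacci of 7 is 13". Note that each call spawns two further calls, so this naive recursion performs an exponential amount of work, one of the growth rates discussed earlier.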

2.1: Divide-and-Conquer Algorithms

A) Merge Sort

Merge Sort is a classic divide-and-conquer algorithm that is used for sorting an array or a
list. It works by recursively dividing the array into smaller subarrays, sorting each subarray,
and then merging the sorted subarrays to form the final sorted array.

Steps of Merge Sort

1. Divide: Divide the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves to produce the sorted array.

Example:

Let's consider an array [38, 27, 43, 3, 9, 82, 10] to be sorted using merge sort.

1. Divide the array into two halves:

[38, 27, 43, 3] and [9, 82, 10]

2. Conquer each half by recursively applying merge sort:

[38, 27] and [43, 3] => [27, 38] and [3, 43]
[9, 82] and [10] => [9, 82] and [10] => [9, 10, 82]

3. Combine the sorted halves:

[27, 38] and [3, 43] => [3, 27, 38, 43]

4. Final combination:
[3, 27, 38, 43] and [9, 10, 82] => [3, 9, 10, 27, 38, 43, 82]

The final sorted array is [3, 9, 10, 27, 38, 43, 82].

Example in C#:

using System;

public class MergeSortExample
{
    public static void MergeSort(int[] array, int left, int right)
    {
        if (left < right)
        {
            // Find the midpoint without risking integer overflow
            int middle = left + (right - left) / 2;

            MergeSort(array, left, middle);      // Sort the left half
            MergeSort(array, middle + 1, right); // Sort the right half

            Merge(array, left, middle, right);   // Merge the two sorted halves
        }
    }

    public static void Merge(int[] array, int left, int middle, int right)
    {
        int n1 = middle - left + 1;
        int n2 = right - middle;

        // Copy the two halves into temporary arrays
        int[] leftArray = new int[n1];
        int[] rightArray = new int[n2];
        Array.Copy(array, left, leftArray, 0, n1);
        Array.Copy(array, middle + 1, rightArray, 0, n2);

        // Repeatedly take the smaller front element of the two halves
        int i = 0, j = 0;
        int k = left;
        while (i < n1 && j < n2)
        {
            if (leftArray[i] <= rightArray[j])
            {
                array[k] = leftArray[i];
                i++;
            }
            else
            {
                array[k] = rightArray[j];
                j++;
            }
            k++;
        }

        // Copy any remaining elements of the left half
        while (i < n1)
        {
            array[k] = leftArray[i];
            i++;
            k++;
        }

        // Copy any remaining elements of the right half
        while (j < n2)
        {
            array[k] = rightArray[j];
            j++;
            k++;
        }
    }

    public static void Main()
    {
        int[] array = { 38, 27, 43, 3, 9, 82, 10 };
        Console.WriteLine("Original array:");
        Console.WriteLine(string.Join(", ", array));

        MergeSort(array, 0, array.Length - 1);

        Console.WriteLine("Sorted array:");
        Console.WriteLine(string.Join(", ", array));
    }
}

Explanation:

• The MergeSort method recursively divides the array into two halves until the base
case (single element) is reached.
• The Merge method combines two sorted halves into a single sorted array.
• The Main method initializes an array, prints the original array, applies merge sort,
and then prints the sorted array.
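Connecting this back to the growth of functions: each level of recursion performs a linear amount of merging work, and the array can only be halved about log n times before reaching single elements, so merge sort runs in O(n log n) time, a much slower-growing function than the quadratic O(n^2) seen earlier.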
