
CMP 318 – ANALYSIS & COMPLEXITY OF ALGORITHMS

LECTURE NOTES

BY

CHARLES OKONJI
OUTLINE
• Overview of Algorithms
• Asymptotic Analysis
• Complexity Classes
• Time and Space Tradeoffs in Algorithms
• Analysis of Recursive Algorithms
• Numerical Algorithms
• Sequential and Binary Search Algorithms
• Sorting Algorithms
• Binary Search Trees
• Hash Tables
• Graphs and their Representation.
Overview of Algorithms
• Algorithm means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations", or "a
procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".
An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions
that carry out specified actions step by step in either hardware- or software-based routines.

• Uses of Algorithms


Some of the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and are used to solve problems ranging from simple sorting
and searching to complex tasks such as artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as finding the optimal solution to a system of linear
equations or finding the shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in fields such as transportation, logistics, and resource
allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and machine learning, and are used to develop
intelligent systems that can perform tasks such as image recognition, natural language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights from large amounts of data in fields such as marketing,
finance, and healthcare.
Overview of Algorithms
• Why Algorithms?
1. Algorithms are necessary for solving complex problems efficiently and effectively.
2. They help to automate processes and make them more reliable, faster, and easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult or impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science, engineering, finance, and many others to optimize processes,
analyze data, make predictions, and provide solutions to problems.

• Characteristics of an Algorithm
Not all written instructions for programming are an algorithm. For some instructions to be an algorithm, it must have the following
characteristics:
o Clear and Unambiguous: The algorithm should be unambiguous. Each of its steps should be clear in all aspects and must lead to only
one meaning.
o Well-Defined Inputs: If an algorithm is to take inputs, then these inputs should be well-defined.
o Well-Defined Outputs: The algorithm must clearly define what output will be yielded and it should be well-defined as well. Every
algorithm should produce at least 1 output.
o Finiteness: The algorithm must be finite, i.e. it should terminate after a finite time.
o Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not
contain some future technology or anything abstract.
o Language Independent: The Algorithm designed must be language-independent, i.e. it must be just plain instructions that can be
implemented in any language, and yet the output will be the same, as expected.
o Effectiveness: An algorithm must be developed by using very basic, simple, and feasible operations so that one can trace it out by
using just paper and pencil.
Overview of Algorithms

• Properties of Algorithm
o It should terminate after a finite time.
o It should produce at least one output.
o It should take zero or more input.
o It should be deterministic, i.e. it gives the same output for the same input.
o Every step in the algorithm must be effective, i.e. every step should do some work.

• Types of Algorithms
o Brute Force Algorithm: A straightforward approach that exhaustively tries all possible solutions, suitable for small problem instances
but may become impractical for larger ones due to its high time complexity.
o Recursive Algorithm: A method that breaks a problem into smaller, similar sub-problems and repeatedly applies itself to solve them
until reaching a base case, making it effective for tasks with recursive structures (see the sketch after this list).
o Encryption Algorithm: Utilized to transform data into a secure, unreadable form using cryptographic techniques, ensuring
confidentiality and privacy in digital communications and transactions.
o Backtracking Algorithm: A trial-and-error technique used to explore potential solutions by undoing choices when they lead to an
incorrect outcome, commonly employed in puzzles and optimization problems.
o Searching Algorithm: Designed to find a specific target within a dataset, enabling efficient retrieval of information from sorted or
unsorted collections.
o Sorting Algorithm: Aimed at arranging elements in a specific order, like numerical or alphabetical, to enhance data organization and
retrieval.
o Hashing Algorithm: Converts data into a fixed-size hash value, enabling rapid data access and retrieval in hash tables, commonly used
in databases and password storage.
o Divide and Conquer Algorithm: Breaks a complex problem into smaller subproblems, solves them independently, and then combines
their solutions to address the original problem effectively.
Overview of Algorithms
o Greedy Algorithm: Makes locally optimal choices at each step in the hope of finding a global optimum, useful for optimization
problems but may not always lead to the best solution.
o Dynamic Programming Algorithm: Stores and reuses intermediate results to avoid redundant computations, enhancing the efficiency
of solving complex problems.
o Randomized Algorithm: Utilizes randomness in its steps to achieve a solution, often used in situations where an approximate or
probabilistic answer suffices.
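
To make the recursive pattern above concrete, here is a minimal C++ sketch (the function and sample value are illustrative, not from the notes) that computes a factorial by shrinking the problem until a base case is reached:

#include <iostream>

// Recursive algorithm: reduce factorial(n) to the smaller
// sub-problem factorial(n - 1) until the base case n <= 1.
long long factorial(int n)
{
    if (n <= 1)                      // base case stops the recursion
        return 1;
    return n * factorial(n - 1);     // recursive case: smaller sub-problem
}

int main()
{
    std::cout << factorial(5) << std::endl; // prints 120
    return 0;
}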

• How do Algorithms Work?


Algorithms are step-by-step procedures designed to solve specific problems and perform tasks efficiently in the realm of computer science
and mathematics. These powerful sets of instructions form the backbone of modern technology and govern everything from web searches
to artificial intelligence.
Here's how algorithms work:
o Input: Algorithms take input data, which can be in various formats, such as numbers, text, or images.
o Processing: The algorithm processes the input data through a series of logical and mathematical operations, manipulating and
transforming it as needed.
o Output: After the processing is complete, the algorithm produces an output, which could be a result, a decision, or some other
meaningful information.
o Efficiency: A key aspect of algorithms is their efficiency, aiming to accomplish tasks quickly and with minimal resources.
o Optimization: Algorithm designers constantly seek ways to optimize their algorithms, making them faster and more reliable.
o Implementation: Algorithms are implemented in various programming languages, enabling computers to execute them and produce
desired outcomes.
Overview of Algorithms
• How to Write an Algorithm?
o There are no well-defined standards for writing algorithms; the process is, rather, resource- and problem-dependent. Algorithms are never
written with a specific programming language in mind.
o As you know, all programming languages share basic code constructs such as loops (do, for, while) and flow control (if-else), and so on. An
algorithm can be written using these common constructs.
o Algorithms are typically written in a step-by-step fashion, but this is not always the case. Algorithm writing is a process that occurs after
the problem domain has been well-defined. That is, you must be aware of the problem domain for which you are developing a solution.

Example: Create an algorithm that multiplies two numbers and displays the output.
Step 1 – Start
Step 2 − declare three integers x, y & z
Step 3 − define values of x & y
Step 4 − multiply values of x & y
Step 5 − store result of step 4 to z
Step 6 − print z
Step 7 − Stop

• OR
Step 1 − Start mul
Step 2 − get values of x & y
Step 3 − z ← x * y
Step 4 − display z
Step 5 – Stop

• In Algorithm Design and Analysis, the second method is typically used to describe an algorithm, as it allows the analyst to analyze the
algorithm easily while ignoring all unwanted definitions; they can see which operations are being used and how the process
is progressing. Writing step numbers is optional.
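
As an illustration, a direct translation of the second form into C++ might look like the sketch below (the sample values are assumptions, not from the notes):

#include <iostream>

int main()
{
    int x = 6, y = 7;                 // Step 2: get values of x & y (sample values)
    int z = x * y;                    // Step 3: z <- x * y
    std::cout << z << std::endl;      // Step 4: display z (prints 42)
    return 0;                         // Step 5: stop
}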
Overview of Algorithms
• To solve a given problem, you create an algorithm. A problem can be solved in a variety of ways. As a result, many solution algorithms
for a given problem can be derived.

• Thus, to evaluate the proposed solution algorithms and implement the most appropriate solution, the following factors are
considered:
o Modularity: This entails breaking down an algorithm into small modules or small steps, which is a basic feature of
an algorithm.
o Correctness: An algorithm is correct when the given inputs produce the desired output, indicating that the
algorithm was designed correctly and that its analysis was carried out correctly.
o Maintainability: It means that the algorithm should be designed in a straightforward, structured way so that when you redefine
the algorithm, no significant changes are made to the algorithm.
o Functionality: It takes into account various logical steps to solve a real-world problem.
o Robustness: Robustness refers to an algorithm's ability to clearly define and address the problem at hand.
o User-friendly: The algorithm should be easy to understand; otherwise the designer will struggle to explain it to the programmer.
o Simplicity: If an algorithm is simple, it is simple to understand.
o Extensibility: Your algorithm should be extensible if another algorithm designer or programmer wants to use it.
Overview of Algorithms
• Qualities of a Good Algorithm
o Efficiency: A good algorithm should perform its task quickly and use minimal resources.
o Correctness: It must produce the correct and accurate output for all valid inputs.
o Clarity: The algorithm should be easy to understand, making it maintainable and modifiable.
o Scalability: It should handle larger data sets and problem sizes without a significant decrease in performance.
o Reliability: The algorithm should consistently deliver correct results under different conditions and environments.
o Optimality: Striving for the most efficient solution within the given problem constraints.
o Robustness: Capable of handling unexpected inputs or errors gracefully without crashing.
o Adaptability: Ideally, it can be applied to a range of related problems with minimal adjustments.
o Simplicity: Keeping the algorithm as simple as possible while meeting its requirements, avoiding unnecessary complexity.
Complexity of Algorithms
• An algorithm's performance can be measured in two (2) ways:
o Time Complexity
This is the amount of time that an algorithm takes to produce a result as a function of the size of the input. It is calculated primarily by
counting the number of steps required to complete the execution.
• Example:
mul = 1;
// Suppose you have to calculate the product of n numbers.
for i = 1 to n
    mul = mul * i;
// when the loop ends, mul holds the product of the n numbers
return mul;
The time complexity of the loop statement in the above code is at least n, and as the value of n grows, so does the time complexity. The
return statement, by contrast, is constant time, because it does not depend on the value of n and produces its result in a single step.
The worst-case time complexity is generally considered because it is the maximum time required for any given input size.
o Space Complexity
This is the amount of space an algorithm requires to solve a problem and produce an output. The space is required for an algorithm for the
following reasons:
 To store program instructions.
 To keep track of constant values.
 To keep track of variable values.
 To keep track of function calls, jumping statements, and so on.

Space Complexity = Auxiliary Space + Input Size
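
For example, a function that builds a reversed copy of its input uses extra memory proportional to n. A minimal sketch (illustrative, not from the notes):

#include <iostream>
#include <vector>

// The temporary vector 'rev' is auxiliary space proportional to n,
// so Space Complexity = O(n) input + O(n) auxiliary = O(n).
std::vector<int> reversedCopy(const std::vector<int>& arr)
{
    std::vector<int> rev(arr.rbegin(), arr.rend()); // O(n) extra space
    return rev;
}

int main()
{
    std::vector<int> v = { 1, 2, 3, 4 };
    for (int x : reversedCopy(v))
        std::cout << x << ' ';       // prints: 4 3 2 1
    return 0;
}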


Complexity of Algorithms
• Algorithm designers strive to develop algorithms with the lowest possible time and memory complexities, since this makes them
more efficient and scalable.
• The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the amount of data the algorithm
must process.
• Writing an efficient algorithm helps keep the time needed to process the logic to a minimum.

• How to study the efficiency of algorithms?

One way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while
recording the time spent during each execution. An algorithm A is judged on the basis of the following two parameters, for an input of
size n:

1. Time Complexity: Time taken by the algorithm to solve the problem. It is measured by counting the iterations of loops, the number of
comparisons, etc. Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to
the algorithm.
“Time” can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some
inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take.

2. Space Complexity: Space taken by the algorithm to solve the problem. It includes space used by necessary input variables and any extra
space (excluding the space taken by inputs) that is used by the algorithm. For example, if we use a hash table (a kind of data structure), we
need an array to store values so this is an extra space occupied, hence will count towards the space complexity of the algorithm. This extra
space is known as Auxiliary Space.
Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the
algorithm.
Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as much of an issue as time.
Complexity of Algorithms
Examples with their Complexity Analysis
• Linear Search algorithm
Searching linearly for an element x in an array of length n:
// C implementation of the approach
#include <stdio.h>

// Linearly search x in arr[].
// If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver's code */
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    printf("%d is present at index %d", x,
           search(arr, n, x));

    getchar();
    return 0;
}
Complexity of Algorithms
Output
30 is present at index 2

Time Complexity Analysis: (In Big-O notation)


Best Case: O(1). This occurs when the element to be searched for is at the first index of the given list, so the number of comparisons in
this case is 1.
Average Case: O(n). This occurs when the element to be searched for is around the middle index of the given list.
Worst Case: O(n). This occurs when:
The element to be searched for is at the last index, or
The element to be searched for is not present in the list at all.

• Now consider summing an array of length n, where the behaviour depends on the following cases:
o If n is even, then the output will be 0.
o If n is odd, then the output will be the sum of the elements of the array.
Complexity of Algorithms
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

int getSum(int arr[], int n)
{
    if (n % 2 == 0) // (n) is even
    {
        return 0;
    }
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr[i];
    }
    return sum; // (n) is odd
}

// Driver's Code
int main()
{
    // Declaring two arrays, one of odd length and the other of
    // even length
    int arr[4] = { 1, 2, 3, 4 };
    int a[5] = { 1, 2, 3, 4, 5 };

    // Function call
    cout << getSum(arr, 4)
         << endl; // prints 0 because (n) is even
    cout << getSum(a, 5)
         << endl; // prints the sum because (n) is odd
}
Complexity of Algorithms
Output
0
15

Time Complexity Analysis:


Best Case: The order of growth will be constant, because in the best case we assume that (n) is even.
Average Case: In this case, we assume that even and odd lengths are equally likely, therefore the order of growth will be linear.
Worst Case: The order of growth will be linear, because in this case we assume that (n) is always odd.
Complexity of Algorithms
• Cases in complexities:
There are two commonly studied cases of complexity in algorithms:
 Best case complexity: The best-case scenario for an algorithm is the scenario in which the algorithm performs the minimum
amount of work (e.g. takes the shortest amount of time, uses the least amount of memory, etc.).

 Worst case complexity: The worst-case scenario for an algorithm is the scenario in which the algorithm performs the maximum
amount of work (e.g. takes the longest amount of time, uses the most amount of memory, etc.).

• In analyzing the complexity of an algorithm, it is often more informative to study the worst-case scenario, as this gives a guaranteed
upper bound on the performance of the algorithm. Best-case scenario analysis is sometimes performed, but is generally less important
as it provides a lower bound that is often trivial to achieve.
Analysis of Algorithms
• Algorithm analysis is an important part of computational complexity theory, which provides theoretical estimation of the
resources an algorithm requires to solve a specific computational problem. Analysis of algorithms is the determination of the amount of
time and space resources required to execute them.

Why Analysis of Algorithms?


o To predict the behavior of an algorithm without implementing it on a specific computer.
o It is much more convenient to have simple measures for the efficiency of an algorithm than to implement the algorithm and test the
efficiency every time a certain parameter in the underlying computer system changes.
o It is impossible to predict the exact behavior of an algorithm. There are too many influencing factors.
o The analysis is thus only an approximation; it is not perfect.
o More importantly, by analyzing different algorithms, we can compare them to determine the best one for our purpose.

Types of Algorithm Analysis:


o Best case: Define the input for which the algorithm takes the minimum time. In the best case we calculate the lower bound of an
algorithm. Example: in linear search, the best case occurs when the search item is present at the first location of a large data set.
o Worst case: Define the input for which the algorithm takes the maximum time. In the worst case we calculate the upper bound of an
algorithm. Example: in linear search, the worst case occurs when the search item is not present at all.
o Average case: In the average case, we take all random inputs, calculate the computation time for each, and then divide the total by
the number of inputs.
Average case = (sum of running times over all random cases) / (total number of cases)
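
As a small illustrative calculation (the numbers are assumptions, not from the notes): for linear search on an array of n = 3 elements, the four equally likely cases cost 1, 2, 3 and 3 comparisons (found at position 1, 2, or 3, or not found at all), so:

\text{Average case} = \frac{1 + 2 + 3 + 3}{4} = 2.25 \text{ comparisons}; \quad \text{in general } \frac{\left(\sum_{i=1}^{n} i\right) + n}{n+1} \approx \frac{n}{2}, \text{ i.e. linear in } n.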
Asymptotic Analysis of Algorithms
Given two algorithms for a task, how do we find out which one is better?
One naive way of doing this is to implement both algorithms and run the two programs on your computer for different inputs and
see which one takes less time. There are many problems with this approach to the analysis of algorithms:
o It might be possible that for some inputs, the first algorithm performs better than the second. And for some inputs second performs
better.
o It might also be possible that for some inputs, the first algorithm performs better on one machine, and the second works better on
another machine for some other inputs.

Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In Asymptotic Analysis, we evaluate the
performance of an algorithm in terms of input size (we don’t measure the actual running time). We calculate, how the time (or space)
taken by an algorithm increases with the input size.

To illustrate, let us consider the Search problem (searching a given item) in a sorted array. We can use either of:
 Linear Search (order of growth is linear), OR

 Binary Search (order of growth is logarithmic).

To understand how Asymptotic Analysis addresses the problems mentioned above in analyzing algorithms,
let us say:
we run the Linear Search on a fast computer A and the Binary Search on a slow computer B, and we then pick constant values for
the two computers so that they tell us exactly how long each machine takes to perform its search, in seconds.
Asymptotic Analysis of Algorithms
• Let us say the constant for A is 0.2 and the constant for B is 1000, which means that A is 5000 times more powerful than B.

Input Size    Running time on A    Running time on B
10            2 sec                ~1 h
100           20 sec               ~1.8 h
10^6          ~55.5 h              ~5.5 h
10^9          ~6.3 years           ~8.3 h

• For small values of input array size n, the fast computer may take less time. But, after a certain value of input array size, the Binary
Search will definitely start taking less time compared to the Linear Search even though the Binary Search is being run on a slow
machine. The reason is the order of growth of Binary Search with respect to input size is logarithmic while the order of growth of Linear
Search is linear. So, the machine-dependent constants can always be ignored after a certain value of input size.

Running times for this example:


Linear Search running time in seconds on A: 0.2 * n
Binary Search running time in seconds on B: 1000 * log2(n)
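
For reference, a standard iterative binary search on a sorted array (a sketch; these notes do not give its code) halves the remaining range at every step, which is exactly the logarithmic order of growth used above:

#include <iostream>

// Iterative binary search on a sorted array: each iteration halves
// the remaining range, so at most about log2(n) iterations run.
int binarySearch(const int arr[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (arr[mid] == x)
            return mid;               // found x
        else if (arr[mid] < x)
            lo = mid + 1;             // discard the left half
        else
            hi = mid - 1;             // discard the right half
    }
    return -1;                        // x is not present
}

int main()
{
    int arr[] = { 1, 10, 15, 30 };    // must be sorted
    std::cout << binarySearch(arr, 4, 30) << std::endl; // prints 3
    return 0;
}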
Asymptotic Notations
• Asymptotic Notation is a way to describe the running time or space complexity of an algorithm based on the input size. It is
commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows. The three most
commonly used notations are Big O, Omega, and Theta.

 Big O notation (O): This notation provides an upper bound on the growth rate of an algorithm’s running time or space usage. It
represents the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For
example, if an algorithm’s running time is O(n), then the running time of the algorithm grows at most linearly with the
input size n.

 Omega notation (Ω): This notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It
represents the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a problem. For
example, if an algorithm’s running time is Ω(n), then the running time of the algorithm grows at least linearly with the
input size n.

 Theta notation (Θ): This notation provides both an upper and lower bound on the growth rate of an algorithm’s running time or
space usage. It represents the average-case scenario, i.e., the amount of time or space an algorithm typically needs to solve a
problem. For example, if an algorithm’s running time is Θ(n), then it means that the running time of the algorithm increases
linearly with the input size n.

• In general, the choice of asymptotic notation depends on the problem and the specific algorithm used to solve it. It is important to
note that asymptotic notation does not provide an exact running time or space usage for an algorithm, but rather a description of
how the algorithm scales with respect to input size. It is a useful tool for comparing the efficiency of different algorithms and for
predicting how they will perform on large input sizes.
Asymptotic Notations
Measurement of Complexity of an Algorithm
Based on the above three notations, there are three (3) cases used to analyze an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an algorithm. We must know the case that causes a
maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x) is not
present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the
worst-case time complexity of the linear search would be O(n).

2. Best Case Analysis (Very Rarely used)


In the best-case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes a
minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location.
The number of operations in the best case is constant (not dependent on n). So time complexity in the best case would be Ω(1)

3. Average Case Analysis (Rarely used)


In average case analysis, we take all possible inputs and calculate the computing time for all of the inputs. Sum all the calculated values
and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let
us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and
divide the sum by (n+1). Following is the value of average-case time complexity.

Average Case Time = \sum_{i=1}^{n+1} \frac{\Theta(i)}{n+1} = \frac{\Theta\left(\frac{(n+1)(n+2)}{2}\right)}{n+1} = \Theta(n)


Asymptotic Notations
1. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an
algorithm.
o It is the most widely used notation for asymptotic analysis.
o It specifies the upper bound of a function.
o It gives the maximum time required by an algorithm, i.e. the worst-case time complexity.
o It returns the highest possible output value (big-O) for a given input.
o Big-O (worst case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time
possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all
n ≥ n0.
The execution time serves as an upper bound on the algorithm’s time complexity.
Asymptotic Notations
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely
say that the time complexity of Insertion Sort is O(n²) (a code sketch follows the examples below).
Note: O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
The worst-case time complexity of Insertion Sort is Θ(n²).
The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an
upper bound by simply looking at the algorithm.
Examples :
{ 100 , log (2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact or upper bounds.
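
To see where Insertion Sort's two bounds come from, here is a standard implementation sketch (not from the notes): the inner loop runs zero times per element on already-sorted input (linear best case) and up to i times on reverse-sorted input (quadratic worst case).

#include <iostream>

// Insertion sort: best case (already sorted) makes n-1 comparisons,
// i.e. Θ(n); worst case (reverse sorted) makes ~n^2/2 shifts, i.e. Θ(n^2).
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        // Shift larger elements one place right; this loop runs
        // 0 times if arr is sorted, i times if arr is reverse sorted.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main()
{
    int arr[] = { 30, 10, 15, 1 };
    insertionSort(arr, 4);
    for (int x : arr)
        std::cout << x << ' ';        // prints: 1 10 15 30
    return 0;
}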
Asymptotic Notations
2. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an
algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.
Let g and f be the function from the set of natural numbers to itself. The function f is said to be Ω(g), if there is a constant c > 0 and a
natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0
Asymptotic Notations
Mathematical Representation of Omega notation:
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort can be written as Ω(n), but it is not very useful
information about insertion sort, as we are generally interested in worst-case and sometimes in the average case.
Examples:
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log (2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω provides exact or lower bounds.
Asymptotic Notations
3. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an algorithm.
o Theta (average case): You add the running times for each possible input combination and take the average.
Let g and f be the function from the set of natural numbers to itself. The function f is said to be Θ(g), if there are constants c1, c2 > 0 and a
natural number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0

Asymptotic Notations
Mathematical Representation of Theta notation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be described as: if f(n) is theta of g(n), then the value of f(n) is always between c1 * g(n) and c2 * g(n) for large
values of n (n ≥ n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm’s time complexity.
It provides both the greatest and least boundaries for a given input value.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants. For example, consider
the expression 3n³ + 6n² + 6000 = Θ(n³) (a numeric check follows the examples below); dropping the lower-order terms is always fine
because there will always be a number n after which Θ(n³) has higher values than Θ(n²), irrespective of the constants involved. For a
given function g(n), Θ(g(n)) denotes the set of functions defined above.
Examples :
{ 100 , log (2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
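
As a quick check of the claim above (the constants below are chosen purely for illustration), c1 = 3, c2 = 9.6 and n0 = 10 witness 3n³ + 6n² + 6000 = Θ(n³):

3n^3 \;\le\; 3n^3 + 6n^2 + 6000 \;\le\; 3n^3 + 0.6n^3 + 6n^3 \;=\; 9.6\,n^3 \qquad \text{for all } n \ge 10,

since 6n² ≤ 0.6n³ and 6000 ≤ 6n³ whenever n ≥ 10.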
Properties of Asymptotic Notations:

1. General Properties:
If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.

Example:
f(n) = 2n²+5 is O(n²)
then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).
Similarly, this property holds for both Θ and Ω notations.

We can say,
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)), where a is a constant.

2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).

Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property holds for both Θ and Ω notations.

We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))

3. Reflexive Properties:
Reflexive properties are always easy to understand after transitive.
If f(n) is given then f(n) is O(f(n)), since the maximum value of f(n) will be f(n) itself.
Hence f(n) and O(f(n)) are always tied in a reflexive relation.

Example:
f(n) = n² is O(n²), i.e. O(f(n))
Similarly, this property holds for both Θ and Ω notations.

We can say that,


If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).
Properties of Asymptotic Notations:

4. Symmetric Properties:
If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).

Example:
If f(n) = n² and g(n) = n²
then, f(n) = Θ(n²) and g(n) = Θ(n²)
This property holds only for Θ notation.

5. Transpose Symmetric Properties:


If f(n) is O(g(n)) then g(n) is Ω (f(n)).

Example:
If f(n) = n and g(n) = n²
then n is O(n²) and n² is Ω(n)
This property holds only between O and Ω notations.

6. Some More Properties:


1. If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))

2. If f(n) = O(g(n)) and d(n)=O(e(n)) then f(n) + d(n) = O( max( g(n), e(n) ))

Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) + d(n) = n + n² i.e O(n²)

3. If f(n)=O(g(n)) and d(n)=O(e(n)) then f(n) * d(n) = O( g(n) * e(n))

Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) * d(n) = n * n² = n³ i.e O(n³)
