Data Structure Part 3

Algorithms are usually written in a step-by-step manner, although this is not always the case.

Algorithm writing is a process that is carried out after the problem domain is well-defined. That is,
we should know the problem domain for which we are designing a solution.
Example
Let's try to learn algorithm-writing by using an example.
Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Algorithms tell the programmers how to code the program. Alternatively, the algorithm
can be written as −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
In the design and analysis of algorithms, the second method is usually used to describe an
algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted
definitions, and to observe what operations are being used and how the process flows.
Writing step numbers is optional.
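For concreteness, the second form of the algorithm translates almost directly into code. The following is a minimal C sketch; the values of a and b are hypothetical, chosen only for illustration:

#include <stdio.h>

int main(void)
{
    int a = 5, b = 7;    /* Step 2 - get values of a & b */
    int c = a + b;       /* Step 3 - c <- a + b */
    printf("%d\n", c);   /* Step 4 - display c */
    return 0;            /* Step 5 - STOP */
}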
We design an algorithm to get a solution to a given problem. A problem can be solved
in more than one way.

Hence, many solution algorithms can be derived for a given problem. The next step is to
analyze those proposed solution algorithms and implement the best suitable solution.
1.8 ALGORITHM COMPLEXITY
Suppose X is an algorithm and n is the size of input data, the time and space used by the
algorithm X are the two main factors, which decide the efficiency of X.
 Time Factor − Time is measured by counting the number of key operations such
as comparisons in the sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space
required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space
required by the algorithm in terms of n as the size of input data.
1.8.1 Space Complexity
Space complexity of an algorithm represents the amount of memory space required by
the algorithm in its life cycle. The space required by an algorithm is equal to the sum of
the following two components −
 A fixed part that is a space required to store certain data and variables, that are
independent of the size of the problem. For example, simple variables and
constants used, program size, etc.
 A variable part is a space required by variables, whose size depends on the size of
the problem. For example, dynamic memory allocation, recursion stack space,
etc.
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part
and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
Following is a simple example that tries to explain the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables A, B, and C and one constant. Hence S(P) = 1 + 3. The
actual space depends on the data types of the given variables and constants, and is
multiplied accordingly.
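To illustrate the variable part, consider a recursive sum over an array. This is a hedged sketch; the function name and signature are hypothetical:

/* Fixed part: a few scalar variables, independent of n.
   Variable part: the recursion stack grows with n, one frame per
   call, so the space requirement is O(n). */
int sum_recursive(const int *a, int n)
{
    if (n == 0)
        return 0;                               /* base case */
    return a[n - 1] + sum_recursive(a, n - 1);  /* n stacked calls */
}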

1.8.2 Time Complexity


Time complexity of an algorithm represents the amount of time required by the
algorithm to run to completion. Time requirements can be defined as a numerical
function T(n), where T(n) can be measured as the number of steps, provided each step
consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total
computational time is T(n) = c ∗ n, where c is the time taken for the addition of two bits.
Here, we observe that T(n) grows linearly as the input size increases.
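Counting the steps of a simple loop gives the same linear shape. A hedged C sketch (sum_array is a hypothetical name):

/* The loop body executes n times, so the running time is
   T(n) = c * n for some constant c, i.e., it grows linearly. */
long sum_array(const int *a, int n)
{
    long total = 0;        /* constant-time setup */
    for (int i = 0; i < n; i++)
        total += a[i];     /* executed n times */
    return total;          /* constant-time finish */
}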

1.9 ALGORITHM ANALYSIS


Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −
 A Priori Analysis or Performance or Asymptotic Analysis − This is a theoretical
analysis of an algorithm. Efficiency of an algorithm is measured by assuming
that all other factors, for example, processor speed, are constant and have no
effect on the implementation.
 A Posteriori Analysis or Performance Measurement − This is an empirical
analysis of an algorithm. The selected algorithm is implemented using a
programming language and executed on a target computer. In this analysis,
actual statistics, such as running time and space required, are collected.
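A posteriori measurement can be as simple as timing a run. A hedged C sketch using the standard clock() facility; it times the sum_array sketch from above (compiled together with that definition), and the array size N is hypothetical:

#include <stdio.h>
#include <time.h>

long sum_array(const int *a, int n);   /* from the earlier sketch */

int main(void)
{
    enum { N = 1000000 };
    static int a[N];                    /* zero-initialized test input */
    clock_t start = clock();
    long total = sum_array(a, N);       /* the operation being measured */
    clock_t end = clock();
    printf("total=%ld, time=%.3fs\n", total,
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}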
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the
execution or running time of various operations involved. The running time of an
operation can be defined as the number of computer instructions executed per operation.
Analysis of an algorithm is required to determine the amount of resources such as time
and storage necessary to execute the algorithm. Usually, the efficiency or running time of
an algorithm is stated as a function which relates the input length to the time complexity
or space complexity.

The algorithm analysis framework involves finding the time taken and the memory space
required by a program to execute. It also determines how the input size of a program
influences its running time.

In theoretical analysis of algorithms, it is common to estimate their complexity in the
asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input.
Big-O notation, Omega notation, and Theta notation are used to estimate the complexity
function for arbitrarily large input.

1.9.1 Types of Analysis

The efficiency of some algorithms may vary for inputs of the same size. For
such algorithms, we need to differentiate between the worst case, average case
and best case efficiencies.

1.9.1.1 Best Case Analysis


If an algorithm takes the least amount of time to execute for a specific set of
inputs, then that is called the best case time complexity. The best case efficiency
of an algorithm is its efficiency for the best case input of size n: for this input,
the algorithm runs the fastest among all possible inputs of the same size.

1.9.1.2 Average Case Analysis

The time complexity of an algorithm averaged over all possible inputs of a given size is
called the average case time complexity.

Average case analysis provides necessary information about an algorithm's
behavior on a typical or random input. You must make some assumption about
the distribution of the possible inputs of size n to analyze the average case
efficiency of an algorithm.

1.9.1.3 Worst Case Analysis

If an algorithm takes the maximum amount of time to execute for a specific set of inputs,
then that is called the worst case time complexity. The worst case efficiency of an
algorithm is its efficiency for the worst case input of size n: for this input, the algorithm
runs the longest among all possible inputs of the same size.
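Linear search is a standard illustration of the three cases. A hedged C sketch (the function name is hypothetical):

/* Best case: key is at a[0] -> 1 comparison.
   Worst case: key is absent -> n comparisons.
   Average case: if key is equally likely at any position,
   about n/2 comparisons. */
int linear_search(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* found: return the index */
    return -1;          /* not found */
}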

1.10 MATHEMATICAL NOTATION

Algorithms are widely used in various areas of study, and the same algorithm can solve
different problems. Therefore, all algorithms must be described in a standard way.
Mathematical notations use symbols or symbolic expressions that have a precise
semantic meaning.

1.10.1 Asymptotic Notations

A problem may have various algorithmic solutions. In order to choose the best algorithm
for a particular process, you must be able to judge the time taken to run a particular
solution. More accurately, you must be able to judge the time taken to run two solutions,
and choose the better among the two.
To select the best algorithm, it is necessary to check the efficiency of each algorithm. The
efficiency of each algorithm can be checked by computing its time complexity. The
asymptotic notations help to represent the time complexity in a shorthand way: the
running time can generally be characterized as the fastest possible, slowest possible, or
average possible. The notations O (Big-O), Ω (Omega), and θ (Theta) are called
asymptotic notations. These are the mathematical notations used in the three different
cases of time complexity.

1.10.1.1 Big-O Notation

‘O’ is the representation for Big-O notation. Big-O is the method used to express the
upper bound of the running time of an algorithm. It is used to describe the performance or
time complexity of the algorithm. Big-O specifically describes the worst-case scenario
and can be used to describe the execution time required or the space used by the
algorithm.

Table 2.1 gives some names and examples of the common orders used to
describe functions. These orders are ranked from slowest-growing (top) to
fastest-growing (bottom).

Table 2.1: Common Orders

    Time complexity   Name           Example
1   O(1)              Constant       Adding to the front of a linked list
2   O(log n)          Logarithmic    Finding an entry in a sorted array
3   O(n)              Linear         Finding an entry in an unsorted array
4   O(n log n)        Linearithmic   Sorting ‘n’ items by ‘divide-and-conquer’
5   O(n^2)            Quadratic      Shortest path between two nodes in a graph
6   O(n^3)            Cubic          Simultaneous linear equations
7   O(2^n)            Exponential    The Towers of Hanoi problem

Big-O notation is generally used to express an ordering property among
functions. This notation helps in calculating the maximum amount of time
taken by an algorithm to compute a problem. Big-O is defined as:

f(n) ≤ c ∗ g(n) for all n ≥ n0

where n can be any number of inputs or outputs and f(n) as well as g(n) are
two non-negative functions. The definition holds only if there exist a positive
constant c and a non-negative integer n0 such that the inequality is satisfied
for every n ≥ n0.

Big-O can also be written as f(n) = O(g(n)), meaning that f(n) is bounded
above by some constant multiple of g(n) for all sufficiently large n.
The graphical representation of f(n) = O(g(n)) is shown in figure 1.8, where
the running time increases considerably when n increases.
Example: Consider f(n) = 15n^3 + 40n^2 + 2n log n + 2n. As the value of n
increases, n^3 becomes much larger than n^2, n log n, and n. Hence, it
dominates the function f(n) and we can consider the running time to grow by
the order of n^3. Therefore, it can be written as f(n) = O(n^3).
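One way to make the domination argument precise is a worked bound; the constants below are one possible choice, not the only one:

\[
15n^3 + 40n^2 + 2n\log n + 2n \;\le\; (15 + 40 + 2 + 2)\,n^3 \;=\; 59\,n^3
\quad \text{for all } n \ge 1,
\]

so c = 59 and n0 = 1 witness f(n) = O(n^3).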

The inequality f(n) ≤ c ∗ g(n) is only required to hold for n ≥ n0; therefore, the
values of n less than n0 are not considered relevant.

Figure 1.8: Big-O Notation f(n) = O(g(n))

Let us take an example to understand the Big-O notation more clearly.

Example:
Consider function f(n) = 2n + 2 and g(n) = n^2.

We need to find a constant c such that f(n) ≤ c ∗ g(n).

Let n = 1, then
f(n) = 2n + 2 = 2(1) + 2 = 4
g(n) = n^2 = 1^2 = 1
Here, f(n) > g(n)
Let n = 2, then
f(n) = 2n + 2 = 2(2) + 2 = 6
g(n) = n^2 = 2^2 = 4
Here, f(n) > g(n)
Let n = 3, then
f(n) = 2n + 2 = 2(3) + 2 = 8
g(n) = n^2 = 3^2 = 9
Here, f(n) < g(n)
Thus, when n is greater than 2, we get f(n) < g(n). Taking c = 1 and n0 = 3, we have
f(n) ≤ c ∗ g(n) for all n ≥ n0, so f(n) = O(n^2). This shows how Big-O helps to
determine the ‘upper bound’ of the algorithm’s run-time.

Limitations of Big O Notation


There are certain limitations with the Big O notation for expressing the complexity of
algorithms. These limitations are as follows:
 Many algorithms are simply too hard to analyse mathematically.
 There may not be sufficient information to calculate the behaviour of the
algorithm in the average case.
 Big O analysis only tells us how the algorithm grows with the size of the problem,
not how efficient it is, as it does not consider the programming effort.
 It ignores important constants. For example, if one algorithm takes O(n^2) time to
execute and the other takes O(100000 n^2) time to execute, then as per Big O, both
algorithms have equal time complexity. In real-time systems, this may be a serious
consideration.

1.10.1.2 Omega Notation

‘Ω’ is the representation for Omega notation. Omega describes the manner in which an
algorithm performs in the best case time complexity. This notation provides the minimum
amount of time taken by an algorithm to compute a problem. Thus, omega is considered
to give the "lower bound" of the algorithm's run-time. Omega is defined as:

f(n) ≥ c ∗ g(n) for all n ≥ n0

where n is any number of inputs or outputs and f(n) and g(n) are two non-negative
functions. The definition holds only if there exist a positive constant c and a non-negative
integer n0.

Omega can also be denoted as f(n) = Ω(g(n)), where f of n is equal to Omega of g of n.

The graphical representation of f(n) = Ω(g(n)) is shown in figure 1.9. The function f(n) is
said to be in Ω(g(n)) if f(n) is bounded below by some constant multiple of g(n) for all
large values of n, i.e., if there exist some positive constant c and some non-negative
integer n0 such that f(n) ≥ c ∗ g(n) for all n ≥ n0.

Figure 1.9: Omega Notation f(n) = Ω(g(n))


Let us take an example to understand the Omega notation more clearly.

Example:
Consider function f(n) = 2n^2 + 5 and g(n) = 7n.
We need to find a constant c such that f(n) ≥ c ∗ g(n).

Let n = 0, then
f(n) = 2n^2 + 5 = 2(0)^2 + 5 = 5
g(n) = 7n = 7(0) = 0
Here, f(n) > g(n)
Let n = 1, then
f(n) = 2n^2 + 5 = 2(1)^2 + 5 = 7
g(n) = 7n = 7(1) = 7
Here, f(n) = g(n)
Let n = 2, then
f(n) = 2n^2 + 5 = 2(2)^2 + 5 = 13
g(n) = 7n = 7(2) = 14
Here, f(n) < g(n)
Let n = 3, then
f(n) = 2n^2 + 5 = 2(3)^2 + 5 = 23
g(n) = 7n = 7(3) = 21
Here, f(n) > g(n), and f(n) stays above g(n) for every n ≥ 3.

Thus, with c = 1 and n0 = 3, we get f(n) ≥ c ∗ g(n) for all n ≥ n0, i.e.,
f(n) = Ω(n). This shows how Omega helps to determine the "lower bound" of the
algorithm's run-time.
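The claim for all n ≥ 3 can be checked algebraically (a worked verification):

\[
2n^2 + 5 - 7n \;=\; (2n - 5)(n - 1) \;\ge\; 0 \quad \text{for all } n \ge 2.5,
\]

so in particular f(n) ≥ 1 ∗ g(n) for every integer n ≥ 3.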

1.10.1.3 Theta Notation

'θ' is the representation for Theta notation. Theta notation is used when the
upper bound and lower bound of an algorithm are of the same order of
magnitude. Theta can be defined as:

c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n) for all n ≥ n0

where n is any number of inputs or outputs and f(n) and g(n) are two non-negative
functions. The definition holds only if there exist two positive constants c1 and c2
and a non-negative integer n0.

Theta can also be denoted as f(n) = θ(g(n)), where f of n is equal to Theta of g
of n. The graphical representation of f(n) = θ(g(n)) is shown in figure 1.10. The
function f(n) is said to be in θ(g(n)) if f(n) is bounded both above and below
by some positive constant multiples of g(n) for all large values of n, i.e., if
there exist some positive constants c1 and c2 and some non-negative integer
n0 such that c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n) for all n ≥ n0.

Figure 1.10: Theta Notation f(n) = θ(g(n))

Let us take an example to understand the Theta notation more clearly.

Example: Consider f(n) = 4n + 3 and g(n) = n, with candidate constants c1 = 4
(so c1 ∗ g(n) = 4n) and c2 = 5 (so c2 ∗ g(n) = 5n), for all n ≥ 3.

Then the result of the function will be:

Let n = 3
f(n) = 4n + 3 = 4(3) + 3 = 15
c1 ∗ g(n) = 4n = 4(3) = 12 and
c2 ∗ g(n) = 5n = 5(3) = 15
Here, c1 is 4, c2 is 5 and n0 is 3.
Thus, from the above we get c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n) for all n ≥ n0, i.e.,
f(n) = θ(n). This concludes that Theta notation depicts a running time sandwiched
between the upper bound and the lower bound.
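The sandwich can be verified for all n ≥ 3, not just n = 3 (a worked check):

\[
4n \;\le\; 4n + 3 \;\le\; 5n \quad \text{for all } n \ge 3,
\]

since 4n + 3 ≤ 5n is equivalent to n ≥ 3, and 4n ≤ 4n + 3 always holds. Hence f(n) = θ(n) with c1 = 4, c2 = 5, n0 = 3.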

1.11 ALGORITHM DESIGN TECHNIQUES

1.11.1 Divide and Conquer
1.11.2 Backtracking
1.11.3 Dynamic programming

1.11.1 Divide and Conquer

Introduction

The Divide and Conquer approach works by breaking the problem into subproblems
that are similar to the original problem but smaller in size and simpler to solve. Once
divided, the subproblems are solved recursively, and the solutions of the subproblems
are then combined to create a solution to the original problem.

At each level of the recursion, the divide and conquer approach follows three steps:
Divide: In this step, the whole problem is divided into several subproblems.
Conquer: The subproblems are solved recursively; a subproblem small enough to be
solved directly is solved at once, otherwise step 1 is applied to it again.
Combine: In this final step, the solutions obtained for the subproblems are combined to
create the solution to the original problem.

Generally, we can follow the divide-and-conquer approach in a three-step process.

Examples: The following computer algorithms are based on the divide and conquer
approach:

1. Maximum and Minimum Problem
2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi

Fundamentals of Divide & Conquer Strategy:

There are two fundamentals of the Divide & Conquer strategy:

1. Relational Formula
2. Stopping Condition

1. Relational Formula: It is the formula that we generate from the given technique.
After generating the formula, we apply the D&C strategy, i.e., we break the problem
recursively and solve the resulting subproblems.

2. Stopping Condition: When we break the problem using the Divide & Conquer
strategy, we need to know how long to keep applying it. The condition at which we
stop the recursion steps of D&C is called the stopping condition.

Applications of Divide and Conquer Approach:

The following algorithms are based on the concept of the divide and conquer technique:

1. Binary Search: The binary search algorithm is a searching algorithm, also
called a half-interval search or logarithmic search. It works by comparing the
target value with the middle element of a sorted array. If the values differ, the
half that cannot contain the target is eliminated, and the search continues on
the other half: we again take the middle element and compare it with the target
value. The process repeats until the target value is found. If the remaining half
becomes empty, it can be concluded that the target is not present in the array
(see the sketch after this list).
2. Quicksort: It is one of the most efficient sorting algorithms and is also known as
partition-exchange sort. It starts by selecting a pivot value from the array and
then divides the rest of the array elements into two sub-arrays. The partition is
made by comparing each element with the pivot value: it checks whether the
element holds a greater or lesser value than the pivot, and then the sub-arrays
are sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons.
It starts by dividing the array into sub-arrays and then recursively sorts each of
them. After the sorting is done, it merges them back together.
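A minimal C sketch of the binary search just described (iterative form; a recursive form would make the divide-and-conquer structure more explicit, but this version is shorter):

/* Searches a sorted array for key.
   Returns the index of key, or -1 if key is not present. */
int binary_search(const int *a, int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {                  /* search space not yet empty */
        int mid = low + (high - low) / 2;  /* middle element, overflow-safe */
        if (a[mid] == key)
            return mid;                    /* target met */
        else if (a[mid] < key)
            low = mid + 1;                 /* eliminate the lower half */
        else
            high = mid - 1;                /* eliminate the upper half */
    }
    return -1;                             /* target is not in the array */
}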

Advantages of Divide and Conquer


o Divide and conquer successfully solves some famously hard problems, such
as the Tower of Hanoi, a mathematical puzzle. It is challenging to solve
complicated problems for which you have no basic idea, but the divide and
conquer approach lessens the effort, as it works by dividing the main problem
into smaller subproblems and then solving them recursively. The resulting
algorithms are often much faster than naive alternatives.
o It can use cache memory efficiently without occupying much space, because
small subproblems can be solved entirely within the cache instead of accessing
the slower main memory.

Disadvantages of Divide and Conquer

o Since most of its algorithms are designed using recursion, divide and conquer
requires careful memory management.
o An explicit stack may overuse space.
o It may even crash the program if the recursion depth exceeds the capacity of
the call stack.

1.11.2 Backtracking
Introduction

Backtracking is an algorithmic method for solving a problem by trying out possibilities
one at a time. It uses a recursive approach to explore candidate solutions, and it can be
used to find all possible combinations when solving an optimization problem.

Backtracking is a systematic way of trying out different sequences of decisions until we
find one that "works."

Backtracking can be understood as searching a tree for a particular "goal" leaf node.
A tree is composed of nodes:

o Each non-leaf node in the tree is a parent of one or more other nodes (its children)
o Each node in the tree, other than the root, has exactly one parent
Generally, we draw our trees downward, with the root at the top.

Backtracking is undoubtedly quite simple - we "explore" each node, as follows:

To "explore" node N:
1. If N is a goal node, return "success"
2. If N is a leaf node, return "failure"
3. For each child C of N,
Explore C
If C was successful, return "success"
4. Return "failure"
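The routine above maps directly onto code. A hedged C sketch, assuming a hypothetical Node type with child pointers and a goal flag:

typedef struct Node {
    struct Node **children;   /* array of pointers to child nodes */
    int n_children;           /* 0 for a leaf node */
    int is_goal;              /* nonzero if this is a goal node */
} Node;

/* Returns 1 for "success", 0 for "failure". */
int explore(const Node *n)
{
    if (n->is_goal)
        return 1;                         /* goal node: success */
    if (n->n_children == 0)
        return 0;                         /* non-goal leaf: failure */
    for (int i = 0; i < n->n_children; i++)
        if (explore(n->children[i]))      /* explore each child C */
            return 1;                     /* C was successful */
    return 0;                             /* all children failed: backtrack */
}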

The backtracking algorithm determines the solution by systematically searching the
solution space of the given problem. Backtracking is a depth-first search with a bounding
function. Every solution produced by backtracking must satisfy a complex set of
constraints. The constraints may be explicit or implicit.

An explicit constraint is a rule that restricts each vector element to be chosen from a
given set.

An implicit constraint is a rule that determines which of the tuples in the solution
space actually satisfy the criterion function.
1.11.3 Dynamic programming

The dynamic programming technique is similar to the divide-and-conquer technique.
Both techniques solve a problem by breaking it down into several subproblems that can
be solved recursively. The main difference between them is that the divide-and-conquer
approach partitions the problem into independent subproblems, solves the subproblems
recursively, and then combines their solutions to solve the original problem, whereas
dynamic programming is applicable when the subproblems are not independent, that is,
when subproblems share sub-subproblems. Also, a dynamic programming algorithm
solves every subproblem just once and then saves its answer in a table, thereby avoiding
the work of recomputing the answer every time the sub-subproblem is encountered.

Therefore "Dynamic programming is a applicable when sub problem are not


independent, that is when sub problem share sub problems."

Like the greedy approach, dynamic programming is typically applied to optimization
problems, for which there can be many possible solutions and the requirement is to
find the optimal solution among them. But the dynamic programming approach differs
a little from the greedy approach. In the greedy approach, solutions are computed by
making choices in a serial, forward way, with no backtracking or revision of choices,
whereas dynamic programming computes its solution bottom-up, by producing it from
smaller subproblems and by trying many possibilities and choices before it arrives at
the optimal set of choices.

The development of a dynamic-programming algorithm can be broken into a sequence of
four steps:
Divide (subproblems): The main problem is divided into several smaller subproblems,
and the solution of the main problem is expressed in terms of the solutions for the
smaller subproblems. Basically, this step characterizes the structure of an optimal
solution and recursively defines the value of an optimal solution.
Table (storage): The solution for each subproblem is stored in a table, so that it can be
reused as many times as required.
Combine (bottom-up computation): The solution to the main problem is obtained by
combining the solutions of the smaller subproblems, i.e., the value of an optimal
solution is computed in a bottom-up fashion.
Construct an optimal solution from the computed information. (This step is optional
and is required only if some additional information is needed after finding the optimal
solution.)

Now, for any problem to be solvable through the dynamic programming approach, it
must satisfy the following conditions:
Principle of Optimality: It states that for solving the master problem optimally, its
subproblems should be solved optimally. It should be noted that not every subproblem
is always solved optimally; in such cases, we should go for the optimal majority.
Polynomial Breakup: For solving the main problem, the problem is divided into several
subproblems, and for efficient performance of dynamic programming the total number
of subproblems to be solved should be at most polynomial.

Various algorithms which make use of the dynamic programming technique are as follows:
1. Knapsack problem
2. Chain matrix multiplication
3. All-pairs shortest path
4. Travelling salesman problem
5. Tower of Hanoi
6. Checkerboard
7. Fibonacci sequence
8. Assembly line scheduling
9. Optimal binary search trees
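As a small illustration of the table and bottom-up computation steps, here is a hedged C sketch for the Fibonacci sequence (item 7 above): each subproblem is solved once, stored, and then reused.

/* Bottom-up dynamic programming for Fibonacci numbers.
   table[i] stores the answer to subproblem fib(i), computed once. */
long fib(int n)
{
    if (n < 2)
        return n;              /* base subproblems */
    long table[n + 1];         /* storage: one entry per subproblem */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];  /* combine smaller answers */
    return table[n];           /* solution to the main problem */
}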

1.12 SUMMARY
A data structure is a particular way of storing and organizing data either in a computer's
memory or on disk storage so that it can be used efficiently.
There are two types of data structures: primitive and non-primitive data structures.
Primitive data structures are the fundamental data types which are supported by a
programming language. Nonprimitive data structures are those data structures which are
created using primitive data structures.
Non-primitive data structures can further be classified into two categories: linear and
non-linear data structures.
If the elements of a data structure are stored in a linear or sequential order, then it is a
linear data structure. However, if the elements of a data structure are not stored in
sequential order, then it is a non-linear data structure.
An array is a collection of similar data elements which are stored in consecutive memory
locations.
A linked list is a linear data structure consisting of a group of elements (called nodes)
which together represent a sequence.
A stack is a last-in, first-out (LIFO) data structure in which insertion and deletion of
elements are done at only one end, which is known as the top of the stack.
A queue is a first-in, first-out (FIFO) data structure in which the element that is inserted
first is the first to be taken out. The elements in a queue are added at one end called the
rear and removed from the other end called the front.
A tree is a non-linear data structure which consists of a collection of nodes arranged in a
hierarchical tree structure.
A graph is often viewed as a generalization of the tree structure, where instead of a purely
parent-to-child relationship between tree nodes, any kind of complex relationships can
exist between the nodes.
An abstract data type (ADT) is the way we look at a data structure, focusing on what it
does and ignoring how it does its job.
An algorithm is basically a set of instructions that solve a problem.
The time complexity of an algorithm is basically the running time of the program as a
function of the input size.
The space complexity of an algorithm is the amount of computer memory required during
the program execution as a function of the input size.
The worst-case running time of an algorithm is an upper bound on the running time for
any input.
The average-case running time specifies the expected behaviour of the algorithm when
the input is randomly drawn from a given distribution.
The efficiency of an algorithm is expressed in terms of the number of elements that have
to be processed and the type of loop that is being used.

1.13 MODEL QUESTIONS


1. Define data structures. Give some examples.
2. In how many ways can you categorize data structures? Explain each of them.
3. Discuss the applications of data structures.
4. Write a short note on different operations that can be performed on data
structures.
5. Write a short note on abstract data type.
6. Explain the different types of data structures. Also discuss their merits and
demerits.
7. Define an algorithm. Explain its features with the help of suitable examples.
8. Explain and compare the approaches for designing an algorithm.
9. What do you understand by a graph?
10. Explain the criteria that you will keep in mind while choosing an appropriate
algorithm to solve a particular problem.
11. What do you understand by time–space trade-off?
12. What do you understand by the efficiency of an algorithm?
13. How will you express the time complexity of a given algorithm?
14. Discuss the significance and limitations of the Big O notation.
15. Discuss the best case, worst case and average case complexity of an
algorithm.
16. Categorize algorithms based on their running time complexity.
17. Give examples of functions that are in Big O notation as well as functions that
are not in Big O notation.
18. Explain the Ω notation.
19. Give examples of functions that are in Ω notation as well as functions that are
not in Ω notation.
20. Explain the Θ notation.
21. Give examples of functions that are in Θ notation as well as functions that are
not in Θ notation.
22. Explain the ω notation.
23. Give examples of functions that are in ω notation as well as functions that are
not in ω notation.
