Understanding Algorithms and Their Analysis

An algorithm is a step-by-step procedure for solving a problem, characterized by clarity, defined inputs and outputs, finiteness, feasibility, and independence from programming languages. It can be analyzed for efficiency through a priori and a posteriori methods, focusing on time and space complexity, which are critical for evaluating performance. Asymptotic notation (Big O, Big Theta, Big Omega) is used to describe the efficiency of algorithms in terms of their growth rates, helping to compare their performance under various conditions.

Algorithm

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e., an algorithm can be implemented in more than one programming language.

From the data structure point of view, following are some important categories of
algorithms −

 Search − Algorithm to search for an item in a data structure.
 Sort − Algorithm to sort items in a certain order.
 Insert − Algorithm to insert an item in a data structure.
 Update − Algorithm to update an existing item in a data structure.
 Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the
following characteristics −

 Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
 Finiteness − An algorithm must terminate after a finite number of steps.
 Feasibility − An algorithm should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions that are independent of any programming code.

How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, writing is problem- and resource-dependent. Algorithms are never written to support a particular programming code.

All programming languages share basic code constructs such as loops (do, for, while) and flow control (if-else). These common constructs can be used to write an algorithm.

Algorithms are usually written in a step-by-step manner, but that is not a strict requirement. Algorithm writing is a process that begins after the problem domain is well defined. That is, we should know the problem domain for which we are designing a solution.

Example
Let's try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.

Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP

Algorithms tell the programmers how to code the program. Alternatively, the
algorithm can be written as −

Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP

In the design and analysis of algorithms, the second method is usually used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions; the analyst can observe which operations are being used and how the process flows.

Writing step numbers is optional.
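The ADD pseudocode above can be sketched as a small Python function (the name `add` is illustrative, not from the source):

```python
def add(a, b):
    """ADD: get values of a and b, compute c <- a + b, return c."""
    c = a + b   # Step 3 - c <- a + b
    return c    # Step 4 - display c

print(add(2, 3))  # prints 5
```

The function mirrors the pseudocode one step per line, which is why this style is convenient for analysis.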

We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way.
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −

 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by assuming that all other factors, for example, processor speed, are constant and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target computer. In this analysis, actual statistics, such as running time and space required, are collected.

We shall learn about a priori algorithm analysis. Algorithm analysis deals with the
execution or running time of various operations involved. The running time of an
operation can be defined as the number of computer instructions executed per
operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used
by the algorithm X are the two main factors, which decide the efficiency of X.

 Time Factor − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space required by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the storage
space required by the algorithm in terms of n as the size of input data.

Space Complexity
Space complexity of an algorithm represents the amount of memory space required
by the algorithm in its life cycle. The space required by an algorithm is equal to the
sum of the following two components −

 A fixed part, i.e., the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
 A variable part, i.e., the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + S_P(I), where C is the fixed part and S_P(I) is the variable part of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept −

Algorithm: SUM(A, B)
Step 1 − START
Step 2 − C ← A + B + 10
Step 3 − Stop

Here we have three variables A, B, and C, and one constant (10). Hence S(P) = 1 + 3. The actual space depends on the data types of the given variables and constants, and is multiplied accordingly.
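The split between the fixed and variable parts can be seen in a minimal Python sketch (the function `build_list` is a hypothetical example, not from the source):

```python
def build_list(n):
    """Fixed part: the variables n, i, and result exist regardless of input size.
    Variable part: the list grows with n, so total space is S(P) = C + n."""
    result = []
    for i in range(n):
        result.append(i)  # each iteration adds one element's worth of space
    return result
```

Here C covers the handful of fixed variables, while the list contributes the instance-dependent term S_P(I) = n.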

Time Complexity
Time complexity of an algorithm represents the amount of time required by the
algorithm to run to completion. Time requirements can be defined as a numerical
function T(n), where T(n) can be measured as the number of steps, provided each
step consumes constant time.

For example, addition of two n-bit integers takes n steps. Consequently, the total
computational time is T(n) = c ∗ n, where c is the time taken for the addition of two
bits. Here, we observe that T(n) grows linearly as the input size increases.
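The linear growth T(n) = c * n can be observed directly by counting steps; a sketch (the function `sum_list` and its step counter are illustrative):

```python
def sum_list(values):
    """Adds n numbers; the loop body runs once per element, so T(n) = c * n."""
    total = 0
    steps = 0
    for v in values:
        total += v
        steps += 1  # one constant-time step per element
    return total, steps

# Doubling the input size doubles the step count: T(n) grows linearly.
_, s1 = sum_list(list(range(10)))
_, s2 = sum_list(list(range(20)))
```

Since s2 is exactly twice s1, the step count scales in direct proportion to n, which is what T(n) = c * n expresses.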
In general, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solutions and implement the most suitable one.

Best case:
The fastest time an algorithm can complete a task under ideal conditions. For
example, in a linear search algorithm, the best case is when the target element is
the first item in the array.

Average case:
The average amount of time an algorithm takes to complete a task across all
possible inputs.

Worst case:
The longest time an algorithm takes to complete a task under the worst possible
conditions. For example, in a linear search algorithm, the worst case is when the
target element is at the end of the array or not present at all.

Best, worst, and average case analysis is a method used to evaluate an algorithm's
performance in different scenarios. The goal is to determine how well an algorithm
performs based on its input size.
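The linear search example used for best and worst cases above can be sketched in Python:

```python
def linear_search(arr, target):
    """Scans left to right; returns the index of target, or -1 if absent."""
    for i, item in enumerate(arr):
        if item == target:
            return i  # best case: target is the first item -> 1 comparison
    return -1         # worst case: target absent -> n comparisons

data = [4, 8, 15, 16, 23, 42]
linear_search(data, 4)   # best case: found immediately at index 0
linear_search(data, 99)  # worst case: every element examined, returns -1
```

The same code exhibits both extremes; only the input position of the target changes the running time.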

Asymptotic notation
Asymptotic notation is a set of mathematical tools used to describe how efficient
and scalable algorithms are in computer science. It's used to analyze an algorithm's
running time, or how long it takes to process a given input, by observing its
behavior as the input size increases. This is also known as an algorithm's growth
rate.
Hence, asymptotic notation is used to describe the running time of an algorithm: how much time an algorithm takes with a given input, n. There are three common notations: big O, big Theta (Θ), and big Omega (Ω). Big-Θ is used when the running time is the same for all cases, big-O for the worst-case running time, and big-Ω for the best-case running time.

Problem Statement: If the time taken by two programs is T(n) = a*n + b and V(n) = c*n² + d*n + e, which program will be more efficient?
Solution:
The linear algorithm (T(n)) is always asymptotically more efficient than the quadratic algorithm (V(n)). For any positive a, b, c, d, and e, there is always a threshold n₀ such that c*n² + d*n + e >= a*n + b for all n ≥ n₀. So, for sufficiently large inputs, V(n) will always take longer than T(n) to execute.
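This claim can be checked numerically. A sketch with illustrative constants (a = 5, b = 100, c = 1, d = e = 0; these values are assumptions, not from the source):

```python
def T(n, a=5, b=100):
    return a * n + b              # linear running time

def V(n, c=1, d=0, e=0):
    return c * n * n + d * n + e  # quadratic running time

# Find the first n at which the quadratic program becomes at least as slow.
crossover = next(n for n in range(1, 10_000) if V(n) >= T(n))
# For n = 12: V = 144 < T = 160; for n = 13: V = 169 >= T = 165.
```

For small n the quadratic program can even be faster (its constants are smaller here), but past the crossover point it loses permanently, which is exactly what asymptotic comparison captures.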

Note:
 Rule of Thumb: The slower the asymptotic growth rate, the better the
algorithm.
 Asymptotic Analysis is Input bound, i.e., all the other factors remain
constant while analyzing the algorithm using asymptotic analysis.
 Asymptotic Analysis refers to the growth rate of f(n), as n -> infinity, so
we can ignore the lower values of n.
 The goal of Asymptotic Analysis is not to calculate the exact value but to
understand the behaviour of a function as it approaches a certain value or
infinity.
Importance of Asymptotic Notation in Data Structures
 Asymptotic notation categorizes algorithms based on their performance as
the input size grows.
 This helps understand how an algorithm will perform as data becomes
more complex, which is important for scalability.
 Asymptotic notation helps predict how an algorithm will perform under
different conditions.
 It allows for a fair comparison of different approaches by abstracting
away specific details like hardware and implementation nuances.
 Asymptotic notation can help make informed decisions about which
algorithm to use based on efficiency and resource constraints.

Big-Θ Notation
We compute the big-Θ of an algorithm by counting the number of iterations the algorithm always takes with an input of n. For instance, the loop in the pseudocode below will always iterate N times for a list of size N. The runtime can be described as Θ(N).

for each item in list:
    print item
Adding Runtimes
When an algorithm consists of many parts, we describe its runtime based on the slowest part of the program.
An algorithm with three parts has running times of Θ(2N) + Θ(log N) + Θ(1). We only care about the slowest part, so we would quantify the runtime as Θ(N). We also drop the coefficient of 2, since when N gets really large the multiplier 2 has a small effect.
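The "keep only the slowest part" rule can be made concrete with a function that has exactly those three phases (the function `three_part_algorithm` is a hypothetical example):

```python
def three_part_algorithm(items):
    """Three phases: Θ(2N) work, Θ(log N) work, Θ(1) work.
    The linear phase dominates, so the overall runtime is Θ(N)."""
    # Phase 1: Θ(2N) - two full passes over the list.
    total = sum(items) + sum(x * x for x in items)
    # Phase 2: Θ(log N) - halve a counter until it reaches 1.
    n = len(items)
    halvings = 0
    while n > 1:
        n //= 2
        halvings += 1
    # Phase 3: Θ(1) - a single constant-time operation.
    return total, halvings
```

For a list of a million items, phase 1 does about two million operations, phase 2 about twenty, and phase 3 one, so dropping everything but Θ(N) loses essentially nothing.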

Theta notation is used to denote the tight (exact) bound of the algorithm; it bounds a function from both above and below, which is why it is used to represent exact asymptotic behaviour.

Θ(g(n)) = {f(n): there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀}

Algorithmic Common Runtimes

The common algorithmic runtimes from fastest to slowest are:

 constant: Θ(1)
 logarithmic: Θ(log N)
 linear: Θ(N)
 quadratic: Θ(N²)
 exponential: Θ(2^N)
 factorial: Θ(N!)
Big-O Notation
The Big-O notation describes the worst-case running time of a program. We
compute the Big-O of an algorithm by counting how many iterations an algorithm
will take in the worst-case scenario with an input of N. We typically consult the
Big-O because we must always plan for the worst case. For example, O(log n)
describes the Big-O of a binary search algorithm.
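Binary search, the standard O(log n) example mentioned above, can be sketched as:

```python
def binary_search(arr, target):
    """Searches a sorted list by halving the search range each step.
    Worst case: O(log n) iterations; best case: O(1) if the middle matches."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

sorted_data = [2, 3, 5, 7, 11, 13]
binary_search(sorted_data, 7)  # returns 3
```

Because the range shrinks by half each iteration, even the worst case needs only about log₂(n) comparisons, which is why we plan around its Big-O.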

A function f(n) is said to be O(g(n)) if there exist positive constants c₀ and n₀ such that 0 ≤ f(n) ≤ c₀*g(n) for all n ≥ n₀. This means that for sufficiently large values of n, the function f(n) does not grow faster than g(n), up to a constant factor.
O(g(n)) = {f(n): there exist positive constants c₀ and n₀ such that 0 ≤ f(n) ≤ c₀g(n) for all n ≥ n₀}.
For Example:
Let f(n) = n² + n + 1 and g(n) = n².
Then n² + n + 1 ≤ c₀(n²) holds for c₀ = 3 and all n ≥ 1.
The time complexity of the function is therefore O(n²), because beyond a constant factor f(n) never grows faster than n².
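The bound can be verified mechanically over a range of n (a sketch; the constant c₀ = 3 is one valid choice):

```python
def f(n):
    return n * n + n + 1  # the function being bounded

def g(n):
    return n * n          # the proposed growth rate

# With c0 = 3 and n0 = 1, the Big-O inequality f(n) <= c0 * g(n)
# holds for every n >= n0, so f(n) = O(n^2).
holds = all(f(n) <= 3 * g(n) for n in range(1, 1000))
```

A finite check is not a proof, but here the inequality 3n² − (n² + n + 1) = 2n² − n − 1 ≥ 0 for n ≥ 1 confirms it holds for all n.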

Big-Ω Notation
Big-Ω (Omega) describes the best-case running time of a program. We compute the big-Ω by counting how many iterations an algorithm will take in the best-case scenario for an input of N. For example, a bubble sort algorithm (with an early-exit check) has a running time of Ω(N), because in the best-case scenario the list is already sorted and the sort terminates after the first pass.
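The Ω(N) best case relies on the early-exit variant of bubble sort, sketched here (plain bubble sort without the flag would do Θ(n²) work even on sorted input):

```python
def bubble_sort(arr):
    """Bubble sort with an early-exit flag.
    Best case Ω(n): already sorted, one pass with no swaps, then stop.
    Worst case O(n^2): reverse-sorted input needs every pass."""
    items = list(arr)
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a pass with no swaps means the list is sorted
            break
    return items
```

On `[1, 2, 3, 4]` the inner loop makes three comparisons, finds no swaps, and the function returns after that single pass.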
Omega notation is used to denote the lower bound of the algorithm; it represents the minimum running time of an algorithm. Therefore, it provides the best-case complexity of any algorithm.
Ω(g(n)) = {f(n): there exist positive constants c₀ and n₀ such that 0 ≤ c₀g(n) ≤ f(n) for all n ≥ n₀}.
For Example:
Let,
 f(n) = n² + n
Then, the best-case time complexity will be Ω(n²).
 f(n) = 100n + log(n)
Then, the best-case time complexity will be Ω(n).

Difference Between Big O Notation, Omega Notation, and Theta Notation

Definition
 Big O (O) − Describes an upper bound on the time or space complexity of an algorithm.
 Omega (Ω) − Describes a lower bound on the time or space complexity of an algorithm.
 Theta (Θ) − Describes both an upper and a lower bound on the time or space complexity.

Purpose
 Big O (O) − Used to characterize the worst-case scenario of an algorithm.
 Omega (Ω) − Used to characterize the best-case scenario of an algorithm.
 Theta (Θ) − Used to characterize an algorithm's precise bound (both worst and best cases).

Interpretation
 Big O (O) − Indicates the maximum rate of growth of the algorithm's complexity.
 Omega (Ω) − Indicates the minimum rate of growth of the algorithm's complexity.
 Theta (Θ) − Indicates the exact rate of growth of the algorithm's complexity.

Mathematical Expression
 Big O (O) − f(n) = O(g(n)) if ∃ constants c > 0, n₀ such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n₀.
 Omega (Ω) − f(n) = Ω(g(n)) if ∃ constants c > 0, n₀ such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n₀.
 Theta (Θ) − f(n) = Θ(g(n)) if ∃ constants c₁, c₂ > 0, n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀.

Focus
 Big O (O) − Focuses on the upper limit of performance (less efficient aspects).
 Omega (Ω) − Focuses on the lower limit of performance (more efficient aspects).
 Theta (Θ) − Focuses on both the upper and lower limits, providing a balanced view of performance.

Usage in Algorithm Analysis
 Big O (O) − Commonly used to analyze efficiency, especially concerning worst-case performance.
 Omega (Ω) − Used to demonstrate effectiveness under optimal conditions.
 Theta (Θ) − Used to provide a precise analysis of algorithm efficiency in typical scenarios.

Common Usage
 Big O (O) − Predominant in theoretical and practical applications for worst-case analysis.
 Omega (Ω) − Less common than Big O, but important for understanding best-case efficiency.
 Theta (Θ) − Used when an algorithm exhibits consistent performance across different inputs.

Examples
 Big O (O) − Searching in an unsorted list: O(n).
 Omega (Ω) − Inserting an element in a sorted array: Ω(1).
 Theta (Θ) − Linear search in a sorted array, where the element is always in the middle: Θ(n).

What are the Limitations of Asymptotic Analysis?

 Dependence on Large Input Size: Asymptotic analysis heavily depends on a large input size (i.e., the value tends to infinity). But in reality, the input may not always be sufficiently large.
 Ignores Constant Factors: Asymptotic analysis mainly focuses on the algorithm's growth rate (i.e., the highest-order term) and discards the smaller terms.
 Doesn't Indicate the Exact Running Time: It approximates how running time grows with the size of the input but doesn't provide the precise running time.
 Doesn't Consider Memory Usage: It typically focuses on running time and ignores memory usage or other resources unless specifically dealing with space complexity.
 Ignores Coefficient of Dominant Terms: Similar to the smaller terms, it also ignores the coefficient of the dominant term, i.e., if two algorithms have the same dominant term, they are considered equivalent.
 Doesn't Hold for All Algorithms: Algorithms such as randomized algorithms and algorithms with complex control structures may not be well suited to traditional asymptotic analysis.

Is Asymptotic Analysis Always Correct?

Asymptotic analyses are not always correct, but they are the best general method for analysing an algorithm's complexity. Asymptotic analysis does not take constants into account; it is concerned only with how the running time grows with the input size. An algorithm might be slow for small inputs but fast for larger ones, so an algorithm that asymptotic analysis ranks as slower may actually run faster in practice on the input sizes the software deals with.
Here's a breakdown of when it's correct and when it might not be:
When Asymptotic Analysis is Correct:
 Predicting trends for large inputs: Asymptotic analysis excels at
providing a high-level understanding of how a function or algorithm
scales with input size. It helps compare different algorithms and identify
the most efficient one for large datasets.

 Ignoring constant factors: In most practical situations, constant factors
and lower-order terms become insignificant as the input size grows.
Asymptotic analysis allows us to ignore these details and focus on the
dominant term that determines the overall growth rate.
 Analyzing best/worst/average cases: Asymptotic analysis can be applied
to analyze the best, worst, and average case performance of algorithms,
giving a comprehensive picture of their efficiency.
When Asymptotic Analysis Might Not Be Correct:
 Small inputs: For very small inputs, lower-order terms or constant
factors can dominate the behavior of the function, making the asymptotic
analysis inaccurate.
 Special cases: Certain specific inputs might trigger unexpected behavior
not captured by the asymptotic analysis.
 Practical considerations: Asymptotic analysis ignores factors like
hardware specifics, memory limitations, and real-world implementation
details that can impact actual performance.

Common Algorithmic Runtimes

Algorithm Best Case Worst Case

Linear Search O(1) O(n)

Binary Search O(1) O(log n)

Bubble Sort O(n) O(n²)

Insertion Sort O(n) O(n²)

Selection Sort O(n²) O(n²)

Merge Sort O(n log n) O(n log n)

Quick Sort O(n log n) O(n²)

Heap Sort O(n log n) O(n log n)

Breadth-First Search O(1) O(b^d) (where b is the branching factor and d is the depth of the tree/graph)

Depth-First Search O(1) O(b^d) (where b is the branching factor and d is the depth of the tree/graph)
