Algorithm and Data Structure - Q and A
References:
Simply put, a data structure is a systematic way of organizing and accessing data, and an
algorithm is a step-by-step procedure for performing some task in a finite amount of time
(Data Structures and Algorithms in Java)
Asymptotic Analysis
In algorithm analysis, we focus on the growth rate of the running time as a function of the
input size n, taking a “big-picture” approach. For example, it is often enough just to know
that the running time of an algorithm grows proportionally to n. We analyze algorithms using
a mathematical notation for functions that disregards constant factors. Namely, we
characterize the running times of algorithms by using functions that map the size of the input,
n, to values that correspond to the main factor that determines the growth rate in terms of n.
This approach reflects that each basic step in a pseudocode description or a high-level
language implementation may correspond to a small number of primitive operations. Thus, we
can perform an analysis of an algorithm by estimating the number of primitive operations
executed up to a constant factor, rather than getting bogged down in language-specific or
hardware-specific analysis of the exact number of operations that execute on the computer.
(Data Structures and Algorithms in Java)
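To make the idea of counting primitive operations concrete, here is a minimal Java sketch (my own illustration, not code reproduced from the cited book). The loop body executes a small, constant number of primitive operations per iteration, so the total count, and therefore the running time, grows proportionally to n.

```java
// A minimal sketch: estimating running time by counting primitive operations.
public class PrimitiveOpCount {
    // Returns the largest element of the array. Each iteration of the loop
    // performs a constant number of primitive operations (an index access,
    // a comparison, possibly an assignment, and the loop bookkeeping), so the
    // total is roughly c1*n + c2 for constants c1 and c2, i.e. proportional to n.
    public static int arrayMax(int[] data) {
        int currentMax = data[0];                // constant number of operations
        for (int j = 1; j < data.length; j++) {  // loop runs n - 1 times
            if (data[j] > currentMax)
                currentMax = data[j];
        }
        return currentMax;
    }

    public static void main(String[] args) {
        System.out.println(arrayMax(new int[]{3, 1, 4, 1, 5, 9, 2, 6})); // prints 9
    }
}
```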
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of
describing limiting behavior.
As an illustration, suppose that we are interested in the properties of a function f(n) as n becomes very large. If f(n) = n² + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n². The function f(n) is said to be "asymptotically equivalent to n², as n → ∞". This is often written symbolically as f(n) ~ n², which is read as "f(n) is asymptotic to n²".
An example of an important asymptotic result is the prime number theorem. Let π(x) denote
the prime-counting function (which is not directly related to the constant pi), i.e. π(x) is the
number of prime numbers that are less than or equal to x. Then the theorem states that
π(x) ~ x / ln x.
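As a rough illustration (my own sketch; the trial-division counter below is only practical for small x), the following program compares π(x) with the estimate x / ln x:

```java
// Compares the prime-counting function pi(x) with the estimate x / ln x.
public class PrimeCounting {
    // pi(x): the number of primes <= x, counted by simple trial division.
    static int primePi(int x) {
        int count = 0;
        for (int n = 2; n <= x; n++) {
            boolean isPrime = true;
            for (int d = 2; (long) d * d <= n; d++) {
                if (n % d == 0) { isPrime = false; break; }
            }
            if (isPrime) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int x = 100; x <= 100_000; x *= 10) {
            double estimate = x / Math.log(x);     // x / ln x
            System.out.printf("x = %-7d  pi(x) = %-6d  x/ln x = %.1f%n",
                              x, primePi(x), estimate);
        }
    }
}
```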
Experimental analysis of running times has some important limitations:
- Experimental running times of two algorithms are difficult to compare directly unless the experiments are performed in the same hardware and software environments.
- Experiments can be done only on a limited set of test inputs; hence, they leave out the running times of inputs not included in the experiment (and these inputs may be important).
- An algorithm must be fully implemented in order to study its running time experimentally.
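A minimal timing experiment illustrates these points (the method, input sizes, and inputs below are my own choices): the measured times depend on the hardware, the JVM, and the particular inputs chosen, which is exactly why such experiments are hard to compare directly.

```java
// Measures wall-clock running time experimentally with System.nanoTime.
public class TimingExperiment {
    static long sum(int[] data) {
        long total = 0;
        for (int v : data) total += v;
        return total;
    }

    public static void main(String[] args) {
        for (int n = 1_000_000; n <= 8_000_000; n *= 2) {
            int[] data = new int[n];              // one particular test input
            long start = System.nanoTime();
            long result = sum(data);
            long elapsed = System.nanoTime() - start;
            System.out.printf("n = %-9d  time = %8.3f ms  (sum = %d)%n",
                              n, elapsed / 1e6, result);
        }
    }
}
```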
Big O Notation
Let f(n) and g(n) be functions mapping positive integers to positive real numbers.
We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant
n0 ≥ 1 such that
f(n) ≤ c · g(n), for n ≥ n0.
This definition is often referred to as the “big-Oh” notation, for it is sometimes pronounced
as “f(n) is big-Oh of g(n).”
The big-Oh notation allows us to say that a function f(n) is “less than or equal
to” another function g(n) up to a constant factor and in the asymptotic sense as n
grows toward infinity. This ability comes from the fact that the definition uses “≤”
to compare f(n) to g(n) times a constant, c, for the asymptotic cases when n ≥ n0.
However, it is considered poor taste to say “f(n) ≤ O(g(n)),” since the big-Oh
already denotes the “less-than-or-equal-to” concept. Likewise, although common,
it is not fully correct to say “f(n) = O(g(n)),” with the usual understanding of the
“=” relation, because there is no way to make sense of the symmetric statement,
“O(g(n)) = f(n).” It is best to say, “f(n) is O(g(n)).”
Alternatively, we can say “f(n) is order of g(n).” For the more mathematically
inclined, it is also correct to say, “f(n) ∈ O(g(n)),” for the big-Oh notation, technically
speaking, denotes a whole collection of functions. Here, we will stick
to presenting big-Oh statements as “f(n) is O(g(n)).” Even with this interpretation,
there is considerable freedom in how we can use arithmetic operations with the
big-Oh notation, and with this freedom comes a certain amount of responsibility.
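As a concrete instance of the definition: the function f(n) = 8n + 5 is O(n), because choosing c = 9 and n0 = 5 gives 8n + 5 ≤ 9n for all n ≥ 5. The constants are not unique; c = 13 and n0 = 1 would justify the same statement.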
In computer science, big O notation is used to classify algorithms according to how their run
time or space requirements grow as the input size grows. In analytic number theory, big O
notation is often used to express a bound on the difference between an arithmetical
function and a better understood approximation; a famous example of such a difference is the
remainder term in the prime number theorem. Big O notation is also used in many other fields
to provide similar estimates.
Big O notation characterizes functions according to their growth rates: different functions
with the same asymptotic growth rate may be represented using the same O notation. The
letter O is used because the growth rate of a function is also referred to as the order of the
function. A description of a function in terms of big O notation usually only provides an upper
bound on the growth rate of the function.
Associated with big O notation are several related notations, using the symbols o, Ω, ω,
and Θ, to describe other kinds of bounds on asymptotic growth rates.
(Wikipedia)
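To see how these notations relate, consider a worked example (not taken from the sources above): for f(n) = 3n² + 2n, f(n) is O(n²) and Ω(n²), and therefore Θ(n²). It is also O(n³), and more strongly o(n³), since f(n)/n³ → 0; and it is ω(n), since f(n)/n → ∞.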
Time Complexity
References:
In theoretical computer science, the time complexity is the computational complexity that
describes the amount of computer time it takes to run an algorithm. Time complexity is
commonly estimated by counting the number of elementary operations performed by the
algorithm, supposing that each elementary operation takes a fixed amount of time to perform.
Thus, the amount of time taken and the number of elementary operations performed by the
algorithm are taken to be related by a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one
commonly considers the worst-case time complexity, which is the maximum amount of time
required for inputs of a given size. Less common, and usually specified explicitly, is
the average case complexity, which is the average of the time taken on inputs of a given size
(this makes sense because there are only a finite number of possible inputs of a given size). In
both cases, the time complexity is generally expressed as a function of the size of the input.
Since this function is generally difficult to compute exactly, and the running time for
small inputs is usually not consequential, one commonly focuses on the behavior of the
complexity when the input size increases—that is, the asymptotic behavior of the complexity.
Therefore, the time complexity is commonly expressed using big O notation,
typically O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to
represent the input.
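The difference between worst-case and best-case behavior shows up already in a simple linear search (an illustrative sketch of my own): the loop may return after a single comparison or only after examining all n elements.

```java
// Linear search: best case O(1), worst case O(n) comparisons.
public class LinearSearch {
    // Returns the index of target in data, or -1 if it is absent.
    static int indexOf(int[] data, int target) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == target) return i;   // best case: found immediately
        }
        return -1;                             // worst case: every element examined
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(indexOf(data, 7));  // 0  (best case)
        System.out.println(indexOf(data, 4));  // -1 (worst case)
    }
}
```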
Algorithmic complexities are classified according to the type of function appearing in the big
O notation. For example, an algorithm with time complexity O(n) is a linear time
algorithm and an algorithm with time complexity O(n^α) for some constant α > 1 is
a polynomial time algorithm.
(Wikipedia)
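For a concrete contrast between these classes, here is a small sketch (my own example, not from the quoted article) of a linear-time and a quadratic-time method; both are polynomial-time in the sense above.

```java
// A linear-time method, O(n), next to a quadratic-time method, O(n^2).
public class ComplexityClasses {
    // O(n): a single pass over the array.
    static long sum(int[] data) {
        long total = 0;
        for (int v : data) total += v;
        return total;
    }

    // O(n^2): examines every pair of elements, about n*(n-1)/2 comparisons.
    static boolean hasDuplicate(int[] data) {
        for (int i = 0; i < data.length; i++)
            for (int j = i + 1; j < data.length; j++)
                if (data[i] == data[j]) return true;
        return false;
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(sum(data));           // 108
        System.out.println(hasDuplicate(data));  // false
    }
}
```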
Space Complexity
References:
The space complexity of an algorithm or a data structure is the amount of memory space
required to solve an instance of the computational problem as a function of characteristics of
the input. It is the memory required by an algorithm until it executes completely. This includes
the memory space used by its inputs, called input space, and any other (auxiliary) memory it
uses during execution, which is called auxiliary space.
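The distinction between input space and auxiliary space can be illustrated with a small sketch (the method names are my own): both methods below reverse an array of n elements, so the input space is O(n) in both cases, but one needs only O(1) auxiliary space while the other allocates O(n).

```java
import java.util.Arrays;

// Input space vs. auxiliary space: two ways to reverse an array.
public class SpaceUsage {
    // In-place reversal: only a few extra variables -> O(1) auxiliary space.
    static void reverseInPlace(int[] data) {
        for (int i = 0, j = data.length - 1; i < j; i++, j--) {
            int tmp = data[i];
            data[i] = data[j];
            data[j] = tmp;
        }
    }

    // Copying reversal: allocates a second array of size n -> O(n) auxiliary space.
    static int[] reversedCopy(int[] data) {
        int[] result = new int[data.length];
        for (int i = 0; i < data.length; i++)
            result[i] = data[data.length - 1 - i];
        return result;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        reverseInPlace(a);
        System.out.println(Arrays.toString(a));               // [4, 3, 2, 1]
        System.out.println(Arrays.toString(reversedCopy(a))); // [1, 2, 3, 4]
    }
}
```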