Algorithm and Data Structure - Q and A

Common interview Q and A about data structure for programmer interview

Uploaded by

Benny Susanto

Table of Contents

Algorithm and Data Structure
Asymptotic Analysis
Disadvantages of Experimental Analysis
Big O Notation
Time Complexity
Space Complexity
Algorithm and Data Structure

Q: What are an algorithm and a data structure?


A: An algorithm is a step-by-step procedure to solve a certain problem, while a data structure is
an organized structure that is applied to certain data for easy retrieval and manipulation

References:

Simply put, a data structure is a systematic way of organizing and accessing data, and an
algorithm is a step-by-step procedure for performing some task in a finite amount of time
(Data Structure and Algorithm in Java)
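The two definitions above can be illustrated together with a minimal sketch (my own example, not taken from the quoted sources): an array is the data structure (organized, index-addressable storage), and linear search is the algorithm (a finite, step-by-step procedure over it).

```java
public class LinearSearch {
    // The algorithm: examine each element in turn until the target is found.
    // Returns the index of target in data, or -1 if it is absent.
    static int indexOf(int[] data, int target) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == target) return i;   // one comparison per step
        }
        return -1;                             // finite: at most data.length steps
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1};             // the data structure: an array
        System.out.println(indexOf(data, 9));  // prints 2
        System.out.println(indexOf(data, 5));  // prints -1
    }
}
```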

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence
of mathematically rigorous instructions, typically used to solve a class of
specific problems or to perform a computation. Algorithms are used as specifications for
performing calculations and data processing. More advanced algorithms can
use conditionals to divert the code execution through various routes (referred to
as automated decision-making) and deduce valid inferences (referred to as automated
reasoning), achieving automation eventually. Using human characteristics as descriptors of
machines in metaphorical ways was already practiced by Alan Turing with terms such as
"memory", "search" and "stimulus" (Wikipedia)

Asymptotic Analysis

Q: What is asymptotic analysis and the relation to Big O Notation?


A:

In algorithm analysis, we focus on the growth rate of the running time as a function of the
input size n, taking a “big-picture” approach. For example, it is often enough just to know
that the running time of an algorithm grows proportionally to n. We analyze algorithms using
a mathematical notation for functions that disregards constant factors. Namely, we
characterize the running times of algorithms by using functions that map the size of the input,
n, to values that correspond to the main factor that determines the growth rate in terms of n.
This approach reflects that each basic step in a pseudocode description or a high-level
language implementation may correspond to a small number of primitive operations. Thus, we
can perform an analysis of an algorithm by estimating the number of primitive operations
executed up to a constant factor, rather than getting bogged down in language-specific or
hardware-specific analysis of the exact number of operations that execute on the computer.
(Data Structure and Algorithm in Java)
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of
describing limiting behavior.
As an illustration, suppose that we are interested in the properties of a function f(n)
as n becomes very large. If f(n) = n² + 3n, then as n becomes very large, the
term 3n becomes insignificant compared to n². The function f(n) is said to be "asymptotically
equivalent to n², as n → ∞". This is often written symbolically as f(n) ~ n², which is read as
"f(n) is asymptotic to n²".
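The claim that the 3n term becomes insignificant can be checked numerically with a short sketch: the ratio f(n)/n² equals 1 + 3/n, which approaches 1 as n grows, and that is exactly what "f(n) ~ n²" asserts.

```java
public class AsymptoticDemo {
    // f(n) = n^2 + 3n from the illustration above.
    static double f(double n) { return n * n + 3 * n; }

    public static void main(String[] args) {
        // The ratio f(n)/n^2 = 1 + 3/n tends to 1 as n -> infinity.
        for (double n : new double[]{10, 1_000, 1_000_000}) {
            System.out.printf("n = %.0f: f(n)/n^2 = %.6f%n", n, f(n) / (n * n));
        }
    }
}
```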

An example of an important asymptotic result is the prime number theorem. Let π(x) denote
the prime-counting function (which is not directly related to the constant pi), i.e. π(x) is the
number of prime numbers that are less than or equal to x. Then the theorem states that
π(x) ∼ x / ln x.

Asymptotic analysis is commonly used in computer science as part of the analysis of


algorithms and is often expressed there in terms of big O notation.
(Wikipedia)

Disadvantages of Experimental Analysis

Q: What are some disadvantages of experimental analysis ?


A: While experimental studies of running times are valuable, especially when fine-tuning
production-quality code, there are three major limitations to their use for algorithm
analysis:

•  Experimental running times of two algorithms are difficult to directly compare unless
the experiments are performed in the same hardware and software environments.
•  Experiments can be done only on a limited set of test inputs; hence, they leave out the
running times of inputs not included in the experiment (and these inputs may be
important).
•  An algorithm must be fully implemented in order to execute it to study its running time
experimentally.
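The limitations above can be seen in a small sketch of my own (not from the quoted sources): both versions below had to be fully implemented before they could be timed, and the measured nanosecond figures vary with hardware, JVM warm-up, and system load, which is precisely why two such measurements are hard to compare directly.

```java
public class TimingExperiment {
    // Two fully implemented ways to sum 1..n: an O(n) loop and an O(1) formula.
    static long sumLoop(long n) {
        long s = 0;
        for (long i = 1; i <= n; i++) s += i;
        return s;
    }

    static long sumFormula(long n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        long n = 10_000_000L;
        long t0 = System.nanoTime();
        long a = sumLoop(n);
        long t1 = System.nanoTime();
        long b = sumFormula(n);
        long t2 = System.nanoTime();
        // The absolute timings depend on the machine and its current load,
        // so they only characterize this run in this environment.
        System.out.printf("loop: %d ns, formula: %d ns, same answer: %b%n",
                t1 - t0, t2 - t1, a == b);
    }
}
```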

Big O Notation

Q: What is Big O notation?


A:

Let f(n) and g(n) be functions mapping positive integers to positive real numbers.
We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant
n0 ≥ 1 such that
f(n) ≤ c · g(n), for n ≥ n0.
This definition is often referred to as the "big-Oh" notation, for it is sometimes pronounced
as "f(n) is big-Oh of g(n)."

Example 4.6: The function 8n+5 is O(n).


Justification: By the big-Oh definition, we need to find a real constant c>0 and
an integer constant n0 ≥ 1 such that 8n+5 ≤ cn for every integer n ≥ n0. It is easy
to see that a possible choice is c = 9 and n0 = 5. Indeed, this is one of infinitely
many choices available because there is a trade-off between c and n0. For example,
we could rely on constants c = 13 and n0 = 1.
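The witness constants in the justification above can be verified mechanically with a short sketch: it exhaustively checks 8n + 5 ≤ c·n over a finite range starting at n0 (an exhaustive check over a finite prefix is evidence, not a proof, but here the algebra 8n + 5 ≤ 9n ⇔ n ≥ 5 guarantees it holds for all larger n as well).

```java
public class BigOhWitness {
    // Checks f(n) = 8n + 5 <= c * n for every n from n0 up to limit.
    static boolean holds(long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (8 * n + 5 > c * n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(9, 5, 1_000_000));  // c = 9,  n0 = 5 -> true
        System.out.println(holds(13, 1, 1_000_000)); // c = 13, n0 = 1 -> true
        System.out.println(holds(9, 1, 10));         // false: 8*1 + 5 > 9*1
    }
}
```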

The big-Oh notation allows us to say that a function f (n) is “less than or equal
to” another function g(n) up to a constant factor and in the asymptotic sense as n
grows toward infinity. This ability comes from the fact that the definition uses "≤"
to compare f(n) to g(n) times a constant, c, for the asymptotic cases when n ≥ n0.
However, it is considered poor taste to say “ f (n) ≤ O(g(n)),” since the big-Oh
already denotes the “less-than-or-equal-to” concept. Likewise, although common,
it is not fully correct to say “ f (n) = O(g(n)),” with the usual understanding of the
“=” relation, because there is no way to make sense of the symmetric statement,
“O(g(n)) = f (n).” It is best to say, “ f (n) is O(g(n)).”

Alternatively, we can say “ f (n) is order of g(n).” For the more mathematically
inclined, it is also correct to say, “ f (n) ∈ O(g(n)),” for the big-Oh notation, technically
speaking, denotes a whole collection of functions. In this book, we will stick
to presenting big-Oh statements as “ f (n) is O(g(n)).” Even with this interpretation,
there is considerable freedom in how we can use arithmetic operations with the
big-Oh notation, and with this freedom comes a certain amount of responsibility.

(Data Structure and Algorithm in Java)

Big O notation is a mathematical notation that describes the limiting behavior of


a function when the argument tends towards a particular value or infinity. Big O is a member
of a family of notations invented by German mathematicians Paul Bachmann,[1] Edmund
Landau,[2] and others, collectively called Bachmann–Landau notation or asymptotic
notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of
approximation.

In computer science, big O notation is used to classify algorithms according to how their run
time or space requirements grow as the input size grows. In analytic number theory, big O
notation is often used to express a bound on the difference between an arithmetical
notation is often used to express a bound on the difference between an arithmetical
function and a better understood approximation; a famous example of such a difference is the
remainder term in the prime number theorem. Big O notation is also used in many other fields
to provide similar estimates.
Big O notation characterizes functions according to their growth rates: different functions
with the same asymptotic growth rate may be represented using the same O notation. The
letter O is used because the growth rate of a function is also referred to as the order of the
function. A description of a function in terms of big O notation usually only provides an upper
bound on the growth rate of the function.

Associated with big O notation are several related notations, using the symbols o, Ω, ω,
and Θ, to describe other kinds of bounds on asymptotic growth rates.
(Wikipedia)

Time Complexity

Q: What is time complexity?


A: The amount of time needed for an algorithm to run, typically expressed as a function of the input size

References:

In theoretical computer science, the time complexity is the computational complexity that
describes the amount of computer time it takes to run an algorithm. Time complexity is
commonly estimated by counting the number of elementary operations performed by the
algorithm, supposing that each elementary operation takes a fixed amount of time to perform.
Thus, the amount of time taken and the number of elementary operations performed by the
algorithm are taken to be related by a constant factor.

Since an algorithm's running time may vary among different inputs of the same size, one
commonly considers the worst-case time complexity, which is the maximum amount of time
required for inputs of a given size. Less common, and usually specified explicitly, is
the average case complexity, which is the average of the time taken on inputs of a given size
(this makes sense because there are only a finite number of possible inputs of a given size). In
both cases, the time complexity is generally expressed as a function of the size of the input.[1]
Since this function is generally difficult to compute exactly, and the running time for
small inputs is usually not consequential, one commonly focuses on the behavior of the
complexity when the input size increases; that is, the asymptotic behavior of the complexity.
Therefore, the time complexity is commonly expressed using big O notation,
typically O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to
represent the input.

Algorithmic complexities are classified according to the type of function appearing in the big
O notation. For example, an algorithm with time complexity O(n) is a linear time
algorithm, and an algorithm with time complexity O(n^α) for some constant α > 1 is
a polynomial time algorithm.
(Wikipedia)
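The distinction between a linear time and a polynomial (here quadratic) time algorithm can be made concrete by counting elementary operations, as the quoted definition suggests. A minimal sketch of my own:

```java
public class OperationCount {
    // Single pass over n items: ~n elementary operations, i.e. O(n).
    static long linearOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) ops++;
        return ops;
    }

    // All pairs of n items: ~n^2 elementary operations, i.e. O(n^2).
    static long quadraticOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops;
    }

    public static void main(String[] args) {
        // Doubling n doubles the linear count but quadruples the quadratic one.
        for (int n : new int[]{10, 100, 1_000}) {
            System.out.printf("n = %4d: linear = %7d, quadratic = %9d%n",
                    n, linearOps(n), quadraticOps(n));
        }
    }
}
```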

Space Complexity

Q: What is space complexity?


A: The amount of memory needed for an algorithm to run

References:

The space complexity of an algorithm or a data structure is the amount of memory space
required to solve an instance of the computational problem as a function of characteristics of
the input. It is the memory required by an algorithm until it executes completely. This includes
the memory space used by its inputs, called input space, and any other (auxiliary) memory it
uses during execution, which is called auxiliary space.

Similar to time complexity, space complexity is often expressed asymptotically in big O
notation, such as O(n), O(n log n), O(n^α), O(2^n), etc., where n is a characteristic of the input
influencing space complexity.
(Wikipedia)
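The input-space versus auxiliary-space distinction in the quoted definition can be sketched with my own example: both methods below take the same input (an n-element array), but one allocates O(n) auxiliary memory while the other uses only O(1).

```java
import java.util.Arrays;

public class AuxiliarySpace {
    // O(n) auxiliary space: allocates a second array of the same length.
    static int[] reversedCopy(int[] a) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[a.length - 1 - i];
        return out;
    }

    // O(1) auxiliary space: only a few local variables, swaps in place.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(Arrays.toString(reversedCopy(a))); // [4, 3, 2, 1]
        reverseInPlace(a);
        System.out.println(Arrays.toString(a));               // [4, 3, 2, 1]
    }
}
```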
