Performance of a program:
The performance of a program is the amount of computer memory and time needed to run it.
o We use two approaches to determine the performance of a program: one analytical, the other experimental.
o In performance analysis we use analytical methods; in performance measurement we conduct experiments.
Time Complexity:
The time needed by an algorithm, expressed as a function of the size of the problem, is called the time complexity of the algorithm.
o The time complexity of a program is the amount of computer time it needs to run to completion.
The limiting behavior of the complexity as the problem size increases is called the asymptotic time complexity.
o It is the asymptotic complexity of an algorithm that ultimately determines the size of problems that can be solved by the algorithm.
Space Complexity:
The space complexity of a program is the amount of memory it needs to run to completion.
The space needed by a program has the following components:
o Instruction space: the space needed to store the compiled version of the program instructions.
o Data space: the space needed to store all constant and variable values.
o Environment stack space: the environment stack is used to save information needed to resume execution of partially completed functions.
The amount of instruction space needed depends on factors such as:
o The compiler used to convert the program into machine code (e.g., C++, Java).
o The compiler options in effect at the time of compilation.
o The target computer.
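To make the components concrete, the sketch below (hypothetical names, added for illustration) shows where each one comes from: the compiled code occupies instruction space, the global array and constant occupy data space, and the recursion consumes environment stack space proportional to n.

#include <stdio.h>

int table[1000];            /* data space: global array                   */
const double FACTOR = 2.5;  /* data space: constant value                 */

/* Recursive sum: each active call keeps its parameters and return
 * address on the environment stack, so stack space grows as O(n). */
long rsum(const int a[], int n)
{
    if (n <= 0)
        return 0;
    return a[n - 1] + rsum(a, n - 1);
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        table[i] = i;
    /* At the deepest point, 1000 partially completed calls of rsum exist. */
    printf("%ld\n", rsum(table, 1000));
    return 0;
}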
Classification of Algorithms
If 'n' is the number of data items to be processed, the degree of a polynomial, the size of the file to be sorted or searched, or the number of nodes in a graph, then algorithms are commonly classified by how their running time grows with n: constant, logarithmic (log n), linear (n), n log n, quadratic (n²), cubic (n³), and exponential (2ⁿ).
Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size 'n' of the input data.
o Mostly, the storage space required by an algorithm is simply a multiple of the data size 'n', so complexity here shall refer to the running time of the algorithm.
The function f(n), which gives the running time of an algorithm, depends not only on the size 'n' of the input data but also on the particular data.
The complexity function f(n) is therefore considered for certain cases:
o Worst case: the maximum running time over all inputs of size n.
o Average case: the expected running time over all inputs of size n.
o Best case: the minimum running time over all inputs of size n.
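For instance, linear search (an added sketch, not an example from the notes above) makes the three cases concrete: the key may be found at the first position, somewhere in the middle, or not at all.

/* Linear search in a[0..n-1]; returns the index of key, or -1. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)   /* best case: key at a[0], 1 comparison */
        if (a[i] == key)          /* average case: about n/2 comparisons  */
            return i;
    return -1;                    /* worst case: key absent, n comparisons */
}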
Rate of Growth:
The following notations are commonly used in performance analysis to characterize the complexity of an algorithm:
1. Big–OH (O),
2. Big–OMEGA (Ω),
3. Big–THETA (Θ) and
4. Little–OH (o)
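For reference, these notations have the following standard definitions, where c > 0 and n0 are witness constants:

f(n) = O(g(n))  if there exist c and n0 such that f(n) <= c*g(n) for all n >= n0.
f(n) = Ω(g(n))  if there exist c and n0 such that f(n) >= c*g(n) for all n >= n0.
f(n) = Θ(g(n))  if f(n) = O(g(n)) and f(n) = Ω(g(n)).
f(n) = o(g(n))  if for every c there exists an n0 such that f(n) < c*g(n) for all n >= n0.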
Other classifications:
Brute force
Divide and conquer
Greedy
Dynamic programming
Analyzing Algorithms
Suppose ‘M’ is an algorithm, and
suppose ‘n’ is the size of the input data.
Clearly the complexity f(n) of M
increases as n increases.
It is usually the rate of increase of f(n)
we want to examine.
o This is usually done by comparing
f(n) with some standard functions.
o The most common computing times
are:
O(1), O(log₂ n), O(n), O(n log₂ n), O(n²), O(n³), O(2ⁿ), O(n!) and O(nⁿ)
n     log₂n   n log₂n   n²       n³           2ⁿ
2     1       2         4        8            4
4     2       8         16       64           16
8     3       24        64       512          256
64    6       384       4,096    262,144      Note 1
128   7       896       16,384   2,097,152    Note 2
256   8       2,048     65,536   16,777,216   ????????
Note 2: The value here is about 500 billion
times the age of the universe in
nanoseconds, assuming a universe age of
20 billion years.
[Figure: graph of log n, n, n log n, n², n³, 2ⁿ, n! and nⁿ]
Example 1:
Let’s consider a short piece of source code:
x = 3*y + 2;
z = z + 1;
If y, z are scalars, this piece of code
takes a constant amount of time, which
we write as O(1).
In terms of actual computer instructions
or clock ticks, it’s difficult to say exactly
how long it takes.
But whatever it is, it should be the same
whenever this piece of code is
executed.
O(1) means some constant; it might be 5, or 1, or 1,000.
Example 2:
If the first program takes 100n² milliseconds while the second takes 5n³ milliseconds, might the 5n³ program not be better than the 100n² program?
The programs can be evaluated by comparing their running-time functions, with the constants of proportionality neglected. So, can the 5n³ program be better than the 100n² program?
5n³ / 100n² = n / 20
For inputs n < 20, the program with running time 5n³ will be faster than the one with running time 100n².
Therefore, if the program is to be run mainly on inputs of small size, we would indeed prefer the program whose running time is O(n³).
However, as 'n' gets large, the ratio of the running times, which is n/20, grows arbitrarily large.
Thus, as the size of the input increases, the O(n³) program will take significantly more time than the O(n²) program.
So it is always better to prefer a program whose running time has the lower growth rate.
Low-growth-rate functions such as O(n) or O(n log n) are preferable.
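The crossover at n = 20 can be checked directly; the short program below (an added sketch, not part of the original notes) tabulates both running-time functions:

#include <stdio.h>

int main(void)
{
    /* Compare the two running-time functions around the crossover point. */
    for (long n = 5; n <= 40; n += 5) {
        long quad = 100 * n * n;    /* 100n^2 milliseconds */
        long cube = 5 * n * n * n;  /* 5n^3 milliseconds   */
        printf("n=%2ld  100n^2=%8ld  5n^3=%8ld  faster: %s\n",
               n, quad, cube, cube < quad ? "5n^3" : "100n^2");
    }
    return 0;
}

For n < 20 the 5n³ column is smaller; at n = 20 the two are equal, and beyond it the 100n² program wins by an ever-widening margin.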
Example 3:
Analysis of simple for loop
Now let’s consider a simple for loop:
for (i = 1; i <= n; i++)
    v[i] = v[i] + 1;
This loop will run exactly n times, and
because the inside of the loop takes
constant time, the total running time is
proportional to n.
We write it as O(n).
The actual number of instructions might
be 50n, while the running time might be
17n microseconds.
It might even be 17n+3 microseconds
because the loop needs some time to
start up.
The big-O notation allows a
multiplication factor (like 17) as well as
an additive factor (like 3).
As long as it is a linear function of n, the correct notation is O(n), and the code is said to have linear running time.
Example 4:
Analysis for nested for loop
Now let’s look at a more complicated example,
a nested for loop:
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        a[i][j] = b[i][j] * x;
The outer loop runs n times and, on each of those iterations, the inner loop also runs n times, so the assignment executes n·n = n² times. Since the assignment itself takes constant time, the running time is O(n²).
Example 5:
Analysis of matrix multiply
Let's start with an easy case: multiplying two n × n matrices. The code to compute the matrix product C = A * B is given below.
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++) {
        C[i][j] = 0;
        for (k = 1; k <= n; k++)
            C[i][j] = C[i][j] + A[i][k] * B[k][j];
    }
There are 3 nested for loops, each of which runs n times.
The innermost statement therefore executes n·n·n = n³ times.
That statement, which contains a scalar addition and multiplication, takes constant O(1) time.
So the algorithm overall takes O(n³) time.
Example 6:
Analysis of bubble sort
The main body of the code for bubble sort
looks something like this:
for (i = n-1; i >= 1; i--)
    for (j = 1; j <= i; j++)
        if (a[j] > a[j+1]) {
            temp = a[j];        /* swap a[j] and a[j+1] */
            a[j] = a[j+1];
            a[j+1] = temp;
        }
This looks like the double loop of the previous examples.
The innermost statement, the if, takes
O(1) time.
It doesn’t necessarily take the same
time when the condition is true as it
does when it is false, but both times are
bounded by a constant.
But there is an important difference
here.
The outer loop executes n times,
but the inner loop executes a number of
times that depends on i.
The first time the inner for executes, it
runs i = n-1 times. The second time it
runs n-2 times, etc.
The total number of times the inner if
statement executes is therefore:
(n-1) + (n-2) + ... + 3 + 2 + 1
This is the sum of an arithmetic series; it equals n(n-1)/2, so the inner statement executes O(n²) times and bubble sort runs in O(n²) time.
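The formula is easy to verify empirically; the sketch below (an added illustration, not from the notes) counts the comparisons performed by the loop above:

#include <stdio.h>

int main(void)
{
    int n = 8;
    int a[9] = {0, 8, 7, 6, 5, 4, 3, 2, 1};  /* 1-based data; a[0] is unused */
    long count = 0;
    int temp;

    for (int i = n - 1; i >= 1; i--)
        for (int j = 1; j <= i; j++) {
            count++;                          /* one comparison per inner step */
            if (a[j] > a[j + 1]) {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }

    /* For n = 8 this prints 28 on both sides, i.e. 8*7/2. */
    printf("comparisons = %ld, n(n-1)/2 = %d\n", count, n * (n - 1) / 2);
    return 0;
}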
Example 7:
Analysis of binary search
Binary search is a little harder to
analyze because it doesn’t have a for
loop.
But it’s still pretty easy because the
search interval halves each time we
iterate the search.
The sequence of search intervals looks
something like this:
n, n/2, n/4, ..., 8, 4, 2, 1
It’s not obvious how long this sequence is,
but if we take logs, it is:
log₂ n, log₂ n − 1, log₂ n − 2, ..., 3, 2, 1, 0
Since the second sequence decrements by 1 each time down to 0, its length must be log₂ n + 1.
It takes only constant time to do each test of binary search, so the total running time is just the number of times that we iterate, which is log₂ n + 1.
So binary search is an O(log₂ n) algorithm.
Since the base of the log doesn’t matter
in an asymptotic bound, we can write
that binary search is O(log n).
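Since this example gives no code, here is a minimal iterative sketch of binary search (an added illustration; the array a[] is assumed to be sorted in ascending order):

/* Binary search over sorted a[0..n-1]; returns the index of key, or -1.
 * Each iteration halves the search interval, so at most about
 * log2(n) + 1 iterations are performed. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* written this way to avoid overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;              /* discard the lower half */
        else
            hi = mid - 1;              /* discard the upper half */
    }
    return -1;                         /* key not present */
}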