Module 1 - Notes_pdf
System Life Cycle, Algorithms, Performance Analysis, Space Complexity, Time Complexity,
Asymptotic Notation, Complexity Calculation of Simple Algorithms.
1. Requirement Phase:
All programming projects begin with a set of specifications that defines the purpose of that
program.
Requirements describe the information that the programmers are given (input) and the results
(output) that must be produced.
Frequently the initial specifications are defined vaguely, and we must develop rigorous input
and output descriptions that cover all cases.
2. Analysis Phase
In this phase the problem is broken down into manageable pieces.
There are two approaches to analysis: bottom-up and top-down.
The bottom-up approach is an older, unstructured strategy that places an early emphasis on coding
fine points. Since the programmer has no master plan for the project, the resulting
program frequently contains many loosely connected, error-ridden segments.
The top-down approach is a structured approach that divides the program into manageable segments.
This phase generates diagrams that are used to design the system.
Several alternative solutions to the programming problem are developed and compared during
this phase.
3. Design Phase
This phase continues the work done in the analysis phase.
The designer approaches the system from the perspectives of both data objects that the
program needs and the operations performed on them.
The first perspective leads to the creation of abstract data types while the second requires the
specification of algorithms and a consideration of algorithm design strategies.
Ex: Designing a scheduling system for university
Data objects: students, courses, professors, etc.
Operations: insert, remove, search, etc.
i.e., we might add a course to the list of university courses, search for the courses taught
by some professor, etc.
Since abstract data types and algorithm specifications are language independent,
we specify the information required for each data object and ignore coding details.
Ex: Student object should include name, phone number, social security number etc.
5. Verification Phase
This phase consists of
• developing correctness proofs for the program,
• testing the program with a variety of input data, and
• removing errors.
Correctness Proofs
If done properly, correctness proofs and system tests will indicate erroneous code.
Removal of errors depends on the design and code.
While debugging a large, undocumented program written in ‘spaghetti’ code, each
corrected error may generate several new errors.
Debugging a well-documented program that is divided into autonomous units that
interact through parameters is far easier. This is especially true if each unit is tested
separately and then integrated into the system.
ALGORITHMS
Definition: An algorithm is a finite set of instructions to accomplish a particular task. In addition, all
algorithms must satisfy the following criteria:
(1) Input. There are zero or more quantities that are externally supplied.
(2) Output. At least one quantity is produced.
(3) Definiteness. Each instruction is clear and unambiguous.
(4) Finiteness. If we trace out the instructions of an algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
(5) Effectiveness. Every instruction must be basic enough to be carried out, in principle, by a person
using only pencil and paper. It is not enough that each operation be definite as in (3); it also must
be feasible.
We can describe an algorithm in many ways:
1. We can use a natural language like English.
2. We can use a graphical representation called a flowchart, but flowcharts work well only if the
algorithm is small and simple.
Example [Selection sort]: Suppose we must devise an algorithm that sorts a collection of n > 1
elements of arbitrary type. A simple solution is given by the following
[Selection Sort: In each pass of the selection sort, the smallest element is selected from the unsorted
list and exchanged with the element at the beginning of the unsorted list]
For the first position in the sorted list, the whole list is scanned sequentially. The first position
currently holds 14; searching the whole list, we find that 10 is the lowest value.
So we replace 14 with 10. After one iteration 10, which happens to be the minimum value in the list,
appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a linear manner.
We find that 14 is the second lowest value in the list and it should appear at the second place. We
swap these values.
After two iterations, two least values are positioned at the beginning in a sorted manner.
The same process is applied to the rest of the items in the array.
A pictorial depiction of the entire sorting process (not reproduced here) would show each pass:
from those elements that are currently unsorted, find the smallest and place it next in the sorted list.
We assume that the elements are stored in an array ‘list’, such that the ith integer is stored in the ith
position list[i], 0 <= i < n.
Algorithm 1.1 is our first attempt at deriving a solution.
To turn Program 1.1 into a real C program, two clearly defined subtasks remain: finding
the smallest integer and interchanging it with list[i].
We can solve this by using a function.
• Correctness Proof
Recursive Algorithm
PERFORMANCE ANALYSIS
An algorithm is said to be efficient and fast if it takes less time to execute and consumes less
memory space.
1. Space Complexity
2. Time Complexity
1. Space Complexity
The space complexity of an algorithm or a program is the amount of memory it needs to run to
completion.
S(P) = C + Sp(I)
Here C is the fixed space requirement and Sp(I) is the variable space requirement, which depends
on the instance characteristics I.
1. sum (A, n)
{
    int sum = 0, i;
    for (i = 0; i < n; i++)
    {
        sum = sum + A[i];
    }
    return sum;
}
Here Space needed for variable n = 1 byte
sum = 1 byte
i = 1 byte
Array A[i] = n bytes
Total Space complexity = [n+3] bytes
2. void main()
{
    int x, y, z, sum;
    printf("Enter 3 numbers");
    scanf("%d%d%d", &x, &y, &z);
    sum = x + y + z;
    printf("The sum = %d", sum);
}
Here Space needed for variable x = 1 byte
y = 1 byte
z = 1 byte
sum = 1 byte
Total Space complexity = 4 byte
3. sum (a, n, m)
{
    int s = 0, i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            s = s + a[i][j];
    return s;
}
Here Space needed for variable n = 1 byte
m = 1 byte
s = 1 byte
i = 1 byte
j = 1 byte
Array a[i][j] = nm bytes
Total Space complexity = [nm+5] bytes
2. Time Complexity
The time complexity of an algorithm or a program is the amount of time it needs to run to
completion.
T(P) = C + Tp
Here C is the compile time
Tp is the run time
For calculating the time complexity, we use a method called frequency count, i.e., counting
the number of steps:
Comments – 0 steps
Assignment statement – 1 step
Conditional statement – 1 step
Loop condition for ‘n’ iterations – n+1 steps
Body of the loop – n steps
Return statement – 1 step
Examples:
3. Iterative function for summing a list of numbers
4. Recursive summing of a list of numbers
When we analyze an algorithm, the step count depends on the input data. There are three cases:
a. Best case: The best case is the minimum number of steps that can be executed for the
given parameters.
b. Average case: The average case is the average number of steps executed on instances
with the given parameters.
c. Worst case: The worst case is the maximum number of steps that can be executed for
the given parameters.
ASYMPTOTIC NOTATION
Complexity of an algorithm is usually a function of n.
Behavior of this function is usually expressed in terms of one or more standard functions.
Expressing the complexity function with reference to other known functions is called asymptotic
complexity.
Three basic notations are used to express the asymptotic complexity
1. Big – Oh notation O
Formal method of expressing the upper bound of an algorithm’s running time.
i.e. it is a measure of the longest amount of time it could possibly take for an algorithm to
complete.
It is used to represent the worst case complexity.
f(n) = O(g(n)) if and only if there are two positive constants c and n0 such that
f(n) ≤ c g(n) for all n ≥ n0 .
Then we say that “f(n) is big-O of g(n)”.
Examples:
1. Derive the Big – Oh notation for f(n) = 2n + 3
Ans:
2n + 3 <= 2n + 3n
2n + 3 <= 5n for all n >= 1
Here c = 5
g(n) = n
so, f(n) = O(n)
2. Big – Omega notation Ω
f(n) = Ω(g(n)) if and only if there are two positive constants c and n0 such that
f(n) ≥ c g(n) for all n ≥ n0.
Then we say that “f(n) is omega of g(n)”.
Examples:
Derive the Big – Omega notation for f(n) = 2n + 3
Ans:
2n + 3 >= 1n for all n>=1
Here c = 1
g(n) = n
so, f(n) = Ω (n)
3. Big – Theta notation Θ
f(n) = Θ(g(n)) if and only if there are three positive constants c1, c2 and n0 such that
c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0.
Then we say that “f(n) is theta of g(n)”.
Examples:
Derive the Big – Theta notation for f(n) = 2n + 3
Ans:
1n <= 2n + 3 <= 5n for all n >= 1
Here c1 = 1
c2 = 5
g(n) = n
so, f(n) = Θ(n)
**See more problems in the notebook and tutorial notebook.