ALGORITHM DEVELOPMENT TECHNIQUES

(STRATEGIES)

A computer helps to manipulate data according to a list of instructions, called a
program. It can store a lot of data and reproduce it instantly, and it is also
called a universal information-processing machine. Computers are commonly used in
corporate businesses, educational organizations, research programs, medicine and
many other walks of life.

An algorithm strategy is a general approach to solve problems algorithmically. Such
a strategy can be applied to a variety of problems from different areas of
computing.

Learning design strategies is useful because it provides us guidance in designing
algorithms for new problems. Some of the most commonly used algorithm design
techniques are:

 Divide and Conquer: This is a special case of recursion in which a given
problem is divided into two or more sub-problems of exactly the same type, and
the solution to the problem is expressed in terms of the solutions to the
sub-problems; i.e. the list of elements is divided recursively into two
approximately equal parts, each part is solved independently (conquer), and the
partial solutions are then merged together into a single list. Quick sort and
merge sort are based on the divide-and-conquer concept.
The divide-and-conquer strategy solves a problem by: breaking it into
sub-problems that are themselves smaller instances of the same type of problem;
recursively solving these sub-problems; and appropriately combining their
answers.
A variation of the divide and conquer strategy is the top-down approach.
A top-down design, also called stepwise refinement, essentially breaks down a
system to gain insight into its compositional sub-systems. In a top-down
approach an overview of the system is first formulated, specifying but not
detailing any first-level subsystems. Each subsystem is then refined in greater
detail, sometimes over many additional subsystem levels, until the entire
specification is reduced to base elements. In short, the top-down approach means
decomposing the solution procedure into subtasks until a solution is obtained.
This approach produces readable and modular code that can be easily understood,
reused and maintained.

Example of Top-down analysis

TYPES OF ALGORITHMS
There is no universally accepted breakdown for the various types of algorithms;
however, there are common classes that algorithms are frequently agreed to
belong to. Among these are:
 Dynamic Programming Algorithms: This class remembers older
results and attempts to use this to speed the process of finding new
results.
 Greedy Algorithms: Greedy algorithms attempt not only to find a
solution, but also to find the ideal solution to any given problem.
 Brute Force Algorithms: The brute force approach starts at some
random point and iterates through every possibility until it finds the
solution.
 Randomized Algorithms: This class includes any algorithm that
uses a random number at any point during its process.
 Branch and Bound Algorithms: They form a tree of sub-problems to the primary
problem, following each branch until it is either solved or lumped in with
another branch.
 Simple Recursive Algorithms: This type of algorithm goes for a direct
solution immediately, then backtracks to find a simpler solution.
 Backtracking Algorithms: Backtracking algorithms test for a solution; if one
is found the problem is solved, if not the algorithm recurs once and tests
again, continuing until a solution is found.
 Divide and Conquer Algorithms: They are similar to branch and bound
algorithms, except that they use the backtracking method of recurring and
dividing a problem into sub-problems.
RECURSION

Recursion is the process of a function calling itself until a condition is
satisfied. A function which calls itself directly or indirectly again and again
until a condition is satisfied is called a recursive function.
The main characteristics of recursive algorithms are:
 Stopping condition: It specifies the situation when the result can
be obtained directly without the need of a recursive call.
 Recursive call: The algorithm calls itself at least once. The value of
the parameter corresponding to a cascade of calls should lead
eventually to the stopping condition.
EXAMPLES OF RECURSIVE FUNCTIONS
The factorial function
In general, problems that are defined in terms of themselves are good
candidates for recursive techniques. The recursive definition of the factorial
function is as follows.

factorial(int n)
    if (n==0) then
        return 1;
    else
        return n * factorial(n-1);

For example: factorial(3) = 3*factorial(2) = 3*2*factorial(1)
                          = 3*2*1*factorial(0) = 3*2*1*1 = 6.

Iterative implementation of the factorial function

n! = n × (n-1) × (n-2) × … × 2 × 1, and applying the brute force technique
(also known as the iterative technique) we obtain the following algorithm.

Factorial(n)
    factorial ← 1;   {initialize running product variable}
    for i ← 1 to n do
        factorial ← factorial × i;
    end for
    return factorial;

For example: 5! = 1*2*3*4*5 = 120.
The Fibonacci sequence
The Fibonacci sequence is often used to illustrate the power of dynamic
programming. The sequence is defined by the following recurrence relation:
F(0) = 0, F(1) = 1 and F(n) = F(n-1) + F(n-2).
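As a brief, hedged illustration of the dynamic-programming idea (remembering
older results), the following C sketch computes F(n) by storing previously
computed values; the function name fib_dp and the array bound of 50 are
assumptions made for this example.

#include <stdio.h>

long fib_dp(int n){
    long f[50];                  /* assumed upper bound for this sketch */
    int i;
    if (n <= 1)
        return n;                /* F(0) = 0, F(1) = 1 */
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i-1] + f[i-2];  /* reuse the two previously stored results */
    return f[n];
}

int main(void){
    printf("F(10) = %ld\n", fib_dp(10));   /* prints F(10) = 55 */
    return 0;
}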

Power function
Let us consider the problem of computing x^n, for x > 0 and n a natural number.
Starting from the definition, x^n = x × x × … × x = x × x^(n-1).
Applying the iterative and the recursive techniques respectively, we have the
following.
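As a minimal sketch consistent with the definition above (and with the full C
program given later in this document), the two versions might look like this;
the function names are assumptions made for the example.

int power_iterative(int x, int n){
    int result = 1, i;
    for (i = 1; i <= n; i++)
        result = result * x;     /* multiply x into the running product n times */
    return result;
}

int power_recursive(int x, int n){
    if (n == 0)
        return 1;                          /* stopping condition: x^0 = 1 */
    return x * power_recursive(x, n - 1);  /* recursive call on a smaller exponent */
}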

SOME FUNDAMENTAL SORTING ALGORITHMS

Sorting is a technique to rearrange the elements of a list in ascending or
descending order, which can be numerical, lexicographical, or any user-defined
order. Sorting can be classified into two types:
 Internal Sorts: This method uses only the primary memory during the sorting
process. All data items are held in main memory and no secondary memory is
required for this sorting process. There are 3 types of internal sorts:
(i) SELECTION SORT, e.g. the selection sort algorithm;
(ii) INSERTION SORT, e.g. the insertion sort algorithm;
(iii) EXCHANGE SORT, e.g. the bubble sort algorithm.
 External Sorts: Sorting a large amount of data requires external or
secondary memory. This process uses external memory, such as an HDD, to store
the data which does not fit into the main memory, so primary memory holds only
the data currently being sorted. All external sorts are based on the process of
merging: different parts of the data are sorted separately and then merged
together. E.g. merge sort.

SELECTION SORT
In selection sort the list is divided into two sub-lists, sorted and unsorted,
separated by an imaginary wall. We find the smallest element in the unsorted
sub-list and swap it to the beginning, and the wall moves one element ahead as
the sorted list grows and the unsorted list shrinks. Assume that we have a list
of n elements. Applying selection sort, the first element is compared with all
remaining (n-1) elements and the smallest element is placed at the first
location. Then the second element is compared with the remaining (n-2) elements.
At each comparison, the smaller element is swapped with the larger one.
Similarly, the entire array is checked for the smallest element and swapping is
done accordingly. We need n-1 passes or iterations to completely rearrange the
data.

Algorithm: Selection_Sort (A [ ], N)
Step 1: Repeat For I = 0 to N-2 Begin
Step 2:     Set Loc = I
Step 3:     Repeat For K = I+1 to N-1 Begin
Step 4:         If A[K] < A[Loc] Then
                    Set Loc = K
            End For
Step 5:     Swap A[I] with A[Loc]
        End For
Step 6: Exit

Example 1: A list of unsorted elements: 23 78 45 8 32 56
The list of sorted elements is now: 8 23 32 45 56 78

Example 2:

INSERTION SORT
Both the selection and bubble sorts exchange elements, but insertion sort does
not exchange elements. In insertion sort the element is inserted at an
appropriate place, similar to inserting a card into a hand of cards. Here the
list is divided into two parts, the sorted and unsorted sub-lists. In each pass,
the first element of the unsorted sub-list is picked up and moved into the
sorted sub-list by inserting it in a suitable position. If we have n elements,
we need n-1 passes to sort them. Insertion sort works the way you might sort a
hand of playing cards. It works this way:

 We start with an empty left hand [sorted array] and the cards face
down on the table [unsorted array].
 Then remove one card [key] at a time from the table [unsorted
array], and insert it into the correct position in the left hand [sorted
array].
 To find the correct position for the card, we compare it with each of
the cards already in the hand, from right to left.

INSERTION_SORT (A[ ])
1. FOR j ← 2 TO length[A]
2. DO key ← A[j]
3. {Put A[j] into the sorted sequence A[1 . . j − 1]}
4. i←j−1
5. WHILE i > 0 and A[i] > key
6. DO A[i +1] ← A[i]
7. i←i−1
8. A[i + 1] ← key
Example: the following figure shows the operation of INSERTION-SORT on
the array
A= (5, 2, 4, 6, 1, 3). Each part shows what happens for a particular
iteration with the value of j indicated. j indexes the "current card" being
inserted into the hand.

Read the figure row by row. Elements to the left of A[j] that are greater than
A[j] move one position to the right, and A[j] moves into the vacated position.
Example: A list of unsorted elements: 78 23 45 8 32 36. The results of insertion
sort for each pass are as follows:

A list of sorted elements now: 8 23 32 36 45 78


Example2:

BUBBLE SORT
In the bubble sort method the list is divided into two sub-lists, sorted and
unsorted. The smallest element is bubbled up from the unsorted sub-list. After
moving the smallest element, the imaginary wall moves one element ahead. The
bubble sort was originally written to bubble up the highest element in the list,
but it makes no difference whether the highest or the lowest element is bubbled.
This method is easy to understand but time consuming. In this type, two
successive elements are compared and swapped if they are out of order. Thus,
step by step, the entire array is checked. Given a list of n elements, the
bubble sort requires up to n-1 passes to sort the data.

Algorithm for Bubble Sort:


Bubble_Sort (A [ ] , N )
Step 1: Repeat For P = 1 to N – 1 Begin
Step 2: Repeat For J = 1 to N – P Begin
Step 3: If (A [ J ] < A [ J – 1 ] )
Swap (A [ J ] , A [ J – 1 ] )
End For
End For
Step 4: Exit
Example: A list of unsorted elements: 10 47 12 54 19 23. Bubbling up the highest
value is shown below.

A list of sorted elements now: 54 47 23 19 12 10

Example 2: A list of unsorted elements: P E O P L E
Here there are 6 elements, so the number of comparisons = 5+4+3+2+1 = 15, and
the number of passes = 6-1 = 5.
MERGE SORT
The basic concept of merge sort is to divide the list into two smaller sub-lists
of approximately equal size, and to repeat this procedure recursively until only
one element is left in each sub-list. After this, the various sorted sub-lists
are merged to form the sorted parent list. This process goes on recursively
until the original, fully sorted list is obtained. The algorithm for merge sort
is as follows.
Merge sort is based on the divide-and-conquer paradigm. Its worst-case running
time has a lower order of growth than insertion sort. Since we are dealing with
sub-problems, we state each sub-problem as sorting a sub-array A[p .. r].
Initially, p = 1 and r = n, but these values change as the recursion proceeds
through sub-problems.
To sort A[p .. r]:
 Divide Step: If a given array A has zero or one element, simply
return; it is already sorted. Otherwise, split A[p .. r] into two sub-
arrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r].
That is, q is the halfway point of A[p .. r].
 Conquer Step: Conquer by recursively sorting the two sub-arrays :
A[p .. q] and A[q + 1 .. r].
 Combine Step: Combine the elements back in A[p .. r] by merging
the two sorted sub-arrays A[p .. q] and A[q + 1 .. r] into a sorted
sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r). Note that the recursion bottoms out when the
sub-array has just one element, so that it is trivially sorted. To sort
the entire sequence A[1 .. n], make the initial call to the procedure
MERGE-SORT (A, 1, n).
MERGE-SORT (A, p, r)
1. IF p < r                          // Check for base case
2. THEN q = FLOOR[(p + r)/2]         // Divide step
3.     MERGE-SORT (A, p, q)          // Conquer step
4.     MERGE-SORT (A, q + 1, r)      // Conquer step
5.     MERGE (A, p, q, r)            // Combine step
E.g. A list of unsorted elements: 39 9 81 45 90 27 72 18

Sorted elements: 9 18 27 39 45 72 81 90
SEARCHING ALGORITHMS
Searching is a process of locating an element stored in a file. The
searching algorithms are:
LINEAR SEARCH ALGORITHM
The linear search technique is also known as the sequential search technique.
Linear search is a method of searching for an element in a list in sequence. In
this method, the array is searched for the required element from the beginning
of the list/array (or from the last element to the first), and the search
continues until the item is found or the entire list/array has been searched.
Algorithm:
Step 1: START
Step 2: input N
Step 3: for I←0 to N -1
Read A [ I ]
Step 4: Loc = -1
Step 5: For I = 0 to N- 1 do
Step 6: If ( ele = A[I]) Then
Step 7: Loc = I
[End if]
[End for]
Step 8: If ( Loc > = 0 ) then
Step 9: Print “ ele Found in location “ , Loc
Step 10: else
Step 11: Print “ ele not found ”
Step 12: Stop
Advantages:
 It is a simple and conventional method of searching data. The linear or
sequential name implies that the items are stored in a systematic manner.
 The elements in the list can be in any order, i.e. the linear search can be
applied to sorted or unsorted linear data structures.
Disadvantages:
 This method is inefficient when a large number of elements is present in the
list.
 It consumes more time and reduces the retrieval rate of the system.
Time complexity: O(n)
BINARY SEARCH ALGORITHM
Binary search is quicker than linear search. However, it cannot be applied to an
unsorted data structure. Binary search is based on the divide-and-conquer
approach. The binary search starts by testing the data in the middle element of
the array; this determines whether the target is in the first half or the second
half. If the target is in the first half, we do not need to check the second
half, and if it is in the second half there is no need to check the first half.
We repeat this process until we either find the target in the list or establish
that it is not present. We need 3 variables to identify the first, last and
middle elements. To implement the binary search method, the elements must be in
sorted order. The search is performed as follows:
 The key is compared with item in the middle position of an array
 If the key matches with item, return it and stop
 If the key is less than mid positioned item, then the item to be found
must be in first half of array, otherwise it must be in second half of
array.
 Repeat the procedure for lower (or upper half) of array until the
element is found.

Recursive algorithm for Binary Search
Binary_Search(a, key, lb, ub)
begin
Step 1: [initialization]
        lb = 0
        ub = n-1
Step 2: [search for the item]
        Repeat through Step 4 while the lower bound (lb) is less than or equal
        to the upper bound (ub).
Step 3: [obtain the index of the middle value]
        mid = (lb+ub)/2
Step 4: [compare to search for the item]
        if (key < a[mid]) then
            ub = mid-1
        otherwise if (key > a[mid]) then
            lb = mid+1
        otherwise if (key == a[mid])
            Write "match found"
            return (mid)
        return Binary_Search(a, key, lb, ub)
Step 5: [unsuccessful search]
        Write "match not found"
Step 6: [end of algorithm]

Iterative algorithm for Binary Search
Step 1: Low = 0
Step 2: High = N - 1
Step 3: Loc = -1
Step 4: While (Low <= High) Do
Step 5:     M = (Low + High) / 2
Step 6:     If (elemt = A[M]) Then
Step 7:         Loc = M; Go to Step 12
            [End If]
Step 8:     If (elemt < A[M]) Then
Step 9:         High ← M - 1
Step 10:    Else
Step 11:        Low ← M + 1
            [End If]
        [End of While loop]
Step 12: If (Loc >= 0) Then
Step 13:     Print "elemt found, search successful", Loc
Step 14: Else
Step 15:     Print "elemt not found, search unsuccessful"
         [End If]
Step 16: Stop
Comparison

ALGORITHMS ANALYSIS/COMPLEXITY

The complexity of an algorithm is the measure of the analysis of the algorithm.
Analyzing an algorithm means predicting the resources that the algorithm
requires, such as memory, communication bandwidth, logic gates and time.

The complexity of an algorithm "A" is the function f(n) which gives the running
time and/or storage space (time complexity and space complexity) requirement of
the algorithm in terms of the size "n" of the input data. Frequently, the
storage space required by an algorithm is simply a multiple of the data size
"n". Therefore, the term "complexity" shall refer to the running time of the
algorithm. Algorithmic complexity is generally written in a form known as Big O
notation, where the "O" represents the complexity of the algorithm and a value
"n" represents the size of the set the algorithm is run against. The two classes
of sorting algorithms seen here are O(n²), which includes the bubble, insertion
and selection sorts, and O(n log n), which includes merge sort. It is not always
possible to say that one sorting algorithm is better than another: the
performance of the various sorting algorithms depends upon the data being
sorted. Good algorithms are those which offer various trade-offs in efficiency,
simplicity, memory use and other factors.
FACTORS WHICH DETERMINE GOOD ALGORITHMS
There are generally two criteria used to determine whether one algorithm is
"better" than another. These are: space requirements and time requirements.
TIME COMPLEXITY
Time complexity estimates depend on what we define to be a fundamental
step. For the analysis to correspond usefully to the actual execution time,
the time required to perform a fundamental step must be guaranteed to
be bounded above by a constant.
SPACE COMPLEXITY
Space complexity estimates depend on what we define to be a
fundamental storage location. Such storage must offer reading and writing
functions as fundamental steps. Most computers offer interesting relations
between time and space complexity. In general, space complexity is
bounded by time complexity. Many algorithms that require a large time
can be implemented using small space.
COMPUTATIONAL COMPLEXITY IN TERMS OF THE SIZE OF THE LIST (n)
Most often it is computational time that is measured when looking for a more
suitable algorithm. The amount of memory needed by a program to run to
completion is referred to as space complexity. For an algorithm, time complexity
depends upon the size of the input; thus, it is a function of the input size
"n". The amount of time needed by an algorithm to run to its completion is
referred to as time complexity.
The cases one usually investigates in complexity theory are the worst case, the
best case and the average case.
 Worst case: the maximum value of O(n) for any possible input. The worst case
occurs when the item is the last element in the array Data or is not there at
all. In both cases, we get O(n) = n.
 Best case: the minimum possible value of O(n). We might get best-case
behavior from a sorting algorithm if its input is already sorted.
 Average case: we assume that the item is present in the array and is equally
likely to be in any position in the array. Thus, the number of comparisons can
be any of the numbers 1, 2, 3, …, n, and each number occurs with probability
p = 1/n.
O(n) = 1×(1/n) + 2×(1/n) + … + n×(1/n) = (n+1)/2; thus the average number of
comparisons needed to locate the item in the array Data is approximately equal
to half the number of elements in the Data list.

COMPLEXITY OF THE SELECTION SORT
The first pass results in n-1 comparisons, the second pass in n-2 comparisons.
We observe that as the iterations or passes increase, the comparisons decrease.
Finally, the total number of comparisons will be
f(n) = (n-1) + (n-2) + (n-3) + … + 2 + 1 = n(n-1)/2 = O(n²)
COMPLEXITY OF INSERTION SORT
The first pass of the algorithm results in one comparison and, in the worst
case, may result in one exchange also. The second pass results in two
comparisons and, in the worst case, may result in two exchanges. Continuing the
analysis, we observe that as the iterations or passes increase, the comparisons
and exchanges increase.
f(n) = 1 + 2 + 3 + … + (n-3) + (n-2) + (n-1) = n(n-1)/2 = O(n²)
COMPLEXITY OF THE BUBBLE SORT
The first pass results in n-1 comparisons, the second pass in n-2 comparisons
and, in the worst case, n-2 exchanges. Continuing the analysis, we observe that
as the iterations increase, the comparisons decrease.
f(n) = (n-1) + (n-2) + (n-3) + … + 2 + 1 = n(n-1)/2 = O(n²)
Comparison of Various Sorting Algorithms

Comparing Time Complexities for searching algorithms


Algorithm           Best case                          Worst case                                  Average case
Binary search       O(1)  (key is the middle element)  O(log n)                                    O(log n)
Sequential search   O(1)  (key is the first element)   O(n)  (key not present or is last element)  O(n)

CATEGORIZING PERFORMANCE
In theoretical analysis of algorithms it is common to estimate their
complexity in the asymptotic sense: estimate the complexity function for
arbitrarily large input. For instance, binary search is said to run in a
number of steps proportional to the logarithm of the length of the list
being searched, or in O(log n) ("in logarithmic time"). The following are the
asymptotic classes with their corresponding bound names: O(1): constant
algorithm, O(log n): logarithmic algorithm, O(n): linear algorithm, O(n²):
quadratic algorithm, O(n³): cubic algorithm, O(aⁿ): exponential algorithm,
O(n!): factorial algorithm.

EXPANDING YOUR KNOWLEDGE

BUILDING AND APPLYING YOUR KNOWLEDGE


The best algorithm to solve a given problem is one that requires less memory and
takes less time to complete its execution. But in practice it is not always
possible to achieve both of these objectives. There may be more than one
approach to solving the same problem: one approach may require more space but
take less time to complete its execution, while the other requires less space
but more time. Thus we may have to compromise one to improve the other. That is,
we may be able to reduce the space requirement by increasing running time, or
reduce running time by allocating more memory space. This situation where we
compromise one to better the other is known as the time-space tradeoff.
PROGRAMMING LANGUAGE OPERATORS
 Arithmetic operator: C provides all the basic arithmetic operators:
(+) addition, (–) subtraction, (*) multiplication, (/) division and (%)
modulo division-produces the remainder (r) of a/b. E.g. a+b, a-b,
a*b, a/b and a%b.
 Relational operator: These operators are used to compare two variables or
expressions. C provides the following relational operators: (<) less than, (<=)
less than or equal to, (>) greater than, (>=) greater than or equal to, (==)
equal to, (!=) not equal to.
 Logical operator: (&&) AND, (||) OR, (!) NOT. The logical operators && and
|| are used to test more than one condition. E.g. a>b && x == 10

For example: (4 == 4) && (5!= 1) evaluates to True (1), because
both operands are true.
(4 > 1) || (9 < 1) evaluates to True (1), because one operand is true
(4>1).
!(5 == 4) evaluates to True (1), because the operand is false.
 Increment and Decrement operator: (++) increment, (--) decrement. The
operator ++ adds 1 to the operand and -- subtracts 1; both are unary operators.
x = x+1 can be written as x++ or ++x, but note that x = ++m is not the same as
x = m++.
E.g. if m = 10 and x = ++m, then x = 11 and m = 11; whereas if m = 10 and
y = m++, then y = 10 and m = 11.
 Conditional Operator: A ternary operator pair "? :" is available in C to
construct conditional expressions of the form exp_1 ? exp_2 : exp_3, where
exp_1, exp_2 and exp_3 are expressions. exp_1 is evaluated first; if it is
nonzero (true), then exp_2 is evaluated and becomes the value of the whole
expression. If exp_1 is false, exp_3 is evaluated and its value becomes the
value of the expression.
E.g. a=10, b=15, x=(a>b) ? a : b;  so x is assigned the value of b, i.e. 15.
IMPLEMENTING SOME ALGORITHMS INTO C PROGRAMMING LANGUAGE

/* Factorial implementation */
#include <stdio.h>

int factorial(int m);

int main(){
    int n,fac;
    printf("enter a number n");
    printf("\n");
    scanf("%d",&n);
    fac = factorial(n);
    printf("the factorial of %d is %d", n,fac);
    printf("\n");
    return 0;
}

int factorial(int m){
    if (m==0 || m==1)
        return 1;
    else
        return m*factorial(m-1);
}

/* Power (x^n) implementation */
#include <stdio.h>

int power(int y, int m);

int main(){
    int n,x,p;
    printf("enter x and a number n");
    printf("\n");
    scanf("%d",&x);
    scanf("%d",&n);
    p = power(x,n);
    printf(" %d raised to the power %d is: %d", x,n,p);
    printf("\n");
    return 0;
}

int power(int y, int m){
    if (m==0)
        return 1;
    else
        return y*power(y,m-1);
}

/* Fibonacci sequence */
#include <stdio.h>

int Fibonacci(int m);

int main(){
    int n,fibo;
    printf("enter a number n");
    printf("\n");
    scanf("%d",&n);
    fibo = Fibonacci(n);
    printf("Fibonacci of %d is %d", n,fibo);
    printf("\n");
    return 0;
}

int Fibonacci(int m){
    if (m==0 || m==1)
        return m;
    else
        return Fibonacci(m-1)+Fibonacci(m-2);
}
The Fibonacci sequence corresponding to F0, F1, F2, F3, F4, F5, F6, F7… is 0, 1,
1, 2, 3, 5, 8, 13 …

/*Bubble sort implementation*/
#include<stdio.h>

int main(){
    int i,n,temp,j,arr[10];
    printf("Enter the number of elements in the Array: ");
    scanf("%d",&n);
    printf("\nEnter the elements:\n\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&arr[i]);
    }
    for(i=0 ; i<n ; i++){
        for(j=0 ; j<n-i-1 ; j++){
            if(arr[j]>arr[j+1]){   /* swapping condition is checked */
                temp=arr[j];
                arr[j]=arr[j+1];
                arr[j+1]=temp;
            }
        }
    }
    printf("\nThe Sorted Array is:\n\n");
    for(i=0 ; i<n ; i++){
        printf("Array[%d] = %d\n",i,arr[i]);
    }
    printf("\n\n");
    return 0;
}

/*Insertion sort implementation*/
#include<stdio.h>

int main( ){
    int a[10],i,j,k,n;
    printf("How many elements you want to sort?\n");
    scanf("%d",&n);
    printf("\nEnter the Elements into an array:\n\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    for(i=1;i<n;i++){
        k=a[i];                           /* key to insert */
        for(j=i-1; j>=0 && k<a[j]; j--)
            a[j+1]=a[j];                  /* shift larger elements one place right */
        a[j+1]=k;
    }
    printf("\n\n Elements after sorting: \n\n");
    for(i=0;i<n;i++)
        printf("Array[%d] = %d\n", i,a[i]);
    printf("\n\n");
    return 0;
}

/*Selection sort implementation*/
#include<stdio.h>

int main( ){
    int i,j,t,n,min,a[10];
    printf("\n How many elements to sort? ");
    scanf("%d",&n);
    printf("\n Enter elements for an array:\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    for(i=0;i<n;i++){
        min=i;
        for(j=i+1;j<n;j++)
            if(a[j] < a[min])     /* index of the smallest element in the unsorted part */
                min=j;
        t=a[i];
        a[i]=a[min];
        a[min]=t;
    }
    printf("\nAfter sorting the elements are:\n");
    for(i=0;i<n;i++)
        printf("Array[%d] = %d\n", i,a[i]);
    printf("\n\n");
    return 0;
}

/*Merge sort implementation*/
#include<stdio.h>

void disp( );
void mergesort(int,int,int);
void msortdiv(int,int);
int a[10],n;

int main(){
    int i;
    printf("\nEnter the n value:\n");
    scanf("%d",&n);
    printf("\nEnter elements for an array:\n");
    for(i=0;i<n;i++)
        scanf("%d",&a[i]);
    printf("\nBefore Sorting the elements are:\n");
    disp();
    msortdiv(0,n-1);
    printf("\nAfter Sorting the elements are:\n");
    disp();
    return 0;
}

void disp(){
    int i;
    for(i=0;i<n;i++)
        printf("%d ",a[i]);
}

void mergesort(int low,int mid,int high){
    int t[50],i,j,k;
    i=low;
    j=mid+1;
    k=low;
    while((i<=mid) && (j<=high)){
        if(a[i]>=a[j])
            t[k++]=a[j++];
        else
            t[k++]=a[i++];
    }
    while(i<=mid)
        t[k++]=a[i++];
    while(j<=high)
        t[k++]=a[j++];
    for(i=low;i<=high;i++)
        a[i]=t[i];
}

void msortdiv(int low,int high){
    int mid;
    if(low<high){
        mid=((low+high)/2);
        msortdiv(low,mid);
        msortdiv(mid+1,high);
        mergesort(low,mid,high);
    }
}

/*Iterative program for Linear Search*/
#include<stdio.h>

int linear(int [],int,int);

int main( ){
    int a[10], pos = -1, n, k, i;
    printf("\nEnter the number of values:");
    scanf("%d",&n);
    printf("\nEnter elements for an array:\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    printf("\nEnter the element to be searched:");
    scanf("%d",&k);
    pos=linear(a,n,k);
    if(pos != -1)
        printf("\n Search successful, element found at index:%d => Array[%d]\n",pos,pos);
    else
        printf("\n Search unsuccessful, element not found\n");
    return 0;
}

int linear(int a[],int n,int k){
    int i;
    for(i=0;i<n;i++){
        if(a[i]==k)
            return(i);
    }
    return -1;
}

/* Recursive program for Linear Search*/
#include<stdio.h>

int linear(int [],int,int);

int main( ){
    int a[10],pos=-1,n,k,i;
    printf("\nEnter the number of values:");
    scanf("%d",&n);
    printf("\nEnter elements for an array:\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    printf("\nEnter the element to be searched:");
    scanf("%d",&k);
    pos=linear(a,n,k);
    if(pos!=-1)
        printf("\n Search successful, element found at index:%d => Array[%d]\n",pos,pos);
    else
        printf("Search unsuccessful, element not found \n");
    return 0;
}

int linear(int a[],int n,int k){
    if(n<=0)               /* no elements left to examine: unsuccessful search */
        return -1;
    if(a[n-1]==k)          /* check the last element of the remaining range */
        return n-1;
    return linear(a,n-1,k);   /* recur on the first n-1 elements */
}

/*Iterative program for binary search */
#include<stdio.h>

int bsearch(int [],int,int);

int main( ){
    int a[10],pos,n,k,i;
    printf("\nEnter the number of values:");
    scanf("%d",&n);
    printf("\nEnter elements for an array:\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    printf("\nEnter the key value:");
    scanf("%d",&k);
    pos=bsearch(a,n,k);
    if(pos!= -1)
        printf("Search successful, element found at index:%d => Array[%d]\n",pos,pos);
    else
        printf("Search unsuccessful, element not found\n");
    return 0;
}

int bsearch(int a[],int n, int k){
    int lb=0,ub=n-1,mid;
    while(ub>=lb){
        mid=(lb+ub)/2;
        if(k<a[mid])
            ub=mid-1;
        else if(k>a[mid])
            lb=mid+1;
        else
            return(mid);      /* k == a[mid] */
    }
    return -1;
}

/* Recursive binary search*/
#include<stdio.h>

int bsearch(int [],int, int, int);

int main( ){
    int a[20],pos,n,k,i,lb,ub;
    printf("\nEnter the number of values:");
    scanf("%d",&n);
    printf("\nEnter elements for an array:\n");
    for(i=0 ; i<n ; i++){
        printf(" Array[%d] = ",i);
        scanf("%d",&a[i]);
    }
    printf("\nEnter the key value:");
    scanf("%d",&k);
    lb=0;
    ub=n-1;
    pos=bsearch(a,k,lb,ub);
    if(pos!=-1)
        printf("Search successful, element found at index:%d => Array[%d]\n",pos,pos);
    else
        printf("Search unsuccessful, element not found\n");
    return 0;
}

int bsearch(int a[ ], int k, int lb, int ub){
    int mid;
    if(ub<lb)
        return -1;                        /* key not present */
    mid=(lb+ub)/2;
    if(k<a[mid])
        return(bsearch(a,k,lb,mid-1));
    else if(k>a[mid])
        return(bsearch(a,k,mid+1,ub));
    else
        return(mid);                      /* k == a[mid] */
}

DATA STRUCTURES
A data structure is a kind of representation of the logical relationship between
related data elements. In a data structure, operations such as storage,
retrieval and access are carried out on the logically related data elements.
Data structures can be nested, i.e. we can have a data structure that consists
of other data structures. A data structure is the most convenient way to handle
data of different types, including abstract data types (ADTs), for a known
problem. A data type is a collection of values and a set of operations on those
values. System-defined data types are data types that have been defined by the
compiler of the language. The C language contains 4 basic data types: int,
float, char and double. Data types exist in various kinds: primitive (or
primary, or atomic) data types, composite (or complex, or secondary) data types
and abstract data types.
PRIMITIVE DATA TYPES
Typical primitive data types include:
 Character (char): usually stored in 8 bits (one byte) of internal
storage, unsigned char have values between 0 to 255 while signed
chars range from -128 to 127.
 Integer (int, short, long, byte): with a variety of precisions. C has
three classes of internal storage, namely short int, int, and long int,
in both signed and unsigned forms.
 Floating-point number (float, double, real, double precision): a float is
typically stored in 32 bits, with about 6 decimal digits of precision, while a
double uses 64 bits.
 Boolean: have the values true and false.
 Reference (also called a pointer or handle): a small value
referring to another object's address in memory.

The size of a data type is compiler dependent: one compiler may use 2 bytes to
represent an integer while another uses 4 bytes. In C, the size of a data type
can be obtained using the sizeof operator. For instance, with the declaration
int i, we typically have
sizeof(i) = sizeof(int) = 4.
COMPLEX DATA TYPES
Complex data types can be obtained from a composition of standard types: arrays,
strings, records, lists. The process of constructing a composite type is known
as composition.
RECORDS OR STRUCTURES
A structure is a group of data of different data types. A structure template
contains the definition of a structure name and a list of members along with
their data types. Declaring a structure variable allots memory storage to that
variable. The dot (.) operator is used to access the structure elements. The
syntax for a structure definition is as follows:
struct structurename {
Datatype member1;
Datatype member2;
…………………….
};
The following defines a structure named "student" containing three fields:
"name", "marks" and "total".
struct student {
char name[10];
int marks[6];
int total;
};
The "struct" keyword, which identifies the beginning of a structure
definition, must be followed immediately by the structure name, or tag.
Within the braces following the structure name is a list of the structure's
member variables. The size of a structure depends on the data types of its
fields.
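As a brief illustrative sketch (the variable name s1 and the sample values are
assumptions made for this example), a variable of the student structure above
can be declared and its fields accessed with the dot operator as follows:

#include <stdio.h>
#include <string.h>

struct student {
    char name[10];
    int marks[6];
    int total;
};

int main(void){
    struct student s1;              /* declare a structure variable */
    int i;
    strcpy(s1.name, "Ali");         /* access members with the dot operator */
    s1.total = 0;
    for (i = 0; i < 6; i++){
        s1.marks[i] = 10 + i;       /* sample marks, assumed for the example */
        s1.total += s1.marks[i];
    }
    printf("%s scored a total of %d\n", s1.name, s1.total);
    return 0;
}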
ARRAYS
An array is a collection of data storage locations, each having the same data
type and the same name. Each storage location in an array is called an array
element. A particular value is indicated by writing a number called the index or
subscript after the array name. For example, age[3] stands for the 4th element
in the array age, since the first index is zero in C. Arrays can be of any
variable type.

The syntax for declaring a one dimensional array is: datatype array-
name [size];

 Initializing an array means storing data into the array at design time or at
run time. Initialization at design time: in this method the data values are
stored into the array at the time of array declaration.
Syntax: datatype array-name[size] = {list of values};
Ex: int age[5] = {10,50,20,30,5};  or  int age[] = {10,50,20,30,5};
 Initialization at run time: to initialize an array at run time means to
input data values into the array at the time of execution of the program.
Ex: int age[10];
To read the elements of an array, a for loop is used:
    for (i=0; i<10; i++)
        scanf("%d",&age[i]);
To print the elements of an array, a for loop is used:
    for (i=0; i<10; i++)
        printf("%d\n",age[i]);
For a multidimensional array the principle is the same, but we use two indexes.
An initialization example is as follows:
    int a[6][2] = { 10,50,20,30,5,3,82,15,8,9,25,2 };
The following shows how to initialize and display a two-dimensional array:
    int a[5][2];
    for(i=0;i<5;i++)
        for(j=0;j<2;j++)
            scanf("%d",&a[i][j]);
    for(i=0;i<5;i++) {
        for(j=0;j<2;j++) {
            printf("%d\t",a[i][j]);
        }
        printf("\n");
    }

It is not necessary to initialize an array, but if not all the values in the
array are initialized, the remaining elements will contain garbage values.
Unlike an array, a structure has elements of different types. An array is a
derived data type whereas a structure is a programmer-defined one.
STRING
A string is a sequence of one or more characters. The syntax for the declaration
of a string is:
char array-name [size];  Ex: char str[10];  An example of string initialization
is:
char str[10] = {"MY STRING"};
str[0]='M'… An important characteristic of a string is its length, which is the
number of characters in it. The difference between a null character and a null
string is that a null character indicates the end of a string, while a null
string is a string with no characters. In C the gets() function receives a
string input up to the end of the line or a newline character, meaning that the
string may contain blank spaces and punctuation; the scanf() function, however,
accepts a string only up to the first blank space or punctuation.

The header file needed to perform string operations in the C language is
#include <string.h>, and to perform character operations the header file
#include <ctype.h> is needed.
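As a brief hedged sketch of these headers in use (fgets is used instead of gets,
which is unsafe and was removed from the C standard; the buffer size of 50 is an
assumption made for this example):

#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void){
    char str[50];                          /* assumed buffer size for this sketch */
    printf("Enter a line of text: ");
    if (fgets(str, sizeof(str), stdin)){   /* reads blank spaces and punctuation too */
        str[strcspn(str, "\n")] = '\0';    /* drop the trailing newline, if any */
        printf("length = %zu\n", strlen(str));                    /* <string.h> */
        printf("first character in upper case = %c\n",
               toupper((unsigned char)str[0]));                   /* <ctype.h>  */
    }
    return 0;
}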
POINTER
A pointer is a variable which contains the address in memory of another
variable. We can have a pointer to any variable type. A pointer is declared as
follows: datatype *identifier; e.g. int *ptr; where ptr is a pointer to int. The
unary or monadic operator & gives the "address of a variable". The indirection
or dereference operator * gives the "contents of the object pointed to by a
pointer".
int *p;
int a;
p=&a;

After this assignment, we say that 'p' is “referring to” (or “pointing to”) the
variable 'a'. The pointer 'p' contains the memory address of the variable
'a'. The advantages of using a pointer are as follows:
 A function cannot return more than one value, but a function can modify
several variables through pointers and thus act as if it returned more than one
value. Through pointers we can also access a variable that is declared outside a
function.
 The length and complexity of a program are reduced and the execution speed is
increased.
 Using pointers to an array of character strings helps save data storage space
in memory.
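As a small illustrative sketch of the first point (the function and variable
names are assumptions made for this example), a function can effectively return
two results by writing through pointer parameters:

#include <stdio.h>

/* Computes both the minimum and the maximum of two numbers through pointer
   parameters, effectively returning two values to the caller. */
void min_max(int a, int b, int *min, int *max){
    if (a < b){
        *min = a;                    /* write through the pointers */
        *max = b;
    } else {
        *min = b;
        *max = a;
    }
}

int main(void){
    int lo, hi;
    min_max(7, 3, &lo, &hi);         /* pass the addresses of lo and hi */
    printf("min = %d, max = %d\n", lo, hi);
    return 0;
}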
Disadvantages of pointers:
 If the programmer is not careful and consistent with the use of pointers, the
program may crash; the same can happen if sufficient memory is not available
during runtime for the storage the pointers refer to.
 Direct access to memory means you can do things that perhaps you
should not.
ABSTRACT DATA TYPES
A useful tool for specifying the logical properties of a data type is the
abstract data type or ADT. The term “abstract data type” refers to the
basic mathematical concept that defines the data type. Abstract Data
Type = Interface + Implementation. The interface defines the logical
properties of the ADT, and especially the profiles or signatures of its
operations. The implementation defines the representation of the data
structure and the algorithms that implement the operations.
COMMON EXAMPLES OF ADT
The stack
A stack is one of the most commonly used data structures. A stack is also called
a Last-In-First-Out (LIFO) system and is useful whenever we need to store and
retrieve data in last-in, first-out order. For example, the computer processes
instructions using a stack in which the next instruction to execute is at the
top of the stack. A stack is a linear list in which insertion or deletion can
take place only at one end, called the top. This structure operates in the same
way as a stack of trays.

(a) Stack after pushing elements 8,10,12,5,6


(b) Stack after popping 2 elements.
Common properties of stack
 A stack is a last-in-first-out ( LIFO ) structure.
 Insertion operation is referred as “PUSH” and deletion operation is
referred as “POP”.
 The most accessible element in the stack is the element at the
position “TOP”.
 Stack must be created as empty.
 Whenever an element is pushed onto the stack, it must be checked whether the
stack is full or not.
 Whenever an element is popped from the stack, it must be checked whether the
stack is empty or not.
 We can implement the stack ADT either with array or linked list.
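As a minimal, hedged sketch of an array-based stack (the names, the fixed
capacity of 100 and the messages are assumptions made for this example):

#include <stdio.h>
#define MAX 100                          /* assumed fixed capacity */

int stack[MAX];
int top = -1;                            /* the stack is created empty */

void push(int value){
    if (top == MAX - 1)                  /* check whether the stack is full */
        printf("Stack overflow\n");
    else
        stack[++top] = value;            /* insert at the TOP position */
}

int pop(void){
    if (top == -1){                      /* check whether the stack is empty */
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];                 /* delete from the TOP position */
}

int main(void){
    push(8); push(10); push(12); push(5); push(6);
    printf("%d\n", pop());               /* 6: the last element pushed is popped first */
    printf("%d\n", pop());               /* 5 */
    return 0;
}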

Applications of stack
 Recursive functions are implemented using stacks. The copies of
variables at each level of recursion are stored in stack.
 Compilers use stacks in syntax analysis phase to check whether a
particular statement in a program is syntactically correct or not.

 Computers use a stack during interrupts and function calls. The information
regarding actual parameters, return values, return addresses and machine status
is stored on the stack.
The queue
A queue is a linear data structure that permits insertion of a new element at
one end and deletion of an element at the other end.
Common properties of queue
 The end at which insertion of a new element can take place is called
"rear" and the end at which deletion of an element take place is
called "front".
 The first element that gets added into queue is the first one to get
removed from the list, hence queue is also referred to as First-In-
First-Out (FIFO) list.
 Queue must be created as empty.
 Whenever an element is inserted into the queue, it must be checked whether
the queue is full or not.
 Whenever an element is deleted from the queue, it must be checked whether the
queue is empty or not.
 We can implement the queue ADT either with array or linked list.
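As a similar hedged sketch of an array-based queue (the names and the fixed
capacity are assumptions made for this example):

#include <stdio.h>
#define MAX 100                          /* assumed fixed capacity */

int queue[MAX];
int front = 0, rear = -1;                /* the queue is created empty */

void enqueue(int value){
    if (rear == MAX - 1)                 /* check whether the queue is full */
        printf("Queue overflow\n");
    else
        queue[++rear] = value;           /* insert at the rear */
}

int dequeue(void){
    if (front > rear){                   /* check whether the queue is empty */
        printf("Queue underflow\n");
        return -1;
    }
    return queue[front++];               /* delete from the front */
}

int main(void){
    enqueue(8); enqueue(10); enqueue(12);
    printf("%d\n", dequeue());           /* 8: first in, first out */
    return 0;
}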

Applications of Queue
 Queue is used in time sharing system in which programs with the
same priority form a queue while waiting to be executed.
 Execution of Threads.
 Event and message queuing.
 Job scheduling: when jobs are submitted to a networked printer, they
are arranged in order of arrival, i.e. Jobs are placed in a queue.
Linked list
A linear linked list is a linear collection of data elements, called nodes,
where the linear order is given by means of pointers. Each node is divided
into two parts:
 The first part contains the information of the element and
 The second part contains the address of the next node (link /next
pointer field) in the list.

The 1st element of the list has the value 12 at address 3 (the beginning of the
list). The 2nd element of the list has the value 14 at address 4 (since the
pointer of the cell at address 3 is 4). The 3rd element of the list has the
value 10 at address 2 (since the pointer of the cell at address 4 is 2). The 4th
element of the list has the value 24 at address 1 (since the pointer of the cell
at address 2 is 1).
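As a minimal hedged sketch of such a node and a traversal (the names and sample
values are assumptions made for this example, and the nodes are allocated
dynamically rather than living at the fixed addresses of the figure):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int info;                   /* information part of the node */
    struct node *next;          /* link / next pointer field */
};

int main(void){
    /* build the list 12 -> 14 -> 10 -> 24 */
    int values[] = {12, 14, 10, 24};
    struct node *head = NULL, *tail = NULL, *p;
    int i;
    for (i = 0; i < 4; i++){
        p = malloc(sizeof(struct node));
        p->info = values[i];
        p->next = NULL;
        if (head == NULL)
            head = p;           /* the first node becomes the head of the list */
        else
            tail->next = p;     /* link the new node after the current tail */
        tail = p;
    }
    for (p = head; p != NULL; p = p->next)   /* traverse using the next pointers */
        printf("%d ", p->info);
    printf("\n");
    while (head){                            /* release the nodes */
        p = head->next;
        free(head);
        head = p;
    }
    return 0;
}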

Doubly linked list

In a doubly linked list, also called a two-way list, each node is divided into 3
parts. The first part is called the previous pointer field; it contains the
address of the preceding element in the list. The second part contains the
information of the element, and the third part is called the next pointer field;
it contains the address of the succeeding element in the list. In addition, two
pointer variables HEAD and TAIL are used, which contain the address of the first
element and the address of the last element of the list. A doubly linked list
can be traversed in both directions.
Binary tree
A tree is a data structure that is made of nodes and pointers, much like a
linked list, a stack or a queue, but unlike them a tree is a non-linear data
structure. The difference between them lies in how they are organized:
 The top node in the tree is called the root and all other nodes branch
off from this one.
 Every node in the tree can have some number of children. Each child
node can in turn be the parent node to its children and so on. Child
nodes can have links only from a single parent.
 Any node higher up than the parent is called an ancestor node.
Nodes having no children are called leaves. Any node which is
neither a root, nor a leaf is called an interior node.
 The height of a tree is defined to be the length of the longest path
from the root to a leaf in that tree (including the path to root). A
common example of a tree structure is the binary tree.
A binary tree is a tree in which each node can have maximum two
children. Thus each node can have no child, one child or two children. The
pointers help us to identify whether it is a left child or a right child.

For the binary tree above, we have the following:
Level 0: 1 node, Level 1: 2 nodes, Level 2: 4 nodes, Level 3: 8 nodes. The
maximum number of nodes a binary tree of depth/height d can have is 2^(d+1) - 1.
The number of leaf nodes in a complete binary tree at depth d is 2^d.
Application of trees
 A file system can be represented as a tree, with the top-most
directory as the root.
 An arithmetic expression can be represented by a tree whose leaf nodes are
the variables/values.
 Game trees: An interesting application of trees is the playing of games such
as tic-tac-toe.
We can picture the sequence of possible moves by means of a game tree in which
the root denotes the initial situation and the branches from the root denote the
legal moves that the first player could make. At the next level down, the
branches correspond to the legal moves by the second player in that situation,
and so on, with branches from vertices at even levels denoting moves by the 1st
player and branches from vertices at odd levels denoting moves by the 2nd
player. E.g. the tic-tac-toe game.

Common tree operations:


 Add: Places an element in the tree (where elements end up depends
on the kind of tree).
 Remove: Removes something from the tree (how the tree is
reorganized after a removal depends on the kind of tree).
Other operations may be necessary, depending on the kind of tree we use.
Tree Traversals
A tree traversal is a specific order in which to trace the nodes of a tree. To
perform a traversal of a data structure, we use a method of visiting every node
in some predetermined order. Traversals can be used to: test data structures for
equality, display a data structure, construct a data structure of a given size,
and copy a data structure.
There are three common tree traversals:
 In-order: left, root, right. To traverse a binary tree in in-order, we
traverse the left sub-tree, visit the root, and then traverse the right
sub-tree.
 Pre-order: root, left, right. To traverse a binary tree in pre-order, we
visit the root, traverse the left sub-tree, and then traverse the right
sub-tree.
 Post-order: left, right, root. To traverse a binary tree in post-order, we
traverse the left sub-tree, traverse the right sub-tree, and then visit the
root.
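As a minimal hedged sketch (the node structure, the helper function and the
sample tree are assumptions made for this example), the three traversals can be
written recursively as follows:

#include <stdio.h>
#include <stdlib.h>

struct tnode {
    int info;
    struct tnode *left, *right;
};

/* small helper to build a node */
struct tnode *make(int v, struct tnode *l, struct tnode *r){
    struct tnode *p = malloc(sizeof(struct tnode));
    p->info = v; p->left = l; p->right = r;
    return p;
}

void inorder(struct tnode *t){              /* left, root, right */
    if (t){ inorder(t->left);  printf("%d ", t->info); inorder(t->right); }
}
void preorder(struct tnode *t){             /* root, left, right */
    if (t){ printf("%d ", t->info); preorder(t->left); preorder(t->right); }
}
void postorder(struct tnode *t){            /* left, right, root */
    if (t){ postorder(t->left); postorder(t->right); printf("%d ", t->info); }
}

int main(void){
    /* sample tree with root 2, left child 1, right child 3 */
    struct tnode *root = make(2, make(1, NULL, NULL), make(3, NULL, NULL));
    inorder(root);   printf("\n");          /* prints: 1 2 3 */
    preorder(root);  printf("\n");          /* prints: 2 1 3 */
    postorder(root); printf("\n");          /* prints: 1 3 2 */
    return 0;
}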

Exercise: For the following binary trees perform the Pre-order traversal,
In-order traversal and Post-order traversal.

Solution : (1) Pre-order traversal: GEBDFKMR, In-order traversal:


BDEFGKMR and Post-order traversal: DBFERMKG (2) Pre-order traversal:
7,5,4,2,3,8,9,1, In-order traversal: 4,2,5,3,7,9,8,1 and Post-order traversal:
2,4,3,5,9,1,8,7
Data structures are classified in several ways:
 Linear: elements are arranged in a sequential fashion. Ex: array, linear
list, stack, queue.
 Non-Linear: elements are not arranged in sequence. Ex: trees, graphs.
 Homogeneous: all elements belong to the same data type. Ex: arrays.
 Non-Homogeneous: elements of different types are grouped to form a data
structure. Ex: classes.
 Dynamic: memory allocation of each element in the data structure is done
before its usage, using dynamic memory allocation (D.M.A.) functions. Ex: linked
lists.
 Static: all elements of the data structure are created at the beginning of
the program and cannot be resized. Ex: arrays.
Generally, data structures are classified into two types as follows:

The above abstract data types, when implemented, are known as data structures.

Comparison of static and dynamic data structures

PROGRAMMING
A computer program (also called software or just a program) is a sequence of
instructions written to perform a specified task with a computer. A good program
should produce correct results quickly, taking into account all the memory
constraints.

Learning to write a computer program is very much like learning any skill.
First, we should understand the problem well and then try to solve it in a
logical manner. Computer programming is the process of writing, testing,
troubleshooting, debugging and maintaining a computer program, and the
statements that make up a computer program are collectively referred to as
program code. An effective program is one which gives results for all different
inputs, including wrong input. While creating a program, we need to follow a
certain systematic approach. The program structure is implemented using a
top-down or bottom-up approach.
ATTRIBUTES (CHARACTERISTICS) OF A GOOD PROGRAM
Every computer needs proper programs to perform the assigned task. The
quality of the program depends upon the instructions given to it. However,
it is required to provide the proper and correct instructions to the
computer in order to produce a correct and desired output. Some of the
most essential attributes of good computer program include:
 Efficiency: programs should not make wasteful use of system
resources such as memory and processor cycles. Efficiency therefore
includes responsiveness, processing time, and memory utilization.
 Maintainability: programs should be written in such a way that they may
evolve to meet the changing needs of customers. This is a critical attribute
because program change is an inevitable consequence of a changing business
environment.
 Dependability: Program dependability has a range of characteristics,
including reliability, security and safety. Dependable programs should not cause
physical or economic damage in the event of system failure. Reliability is the
ability of a program to perform its intended function accurately even if there
are small changes in the computer system. Moreover, the program must be able to
handle unexpected situations like wrong input or no input.
 Portability: Portability refers to the ability of an application to run
on different platforms (operating systems) with or without minimal
changes. Java programs are portable.
 Usability: programs must be usable, without undue effort, by the
type of user for whom it is designed. This means that it should have
an appropriate user interface and adequate documentation.
PROGRAMMING PARADIGMS
A programming paradigm (or technique) is a fundamental style of
computer programming. It describes a programming language approach
for solving a problem. High level languages can be classified under the
following paradigms:

PROCEDURAL-ORIENTED (STRUCTURED) PROGRAMMING


Procedural or imperative programming languages are languages that use
sequences of instructions in the form of algorithms and subroutines.
Software artefact
The end result of program design and construction is an artefact, a piece
of software that when executed solves some given problem, hopefully the
one required to be solved.

Design
"Design" means to plan or mark out the form and method of solution.
Software design takes a real-world problem and produces a plan for a
computer-based solution.
Structured design
Structured design is a disciplined process of deciding which components,
interconnected in which way, will solve some well-specified problem.
Structured design relies on a principle known as DIVIDE and CONQUER:
a problem is divided into smaller and smaller sub-problems, so that each
sub-problem will eventually correspond to a component of the system which solves
the problem.
The partitioning process into smaller and smaller sub problems is done until
the sub problems are
 manageably small, and
 solvable separately, i.e. relatively independent of one another.
Good design
Good design is an exercise in partitioning and organising the pieces of the
system so that each is:
 cohesive, and
 loosely coupled
We call the pieces of the system, modules or components or units.
The modules/components/units are plugged together to create the system.
Modules/components/units can be implemented as subroutines
(procedures and functions).
Cohesive/Cohesion
Cohesion measures the strength of the interconnection between
elements within a module.
To achieve cohesion, highly interrelated parts of the real-world problem
should be in the same piece of the software system, i.e. things that
belong together should go together.

Above is a schematic mapping from a problem in a real-world system to a highly-cohesive software


architecture of a structured design for a computer-based system to solve the problem.

To achieve a solution in which modules/components are highly-cohesive


the software system needs to be broken down into modules

• which are highly independent and
• which accomplish a single objective.
The aim is to achieve functional cohesion, i.e. modules which contain only
one function or perform only one logically-related task.
Loosely coupled
Coupling measures the strength of relationships between modules. The
objective of structured design is to minimise the coupling between
modules, so that they will be as independent as possible.
The lower the coupling, the less likely that other modules will have to be
considered in order to
• create a module
• understand a given module
• debug a given module
• change a given module.
Coupling results from connections.
A connection exists when an element of code references a location in
memory defined outside the module. Some connection must exist among
modules in a program, or else they would not be part of the same
program.
The objective is to minimise the coupling among modules.
To minimise coupling among modules:
• Subroutines or modules should only be allowed to access the data which they
need to perform their assigned task
• All data transfer between modules is visible in the module
parameters
• There must be no hidden flows of data via global variables or
shared data areas
• There should be no control information passing between modules,
e.g. Boolean flags
• The number of module parameters should be minimal.
Loose coupling is achieved when a module’s data interface with other
modules is its module parameter list. Here we interpret module to mean a
subroutine (procedure/function).
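As a tiny hedged sketch of this idea (the function and variable names are
assumptions made for this example), a loosely coupled module receives everything
it needs through its parameter list and returns its result through the function
return value, with no global data:

#include <stdio.h>

/* All data enters through the parameter list and leaves through the return
   value: no global variables and no hidden flows of data. */
double average(const int data[], int count){
    int i;
    double sum = 0.0;
    for (i = 0; i < count; i++)
        sum += data[i];
    return (count > 0) ? sum / count : 0.0;
}

int main(void){
    int marks[] = {12, 15, 9, 18};
    printf("average = %.2f\n", average(marks, 4));
    return 0;
}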
Hierarchy charts
We show the software architecture of the simple calculator system
resulting from the structured design approach in Figure below. We call this
a hierarchy chart.

This doesn’t tell the whole story because the chart doesn’t show the
coupling between the modules/components.
Loose coupling is achieved when a module’s data interface with other
modules is its module parameter list.

Program structure in Pascal of simple calculator written as procedures
Programming task:
Create a simple calculator program, based on the program structure shown in the
figure above, in the C programming language.

Structured design does not address the issue of how to write the program for the
body of each procedure, only the division of a system into its components and
how those components fit together to produce a solution. To write the program
for the body of the procedures we use the principles of structured programming.
Structured programming principles
Structured programming advocates a disciplined approach to the
construction of programs in order to avoid problems which can arise if the
approach is not disciplined.
At the lowest level, the main principles of structured programming are
concerned with the flow of control through a program unit such as shown
in Figure below. The most fundamental idea is that the main flow of
control through a program unit should be from top to bottom of the
program.
This translates into the requirement that every block (sequence, selection and iteration) should have one entry point and one exit point. In the Figure below this means that the flow of control enters at the top of a dotted block and exits at the bottom. Within a block, the flow doesn't have to be forward, e.g. iteration.
Program unit showing flow of control from top to bottom through the three
basic control constructs, sequence, selection and iteration.
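The figure is not reproduced here; the following small C fragment, with illustrative variable names, shows the three constructs, each entered at the top and left at the bottom:

#include <stdio.h>

int main(void)
{
    int total = 0;                      /* sequence: statements executed in order */
    int limit = 5;

    if (limit > 0) {                    /* selection: one entry point, one exit   */
        printf("Counting up to %d\n", limit);
    } else {
        printf("Nothing to count\n");
    }

    for (int i = 1; i <= limit; i++) {  /* iteration: flow loops back inside the  */
        total += i;                     /* block, but the block is still entered  */
    }                                   /* at the top and left at the bottom      */

    printf("Total: %d\n", total);       /* main flow continues from top to bottom */
    return 0;
}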
We use structured programming because it leads to programs which are
easier to understand, maintain and reason about.
A goal of structured programming is to reduce the conceptual gap
between the program text and the corresponding computations. In
other words, how can you be sure that your program does what it is
required to do, i.e. meets its specification?
Writing programs that are easy to understand contributes to achieving this
goal even if it is done at the expense of efficiency, e.g. execution time and
memory requirements.
Structured programming requires that:
1. The main flow of control through a program unit should be from top
to bottom of the
program.
2. Program blocks should have one entry point and one exit point
3. Meaningful identifiers should be used for variables, subroutines
(procedures and functions), etc, to aid readability and understanding
4. Indentation should be used that reflects the structure of the program
and which aids readability and understanding
5. The following control constructs should be used: sequence, selection
and iteration
6. GoTos should be avoided in all but the one case of a major error in
the computation which would require, if handled in a structured
programming way, convoluted and difficult-to-understand program
source code to be written
7. The use of global variables which are used in a global way should be
avoided
8. Data should be passed to subroutines in subroutine parameters and results returned through subroutine parameters or, preferably, as a function return value
9. Local variables should be used for handling data within subroutines.
Examples of languages that support this style of programming: FORTRAN (FORmula TRANslation), COBOL (COmmon Business-Oriented Language), Pascal and C.
Subroutines (procedures/functions)
A subroutine is a named 'out of line' block of code which may be
executed (called) by simply writing its name in a program statement. By
encapsulating and naming a block of instructions in a program it becomes
possible to call the block from other parts of the program.
This is very useful in situations where the same block of instructions or
action or calculation needs to be repeated in multiple places in a program.
A subroutine may be:
 Built-in / library-defined subroutines: Some operations are already programmed and stored in the C library so that they can be called in the program. These functions are called library functions, e.g. printf(), sqrt(), pow(). A sketch showing both kinds of subroutine follows this list.
 User defined subroutines: is a complete and independent
program unit, which can be used (or invoked) by the main program
or by other sub-programs.
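A minimal sketch combining both kinds of subroutine; the function name cube_volume is an illustrative assumption (when compiling with gcc, the math library is typically linked with -lm):

#include <stdio.h>
#include <math.h>                                 /* library subroutines: sqrt(), pow() */

/* User-defined subroutine: a complete, independent unit that the
   main program invokes simply by writing its name. */
double cube_volume(double side)
{
    return pow(side, 3);
}

int main(void)
{
    double side = sqrt(16.0);                     /* library function call      */
    printf("Volume: %f\n", cube_volume(side));    /* user-defined function call */
    return 0;
}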
Figure below shows the control structure of a program block consisting of
program statements S1, S2, S3, a selection statement controlling
execution of two statements S4 and S5. This selection statement is
followed by a loop which controls a block of statements S.
A procedure T consists of program statements S1, S2, S3 (different ones
from the program block statements) and a loop controlling a block of
statements S (also different from the program block S).
The flow of control in the program block is in line, forwards from the
beginning to the end, except when statement S1 is encountered. This
statement transfers control 'out of line’ to procedure T.
Flow of control in procedure T is from its beginning to its end.
On reaching the end of T, control is transferred back to the program block.
Execution is resumed in the program block at statement S2, the statement
immediately following S1, where the call to T occurred. If T were a function then control would be returned to statement S1 along with the function's result. Procedure T is a subroutine.
In-line flow of control transferring to out-of-line flow of control in a named out-of-line block of code
Advantages of using subroutines in programs
Without programming language support for subroutines (procedures and
functions), all programming would consist of inline blocks of program
instructions.
This would make even programs of modest size
• difficult to understand
• difficult to debug.
The program block is reduced in length because where it relies on the
instructions in subroutines these are referenced by a short and descriptive
name (Abstraction).
If subroutines are self-contained they can be worked on separately, useful
when writing and debugging software. In software projects involving a
team of developers, different subroutines can be given to different
members of the team to write and debug.
The more self-contained (independent) the subroutine the easier it is to
write and debug without having to understand the program block in which
it is called.
Subroutines written for one program may be reused in a different
program. The more self-contained they are the easier it is to do this.
If a subroutine is particularly useful, it may be added to a library of
subroutines which can be imported into any program which needs them.
Using parameters to pass data within programs
A subroutine parameter is one way of passing data into and out of a
subroutine. When a subroutine is called, any data passed to it via the
subroutine parameter mechanism is copied into the memory area
reserved for subroutine formal parameters as shown in the memory
schematic in Figure below.
An example of a pseudocode program with a subroutine call involving data associated with two program variables being passed to the subroutine
Memory map showing an area reserved for program variables and an area reserved for
subroutine variables
Passing data to Modules / Returning data from Modules
 Call by value: means sending the values of the arguments- The
value of each of the actual arguments in the calling function is
copied into corresponding formal arguments of the called function.
The changes made to the formal arguments have no effect on the
values of actual arguments in the calling function. This technique of
passing arguments is called call by value illustrated by swapv(int x,
int y) function in the following example.
 Call by reference/Address: means sending the addresses of the
arguments- the addresses of actual arguments in the calling function
are copied into formal arguments of the called function. Using these
addresses we are actually working on actual argument so changes
will be reflected in the calling function. This technique of passing
arguments is called call by reference, illustrated by swapr(int *x,int
*y) in following example:
#include <stdio.h>

void swapv(int x, int y);    /* call by value               */
void swapr(int *x, int *y);  /* call by reference (address) */

int main()                   // main is the calling function
{
    int i = 10, j = 20;
    printf("The values before swap is i: %d, j: %d\n", i, j);
    swapv(i, j);             // i and j are the actual arguments
    printf("The values after swapv is i: %d, j: %d\n\n", i, j);
    swapr(&i, &j);           // swapv and swapr are the called functions
    printf("The values after swapr is i: %d, j: %d\n\n", i, j);
    return 0;
}

void swapv(int x, int y)     /* x and y receive copies of the values of i and j */
{
    int temp;
    temp = x;
    x = y;
    y = temp;
}

void swapr(int *x, int *y)   /* x and y receive the addresses of i and j */
{
    int temp;
    temp = *x;
    *x = *y;
    *y = temp;
}

The values of i and j are still 10 and 20 after calling swapv (call by value), but after calling swapr (call by reference) they become i = 20 and j = 10.
We can have the following scenarios with respect to functions (a sketch of each follows this list):
 Functions with no parameter and no return value.
 Functions with parameter (s) and no return value.
 Functions with parameter (s) and return value.
 Recursive functions.
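A minimal sketch of the four scenarios, with illustrative function names:

#include <stdio.h>

void greet(void)                 /* no parameters and no return value    */
{
    printf("Hello!\n");
}

void show_square(int n)          /* parameter(s) and no return value     */
{
    printf("%d squared is %d\n", n, n * n);
}

int add(int a, int b)            /* parameter(s) and a return value      */
{
    return a + b;
}

int factorial(int n)             /* recursive: the function calls itself */
{
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

int main(void)
{
    greet();
    show_square(4);
    printf("Sum: %d\n", add(2, 3));
    printf("5! = %d\n", factorial(5));
    return 0;
}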
Local variables in subroutines
Declaring local variables
A subroutine may declare its own variables and use these within the body
of the subroutine as shown in the subroutine DoExample
Subroutine with local variables
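The figure is not reproduced here. A minimal C sketch of what such a subroutine might look like, assuming x is an INOUT parameter (passed by address in C) and with a purely illustrative body:

void DoExample(int *x)      /* x is the formal parameter (the INOUT interface) */
{
    int s, t;               /* s and t are local variables: they come into     */
                            /* existence when DoExample starts executing and   */
                            /* disappear when it stops                         */
    s = *x + 1;
    t = s * 2;
    *x = s + t;
}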
The variables s and t are known as local variables. They are only visible
inside subroutine DoExample, i.e. they cannot be accessed from outside
the subroutine. In fact, they do not exist until the subroutine starts
executing. They disappear when the subroutine stops executing. Thus any
values that they hold are stored temporarily.
Lifetime: The lifetime of a local variable is the lifetime of an execution of
the subroutine in which the local variable is declared.
Visibility: The scope of a local variable is the scope of the subroutine in
which it is declared – scope means where the local variable is visible and
can be used.
Why use local variables?
Support for modularisation
Subroutines enable modularisation of a program.
A solution to a problem can be divided into separate and independent
modules. These modules can be implemented as subroutines.
Undesirable side-effects
In figure below, the formal parameter of subroutine DoExample is x.
However, instead of declaring local variables, the subroutine manipulates
two global variables s and t.
These global variables come into existence when the program starts
executing and only disappear when it finishes executing.
They are visible and therefore accessible inside procedure DoExample.
Unfortunately, this means that when DoExample is called and it executes
the values of s and t are altered as shown below.
This is called a side-effect of executing subroutine DoExample.
Demonstrating the side-effect of using global variables inside a subroutine
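Again the figure is not reproduced; a hedged C sketch of the global-variable version (the names s, t, x and y come from the text, the bodies are assumptions):

#include <stdio.h>

int s, t;                         /* global variables: they exist for the whole  */
                                  /* execution of the program                    */

void DoExample(int *x)            /* formal parameter x (the INOUT interface)    */
{
    s = *x + 1;                   /* side-effect: the globals s and t are        */
    t = s * 2;                    /* altered every time DoExample is called      */
    *x = s + t;
}

int main(void)
{
    int y = 5;
    DoExample(&y);                /* after the call, s and t have been changed   */
    printf("y = %d, s = %d, t = %d\n", y, s, t);
    return 0;
}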
Side-effects are undesirable because
• debugging a program is made harder - isolating the source of an error is
made more difficult,
the whole program may need to be examined
• the independence of the subroutine is reduced
• it cannot be designed without considering the context in which it will be
used
• it cannot be debugged without considering the context in which it will be
used
• its reusability is reduced because it will be more difficult to use it in
another program.
The only variable outside of DoExample which should be manipulated by
DoExample is y because this variable is passed into DoExample through its
INOUT interface, the formal parameter x.
Contrasting local variables with global variables
Contrasting the scope of global variables and local variables
The space allocated to s and t is retained throughout the program's execution. We call this kind of allocation of memory static memory allocation and, as a consequence, the global variables are called static variables. The space allocated to s and t is de-allocated on exit from the program.
FUNCTIONAL PROGRAMMING PARADIGM
The functional programming paradigm is an example of declarative programming, in which an algorithm is expressed as function calls. This means
that the lines of code look and behave like mathematical functions,
requiring an input and producing an output. A value is produced for each
function call. The idea is that there is one main function, which in turn calls
further functions in order to achieve a specific task. Each function may call
another function, or call itself recursively in order to generate a result.
Languages that support functional programming include Python, Perl, C#, D, F#, Java 8 and Delphi XE.
OBJECT ORIENTED PROGRAMMING (OOP)
The key difference between procedural and object-oriented programming
is that in procedural programming, the lines of code and the data the code
operates on are stored separately. An object-oriented program puts all
the data and the processes that can be carried out on that data together
in one place called an object and allows restrictions to be placed on how
the data can be manipulated by the code. Object-oriented programming
can be described as being organised in a way that reflects the real world.
Object-oriented programming languages include: C++, Java, Python, C#
and Perl.
Benefits of reusability
 Rather than starting from scratch with each new application, a
programmer will consult libraries of existing components to see if
any are appropriate as starting points for the design of a new
application.
 Time Savings: by relying upon existing components there is less
software to develop and hence applications can be built quicker.
 Decreased maintenance effort: Using someone else’s
components decreases the amount of maintenance effort that the
application developer needs to expend.
 Consistency: Reliance on a library of standard components will
tend to spread a consistency of design message throughout a team
of programmers working on an application.
 Libraries can be created enabling code to be reused easily.
 Objects can inherit attributes and behaviours making code reusable
throughout the program.
The three main principles of object-oriented programming are:
encapsulation, polymorphism, and inheritance.
1. Encapsulation: The concept of keeping the data and the code
within the same object is called encapsulation. The code is known as
methods, which are the subroutines contained within an object that
are designed to perform particular tasks on the data within the
object. This is the main concept behind object-oriented languages
meaning that the objects are self-contained. The implication of this
is that there are far fewer side-effects than with procedural
languages. A side effect is where a subroutine changes the value of
a variable which then has implications for other parts of the
program. This concept is sometimes called information hiding, which
means that the data are directly available only within the object that actually needs to use them. Code outside of the class can only access data
through the defined methods.
The two main building blocks of an object-oriented program are
classes and objects:
Class: is a blueprint or master copy that defines a related group
of things. It contains
properties, which describe what the data are, and methods, which
describe the way in
which the data can behave. However, it does not store any data.
Classes can be created
based on other classes.
Objects: are created from a class. Every object is defined as an
instance of a class so
will have the same properties and methods from the class from
which it is built. It will
also contain the data on which the methods will be run.
2. Inheritance: it is the process by which one object can acquire the
properties of another object. The derivation of one class from
another so that the attributes and methods of one class are part of
the definition of another class. The first class is often referred to the
base or parent (super) class. The child is often referred to as a
derived or sub-class. E.g. both cats and dogs are animals and
therefore share some common attributes and behavior but may add
their own that are unique to that particular type. Inheritance
provides reusability i.e. adding additional features to an existing
class without modifying it.
A class diagram for Account
3. Polymorphism: The literal meaning of polymorphism is to take on
many shapes. In object-oriented programming it describes a
situation where a method inherited from a base class can be
redefined and used in different ways depending on the data in the
subclass that inherited it.
Other OOP concepts
4. Abstraction and generalization: Abstraction reduces complexity
by hiding irrelevant detail while generalization reduces complexity
by replacing multiple entities which perform similar functions with a
single construct.
5. Modularity: It is the breaking up of a complex system into small, self-contained pieces that can be managed independently. Each module is capable of performing its specific function without the need for other modules.
6. Hierarchy: It is the ordering of abstractions into a tree-like structure
so that level of abstraction increases from the base where there is
less abstraction to the top where there is the main, single
abstraction.
7. Interface: the Interface separates the implementation and defines
the structure. Interface can be used to define a generic template
and then one or more abstract classes to define partial
implementations of the interface.
8. Members of the class are the code and data that constitute a
class.
9. Member variables or instance variables are the data defined by
the class.
10. Member methods or just methods are the code that operates on
that data. Method is Java’s term for a subroutine or function in "C".
11. Attribute: A characteristic of an object. Collectively the attributes
of an object describe its state. E.g. a Car may have attributes of
Speed, direction, registration Number…
PROBLEM SOLVING PROCESS
A typical programming task can be divided into two phases:
 Problem solving phase: produce an ordered sequence of steps
that describe solution of problem. This sequence of steps is called an
algorithm
 Implementation phase: implement the program in some
programming language
The problem-solving process for a computational problem can be outlined
in the following order:
 Define the problem.
 Create a mathematical model.
 Develop a computational method for solving the problem.
 Implement the computational method.
 Test and assess the solution.
PROGRAMMING ATTITUDES
Before writing a program:
 Have a thorough understanding of the problem
 Carefully plan an approach for solving it
While writing a program:
 Know what “building blocks” are available
 Use good programming principles
Coding an algorithm (Programming):
At the final step, the algorithm, expressed in flowcharts or pseudocode, is translated into a programming language (Java, C, etc.). A program is a sequence of instructions that specifies how to perform a computation (mathematical or symbolic).
 The source code is written in programming language.
 Purpose is to create a program that exhibits a certain behavior.
PROBLEM SOLVING METHODOLOGY
PROGRAMMING STEPS
Step 1: Problem analysis-The designer must fully understand the
nature of the problem to be solved and the need for the solution.
Step 2: Problem statement-Develop a detailed statement of the
problem to be solved with a computer program.
Step 3: Processing scheme-Define the inputs required and the outputs
to be produced by the program.
Step 4: Algorithm-Design the step-by-step procedure by fragmenting the overall problem into smaller, manageable parts. Modular programming (top-down design and bottom-up design) is used to obtain a proper solution. This list of tasks is the structure plan; it is written in pseudocode or as a flowchart. The goal is to design a plan that is understandable and easily translated into a computer language.
Step 5: Program algorithm-Translate or convert the algorithm into a
computer language (e.g. C) and debug the syntax errors until the tool
performs successfully.
Step 6: Evaluation-Test all of the options and conduct a validation study
of the computer program, compare results
 with other programs that do similar tasks,
 experimental data if appropriate,
 theoretical predictions.
Step 7: Application-Solve the problems that the program was designed to solve.
PROGRAMMING LANGUAGES
A programming language is determined by its syntax and semantics. The syntax is the set of rules guiding the formulation of program statements. It specifies how words and symbols are put together to form statements and expressions; it is the grammar of the language. Semantics describes the meaning of a program statement; it is the vocabulary of the language.
Computer languages are used to develop programs to be executed.
They can be described in terms of levels.
LOW-LEVEL LANGUAGES
Machine language:
 Lowest-level language
 Binary-encoded instructions that are directly executed by the
hardware.
 Language is closely tied to the hardware design.
 Machine specific: machine language for one computer is different
from that of a computer having a different hardware design.
 Strings of numbers giving machine specific instructions
Assembly language:
 Also unique to a specific computer design.
 Commands or instructions are written in text statements instead of
numbers.
 Assembly language programming requires good knowledge of the
specific computer hardware.
 System software called an assembler translates the assembly
language program to a machine language program for execution on
the hardware.
 English-like abbreviations representing elementary computer
operations (translated via assemblers)
HIGH-LEVEL LANGUAGES
Compiled Languages:
 Have English-like command syntax.
 Include languages such as Basic, Fortran, COBOL, Pascal, Ada, C, C++, and Java.
 Supported on many computer designs and knowledge of the specific
hardware is not as critical in writing a program.
 System software called a compiler translates the program into
machine language.
Interpreted languages:
Interpreted and executed using an interpreter; examples are MATLAB, VBA and HTML.
 Interpreted languages are easier to learn. An interpreter lets the
programmer know immediately when and where problems exist in
the code, compiled programs make the programmer wait until the
program is complete.
 Interpreters therefore can be easier to use and produce more
immediate results, however the source code of an interpreted
language cannot run without the interpreter.
 Compilers produce better optimized code that generally run faster
and compiled code is self-sufficient and can be run on their intended
platforms without the compiler present.
A low-level programming language is a programming language that provides little or no abstraction from the computer's microprocessor; such languages are machine-oriented. A high-level programming language is a programming language that is more abstract, easier to use, and more portable across platforms.
COMPILATION PROCESS
 Original program is called the source program
 Translated machine language program is called the object program.
 Errors detected by the compiler (called compile errors or syntax
errors) must be corrected and compilation performed again.
 The process of compiling, correcting errors (or debugging) must
often be repeated several times before the program compiles
without compile errors.
An assembler translates the symbolic codes of an assembly language program into machine language instructions.
A compiler converts the source program (user-written program) into object code (machine language), checking the entire program before execution. If the program is error free, the object program is created and loaded into memory for execution. A compiler produces an error list for the whole program in one go, and all errors have to be taken care of before execution of the first statement can begin. Compiled programs take less time to execute.
An interpreter is also a language translator; it translates and executes the statements in the program one by one. It works on one statement at a time and, if the statement is error free, executes it before going on to the next one. Debugging is simpler with an interpreter as it is done in stages. An interpreter takes more time to execute a program as compared to a compiler.
EXECUTION PROCESS
 To prepare the object program for execution, system software is
applied to link other machine language statements to the object
program and then load the program into memory.
 The program is executed and new errors, called execution errors,
runtime errors or logic errors (also called bugs) may be identified.
 These program errors are caused when the programmer errs in determining the correct steps to be taken in the problem solution, for example an attempt to divide by zero.
 These errors must be corrected in the source program, which must
again be compiled and link-loaded for execution.
PROBLEM SOLVING EXAMPLE
A small object is launched into flight from the ground at a speed of 50
km/hour at 30 degrees above the horizontal over level ground. Determine
the time of flight and the distance traveled when the ball returns to the
ground.
DEFINITION OF THE PROBLEM
Assumptions: air resistance is negligible; gravitational acceleration g = 9.8 m/s². Unit conversions: 1 km/h = 1000 m / 3600 s and 360 degrees = 2π radians.
Given: initial velocity: 50 km/hour and angle: 30°.
Output: the results to be produced are the time of flight (seconds) and the
distance traveled (meters).
MATHEMATICAL MODEL
Time: t (s), with t = 0 when the object is launched. Initial velocity
magnitude: v = 50 km/hour.
Initial angle: θ = 30◦. Horizontal position of ball: x(t) (m). Vertical position
of ball: y(t) (m).
Acceleration of gravity: g = 9.8 m/s2, directed in negative y direction.
Velocity components: v_h = v·cos θ (horizontal) and v_v = v·sin θ (vertical)
Horizontal position: x(t) = v·t·cos θ
Vertical position: y(t) = v·t·sin θ − (1/2)·g·t²
COMPUTATIONAL MODEL
Using the mathematical model, expressions for the desired results can be
obtained. The object will hit the ground when its vertical position is zero:
y(t_g) = v·t_g·sin θ − (1/2)·g·t_g² = 0, which gives the flight time: t_g = 2·v·sin θ / g
The horizontal position (distance of travel) at this time is: x(t_g) = v·t_g·cos θ
COMPUTATIONAL IMPLEMENTATION
 The equations defined in the computational method can be readily implemented using C, as sketched below.
 Observe that the steps match closely to the solution steps from the
computational method.
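The original listing is not reproduced here; a minimal C sketch under the same assumptions (variable names and output format are illustrative) is:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double g  = 9.8;                     /* gravitational acceleration, m/s^2 */
    const double pi = 3.141592653589793;

    double v_kmh     = 50.0;                   /* initial speed in km/h             */
    double angle_deg = 30.0;                   /* launch angle in degrees           */

    double v     = v_kmh * 1000.0 / 3600.0;    /* convert speed to m/s              */
    double theta = angle_deg * pi / 180.0;     /* convert angle to radians          */

    double t_g = 2.0 * v * sin(theta) / g;     /* flight time                       */
    double x_g = v * t_g * cos(theta);         /* distance traveled                 */

    printf("Flight time: %.2f s\n", t_g);
    printf("Distance traveled: %.2f m\n", x_g);
    return 0;
}

(When compiling with gcc, the math library is typically linked with -lm.)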
After compiling and executing the above code, we obtain a flight time of approximately 1.42 s and a distance traveled of approximately 17.05 m.
PROGRAM TESTING AND DEBUGGING
DIFFERENCE BETWEEN TESTING AND DEBUGGING
Program testing is the process of checking a program to verify that it satisfies its requirements and to detect errors. Testing includes the steps necessary to detect all possible errors in the program. This can be done either at module level, known as unit testing, or at program level, known as integration testing.
Debugging is a methodical process of finding and reducing the number of
bugs in a computer program making it behave as expected.
Different debugging techniques
 Place print statements throughout the program to display the values of variables. Once the location of the error is found, the error is corrected and the debugging statements may be removed.
 Conditional compilation can be used to switch debugging statements on or off (see the sketch after this list).
 Elimination and refinement: the location of the error is arrived at by listing the possible causes of the error.
 Backtrack: the incorrect result is backtracked through the program logic until the error is located.
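A minimal sketch of the first two techniques, using a hypothetical DEBUG switch:

#include <stdio.h>

#define DEBUG 1                  /* set to 0 to switch the debugging output off */

int main(void)
{
    int total = 0;

    for (int i = 1; i <= 3; i++) {
        total += i;
#if DEBUG
        /* print statement showing the values of the variables at each step */
        printf("DEBUG: i = %d, total = %d\n", i, total);
#endif
    }

    printf("Total: %d\n", total);
    return 0;
}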
Types of errors
The following are types of errors, which are considered while testing
programs:
 Syntax errors: syntax errors, also known as compilation errors, are caused by violation of the grammar rules of the language. The compiler detects and isolates these errors and terminates compilation of the source program after listing the errors. Some of the common syntactic errors are: a missing or misplaced ";" or "}", a missing return type for a procedure, and missing or duplicate variable declarations.
 Run-time errors: errors such as a mismatch of data types or an array out-of-bounds access are known as runtime errors. These errors generally go undetected by the compiler, so programs with run-time errors will run but produce erroneous results.
 Semantic errors are logical errors: If there is a semantic error in
a program, it will run successfully, in the sense that the computer
will not generate any error messages (it is due to a poor
understanding of the problem or a lack of clarity of hierarchy of
operators.), but it will not do the right thing. The problem is that the
meaning of the program (its semantics) is wrong. Identifying
semantic errors can be tricky because it requires working backward
by looking at the output of the program and trying to figure out what
it is doing.
 Latent errors: These are hidden errors which come into the picture only when a particular data set is used. For example, consider the following statement: int z = x/(x-y); a division-by-zero error will occur when x and y are equal.
PROGRAM TESTING APPROACH
White box testing strategy deals with the internal logic and structure of
the code. White box testing also known as glass, structural, open box
or clear box testing, tests code written, branches, paths, statements and
internal logic of the code etc. In order to implement white box testing, the
tester has to deal with the code and hence is needed to possess
knowledge of coding and logic i.e. internal working of the code. White box
test also needs the tester to look into the code and find out which
unit/statement/chunk of the code is malfunctioning. White box testing is
applicable at unit, integration and system level however it is mainly done
at unit level.
Black box testing: this test design treats the system as a “black-box”, so
it doesn't explicitly use knowledge of the internal structure. It takes an
external perspective of the test object to derive test cases. These tests
can be functional or non-functional, though usually functional. The test
designer selects valid and invalid input and determines the correct output.
There is no knowledge of the test object's internal structure. This method
of test design is applicable to all levels of software testing: unit,
integration, functional testing, system and acceptance.
PROGRAM DOCUMENTATION
The programming process is incomplete if it is not properly documented.
Documentation includes the problem definition, design documents, a description of the tests performed, a history of the program development and its different versions, and a user's manual. Such a manual is designed for a
naive user and illustrates the preparation of input data, running the
program, obtaining and interpreting the results. Proper documentation
also enables the user to understand the program and acts as a reference
tool for programmers, in case modifications are required. It is easier to
understand the logic of a program, from the documented records rather
than the code. Source code which uses suitable names for its identifiers (variables and methods) is called self-documenting code. Giving proper names to variables and methods tells the reader of your code clearly what each one is for. A good program should have self-documenting code.
TYPES OF DOCUMENTATION
The documentation is classified into two categories as follows:
Documentation for the User
 System Manuals: A system manual contains the detailed technical
information for each application system. It describes each of the
major functions performed by the application, the data affected by
or used within the application and the logical flow of data within the
application, as well as details concerning all application input and
output.
 User Manuals: User manuals assist the person to understand the
proper use of the system. It provides detailed instructions
concerning the procedure(s) to be followed to enter data into the
application for processing, as well as to request nonrecurring reports
and other information.
Documentation for Program Designers
 Comments: Comments are used within a program to assist the
reader in understanding the function or purpose of a single or
multiple instructions. In C programming we use "//" and "/*…*/" to
insert comment in the code.
 Programming Tools: Algorithms, flowcharts and pseudocodes are
useful means of program documentation. They help in making the
program logic easier to understand. In case, the program needs
certain modification, the programmer can fall back on these tools.
 Various Reports: Programming involves many steps. In all these
steps, certain reports are generated such as error reports,
maintenance reports and testing reports.
PROGRAM MAINTENANCE
It is not directly part of the original implementation process, but needs
special emphasis. All activities that occur after a program operation are
part of the program maintenance. Many large programs have long life
span that often exceed the lifetime of the hardware they run on. Usually,
the expenditure for the program maintenance will be more than the
developmental cost of the program. The program maintenance includes
the following:
 Finding and eliminating previously undetected program errors.
 Modifying the current program, often to improve its performance, or
to adapt to new requirements (Hardware or OS).
 Adding new features or a better user interface, or new capabilities to
the program.
 Updating the internal as well as external documentation.
Maintenance documentation will include results of the program
development steps, design documents, program code and test
information.
EXPANDING YOUR KNOWLEDGE
BUILDING AND APPLYING YOUR KNOWLEDGE
SOME C CONCEPTS
Preprocessor directives (for example, #include or #define) are handled in what is conceptually a separate first step in compilation. The two most frequently used features are #include, to include the contents of a file during compilation, and #define, to replace a token by an arbitrary sequence of characters.
A pre-processor is a program that processes the source code before it passes through the compiler. It operates under the control of preprocessor directives. These are placed in the source program before main(). A preprocessor directive is not terminated by a semicolon. For example, #define COUNT 100 will replace all occurrences of COUNT with 100 in the whole program before compilation.
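A small sketch of both directives in use (the COUNT example follows the text; the rest is illustrative):

#include <stdio.h>      /* the contents of stdio.h are included before compilation */
#define COUNT 100       /* every occurrence of COUNT is replaced by 100            */

int main(void)
{
    printf("COUNT is %d\n", COUNT);   /* becomes printf("COUNT is %d\n", 100);     */
    return 0;
}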
The main() function is like any other programming function in that it groups like activities and can take in parameters (information) and pass back values (again, information). What makes the main() function unique from other functions, however, is that the values it returns are returned to the operating system. Other functions that you will use and create in this book return values back to the calling C statement inside the main() function.
The printf() function is used to display output to the computer screen.
The library name stdio.h is short for standard input output and contains links to various standard C library functions, such as printf().
The scanf() function is another built-in function provided by the standard input output library <stdio.h>; it reads standard input from the keyboard and stores it in previously declared variables. It takes two arguments, as demonstrated next.
scanf("conversion specifier", &variable);
#include <stdio.h>

int main()
{
    int iOperand1 = 0;
    int iOperand2 = 0;

    printf("\n\tAdder Program, by Michael Vine\n");
    printf("\nEnter first operand: ");
    scanf("%d", &iOperand1);
    printf("Enter second operand: ");
    scanf("%d", &iOperand2);
    printf("The result is %d\n", iOperand1 + iOperand2);

    return 0;
}
MODULAR PROGRAMMING
Modular Programming is the process of designing and writing programs as modules (independent sets of statements which can be called from another program, or simply program segments), each of which performs a single well-defined function and has minimal interaction with the other sub-programs. It means that the content of each module is cohesive and there is low coupling between modules. There are two methods available for modular programming, as follows:
 Top-Down design: The principles of top-down design dictate that a
program should be divided into a main module and its related
module. Each module should also be divided into sub modules
according to software engineering and programming style. The
division continues till the module consists only of an elementary
process that is intrinsically understood and cannot be further sub-
divided.
 Bottom-up design: Bottom-up design is just the opposite of top-
down design. It refers to a style of programming, in which, an
application is constructed with existing primitives of the
programming language and then gradually more and more
complicated features are added till applications are written. In other
words, initiating the design with simple modules and then build them
into more complex structures ending at the top is bottom-up design.
Importance of modular programming
 Distributed development: By breaking down the problem into
multiple tasks, different developers can work in parallel. And this will
shorten the development time.
 Code reusability: A program module can be reused in other programs. This is a convenient feature because it reduces redundant code.
Modules can also be reused in future projects. It is much easier to
reuse a module than recreate program logic from scratch.
 Program readability: Modular programming leads to more readable programs. Modules can be implemented as user-defined functions. A program built from many small functions is straightforward to follow, but a program with no functions can be very long and hard to follow.
 Manageable tasks: Breaking down a programming project into
modules makes it more manageable. These individual modules are
easier to design, implement and test. Then you can use these
modules to construct the overall program.
 Maintainability: An error in the system comes from a given module. It is then easier to maintain a single module than to maintain a whole program.