UNIT-3 - Notes of unit 3

Software Engineering (Dr. A.P.J. Abdul Kalam Technical University)


Software Design:
Software design is a process to transform user requirements into a suitable form that helps the
programmer in software coding and implementation.
For assessing user requirements, an SRS (Software Requirement Specification) document is
created, whereas for coding and implementation there is a need for more specific and detailed
requirements expressed in software terms. The output of this process can be used directly in
implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves the
concentration from the problem domain to the solution domain. It tries to specify how to fulfil the
requirements mentioned in the SRS.
Objectives of Software Design:

1. Correctness:
A good design should be correct i.e. it should correctly implement all the
functionalities of the system.
2. Efficiency:
A good software design should address the resources, time and cost optimization
issues.
3. Understandability:
A good design should be easily understandable; for this, it should be modular, with
all the modules arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and
external interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change
request is made from the customer side.

Software Design Levels

Software design yields three levels of results:

• Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get an idea of the proposed solution domain.
• High-level Design - The high-level design breaks the ‘single entity-multiple
component’ concept of architectural design into a less-abstracted view of sub-systems
and modules and depicts their interaction with each other. High-level design focuses on
how the system, along with all of its components, can be implemented in the form of
modules. It recognizes the modular structure of each sub-system and the relations and
interactions among them.
• Detailed Design - Detailed design deals with the implementation part of what is seen as
a system and its sub-systems in the previous two designs. It is more detailed towards
modules and their implementations. It defines the logical structure of each module and its
interfaces used to communicate with other modules.

Modularization


Modularization is a technique to divide a software system into multiple discrete and
independent modules, which are expected to be capable of carrying out their tasks independently.
These modules may work as basic constructs for the entire software. Designers tend to design
modules such that they can be executed and/or compiled separately and independently.
Modular design follows the ‘divide and conquer’ problem-solving strategy, and it brings many
other benefits as well.
Advantage of modularization:

• Smaller components are easier to maintain
• The program can be divided based on functional aspects
• The desired level of abstraction can be brought into the program
• Components with high cohesion can be re-used
• Concurrent execution can be made possible
• Desirable from a security standpoint

Concurrency

In the early days, all software was meant to be executed sequentially. By sequential execution we
mean that the coded instructions are executed one after another, implying that only one portion
of the program is active at any given time. If a software product has multiple modules, then only
one of those modules can be active at any time of execution.
In software design, concurrency is implemented by splitting the software into multiple
independent units of execution, such as modules, and executing them in parallel. In other words,
concurrency provides the software with the capability to execute more than one part of the code
in parallel.
It is necessary for programmers and designers to recognize those modules which can be
executed in parallel.

Example

The spell-check feature in a word processor is a module of software which runs alongside the
word processor itself.
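
As a hedged illustration (not taken from the original notes), the sketch below shows how a
spell-check routine might run concurrently with the main editing activity using Python threads;
all function names and the simulated delays are hypothetical.

import threading
import time

def spell_check(document):
    # Hypothetical background module: scans the document for misspellings.
    for word in document.split():
        time.sleep(0.01)  # simulate a dictionary lookup per word

def edit_document(document):
    # Hypothetical foreground module: stands in for the word processor itself.
    time.sleep(0.1)       # simulate user editing activity
    return document + " (edited)"

doc = "concurrency lets two modules execute in parallel"
checker = threading.Thread(target=spell_check, args=(doc,), daemon=True)
checker.start()            # the spell checker runs alongside the editor
doc = edit_document(doc)   # the main module keeps working at the same time
checker.join(timeout=1)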

Coupling and Cohesion

When a software program is modularized, its tasks are divided into several modules based on
some characteristics. As we know, modules are sets of instructions put together in order to
achieve some task. Though each module is considered a single entity, modules may refer to each
other to work together. There are measures by which the quality of a design of modules and of
the interaction among them can be judged. These measures are called coupling and cohesion.

In software engineering, coupling is the degree of interdependence between software
modules. Two modules that are tightly coupled are strongly dependent on each other, whereas
two modules that are loosely coupled are largely independent of each other. Uncoupled
modules have no interdependence at all between them.

The various types of coupling are described below.


A good design is one that has low coupling. Coupling is measured by the number of
relations between modules: coupling increases as the number of calls between
modules increases or as the amount of shared data grows. Thus, it can be said that a design with
high coupling will have more errors.

Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between M1 and M2.

In this case, modules are subordinates to different modules. Therefore, no direct coupling.

2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.


3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite
data items such as structures, objects, etc. When a module passes a non-global data structure or
an entire structure to another module, the modules are said to be stamp coupled; for example,
passing a structure variable in C or an object in C++ to a module.

4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in another.

5. External Coupling: External Coupling arises when two modules share an externally
imposed data format, communication protocols, or device interface. This is related to
communication to external tools and devices.

6. Common Coupling: Two modules are common coupled if they share information through
some global data items.

7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a
branch from one module into another module.
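
As an illustrative sketch (not part of the original notes), the snippet below contrasts data
coupling, stamp coupling, and common coupling in Python; all names and values are hypothetical.

from dataclasses import dataclass

SHARED_RATE = 0.18   # global data item: modules that read or write it are common coupled

@dataclass
class Invoice:       # composite data item used to show stamp coupling
    amount: float
    customer: str

def compute_tax(amount: float) -> float:
    # Data coupling: only the elementary value 'amount' is passed in.
    return amount * SHARED_RATE        # common coupling through SHARED_RATE

def print_invoice(invoice: Invoice) -> None:
    # Stamp coupling: the whole composite structure is passed,
    # even though only some of its fields may actually be needed.
    print(invoice.customer, invoice.amount + compute_tax(invoice.amount))

print_invoice(Invoice(amount=100.0, customer="A. User"))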

Module Cohesion

In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. In highly cohesive systems, functionality is
strongly related.

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or
"low cohesion."


Types of Module Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of
a module cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the elements
of the module form the components of a sequence, where the output from one
component of the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion if
all tasks of the module refer to or update the same data structure, e.g., the set of
functions defined on an array or a stack.
4. Procedural Cohesion: A module is said to have procedural cohesion if its elements
are all parts of a procedure in which a particular sequence of steps has to
be carried out to achieve a goal, e.g., the algorithm for decoding a message.


5. Temporal Cohesion: When a module includes functions that are related only by the fact
that they all must be executed in the same time span, the module is said to exhibit
temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform similar operations, for example error handling, data input and data
output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs
a set of tasks that are associated with each other very loosely, if at all.
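
As a small illustrative sketch (not from the original notes), the Python fragment below contrasts
functional, sequential, and coincidental cohesion; all function names are hypothetical.

# Functional cohesion: every statement contributes to one single task (computing a mean).
def mean(values: list[float]) -> float:
    return sum(values) / len(values)

# Sequential cohesion: the output of one step is the input to the next step.
def clean_and_average(raw: list[str]) -> float:
    numbers = [float(v) for v in raw]   # step 1: parse the raw strings
    return mean(numbers)                # step 2: aggregate the parsed output

# Coincidental cohesion: unrelated tasks grouped together only for convenience.
def misc_utilities(text: str, values: list[float]) -> None:
    print(text.upper())                 # string formatting
    print(mean(values))                 # statistics, unrelated to the line above

print(clean_and_average(["1.5", "2.5", "4.0"]))   # 2.666...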

Differentiate between Coupling and Cohesion

1. Coupling is also called Inter-Module Binding, whereas cohesion is also called Intra-Module
Binding.

2. Coupling shows the relationships between modules, whereas cohesion shows the relationships
within a module.

3. Coupling indicates the relative independence between modules, whereas cohesion indicates a
module's relative functional strength.

4. While designing, you should aim for low coupling, i.e., dependency among modules should be
minimal; in contrast, you should aim for high cohesion, i.e., a cohesive component/module focuses
on a single function (single-mindedness) with little interaction with other modules of the system.

5. In coupling, modules are linked to other modules; in cohesion, a module focuses on a single
thing.

Design Verification

The output of the software design process is design documentation, pseudo code, detailed logic
diagrams, process diagrams, and a detailed description of all functional and non-functional
requirements.
The next phase, which is the implementation of the software, depends on all the outputs
mentioned above.
It therefore becomes necessary to verify the output before proceeding to the next phase. The
earlier a mistake is detected the better; otherwise it might not be detected until testing of the
product. If the outputs of the design phase are in a formal notation, then the associated
verification tools should be used; otherwise, a thorough design review can be used for verification
and validation.
With a structured verification approach, reviewers can detect defects that might be caused by
overlooking some conditions. A good design review is important for good software design,
accuracy, and quality.

Function Oriented Design

Function-oriented design is an approach to software design in which the system is decomposed
into a set of interacting units or modules, where each unit or module has a clearly defined function.
Thus, the system is designed from a functional viewpoint.

Design Notations

Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:

Data Flow Diagram

Data-flow design is concerned with designing a series of functional transformations that
convert system inputs into the required outputs. The design is described using data-flow diagrams.
These diagrams show how data flows through a system and how the output is derived from the
input through a series of functional transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They
show end-to-end processing; that is, the flow of processing can be traced from the point where
data enters the system to the point where it leaves the system.


Data-flow design is an integral part of several design methods, and most CASE tools support
data-flow diagram creation. Different methods may use different icons to represent data-flow
diagram entities, but their meanings are similar.

The notation used is based on a small set of symbols representing processes, data flows, data
stores, and external entities.

The report generator produces a report which describes all of the named entities in a data-flow
diagram. The user inputs the name of the design represented by the diagram. The report
generator then finds all the names used in the data-flow diagram. It looks up a data dictionary
and retrieves information about each name. This is then collated into a report which is output
by the system.

Data Dictionaries

A data dictionary lists all data elements appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing in the DFDs of
the DFD model of a system.

A data dictionary lists the purpose of all data items and the definition of all composite data
elements in terms of their component data items. For example, a data dictionary entry may
specify that the data item grossPay consists of the components regularPay and overtimePay.

grossPay = regularPay + overtimePay

For the smallest units of data elements, the data dictionary lists their name and their type.
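
As a small illustrative sketch (not from the original notes), a data dictionary entry like the one
above could be recorded in a simple in-memory structure as follows; the field names are
hypothetical.

# Hypothetical in-memory data dictionary keyed by data-item name.
data_dictionary = {
    "grossPay": {
        "type": "composite",
        "definition": "regularPay + overtimePay",    # composition in terms of components
        "components": ["regularPay", "overtimePay"],
    },
    "regularPay":  {"type": "decimal", "definition": "pay for contracted hours"},
    "overtimePay": {"type": "decimal", "definition": "pay for hours beyond contract"},
}

# Look up the definition of a composite data element in terms of its component items.
print(data_dictionary["grossPay"]["components"])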

A data dictionary plays a significant role in any software development process because of the
following reasons:

o A Data dictionary provides a standard language for all relevant information for use by
engineers working in a project. A consistent vocabulary for data items is essential since,
in large projects, different engineers of the project tend to use different terms to refer
to the same data, which unnecessarily causes confusion.
o The data dictionary provides the analyst with a means to determine the definition of
various data structures in terms of their component elements.

Structured Charts

It partitions a system into black boxes. A black box is a component whose functionality is known
to the user without knowledge of its internal design.

Structured Chart is a graphical representation which shows:


o System partitions into modules


o Hierarchy of component modules
o The relation between processing modules
o Interaction between modules
o Information passed between modules

A standard set of notations (modules, data couples, control couples, and library modules) is used
to draw a structured chart.

Pseudo-code

Pseudo-code notation can be used in both the preliminary and detailed design phases. Using
pseudo-code, the designer describes system characteristics using short, concise, English-language
phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
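
As an illustrative sketch (not taken from the original notes), a fragment of design pseudo-code in
this style might read:

While-Do (more records remain to be processed)
    Read the next record
    If the record is valid Then
        Update the master file
    Else
        Write the record to the error report
    End
End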

Object-Oriented Design

In the object-oriented design method, the system is viewed as a collection of objects (i.e.,
entities). The state is distributed among the objects, and each object handles its state data. For
example, in a Library Automation Software, each library representative may be a separate
object with its own data and functions to operate on that data. The tasks defined for one object
cannot refer to or change the data of other objects. Objects have their internal data which represents
their state. Similar objects form a class; in other words, each object is a member of some class.
Classes may inherit features from the superclass.

The different terms related to object design are:


1. Objects: All entities involved in the solution design are known as objects. For example,
person, banks, company, and users are considered as objects. Every entity has some
attributes associated with it and has some methods to perform on the attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity
of the target object, the name of the requested operation, and any other information needed
to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data
and operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods from
the outside world.
6. Inheritance: OOD allows similar classes to be stacked up in a hierarchical manner, where
the lower or sub-classes can import, implement, and re-use allowed variables and
functions from their immediate superclasses. This property of OOD is called
inheritance. This makes it easier to define a specific class and to create generalized
classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, which allows a single interface to perform functions for different
types. Depending upon how the service is invoked, the respective portion of the code
gets executed.
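
As a hedged illustration (not from the original notes), the short Python sketch below shows
classes, objects, encapsulation, inheritance, and polymorphism in the library setting mentioned
above; all class and attribute names are hypothetical.

class LibraryItem:                       # class: a generalized description of an object
    def __init__(self, title: str):
        self._title = title              # encapsulation: state kept inside the object

    def describe(self) -> str:           # method invoked via message passing
        return f"Item: {self._title}"

class Book(LibraryItem):                 # inheritance: Book reuses LibraryItem's features
    def __init__(self, title: str, author: str):
        super().__init__(title)
        self._author = author

    def describe(self) -> str:           # polymorphism: same message, specialized behaviour
        return f"Book: {self._title} by {self._author}"

items = [LibraryItem("Journal"), Book("SE Notes", "A. Author")]   # objects: class instances
for item in items:
    print(item.describe())               # the respective version of describe() executes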

Top-Down and Bottom-Up Design


A good system design organises the program modules in such a way that they are easy to
develop and change. Structured design techniques help developers to deal with the size and
complexity of programs. Analysts create instructions for the developers about how code
should be written and how pieces of code should fit together to form a program.
Importance :
1. If any pre-existing code needs to be understood, organised and pieced together.
2. It is common for the project team to have to write some code and produce original
programs that support the application logic of the system.
There are many strategies or techniques for performing system design. They are:
1. Bottom-up approach:
The design starts with the lowest-level components and subsystems. Using
these components, the next immediate higher-level components and subsystems
are created or composed. The process is continued till all the components and
subsystems are composed into a single component, which is considered the
complete system. The amount of abstraction grows as the design moves to
higher levels.
When a new system needs to be created using the basic information of an existing
system, the bottom-up strategy suits the purpose.

Advantages:
• Economies result when general solutions can be reused.
• It can be used to hide the low-level details of implementation and be
merged with the top-down technique.
Disadvantages:
• It is not so closely related to the structure of the problem.
• High-quality bottom-up solutions are very hard to construct.
• It leads to the proliferation of ‘potentially useful’ functions rather than the
most appropriate ones.

2. Top-down approach:
Each system is divided into several subsystems and components. Each of the
subsystems is further divided into a set of subsystems and components. This process
of division facilitates forming a system hierarchy structure. The complete
software system is considered as a single entity and, based on its
characteristics, the system is split into sub-systems and components. The same is
done with each of the sub-systems.
This process is continued until the lowest level of the system is reached. The
design starts by defining the system as a whole and then keeps on
adding definitions of the subsystems and components. When all the definitions
are combined together, it turns out to be a complete system.


When the software solution needs to be developed from the ground up, top-
down design best suits the purpose.

Advantages:
• The main advantage of the top-down approach is that its strong focus on
requirements helps to make the design responsive to those
requirements.
Disadvantages:
• Project and system boundaries tend to be application-specification
oriented. Thus it is more likely that the advantages of component reuse
will be missed.
• The system is likely to miss the benefits of a well-structured, simple
architecture.

3. Hybrid Design:
It is a combination of both the top-down and bottom-up design strategies. In
this approach, modules can be reused.

Software Metrics

A software metric is a measure of software characteristics which are measurable or countable.


Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.

Within the software development process, there are many metrics that are all interconnected.
Software metrics relate to the four functions of management: planning, organization, control,
and improvement.

Classification of Software Metrics

Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:

1. Size and complexity of software.


2. Quality and reliability of software.

These metrics can be computed for different stages of SDLC.


2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.

Types of Metrics

Internal metrics: Internal metrics are the metrics used for measuring properties that are
viewed to be of greater importance to a software developer. For example, Lines of Code (LOC)
measure.

External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for new software. Note that as the project proceeds,
the project manager will check its progress from time to time and will compare the effort, cost,
and time with the original estimates of effort, cost, and time. These metrics are also used to
decrease the development cost, time, effort, and risk. The project quality can also be
improved; as quality improves, the number of errors, as well as the time and cost required, is
reduced.

Advantage of Software Metrics


Software metrics are useful in the following ways:

o For the comparative study of various design methodologies of software systems.

o For the analysis, comparison, and critical study of different programming languages with
respect to their characteristics.

o For comparing and evaluating the capabilities and productivity of the people involved in
software development.

o In the preparation of software quality specifications.

o In the verification of compliance of software systems with requirements and specifications.

o In making inferences about the effort to be put into the design and development of software
systems.

o In getting an idea about the complexity of the code.

o In deciding whether further division of a complex module is required or not.

o In guiding resource managers towards proper utilization of resources.

o In comparing and making design tradeoffs between software development and maintenance
costs.

o In providing feedback to software managers about progress and quality during the various
phases of the software development life cycle.

o In the allocation of testing resources for testing the code.

Disadvantage of Software Metrics

o The application of software metrics is not always easy, and in some cases it is difficult and
costly.

o The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.

o They are useful for managing software products but not for evaluating the performance of
the technical staff.

o The definition and derivation of software metrics are usually based on assumptions which
are not standardized and may depend upon the tools available and the working environment.

o Most of the predictive models rely on estimates of certain variables which are often not
known precisely.

Size Oriented Metrics

LOC Metrics


It is one of the earliest and simplest metrics for calculating the size of a computer program. It
is generally used in calculating and comparing the productivity of programmers. These metrics
are derived by normalizing quality and productivity measures by considering the size of the
product as a metric.

Following are the points regarding LOC measures:

1. In size-oriented metrics, LOC is considered to be the normalization value.


2. It is an older method that was developed when FORTRAN and COBOL programming
were very popular.
3. Productivity is defined as KLOC / EFFORT, where effort is measured in person-
months.
4. Size-oriented metrics depend on the programming language used.
5. As productivity depends on KLOC, assembly language code will appear to yield higher
productivity.
6. The LOC measure requires a level of detail which may not be practically achievable.
7. The more expressive the programming language, the lower the apparent productivity.
8. The LOC method of measurement does not apply to projects that deal with visual (GUI-
based) programming. As already explained, Graphical User Interfaces (GUIs) basically use
forms; the LOC metric is not applicable here.
9. It requires that all organizations use the same method for counting LOC. This is
necessary because some organizations count only executable statements, some include
comments, and some do not; thus, a standard needs to be established.
10. These metrics are not universally accepted.

Based on the LOC/KLOC count of software, many other metrics can be computed:

a. Errors/KLOC.
b. $/ KLOC.
c. Defects/KLOC.
d. Pages of documentation/KLOC.
e. Errors/PM.
f. Productivity = KLOC/PM (effort is measured in person-months).
g. $/ Page of documentation.
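
As a hedged illustration (not from the original notes), the snippet below computes a few of these
size-normalized metrics for assumed project figures; all the input values are hypothetical.

kloc = 12.5               # assumed size of the delivered program, in thousands of LOC
effort_pm = 24.0          # assumed effort in person-months
defects = 85              # assumed number of defects found
cost_dollars = 168000.0   # assumed total project cost

productivity = kloc / effort_pm         # KLOC per person-month
defect_density = defects / kloc         # defects per KLOC
cost_per_kloc = cost_dollars / kloc     # dollars per KLOC

print(f"Productivity  : {productivity:.2f} KLOC/PM")
print(f"Defects/KLOC  : {defect_density:.2f}")
print(f"Cost per KLOC : ${cost_per_kloc:.2f}")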

Advantages of LOC

1. Simple to measure

Disadvantage of LOC


1. It is defined on the code; for example, it cannot measure the size of a specification.
2. It characterizes only one specific view of size, namely length; it takes no account of
functionality or complexity.
3. Bad software design may cause an excessive number of lines of code.
4. It is language dependent.
5. Users cannot easily understand it.

Halstead's Software Metrics

According to Halstead, "A computer program is an implementation of an algorithm considered
to be a collection of tokens which can be classified as either operators or operands."

Token Count

In these metrics, a computer program is considered to be a collection of tokens, which may be
classified as either operators or operands. All software science metrics can be defined in terms
of these basic symbols. These symbols are called tokens.

The basic measures are

n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrences of operands.

In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.

Halstead metrics are:

Program Volume (V)

The unit of measurement of volume is the standard unit for size "bits." It is the actual size of a
program if a uniform binary encoding for the vocabulary is used.

V=N*log2n

Program Level (L)

The value of L ranges between zero and one, with L=1 representing a program written at the
highest possible level (i.e., with minimum size).

L=V*/V

Program Difficulty

The difficulty level or error-proneness (D) of the program is proportional to the number of
unique operators in the program.

D= (n1/2) * (N2/n2)


Programming Effort (E)

The unit of measurement of E is elementary mental discriminations.

E=V/L=D*V

Estimated Program Length

According to Halstead, the first hypothesis of software science is that the length of a well-
structured program is a function only of the number of unique operators and operands.

N = N1 + N2

The estimated program length is denoted by N^:

N^ = n1 * log2(n1) + n2 * log2(n2)

The following alternate expressions have been published to estimate program length:

o NJ = log2 (n1!) + log2 (n2!)


o NB = n1 * log2n2 + n2 * log2n1
o NC = n1 * sqrt(n1) + n2 * sqrt(n2)
o NS = (n * log2n) / 2

Potential Minimum Volume

The potential minimum volume V* is defined as the volume of the shortest program in which
a problem can be coded.

V* = (2 + n2*) * log2 (2 + n2*)

Here, n2* is the count of unique input and output parameters

Size of Vocabulary (n)

The size of the vocabulary of a program, which consists of the number of unique tokens used
to build a program, is defined as:

n=n1+n2

where

n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands
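
As a hedged illustration (not from the original notes), the following Python snippet computes the
basic Halstead measures from assumed token counts, using the formulas above; the program level
is approximated here as L = 1/D, consistent with E = V/L = D * V.

import math

# Assumed token counts for some program (illustrative values only).
n1, n2 = 14, 10      # unique operators, unique operands
N1, N2 = 53, 38      # total operator occurrences, total operand occurrences

N = N1 + N2                                       # program length
n = n1 + n2                                       # vocabulary size
V = N * math.log2(n)                              # program volume (bits)
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated program length
D = (n1 / 2) * (N2 / n2)                          # difficulty
L = 1 / D                                         # program level (approximation)
E = D * V                                         # effort in elementary mental discriminations

print(f"N={N}, n={n}, V={V:.1f}, N^={N_hat:.1f}, D={D:.2f}, E={E:.0f}")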

Language Level - Indicates the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is written in a low-level
programming language. For example, it is easier to program in Pascal than in Assembler.


λ = L * V* = L^2 * V = V / D^2

Language levels

Language Language level λ Variance σ

PL/1 1.53 0.92

ALGOL 1.21 0.74

FORTRAN 1.14 0.81

CDC Assembly 0.88 0.42

PASCAL 2.54 -

APL 2.42 -

C 0.857 0.445

Counting rules for C language

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique
operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, all control
statements e.g., if ( ) {...}, if ( ) {...} else {...}, etc. are considered as operators.
8. In control construct switch ( ) {case:...}, switch as well as all the case statements are
considered as operators.
9. The reserved words like return, default, continue, break, sizeof, etc., are considered as
operators.
10. All the brackets, commas, and terminators are considered as operators.


11. GOTO is counted as an operator, and the label is counted as an operand.


12. The unary and binary occurrences of "+" and "-" are dealt with separately. Similarly, "*"
(the multiplication operator) is dealt with separately.
13. In array variables such as "array-name[index]", "array-name" and "index" are
considered as operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name ->
member-name", struct-name and member-name are considered as operands and '.', '->' are
taken as operators. The same member names appearing in different structure variables are
counted as unique operands.
15. All hash (preprocessor) directives are ignored.

Example: Consider the sorting program SORT shown in the figure. List out the operators and
operands and also calculate the values of software science measures like n, N, V, E, λ, etc.

Solution: The list of operators and operands is given in the table

Operators   Occurrences     Operands   Occurrences

int              4          SORT            1
()               5          x               7
,                4          n               3
[]               7          i               8
if               2          j               7
<                2          save            3
;               11          im1             3
for              2          2               2
=                6          1               3
-                1          0               1
<=               2          -               -
++               2          -               -
return           2          -               -
{}               3          -               -

n1 = 14      N1 = 53        n2 = 10     N2 = 38

Here N1=53 and N2=38. The program length N=N1+N2=53+38=91

Vocabulary of the program n=n1+n2=14+10=24

Volume V = N * log2(n) = 91 * log2(24) ≈ 417 bits.

The estimated program length N^ of the program

= 14 log2(14) + 10 log2(10)
= 14 * 3.81 + 10 * 3.32
= 53.34 + 33.2 = 86.54

Conceptually unique input and output parameters are represented by n2*.

n2* = 3 {x: the array holding the integers to be sorted, used as both input and output;
n: the size of the array to be sorted}

The Potential Volume V*=5log25=11.6

Since L = V*/V, the program level can also be estimated using the formula

L^ = (2 * n2) / (n1 * N2) = (2 * 10) / (14 * 38) ≈ 0.038

The estimated potential volume is then

V*^ = V * L^ = 417 * 0.038 = 15.67

and the estimated effort is

E^ = V / L^ = D^ * V = 417 / 0.038 ≈ 10974

Therefore, about 10974 elementary mental discriminations are required to construct the program,
which is a reasonable amount of effort for such a simple program.

Functional Point (FP) Analysis

Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has been
further modified by the International Function Point Users Group (IFPUG). FPA is used to
make an estimate of the software project, including its testing, in terms of the functionality or
functional size of the software product. Functional point analysis may also be used for the test
estimation of the product. The functional size of the product is measured in terms of the
function point, which is a standard unit of measurement for sizing a software application.

Objectives of FPA

The basic and primary purpose of the functional point analysis is to measure and provide the
software application functional size to the client, customer, and the stakeholder on their request.
Further, it is used to measure the software project development along with its maintenance,
consistently throughout the project irrespective of the tools and the technologies.

Following are the points regarding FPs

1. FPs of an application are found by counting the number and types of functions used in the
application. The various functions used in an application can be put under five types, as shown in
the table below:

Types of FP Attributes

Measurement Parameters                       Examples

1. Number of External Inputs (EI)            Input screens and tables
2. Number of External Outputs (EO)           Output screens and reports
3. Number of External Inquiries (EQ)         Prompts and interrupts
4. Number of Internal Files (ILF)            Databases and directories
5. Number of External Interfaces (EIF)       Shared databases and shared routines

All these parameters are then individually assessed for complexity.

These five types of functions constitute the FPA functional units.

2. FP characterizes the complexity of the software system and hence can be used to depict the
project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. FP method is used for data processing systems, business systems like information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

7. All the parameters mentioned above are assigned some weights that have been
experimentally determined and are shown in Table

Weights of 5-FP Attributes

Measurement Parameter                        Low   Average   High

1. Number of external inputs (EI)             3       4        6
2. Number of external outputs (EO)            4       5        7
3. Number of external inquiries (EQ)          3       4        6
4. Number of internal files (ILF)             7      10       15
5. Number of external interfaces (EIF)        5       7       10

The functional complexities are multiplied with the corresponding weights against each
function, and the values are added up to determine the UFP (Unadjusted Function Point) of the
subsystem.

For each measurement parameter, the weighting factor is chosen according to whether the item is
simple (low), average, or complex (high).

The Function Point (FP) is thus calculated with the following formula.

FP = Count-total * [0.65 + 0.01 * ∑(fi)]


= Count-total * CAF

where Count-total is obtained from the above Table.

CAF = [0.65 + 0.01 *∑(fi)]

and ∑(fi) is the sum of the responses to all 14 questionnaires, which gives the complexity
adjustment value/factor CAF (where i ranges from 1 to 14). Usually, a student is provided with
the value of ∑(fi).

Also note that ∑(fi) ranges from 0 to 70, i.e.,

0 <= ∑(fi) <=70

and CAF ranges from 0.65 to 1.35 because


a. When ∑(fi) = 0 then CAF = 0.65


b. When ∑(fi) = 70 then CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35

Based on the FP measure of software many other metrics can be computed:

a. Errors/FP
b. $/FP.
c. Defects/FP
d. Pages of documentation/FP
e. Errors/PM.
f. Productivity = FP/PM (effort is measured in person-months).
g. $/Page of Documentation.

8. LOCs of an application can be estimated from FPs. That is, they are interconvertible. This
process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL
code.

9. FP metrics is used mostly for measuring the size of Management Information System (MIS)
software.

10. But the function points obtained above are unadjusted function points (UFPs). These
(UFPs) of a subsystem are further adjusted by considering some more General System
Characteristics (GSCs). It is a set of 14 GSCs that need to be considered. The procedure for
adjusting UFPs is as follows:

a. The Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5.
If a particular GSC has no influence, then its DI is taken as 0, and if it has a strong influence
then its DI is 5.
b. The scores of all 14 GSCs are totalled to determine the Total Degree of Influence (TDI).
c. The Value Adjustment Factor (VAF) is then computed from TDI by using the
formula: VAF = (TDI * 0.01) + 0.65

Remember that the value of VAF lies within 0.65 to 1.35 because

a. When TDI = 0, VAF = 0.65


b. When TDI = 70, VAF = 1.35
c. VAF is then multiplied with the UFP to get the final FP count: FP = VAF * UFP
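
As a hedged illustration (not from the original notes), the following Python snippet carries out
this calculation; the counts, weights, and GSC scores are assumptions taken from the worked
example that follows.

# Assumed counts and weights for each FP information-domain parameter
# (weights chosen per the assessed complexity of each parameter).
counts  = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}
weights = {"EI": 4,  "EO": 4,  "EQ": 6, "ILF": 10, "EIF": 5}

ufp = sum(counts[p] * weights[p] for p in counts)   # unadjusted function points

# Degrees of influence for the 14 General System Characteristics (assumed values).
gsc = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]
tdi = sum(gsc)                       # total degree of influence
vaf = 0.65 + 0.01 * tdi              # value adjustment factor (CAF)

fp = ufp * vaf                       # adjusted function point count
print(ufp, tdi, round(vaf, 2), round(fp))   # 378 43 1.08 408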

Example: Compute the function point, productivity, documentation, cost per function for the
following data:

1. Number of user inputs = 24


2. Number of user outputs = 46


3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month

Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5.

Solution:

Measurement Parameter                        Count   Weight   Count x Weight

1. Number of external inputs (EI)              24       4           96
2. Number of external outputs (EO)             46       4          184
3. Number of external inquiries (EQ)            8       6           48
4. Number of internal files (ILF)               4      10           40
5. Number of external interfaces (EIF)          2       5           10

Count-total = 378

So the sum of all fi (i ← 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 + 3 + 3 + 2 + 2 + 4 + 5 = 43

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 = 408

Total pages of documentation = technical documents + user documents
                             = 265 + 122 = 387 pages

Documentation = Pages of documentation / FP
              = 387 / 408 = 0.94

Productivity = FP / Effort = 408 / 36.9 = 11.05 FP per person-month

Cost per function point = (36.9 * $7744) / 408 ≈ $700


Differentiate between FP and LOC

FP                                        LOC

1. FP is specification based.             1. LOC is analogy based.

2. FP is language independent.            2. LOC is language dependent.

3. FP is user-oriented.                   3. LOC is design-oriented.

4. It is extendible to LOC.               4. It is convertible to FP (backfiring).

Cyclomatic Complexity

Cyclomatic complexity is a software metric used to measure the complexity of a program.


Thomas J. McCabe developed this metric in 1976. McCabe interprets a computer program as a
strongly connected directed graph: nodes represent parts of the source code having no
branches, and arcs represent possible control-flow transfers during program execution. The
notion of the program graph is used for this measure, and it is used to measure and control
the number of paths through a program. The complexity of a computer program can be
correlated with the topological complexity of its graph.

How to Calculate Cyclomatic Complexity?

McCabe proposed the cyclomatic number, V(G), of graph theory as an indicator of software
complexity. The cyclomatic number is equal to the number of linearly independent paths
through a program in its graph representation. For a program control graph G, the cyclomatic
number, V(G), is given as:

V(G) = E - N + 2 * P

E = the number of edges in graph G

N = the number of nodes in graph G

P = the number of connected components in graph G.

Properties of Cyclomatic Complexity:

Following are the properties of cyclomatic complexity:

1. V(G) is the maximum number of independent paths in the graph.
2. V(G) >= 1.
3. G will have only one path if V(G) = 1.
4. Complexity should be minimized; a value of 10 is the commonly used upper limit.

To calculate the cyclomatic complexity of a program module with a single connected component,
the formula reduces to:

V(G) = e - n + 2

where e is the total number of edges and n is the total number of nodes in the module's flow graph.

Example: For a module whose flow graph has e = 10 edges and n = 8 nodes,

Cyclomatic Complexity = 10 - 8 + 2 = 4

According to P. Jorgensen, the cyclomatic complexity of a module should not exceed 10.
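
As a hedged illustration (not from the original notes), the snippet below computes V(G) = E - N + 2P
for a small, assumed control-flow graph represented as an edge list; the graph itself is hypothetical
but reproduces the figures used in the example above.

# Assumed control-flow graph of one module: nodes are basic blocks, edges are
# possible control transfers (roughly an if-else followed by a loop, for illustration).
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (5, 6),
         (4, 7), (6, 7), (7, 8)]
nodes = {n for edge in edges for n in edge}

E = len(edges)       # number of edges
N = len(nodes)       # number of nodes
P = 1                # one connected component (a single module)

v_of_g = E - N + 2 * P
print(f"E={E}, N={N}, P={P}, V(G)={v_of_g}")   # E=10, N=8, V(G)=4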
