Software Engineering Unit 3 Part 2
Software Design:
Software design is a process to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation.
For assessing user requirements, an SRS (Software Requirement Specification) document is
created, whereas for coding and implementation there is a need for more specific and detailed
requirements in software terms. The output of this process can be used directly in
implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle), which moves the
concentration from the problem domain to the solution domain. It tries to specify how to fulfill the
requirements mentioned in SRS.
Objectives of Software Design:
1. Correctness:
A good design should be correct i.e. it should correctly implement all the
functionalities of the system.
2. Efficiency:
A good software design should address the resources, time and cost optimization
issues.
3. Understandability:
A good design should be easily understandable; for this, it should be modular, with all the
modules arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and
external interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change
request is made from the customer side.
• Architectural Design - The architectural design is the highest abstract version of the
system. It identifies the software as a system with many components interacting with
each other. At this level, the designers get the idea of the proposed solution domain.
• High-level Design- The high-level design breaks the ‘single entity-multiple
component’ concept of architectural design into less-abstracted view of sub-systems
and modules and depicts their interaction with each other. High-level design focuses on
how the system, along with all of its components, can be implemented in the form of
modules. It recognizes the modular structure of each sub-system and the relations and
interactions among them.
• Detailed Design- Detailed design deals with the implementation part of what is seen as
a system and its sub-systems in the previous two designs. It is more detailed towards
modules and their implementations. It defines logical structure of each module and their
interfaces to communicate with other modules.
Modularization
Modularization is a technique of dividing a software system into multiple discrete and
independent modules, each of which is expected to be capable of carrying out its task
independently.
Concurrency
In the past, all software was meant to be executed sequentially. By sequential execution we
mean that the coded instructions are executed one after another, implying that only one portion
of the program is active at any given time. If a software system has multiple modules, then only
one of those modules can be active at any time of execution.
In software design, concurrency is implemented by splitting the software into multiple
independent units of execution, like modules and executing them in parallel. In other words,
concurrency provides capability to the software to execute more than one part of code in
parallel to each other.
It is necessary for programmers and designers to recognize the modules that can be executed
in parallel.
Example
The spell-check feature in a word processor is a module of the software which runs alongside
the word processor itself.
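A minimal C++ sketch of this idea (assumed code, not from the notes; all names are illustrative): a background "spell-check" task runs on its own thread while the main "editor" loop continues in parallel.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<bool> done{false};

    // Hypothetical background module: periodically "checks spelling".
    void spell_check_loop() {
        while (!done) {
            std::cout << "[spell-check] scanning document...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }

    int main() {
        std::thread checker(spell_check_loop);   // second, independent unit of execution
        for (int i = 0; i < 3; ++i) {            // main "editing" work continues meanwhile
            std::cout << "[editor] user keeps typing...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(300));
        }
        done = true;
        checker.join();                          // wait for the background module to finish
        return 0;
    }

The design point is that the two modules share almost nothing (only the done flag), which is what makes running them in parallel safe.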
When a software program is modularized, its tasks are divided into several modules based on
some characteristics. As we know, modules are sets of instructions put together in order to
achieve some task. Though they are considered as single entities, modules may refer to each
other to work together. There are measures by which the quality of a design of modules and of
their interaction with each other can be assessed. These measures are called coupling and cohesion.
Module Coupling
A good design is one that has low coupling. Coupling is measured by the number of relations
between the modules. That is, the coupling increases as the number of calls between modules
increases or as the amount of shared data grows. Thus, it can be said that a design with
high coupling will have more errors.
1. No Direct Coupling: The two modules are subordinate to different modules and do not
communicate with each other directly. Therefore, there is no direct coupling between them.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite
data items such as structure, objects, etc. When the module passes non-global data structure or
entire structure to another module, they are said to be stamp coupled. For example, passing
structure variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the structure of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally
imposed data format, communication protocols, or device interface. This is related to
communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content coupling exists between two modules if they share code, e.g., a
branch from one module into another module.
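To make some of these distinctions concrete, here is a small illustrative C++ sketch (not from the notes; all names are hypothetical) showing data, stamp, control, and common coupling around a payroll-style module.

    #include <iostream>
    #include <string>

    // Common coupling: both caller and callee read/write a shared global item.
    int g_error_count = 0;

    struct Employee {          // composite data item used for stamp coupling
        std::string name;
        double hours;
        double rate;
    };

    // Data coupling: only elementary data (numbers) cross the interface.
    double gross_pay(double hours, double rate) { return hours * rate; }

    // Stamp coupling: a whole structure is passed, even if only parts are used.
    double gross_pay(const Employee& e) { return e.hours * e.rate; }

    // Control coupling: a flag from the caller steers the callee's logic.
    void print_pay(double pay, bool verbose) {
        if (verbose) std::cout << "Gross pay is " << pay << "\n";
        else         std::cout << pay << "\n";
    }

    int main() {
        Employee e{"Ada", 40.0, 12.5};
        print_pay(gross_pay(e.hours, e.rate), true);  // data coupling (plus control coupling via the flag)
        print_pay(gross_pay(e), false);               // stamp coupling
        ++g_error_count;                              // common coupling through a global
        return 0;
    }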
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
5. Temporal Cohesion: When a module includes functions that are associated by the fact
that they all must be executed around the same time (for example, during system
initialization), the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform a similar operation. For example Error handling, data input and data
output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs
a set of tasks that are associated with each other very loosely, if at all.
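As an illustration (a hypothetical C++ sketch, not from the notes), compare a module with coincidental cohesion to one whose elements all serve a single, well-defined task (functional cohesion, the strongest form, which is not in the list above):

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <string>

    // Coincidental cohesion: unrelated tasks grouped together only for convenience.
    namespace misc_utils {
        double square_root(double x)              { return std::sqrt(x); }
        std::string greet(const std::string& who) { return "Hello, " + who; }
        void print_banner()                       { std::cout << "----\n"; }  // unrelated to the above
    }

    // Functional cohesion: every element contributes to one well-defined task,
    // computing gross pay.
    namespace payroll {
        double regular_pay(double hours, double rate)  { return std::min(hours, 40.0) * rate; }
        double overtime_pay(double hours, double rate) { return hours > 40.0 ? (hours - 40.0) * rate * 1.5 : 0.0; }
        double gross_pay(double hours, double rate)    { return regular_pay(hours, rate) + overtime_pay(hours, rate); }
    }

    int main() {
        std::cout << payroll::gross_pay(45.0, 10.0) << "\n";  // 400 + 75 = 475
        return 0;
    }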
Coupling vs. Cohesion
• Coupling shows the relationships between modules, whereas cohesion shows the relationships
within a module.
• While designing, you should aim for low coupling, i.e., dependency among modules should be
small, and for high cohesion, i.e., a cohesive component/module should focus on a single
function (single-mindedness) with little interaction with other modules of the system.
• In coupling, modules are linked to other modules; in cohesion, the module focuses on a single
thing.
Design Verification
The output of the software design process is design documentation, pseudo-code, detailed logic
diagrams, process diagrams, and detailed descriptions of all functional and non-functional
requirements.
The next phase, which is the implementation of software, depends on all outputs mentioned
above.
It then becomes necessary to verify the output before proceeding to the next phase. The
earlier a mistake is detected, the better; otherwise, it might not be detected until testing of the
product. If the outputs of the design phase are in a formal notation, then their associated tools
for verification should be used; otherwise, a thorough design review can be used for verification
and validation.
By a structured verification approach, reviewers can detect defects that might be caused by
overlooking some conditions. A good design review is important for good software design,
accuracy, and quality.
Function Oriented Design
Function-oriented design is a method of software design in which the model is decomposed into
a set of interacting units or modules, where each unit or module has a clearly defined function.
Thus, the system is designed from a functional viewpoint.
Design Notations
Design Notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically by the following:
Data Flow Diagrams (DFD)
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They
show end-to-end processing; that is, the flow of processing from the point where data enters the
system to the point where it leaves the system can be traced.
Data-flow design is an integral part of several design methods, and most CASE tools support
data-flow diagram creation. Different notations may use different icons to represent data-flow
diagram entities, but their meanings are similar.
As an example, consider a report generator. It produces a report which describes all of the named entities in a data-flow
diagram. The user inputs the name of the design represented by the diagram. The report
generator then finds all the names used in the data-flow diagram. It looks up a data dictionary
and retrieves information about each name. This is then collated into a report which is output
by the system.
Data Dictionaries
A data dictionary lists all data elements appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing on the DFDs in
the DFD model of the system.
A data dictionary lists the purpose of all data items and the definition of all composite data
items in terms of their component data items. For example, a data dictionary entry may
record that the data item grossPay consists of the components regularPay and overtimePay.
For the smallest units of data elements, the data dictionary lists their name and their type.
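As a rough sketch (an assumed representation, not a format prescribed by the notes), such entries could be captured in code as follows, with grossPay recorded as a composite of regularPay and overtimePay:

    #include <map>
    #include <string>
    #include <vector>

    // One dictionary entry: either an elementary item with a type, or a
    // composite item defined by its component data items.
    struct DictionaryEntry {
        std::string type;                     // e.g. "real" for elementary items, "composite" otherwise
        std::vector<std::string> components;  // empty for elementary items
    };

    int main() {
        std::map<std::string, DictionaryEntry> data_dictionary{
            {"regularPay",  DictionaryEntry{"real", {}}},
            {"overtimePay", DictionaryEntry{"real", {}}},
            {"grossPay",    DictionaryEntry{"composite", {"regularPay", "overtimePay"}}},
        };
        return 0;
    }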
A data dictionary plays a significant role in any software development process because of the
following reasons:
o A Data dictionary provides a standard language for all relevant information for use by
engineers working in a project. A consistent vocabulary for data items is essential since,
in large projects, different engineers of the project tend to use different terms to refer
to the same data, which unnecessarily causes confusion.
o The data dictionary provides the analyst with a means to determine the definition of
various data structures in terms of their component elements.
Structured Charts
A structure chart partitions a system into black boxes. A black box is a component whose
functionality is known to the user without knowledge of its internal design.
Pseudo-code
Pseudo-code notations can be used in both the preliminary and detailed design phases. Using
pseudo-code, the designer describes system characteristics using short, concise, English-language
phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e.,
entities). The state is distributed among the objects, and each object handles its state data. For
example, in a Library Automation Software, each library representative may be a separate
object with its data and functions to operate on these data. The tasks defined for one object
cannot refer to or change the data of other objects. Objects have their internal data which represent
their state. Similar objects create a class. In other words, each object is a member of some class.
Classes may inherit features from the superclass.
1. Objects: All entities involved in the solution design are known as objects. For example,
persons, banks, companies, and users are considered as objects. Every entity has some
attributes associated with it and some methods that operate on those attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes that an object can have and the methods that
represent the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity
of the target object, the name of the requested operation, and any other information needed
to perform the operation. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data
and operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods from
the outside world.
6. Inheritance: OOD allows similar classes to be stacked up in a hierarchical manner, where
the lower or sub-classes can import, implement, and re-use allowed variables and
functions from their immediate superclasses. This property of OOD is called
inheritance. This makes it easier to define a specific class and to create generalized
classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism by which methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, and it allows a single interface to perform functions for different
types. Depending upon how the service is invoked, the respective portion of the code
gets executed.
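The following short C++ sketch (hypothetical names, loosely echoing the library-automation example above; not from the notes) ties several of these concepts together: an encapsulated class, a subclass obtained by inheritance, and a polymorphic method:

    #include <iostream>
    #include <string>
    #include <utility>

    class LibraryMember {                      // class: generalized description of an object
        std::string name;                      // encapsulated state (information hiding)
        int books_issued = 0;
    public:
        explicit LibraryMember(std::string n) : name(std::move(n)) {}
        void issue_book() { ++books_issued; }              // message / operation on the object
        virtual int max_books() const { return 3; }        // behaviour subclasses may override
        const std::string& who() const { return name; }
        virtual ~LibraryMember() = default;
    };

    class FacultyMember : public LibraryMember {           // inheritance from the superclass
    public:
        using LibraryMember::LibraryMember;
        int max_books() const override { return 10; }      // polymorphism: same interface, new behaviour
    };

    int main() {
        FacultyMember f{"Prof. Rao"};
        LibraryMember* m = &f;                             // used through the base-class interface
        std::cout << m->who() << " may borrow " << m->max_books() << " books\n";
        return 0;
    }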
A good system design organises the program modules in such a way that they are easy to
develop and change. Structured design techniques help developers deal with the size and
complexity of programs. Analysts create instructions for the developers about how code
should be written and how pieces of code should fit together to form a program.
Importance:
1. If any pre-existing code needs to be understood, organised and pieced together.
2. It is common for the project team to have to write some code and produce original
programs that support the application logic of the system.
There are many strategies or techniques for performing system design. They are:
1. Bottom-up approach:
The design starts with the lowest level components and subsystems. By using
these components, the next immediate higher level components and subsystems
are created or composed. The process is continued till all the components and
subsystems are composed into a single component, which is considered as the
complete system. The level of abstraction increases as the design moves to higher levels.
When a new system needs to be created by reusing the basic information of an existing
system, the bottom-up strategy suits the purpose.
Advantages:
• Economies can result when general solutions are reused.
• It can be used to hide the low-level details of implementation and be
merged with top-down technique.
Disadvantages:
• It is not so closely related to the structure of the problem.
• High quality bottom-up solutions are very hard to construct.
• It leads to the proliferation of ‘potentially useful’ functions rather than the
most appropriate ones.
2. Top-down approach:
Each system is divided into several subsystems and components. Each of the
subsystems is further divided into a set of subsystems and components. This process
of division facilitates the formation of a system hierarchy structure. The complete
software system is considered as a single entity and in relation to the
characteristics, the system is split into sub-system and component. The same is
done with each of the sub-system.
This process is continued until the lowest level of the system is reached. The
design is started initially by defining the system as a whole and then keeps on
adding definitions of the subsystems and components. When all the definitions
are combined together, it turns out to be a complete system.
When the solution of the software needs to be developed from the ground level, top-down
design best suits the purpose.
Advantages:
• The main advantage of the top-down approach is that its strong focus on
requirements helps to make the design responsive to its requirements.
Disadvantages:
• Project and system boundaries tend to be application-specification
oriented. Thus it is more likely that the advantages of component reuse
will be missed.
• The system is likely to miss the benefits of a well-structured, simple
architecture.
3. Hybrid Design:
It is a combination of both the top-down and bottom-up design strategies. In
this approach, modules can be reused.
Software Metrics
Within the software development process, there are many metrics, and they are all interrelated.
Software metrics relate to the four functions of management: planning, organization, control,
and improvement.
1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:
2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are
viewed to be of greater importance to a software developer. For example, Lines of Code (LOC)
measure.
External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for the new software. Note that as the project proceeds,
the project manager will check its progress from time to time and will compare the effort, cost,
and time with the original effort, cost, and time. These metrics are also used to
decrease the development cost, time, effort, and risk. The project quality can also be
improved; as quality improves, the number of errors, and hence the time and cost required, is also
reduced.
Software metrics are useful in the following ways:
• For analysis, comparison, and critical study of different programming languages concerning
their characteristics.
• In comparing and evaluating the capabilities and productivity of the people involved in software
development.
• In making inferences about the effort to be put into the design and development of software
systems.
• In comparing and making design trade-offs between software development and maintenance
cost.
• In providing feedback to software managers about the progress and quality during various
phases of the software development life cycle.
However, software metrics also have limitations:
• The application of software metrics is not always easy, and in some cases, it is difficult and
costly.
• The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.
• They are useful for managing software products but not for evaluating the performance of the
technical staff.
• The definition and derivation of software metrics are usually based on assumptions which are
not standardized and may depend upon the tools available and the working environment.
• Most of the predictive models rely on estimates of certain variables which are often not known
precisely.
LOC Metrics
It is one of the earliest and simplest metrics for calculating the size of a computer program. It
is generally used in calculating and comparing the productivity of programmers. These metrics
are derived by normalizing quality and productivity measures with respect to the size of the
product.
Based on the LOC/KLOC count of software, many other metrics can be computed:
a. Errors/KLOC.
b. $/ KLOC.
c. Defects/KLOC.
d. Pages of documentation/KLOC.
e. Errors/PM.
f. Productivity = KLOC/PM (effort is measured in person-months).
g. $/ Page of documentation.
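A small worked sketch in C++ (all figures are hypothetical, chosen only to illustrate the formulas above):

    #include <iostream>

    int main() {
        double kloc          = 20.0;    // hypothetical size: 20,000 lines of code
        double person_months = 4.0;     // hypothetical effort
        double defects       = 36.0;    // hypothetical defect count
        double cost          = 60000;   // hypothetical cost in $

        std::cout << "Productivity = " << kloc / person_months << " KLOC/PM\n"; // 5
        std::cout << "Defects/KLOC = " << defects / kloc << "\n";               // 1.8
        std::cout << "$/KLOC       = " << cost / kloc << "\n";                  // 3000
        return 0;
    }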
Advantages of LOC
1. Simple to measure
Disadvantage of LOC
1. It is defined on the code. For example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length; it takes no account of
functionality or complexity.
3. Bad software design may cause an excessive number of lines of code.
4. It is language dependent
5. Users cannot easily understand it
Token Count
In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2,
where N1 is the total number of operator occurrences and N2 is the total number of operand
occurrences.
Program Volume (V)
The unit of measurement of volume is the standard unit for size, "bits". It is the actual size of a
program if a uniform binary encoding for the vocabulary is used.
V = N * log2(n)
Program Level (L)
The value of L ranges between zero and one, with L = 1 representing a program written at the
highest possible level (i.e., with minimum size).
L = V* / V
Program Difficulty
The difficulty level or error-proneness (D) of the program is proportional to the number of
unique operators in the program.
D = (n1 / 2) * (N2 / n2)
Program Effort (E)
The effort required to implement or understand a program is given by:
E = V / L = D * V
According to Halstead, the first hypothesis of software science is that the length of a well-
structured program is a function only of the number of unique operators and operands.
N = N1 + N2
The following alternate expression has been published to estimate program length:
N^ = n1 * log2(n1) + n2 * log2(n2)
The potential minimum volume V* is defined as the volume of the shortest program in which
a problem can be coded.
The size of the vocabulary of a program, which consists of the number of unique tokens used
to build a program, is defined as:
n = n1 + n2
where
n = vocabulary of a program
n1 = number of unique operators
n2 = number of unique operands
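The formulas above can be applied mechanically once the operator and operand counts are known. The following C++ sketch (counts chosen to match the sorting example further below, with the totals summed from its table; treat all figures as illustrative) computes the main measures. The program level L = V*/V is left out because it requires the potential volume V*:

    #include <cmath>
    #include <iostream>

    int main() {
        double n1 = 14, n2 = 10;   // unique operators, unique operands
        double N1 = 53, N2 = 38;   // total operator and operand occurrences

        double n  = n1 + n2;                                   // vocabulary
        double N  = N1 + N2;                                   // observed length
        double Nh = n1 * std::log2(n1) + n2 * std::log2(n2);   // estimated length N^
        double V  = N * std::log2(n);                          // volume in bits (about 417 here)
        double D  = (n1 / 2.0) * (N2 / n2);                    // difficulty (about 26.6 here)
        double E  = D * V;                                     // effort, since E = V / L = D * V

        std::cout << "n = " << n << ", N = " << N << ", N^ = " << Nh << "\n"
                  << "V = " << V << ", D = " << D << ", E = "  << E  << "\n";
        // Program level L = V* / V is not computed: it needs the potential
        // (minimum) volume V* of the problem.
        return 0;
    }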
Language Level - Shows the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is written in a low-level
programming language. For example, it is easier to program in Pascal than in Assembler.
λ = L * V* = L^2 * V
Equivalently, since L is approximately 1/D, the language level can be estimated as λ ≈ V / (D * D).
Language levels
Language    Language level (λ)    Variance
PASCAL      2.54                  -
APL         2.42                  -
C           0.857                 0.445
Example: Consider the sorting program shown in the figure. List out the operators and operands
and also calculate the values of the software science measures like n, N, V, E, λ, etc.
Operator    Occurrences    Operand    Occurrences
int         4              SORT       1
()          5              x          7
,           4              n          3
[]          7              i          8
if          2              j          7
<           2              save       3
;           11             im1        3
for         2              2          2
=           6              1          3
-           1              0          1
<=          2              -          -
++          2              -          -
return      2              -          -
{}          3              -          -
Here n1 = 14 (unique operators), n2 = 10 (unique operands), N1 = 53 and N2 = 38.
Estimated length: N^ = n1 * log2(n1) + n2 * log2(n2) = 14 log2(14) + 10 log2(10)
= 14 * 3.81 + 10 * 3.32
= 53.34 + 33.2 ≈ 86.5
For the potential volume, n2* = 3 {x: array holding the integers to be sorted; this is used as both input and output}
Since L=V*/V
This is probably a reasonable time to produce the program, which is very simple.
Function Point (FP) Analysis
Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has been
further modified by the International Function Point Users Group (IFPUG). FPA is used to
make an estimate of the software project, including its testing, in terms of the functionality or
functional size of the software product. Functional point analysis may also be used for the test
estimation of the product. The functional size of the product is measured in terms of the
function point, which is a standard of measurement for the software application.
Objectives of FPA
The basic and primary purpose of the functional point analysis is to measure and provide the
software application functional size to the client, customer, and the stakeholder on their request.
Further, it is used to measure the software project development along with its maintenance,
consistently throughout the project irrespective of the tools and the technologies.
1. FPs of an application are found by counting the number and types of functions used in the
application. The various functions used in an application can be put under five types: external
inputs, external outputs, external inquiries, internal logical files, and external interfaces.
2. FP characterizes the complexity of the software system and hence can be used to depict the
project time and the manpower requirement.
3. The effort required to develop the project depends on what the software does.
5. FP method is used for data processing systems, business systems like information systems.
6. The five parameters mentioned above are also known as information domain characteristics.
7. All the parameters mentioned above are assigned some weights that have been
experimentally determined.
The functional complexities are multiplied with the corresponding weights against each
function, and the values are added up to determine the UFP (Unadjusted Function Point) of the
subsystem.
Here, the weighting factor for each measurement parameter type will be simple, average, or
complex.
The Function Point (FP) is thus calculated with the following formula:
FP = Count-total * [0.65 + 0.01 * ∑(fi)] = Count-total * CAF
where Count-total is the sum of all the weighted function counts (the UFP), and ∑(fi) is the sum
of all 14 questionnaires and gives the complexity adjustment value/factor CAF (i ranges from 1
to 14). Usually, a student is provided with the value of ∑(fi). Based on the FP measure of
software, many other metrics can also be computed:
a. Errors/FP
b. $/FP.
c. Defects/FP
d. Pages of documentation/FP
e. Errors/PM.
f. Productivity = FP/PM (effort is measured in person-months).
g. $/Page of Documentation.
8. LOCs of an application can be estimated from FPs. That is, they are interconvertible. This
process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL
code.
9. FP metrics is used mostly for measuring the size of Management Information System (MIS)
software.
10. But the function points obtained above are unadjusted function points (UFPs). These
(UFPs) of a subsystem are further adjusted by considering some more General System
Characteristics (GSCs). It is a set of 14 GSCs that need to be considered. The procedure for
adjusting UFPs is as follows:
a. Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5.
b. If a particular GSC has no influence, then its weight is taken as 0, and if it has a strong
influence, then its weight is 5.
c. The score of all 14 GSCs is totaled to determine the Total Degree of Influence (TDI).
d. Then the Value Adjustment Factor (VAF) is computed from TDI by using the
formula: VAF = (TDI * 0.01) + 0.65
Remember that the value of VAF lies between 0.65 and 1.35, because when TDI = 0 (no GSC
has any influence) VAF = 0.65, and when TDI = 70 (all 14 GSCs have a strong influence of 5)
VAF = 0.65 + 0.70 = 1.35.
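A minimal C++ sketch of the UFP-to-FP adjustment described above, using assumed, purely illustrative values for UFP and TDI:

    #include <iostream>

    int main() {
        double ufp = 100.0;   // hypothetical Unadjusted Function Point count
        int    tdi = 30;      // hypothetical Total Degree of Influence (range 0..70)

        double vaf = (tdi * 0.01) + 0.65;   // Value Adjustment Factor, between 0.65 and 1.35
        double fp  = ufp * vaf;             // adjusted Function Points

        std::cout << "VAF = " << vaf << ", FP = " << fp << "\n";   // 0.95 and 95
        return 0;
    }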
Example: Compute the function point, productivity, documentation, cost per function for the
following data:
3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month
Solution:
Cyclomatic Complexity
McCabe proposed the cyclomatic number, V(G), of graph theory as an indicator of software
complexity. The cyclomatic number is equal to the number of linearly independent paths
through a program in its graph representation. For a program control graph G, the cyclomatic
number V(G) is given as:
V(G) = E - N + 2 * P
where E is the number of edges, N is the number of nodes, and P is the number of connected
components of the graph (P = 1 for a single program).
Example:
For a module whose control flow graph has
e = 10 edges and
n = 8 nodes,
the cyclomatic complexity is V(G) = e - n + 2 * P = 10 - 8 + 2 = 4 (taking P = 1).
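As a further illustration (a hypothetical C++ function, unrelated to the module in the example above), cyclomatic complexity can also be obtained by counting decision points and adding one:

    #include <iostream>

    // A toy module with three decisions (one loop and two if's), so its control
    // flow graph has cyclomatic complexity 3 + 1 = 4, the same value as the
    // worked example above.
    int count_negatives(const int values[], int len) {
        int negatives = 0;
        for (int i = 0; i < len; ++i) {   // decision 1
            if (values[i] < 0)            // decision 2
                ++negatives;
        }
        if (negatives == 0)               // decision 3
            return 0;
        return negatives;
    }

    int main() {
        int v[] = {3, -1, 4, -1, 5};
        std::cout << count_negatives(v, 5) << "\n";   // prints 2
        return 0;
    }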