Module 1: Testing Methodology
by Rohini M. Sawant
INTRODUCTION
Software testing has always been considered a single phase performed after
coding.
But time has proved that our failures in software projects are mainly due to the fact
that we have not realized the role of software testing as a process.
Software is becoming complex, but the demand for quality in software products
has increased.
This rise in customer awareness for quality increases the workload and
responsibility of the software development team.
That is why software testing has gained so much popularity.
SOFTWARE TESTING DEFINITIONS
● Testing is the process of executing a program with the intent of finding
errors-Myers.
● A successful test is one that uncovers an as-yet-undiscovered error.- Myers
● Testing can show the presence of bugs but never their absence.- E. W. Dijkstra
● Program testing is a rapidly maturing area within software engineering that is
receiving increasing notice both by computer science theoreticians and
practitioners. Its general aim is to affirm the quality of software systems by
systematically exercising the software in carefully controlled circumstances-
E. Miller
Myths about Software Testing
● Testing is a single phase in SDLC .
● Testing is easy.
● Software development is worth more than testing.
● Complete testing is possible. (Hence we go for Effective testing)
● Testing starts after program development.
● The purpose of testing is to check the functionality of the software.
GOALS OF SOFTWARE TESTING
Short-term or immediate goals:
These goals are the immediate results after performing testing. These goals may
be set in the individual phases of SDLC. Some of them are discussed below.
● Bug discovery: The immediate goal of testing is to find errors at any stage of
software development. The more bugs discovered at an early stage, the better
the success rate of software testing.
● Bug prevention: This is the consequent action of bug discovery. From the
behaviour and interpretation of the bugs discovered, everyone in the software
development team learns how to code safely, so that the discovered bugs are
not repeated in later stages or future projects. Though errors cannot be
reduced to zero, they can be minimized. In this sense, bug prevention is a
superior goal of testing.
Long-term goals: These goals affect the product quality in the long run. Some of them are discussed
here:
● Reliability: Reliability is a matter of confidence that the software will not fail, and this level of
confidence increases with rigorous testing.
● Quality: The confidence in reliability, in turn, increases the quality of the product.
● Customer satisfaction: Software Testing > Reliability > Quality > Customer Satisfaction
A complete testing process achieves reliability, reliability enhances the quality, and quality, in turn,
increases customer satisfaction.
EXHAUSTIVE TESTING:
● Exhaustive or complete software testing means that every statement in the program and
every possible path combination with every possible combination of data must be executed.
● The set of possible tests is practically infinite, in the sense that processing resources and
time are never sufficient for performing all of them.
● Computer speed and time constraints limit the possibility of performing all the tests.
● Complete testing requires the organization to invest a long time which is not cost-effective.
● However, due to the complexity and size of modern software systems, exhaustive testing is
rarely feasible or cost-effective. Instead, software testing typically focuses on strategically
selecting representative test cases that are likely to uncover the most significant defects or
vulnerabilities.
● Therefore, testing must be limited to selected subsets that can be executed within the
constrained resources.
EFFECTIVE SOFTWARE TESTING VS. EXHAUSTIVE SOFTWARE TESTING
a) Valid inputs:
It seems that we can test every valid input on the software. But look at a very
simple example of adding two two-digit numbers.
Their range is from –99 to 99 (total 199 values). So the total number of test case
combinations will be 199 × 199 = 39,601.
Further, if we increase the range from two digits to four digits, then the number of
test cases will be 19,999 × 19,999 = 399,960,001.
Most addition programs accept 8 or 10 digit numbers or more.
How can we test all these combinations of valid inputs?
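The arithmetic above can be verified with a short sketch (pure counting, nothing assumed beyond the stated ranges):

```python
# Counts the exhaustive test cases for adding two signed n-digit integers.
def addition_test_cases(digits: int) -> int:
    values = 2 * (10 ** digits) - 1   # integers from -(10**digits - 1) to 10**digits - 1
    return values * values            # every ordered pair of operands

print(addition_test_cases(2))   # 39601 combinations for two-digit operands
print(addition_test_cases(4))   # 399960001 combinations for four-digit operands
```

For 8- or 10-digit operands the count runs into the tens of quintillions, which is why exhaustive testing of even this trivial program is impossible.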
b) Invalid inputs
● Testing the software with valid inputs is only one part of the input sub-domain.
● There is another part, invalid inputs, which must be tested for testing the
software effectively.
● The set of invalid inputs is also too large to test.
● If we consider again the example of adding two numbers, then the following
possibilities may occur from invalid inputs: (i) Numbers out of range (ii)
Combination of alphabets and digits (iii) Combination of all alphabets (iv)
Combination of control characters (v) Combination of any other key on the
keyboard
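As a sketch of how such invalid inputs are screened, here is a hypothetical validator for one operand of the two-digit addition example (the function name and range limits are illustrative assumptions, not from the text):

```python
# Hypothetical validator for one operand of the two-digit addition example.
def parse_operand(text: str) -> int:
    value = int(text)            # rejects alphabets, control characters, mixed input
    if not -99 <= value <= 99:   # rejects numbers out of range
        raise ValueError(f"out of range: {value}")
    return value

# One sample from each invalid-input category listed above should be rejected.
for bad in ["100", "a1", "abc", "\x07", "!@#"]:
    try:
        parse_operand(bad)
        print(repr(bad), "accepted (bug!)")
    except ValueError:
        print(repr(bad), "rejected")
```

Even this sketch tests only one representative per category; the full set of invalid strings is unbounded.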
c) Edited inputs:
● If we can edit inputs at the time of providing inputs to the program, then many
unexpected input events may occur.
● For example, you can add many spaces in the input, which are not visible to
the user. It can be a reason for non-functioning of the program.
● The behaviour of users cannot be predicted.
● They can behave in a number of ways, exposing defects when the program is
tested. That is why edited inputs also cannot be tested completely.
There are too Many Possible Paths Through the Program to Test
● A program path can be traced through the code from the start of a program to
its termination. Two paths differ if the program executes different statements
in each, or executes the same statements but in different order.
● If the loop body contains a branch, there are two paths in each iteration, so
the total number of paths will be 2^n + 1, where n is the number of times the
loop is carried out.
● Therefore, all these paths cannot be tested, as it may take years.
● The complete path testing, if performed somehow, does not guarantee that
there will not be errors.
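The path explosion can be demonstrated by brute-force enumeration for small n (an illustrative sketch; real programs add many more branch points):

```python
from itertools import product

# A loop whose body contains one if/else offers 2 choices per iteration,
# so n iterations produce 2**n distinct paths through the loop body
# (the extra "+1" in the text accounts for skipping the loop entirely).
def count_paths(iterations: int) -> int:
    return sum(1 for _ in product([True, False], repeat=iterations))

for n in (5, 10, 20):
    print(n, count_paths(n))   # 32, 1024, 1048576 -- exponential growth
```

At n = 60 the count already exceeds 10^18 paths, far beyond what any test effort can execute.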
Every Design Error Cannot be Found
EFFECTIVE TESTING
● Testing a selected group of subsets, rather than the whole domain of testing,
makes software testing effective.
● Effective testing can be enhanced if subsets are selected based on the
factors which are required in a particular environment.
● Effective testing provides the flexibility to select only those subsets of the
testing domain, based on project priority, such that the chances of failure in
a particular environment are minimized.
SOFTWARE TESTING TERMINOLOGY
● Failure: When software is tested, failure is the first term that comes into use.
It means the inability of a system or component to perform a required function
according to its specification. Failure describes a problem in the system on
the output side.
● Fault: A fault is the condition that actually causes a system to produce a
failure. It is the reason, embedded in some phase of the SDLC, that results in
failures. Fault is synonymous with the terms defect or bug.
● Error: Whenever a development team member makes a mistake in any phase
of the SDLC, errors are produced. It might be a typographical error, a
misreading of a specification, a misunderstanding of what a subroutine does,
and so on. Error is a very general term for human mistakes. Thus, an error
causes a bug/fault, and the bug in turn causes failures.
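The error → fault → failure chain can be illustrated with a deliberately buggy function (a hypothetical example, not from the text):

```python
# Error: the programmer meant to divide by the count, but typed len(values) - 1.
# Fault: that wrong divisor is now a defect embedded in the code.
def average(values):
    return sum(values) / (len(values) - 1)   # fault (bug): should be len(values)

# Failure: the defect surfaces as wrong output on the output side at run time.
print(average([10, 20, 30]))   # prints 30.0 instead of the expected 20.0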
SOFTWARE TESTING LIFE CYCLE (STLC)
Test Design: One of the major activities in testing is the design of test cases. However,
this activity is not an intuitional process; rather it is a well-planned process. The test
design is an important phase after test planning. It includes the following critical activities:
● Determining the test objectives and their prioritization
● Preparing list of items to be tested
● Mapping items to test cases
● Selection of test case design techniques
● Creating test cases and test data
● Setting up the test environment and supporting tools
● Creating test procedure specification
SOFTWARE TESTING LIFE CYCLE (STLC)
Test Execution: In this phase, all test cases are executed including verification
and validation. Verification test cases are started at the end of each phase of
SDLC. Validation test cases are started after the completion of a module. It is the
decision of the test team to opt for automation or manual execution. Test results
are documented in the test incident reports, test logs, testing status, and test
summary.
Post-Execution/Test Review:
Software Testing Strategy: A testing strategy organizes the whole testing process
into a well-planned series of steps. In other words, the strategy provides a roadmap
that includes the specific activities the test team must perform in order to achieve a
specific goal.
● Test Factors: Test factors are risk factors or issues related to the system under
development. Risk factors need to be selected and ranked according to a specific
system under development. The testing process should reduce these test factors to
a prescribed level.
● Test Phase: This is another component on which the testing strategy is based. It
refers to the phases of SDLC where testing will be performed. Testing strategy may
be different for different models of SDLC, e.g. strategies will be different for waterfall
and spiral models
TEST STRATEGY MATRIX
Test Strategy Matrix: It identifies the concerns that will become the focus of test planning and execution.
In this way, this matrix becomes an input to develop the testing strategy. The matrix is prepared using test
factors and test phase. The steps to prepare this matrix are discussed below.
● Select and rank test factors: Based on the test factors list, the most appropriate factors according to
specific systems are selected and ranked from the most significant to the least. These are the rows
of the matrix.
● Identify system development phases: Different phases according to the adopted development model
are listed as columns of the matrix. These are called test phases.
● Identify risks associated with the system under development: In each factor's row, under each of
the test phases, the test concern and the strategy used to address it are entered. The purpose is
to identify the concerns that need to be addressed under a test phase. The risks may include any
events, actions, or circumstances that may prevent the test program from being implemented or
executed according to schedule, such as late budget approvals, delayed arrival of test equipment,
or late availability of the software application
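The three steps above can be sketched as a small data structure (the factor names, phases, and concerns are illustrative, not from a real project):

```python
# Ranked test factors as rows, SDLC test phases as columns; each cell
# records the concern/strategy used to address it, or None if not applicable.
test_factors = ["Reliability", "Security", "Performance"]   # most -> least significant
test_phases = ["Requirements", "Design", "Code", "Test"]

matrix = {f: {p: None for p in test_phases} for f in test_factors}
matrix["Reliability"]["Requirements"] = "Verify failure-rate targets are specified"
matrix["Security"]["Design"] = "Review access-control design against policy"

for factor in test_factors:
    print(f"{factor:12}", [matrix[factor][p] or "-" for p in test_phases])
```

Filled in this way, the matrix becomes a direct input to test planning: empty cells show phases where a ranked concern is not yet addressed.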
DEVELOPMENT OF TEST STRATEGY
● When the project under consideration starts and progresses, testing too starts from
the first step of SDLC.
● Therefore, the test strategy should be such that the testing process continues till the
implementation of project.
● Moreover, the rule for development of a test strategy is that testing ‘begins from the
smallest unit and progresses to enlarge’.
● This means the testing strategy should start at the component level and finish at the
integration of the entire system.
● Thus, a test strategy includes testing the components being built for the system, and
slowly shifts towards testing the whole system.
● This gives rise to two basic terms—Verification and Validation—the basis for any
type of testing. It can also be said that the testing process is a combination of
verification and validation
● Verification is ‘Are we building the product right?’
● Validation is ‘Are we building the right product?’
V-Testing Life Cycle Model
Validation has the following three activities which are also known as the three levels of
validation testing.
● Unit Testing: It is a major validation effort performed on the smallest module of the
system. If avoided, many bugs become latent bugs and are released to the
customer. Unit testing is a basic level of testing which cannot be overlooked, and
confirms the behaviour of a single module according to its functional specifications.
● Integration Testing: It is a validation technique which combines all unit-tested
modules and performs a test on their aggregation. When we unit test a module, its
interfacing with other modules remains untested. When one module is combined with
another in an integrated environment, the interfacing between units must be tested.
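A minimal sketch of the unit/integration distinction using Python's unittest (the two "modules" are invented for illustration):

```python
import unittest

def normalize(name: str) -> str:          # "module" A
    return name.strip().lower()

def greet(name: str) -> str:              # "module" B, which depends on A
    return f"hello, {normalize(name)}"

class UnitTests(unittest.TestCase):
    # Unit testing: module A is validated in isolation.
    def test_normalize(self):
        self.assertEqual(normalize("  Ada "), "ada")

class IntegrationTests(unittest.TestCase):
    # Integration testing: the interface between B and A is exercised together.
    def test_greet_uses_normalize(self):
        self.assertEqual(greet("  Ada "), "hello, ada")

# run with: python -m unittest <this_file>
```

The unit test would pass even if `greet` mishandled its call to `normalize`; only the integration test checks that interface.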
VALIDATION ACTIVITIES
● System Testing: This testing level focuses on testing the entire integrated
system. It incorporates many types of testing, as the full system can have
various users in different environments. The purpose is to test the validity for
specific users and environments. The validity of the whole system is checked
against the requirement specifications
TESTING TACTICS
● Black-box testing This technique takes care of the inputs given to a system
and the output is received after processing in the system. What is being
processed in the system? How does the system perform these operations?
Black-box testing is not concerned with these questions. It checks the
functionality of the system only. That is why the term black-box is used. It is
also known as functional testing. It is used for system testing under validation.
● White-box testing This technique complements black-box testing. Here, the
system is not a black box. Every design feature and its corresponding code is
checked logically with every possible path execution. So, it takes care of the
structural paths instead of just outputs. It is also known as structural testing
and is used for unit testing under verification.
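The contrast can be sketched with the classic triangle-classification example (an assumed system under test, not from the text):

```python
# System under test: classify a triangle by its side lengths.
def triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box: tests derived from the specification alone (inputs -> outputs).
assert triangle(3, 3, 3) == "equilateral"
assert triangle(5, 5, 3) == "isosceles"

# White-box: tests derived from the code structure, so every branch runs,
# including the a == c comparison that black-box tests might never reach.
assert triangle(3, 4, 5) == "scalene"
assert triangle(4, 5, 4) == "isosceles"   # exercises the a == c branch
print("all branches covered")
```

The black-box tests never look inside `triangle`; the white-box tests are chosen by reading its `if` conditions.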
Testing Tools: Testing tools provide the option to automate the selected testing
technique with the help of tools. A tool is a resource for performing a test process.
The combination of tools and testing techniques enables the test process to be
performed. The tester should first understand the testing techniques and then go
for the tools that can be used with each of the techniques.
TACTICAL TEST PLAN
A tactical test plan is required to start testing. This test plan provides:
Overall objectives
Test team
Testing materials including system documentation, software to be tested, test inputs, test documentation, test tools
Type of testing technique to be adopted in a particular SDLC phase, and the specific tests to be
conducted in that phase
VERIFICATION & VALIDATION
A V-diagram provides the following insights about software testing:
● Requirement Analysis: This phase contains detailed communication with the customer to
understand their requirements and expectations.
● System Design: This phase contains the system design and the complete hardware and
communication setup for developing the product.
● Architectural Design: The system design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with the
outside world (other systems) is clearly understood.
● Module Design: In this phase the system breaks down into small modules. The detailed design
of the modules is specified, also known as Low-Level Design (LLD).
● Unit Testing: Unit Test Plans are developed during the module design phase. These Unit Test
Plans are executed to eliminate bugs at the code or unit level.
● Integration Testing: In integration testing, the modules are integrated and the system is tested.
Integration testing corresponds to the architectural design phase. This test verifies the
communication of modules among themselves.
● System Testing: System testing tests the complete application with its functionality,
interdependency, and communication. It tests the functional and non-functional requirements of
the developed application.
● User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the
production environment. UAT verifies that the delivered system meets the user's requirements
and that the system is ready for use in the real world.
VERIFICATION & VALIDATION ACTIVITIES
All the verification activities are performed in connection with the different phases
of SDLC. The following verification activities have been identified:
Following are the points against which every requirement in SRS should be verified:
Correctness: There are no tools or procedures to measure the correctness of a
specification. The tester uses his or her intelligence to verify the correctness of
requirements. Following are some points which can be adopted:
(a) Testers should refer to other documentations or applicable standards and compare
the specified requirement with them.
(b) Testers can interact with customers or users, if requirements are not well-understood.
(c) Testers should check the correctness in the sense of realistic requirement. If the tester
feels that a requirement cannot be realized using existing hardware and software
technology, it means that it is unrealistic. In that case, the requirement should either be
updated or removed from SRS.
HOW TO VERIFY REQUIREMENTS AND OBJECTIVES?
Unambiguous: A requirement should be verified such that it does not provide too many
meanings or interpretations. It should not create redundancy in specifications. The following must
be verified:
(a) Every requirement has only one interpretation.
(b) Each characteristic of the final product is described using a single unique term.
Consistent: No specification should contradict or conflict with another. Conflicts produce bugs in
the next stages, therefore they must be checked for the following: (a) Real-world objects conflict
(b)Logical conflict between two specified actions
(c) Conflicts in terminology should also be verified.
HOW TO VERIFY REQUIREMENTS AND OBJECTIVES?
(b) Check whether responses of every possible input (valid & invalid) to the
software have been defined.
(c) Check whether figures and tables have been labeled and referenced
completely.
VERIFICATION OF HIGH-LEVEL DESIGN
Check that every functional requirement in the SRS has been taken care of in this design.
Check whether all exception-handling conditions have been taken care of.
Verify the process of transform mapping and transaction mapping, used for the transition from
requirement model to architectural design.
Since architectural design deals with the classification of a system into subsystems or modules, check
the functionality of each module according to the requirements specified.
In the modular approach of architectural design, there are two issues with modularity— Module
Coupling and Module Cohesion. A good design will have low coupling and high cohesion.
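A tiny sketch of the two properties (the functions are illustrative, not from the text): the first version mixes unrelated jobs and shares a global, while the second keeps each function cohesive and couples them only through explicit parameters.

```python
# Low cohesion / tight coupling: one function does unrelated jobs
# (pricing, logging, formatting) and depends on shared global state.
tax_rate = 0.18                       # global couples every caller to this value

def do_everything(price, log):
    total = price * (1 + tax_rate)
    log.append(f"total={total}")
    return f"Rs. {total:.2f}"

# High cohesion / loose coupling: each function does one job, and data
# flows through explicit parameters instead of shared globals.
def with_tax(price: float, rate: float) -> float:
    return price * (1 + rate)

def format_rupees(amount: float) -> str:
    return f"Rs. {amount:.2f}"

print(format_rupees(with_tax(100.0, 0.18)))   # Rs. 118.00
```

During design verification, a reviewer would flag `do_everything` and the global as coupling/cohesion concerns even though both versions compute the same result.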
Verification of User-Interface Design
Check all the interfaces between modules according to the architecture design.
Check all the interfaces between human and computer.
Check all the above interfaces for their consistency.
Check that the response times of all the interfaces are within required ranges.
For a Help Facility, verify the following: (i) The representation of Help in its
desired manner (ii) The user returns to the normal interaction from Help
For error messages and warnings, verify the following: (i) Whether the message
clarifies the problem (ii) Whether the message provides constructive advice for
recovering from the error
VERIFICATION OF LOW-LEVEL DESIGN
● In this verification, low-level design phase is considered. The abstraction level in this
phase is low as compared to high-level design.
● In LLD, a detailed design of modules and data is prepared such that
operational software can be built from it.
● For this, SDD is preferred where all the modules and their interfaces are defined.
● Every operational detail of each module is prepared.
● The details of each module or unit are prepared in their separate SRS and SDD.
● Testers also perform the following parallel activities in this phase:
● 1. The tester verifies the LLD. The details and logic of each module are verified
such that the high-level and low-level abstractions are consistent.
● 2. The tester also prepares the Unit Test Plan.
● This is the last pre-coding phase where internal details of each design entity
are described. For verification, the SRS and SDD of individual modules are
referred to. Some points to be considered are listed below:
Verify the SRS of each module.
Verify the SDD of each module.
In LLD, data structures, interfaces, and algorithms are represented by design
notations; verify the consistency of every item with their design notations.
● Organizations can build a two-way traceability matrix between the SRS and
design (both HLD and LLD) such that at the time of verification of design,
each requirement mentioned in the SRS is verified.
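A two-way traceability check can be sketched as simple mappings (the requirement and design IDs are placeholders):

```python
# Requirements -> design traceability matrix; each SRS requirement maps
# to the HLD/LLD items that realize it (IDs are illustrative placeholders).
req_to_design = {
    "SRS-1": ["HLD-1", "LLD-3"],
    "SRS-2": ["HLD-2"],
    "SRS-3": [],                      # not yet covered by any design item
}

# Verification: every requirement must trace to at least one design element.
uncovered = [req for req, items in req_to_design.items() if not items]
print("uncovered requirements:", uncovered)   # ['SRS-3']
```

The reverse mapping (design item back to requirements) is built the same way and catches design elements that no requirement justifies.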
HOW TO VERIFY CODE?
● Coding is the process of converting LLD specifications into a specific language. This is the
last phase when we get the operational software with the source code.
● The points against which the code must be verified are:
Check that every design specification in HLD and LLD has been coded using traceability
matrix.
Examine the code against a language specification checklist.
Code verification can be done most efficiently by the developer, as he has prepared the code.
He can verify every statement, control structure, loop, and logic such that every possible method
of execution is tested.
● Two kinds of techniques are used to verify the coding: (a) static testing, and (b) dynamic
testing.
VALIDATION
● Validation is a set of activities that ensures the software that has been built is
traceable to customer requirements.
● Validation testing is performed after the coding is over.
The reasons for validation are:
To determine whether the product satisfies the users’ requirements, as stated in the requirement
specification.
To determine whether the product’s actual behaviour matches the desired behaviour, as described
in the functional design specification.
It is not always certain that all the stages till coding are bug-free. The bugs that are still present in
the software after the coding phase need to be uncovered.
Validation testing provides the last chance to discover bugs, otherwise these bugs will move to the
final product released to the customer.
Validation enhances the quality of software.
VALIDATION ACTIVITIES
The validation activities are divided into Validation Test Plan and Validation Test Execution
which are described as follows:
Validation test planning starts as soon as the first output of the SDLC, i.e. the SRS, is
prepared. In every phase, the tester performs two parallel activities—verification at that
phase and the corresponding validation test planning.
For preparing a validation test plan, testers must follow the points described below.
Testers must understand the current SDLC phase.
Testers must study the relevant documents in the corresponding SDLC phase.
On the basis of the understanding of SDLC phase and related documents, testers must
prepare the related test plans which are used at the time of validation testing. Under test
plans, they must prepare a sequence of test cases for validation testing.
The following test plans have been recognized which the testers have already prepared
with the incremental progress of SDLC phases:
● Acceptance test plan: This plan is prepared in the requirement phase according to the
acceptance criteria prepared from the user feedback. This plan is used at the time of
Acceptance Testing.
● System test plan: This plan is prepared to verify the objectives specified in the SRS.
Here, test cases are designed keeping in view how a complete integrated system will
work or behave in different conditions. The plan is used at the time of System Testing.
● Function test plan: This plan is prepared in the HLD phase. In this plan, test cases are
designed such that all the interfaces and every type of functionality can be tested. The
plan is used at the time of Function Testing.
● Integration test plan: This plan is prepared to validate the integration of all the
modules such that all their interdependencies are checked. It also validates
whether the integration is in conformance to the whole system design. This
plan is used at the time of Integration Testing.
● Unit test plan: This plan is prepared in the LLD phase. It consists of a test
plan of every module in the system separately. Unit test plan of every unit or
module is designed such that every functionality related to individual unit can
be tested. This plan is used at the time of Unit Testing.
Validation Test Execution: Validation test execution can be divided in the following
testing activities:
● Unit validation testing: The testing strategy is to first focus on the smaller
building blocks of the full system. One unit or module is the basic building
block of the whole software that can be tested for all its interfaces and
functionality. Thus, unit testing is a process of testing the individual
components of a system. A unit or module must be validated before
integrating it with other modules. Unit validation is the first validation activity
after the coding of one module is over.