Unit 03
Verification vs. Validation
1. Verification is the process of finding whether the software meets the specified requirements for a particular phase, whereas validation is the process of checking whether the software meets the requirements and expectations of the customer.
2. The objective of verification is to check whether the software is constructed according to the requirement and design specifications; the objective of validation is to check whether the specifications are correct and satisfy the business need.
3. Verification describes whether the outputs are as per the inputs or not; validation explains whether the product is accepted by the user or not.
4. Plans, requirements, specifications, and code are evaluated during verification; the actual product or software is tested during validation.
5. Verification is a manual checking of files and documents; validation is checking based on executing the developed computer program.
Strategy of testing
A strategy for software testing is shown in the context of a spiral.
The following figure shows the testing strategy:
Unit testing
Unit testing begins at the centre of the spiral and concentrates on each unit of the software as implemented in source code.
Integration testing
Integration testing focuses on the design and construction of the software architecture.
Validation testing
Validation testing checks that all the requirements, i.e., the functional, behavioural, and performance requirements, are validated against the constructed software.
System testing
System testing confirms that all system elements and overall performance are tested as a whole.
2. Strategic issues
The best strategy will fail if a series of overriding issues are not addressed. Tom Gilb argues
that a software testing strategy will succeed only when software testers do the following:
Specify product requirements in a quantifiable manner long before testing commences.
Although the overriding objective of testing is to find errors, a good testing strategy also
assesses other quality characteristics such as portability, maintainability, and usability. These
should be specified in a way that is measurable so that testing results are unambiguous.
State testing objectives explicitly.
The specific objectives of testing should be stated in measurable terms. For example, test
effectiveness, test coverage, mean-time-to-failure, the cost to find and fix defects, remaining
defect density or frequency of occurrence, and test work-hours should be stated within the
test plan.
Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce overall
testing effort by focusing testing on actual use of the product.
Develop a testing plan that emphasizes “rapid cycle testing.”
Gilb [Gil95] recommends that a software team “learn to test in rapid cycles (2 percent of
project effort) of customer-useful, at least field ‘trialable,’ increments of functionality and/or
quality improvement.” The feedback generated from these rapid cycle tests can be used to
control quality levels and the corresponding test strategies.
Build “robust” software that is designed to test itself.
Software should be designed in a manner that uses antibugging techniques. That is, software
should be capable of diagnosing certain classes of errors. In addition, the design should
accommodate automated testing and regression testing.
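As an illustration only (the average() function and its checks are assumptions, not from the text), a minimal sketch of the antibugging idea in Python: the routine diagnoses certain classes of errors itself through precondition and postcondition checks, and exposes a self-test hook that an automated regression suite could call.

# Illustrative sketch of "antibugging": the function checks its own
# preconditions and postconditions so certain error classes are
# diagnosed immediately instead of propagating silently.
def average(values):
    if not values:
        raise ValueError("average(): empty input sequence")
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("average(): non-numeric value in input")
    result = sum(values) / len(values)
    # Postcondition: the mean must lie between the minimum and maximum.
    assert min(values) <= result <= max(values), "average(): result out of range"
    return result

def self_test():
    # Hook for automated and regression testing of this module.
    assert average([2, 4, 6]) == 4
    try:
        average([])
    except ValueError:
        pass
    else:
        raise AssertionError("empty input was not diagnosed")

if __name__ == "__main__":
    self_test()
    print("self-test passed")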
Use effective technical reviews as a filter prior to testing. Technical reviews can be as
effective as testing in uncovering errors. For this reason, reviews can reduce the amount of
testing effort that is required to produce high quality software.
Conduct technical reviews to assess the test strategy and test cases themselves.
Technical reviews can uncover inconsistencies, omissions, and outright errors in the testing
approach. This saves time and also improves product quality.
Develop a continuous improvement approach for the testing process.
The test strategy should be measured. The metrics collected during testing should be used as
part of a statistical process control approach for software testing.
b) Bottom-up integration: Begins construction and testing with components at the lowest
levels in the program structure. Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is always available and the
need for stubs is eliminated. A bottom-up integration strategy may be implemented with the
following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Integration follows the following pattern—D are drivers and M are modules. Drivers will be
removed prior to integration of modules.
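As a hedged illustration of step 2 (the components parse_record and compute_total are hypothetical, not from the text), a driver in Python is simply a control program that feeds test-case inputs to a low-level cluster and checks the outputs:

# Hypothetical low-level components forming one cluster (a "build").
def parse_record(line):
    name, qty, price = line.split(",")
    return name.strip(), int(qty) * float(price)

def compute_total(records):
    return sum(amount for _, amount in records)

# Driver: coordinates test-case input and output for the cluster.
def cluster_driver():
    test_cases = [
        (["pen, 2, 1.50", "pad, 1, 3.00"], 6.00),
        ([], 0.00),
    ]
    for lines, expected in test_cases:
        total = compute_total([parse_record(l) for l in lines])
        assert abs(total - expected) < 1e-9, f"cluster failed for {lines}"
    print("cluster tests passed")

if __name__ == "__main__":
    cluster_driver()

Once the cluster passes, the driver is removed and the cluster is combined with the next level up, as in step 4.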
Regression testing:-
Each time a new module is added as part of integration testing, the software changes. New data
flow paths are established, new I/O may occur, and new control logic is invoked. These changes
may cause problems with functions that previously worked flawlessly. Regression testing is
the re-execution of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
Regression testing may be conducted manually or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test cases and results for
subsequent playback and comparison. The regression test suite contains three different classes
of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large.
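One possible way (an assumption, not something prescribed by the text) to organize these three classes is with pytest markers, so that a focused subset can be re-executed after each change:

import pytest

# Hypothetical function under test; assume its rounding rule has just changed.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100.0), 2)

@pytest.mark.representative   # class 1: representative sample of all functions
def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0

@pytest.mark.affected         # class 2: functions likely affected by the change
def test_rounding_boundary():
    assert apply_discount(10.0, 12.5) == 8.75

@pytest.mark.changed          # class 3: the component that was actually changed
def test_new_rounding_rule():
    assert apply_discount(19.99, 10) == 17.99

Running pytest -m "affected or changed" then re-executes only the subset that focuses on the change, while the full suite remains available for periodic complete regression runs.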
Smoke testing: - It is an integration testing approach that is commonly used when software
products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing
the software team to assess the project on a frequent basis. In essence, the smoke-testing
approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a build. A build
includes all data files, libraries, reusable modules, and engineered components that are required
to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “showstopper” errors that have the
highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product is smoke tested daily. The
integration approach may be top down or bottom up.
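As an illustrative sketch (the module name safehome and the two checks are assumptions, not from the text), a daily smoke test can be a small script that exercises only the end-to-end "showstopper" paths of the current build:

import subprocess
import sys

# Hypothetical smoke test for a daily build: each check exercises one
# end-to-end path; any failure rejects the build for that day.
def check_build_imports():
    # The packaged build must at least be importable.
    return subprocess.run([sys.executable, "-c", "import safehome"],
                          capture_output=True).returncode == 0

def check_core_startup():
    # The command-line entry point must start and exit cleanly.
    return subprocess.run([sys.executable, "-m", "safehome", "--version"],
                          capture_output=True).returncode == 0

def smoke_test():
    checks = {"install/import": check_build_imports,
              "core start-up": check_core_startup}
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        print("SMOKE TEST FAILED:", ", ".join(failures))
        sys.exit(1)
    print("smoke test passed: today's build is accepted")

if __name__ == "__main__":
    smoke_test()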
Smoke testing provides a number of benefits when it is applied on complex, time critical
software projects:
• Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and
other show-stopper errors are uncovered early.
• The quality of the end product is improved. Smoke testing is likely to uncover functional
errors as well as architectural and component-level design errors.
• Error diagnosis and correction are simplified. Errors uncovered during smoke testing are
likely to be associated with “new software increments”—that is, the software that has just been
added to the build(s) is a probable cause of a newly discovered error.
• Progress is easier to assess. With each passing day, more of the software has been integrated
and more has been demonstrated to work. This improves team morale and gives managers a
good indication that progress is being made.
Strategic options: - The major disadvantage of the top-down approach is the need for stubs
and the attendant testing difficulties that can be associated with them. The major disadvantage
of bottom-up integration is that “the program as an entity does not exist until the last module
is added”.
Selection of an integration strategy depends upon software characteristics and,
sometimes, project schedule. In general, a combined approach or sandwich testing may be
the best compromise.
As integration testing is conducted, the tester should identify critical modules. A critical
module has one or more of the following characteristics:
(1) Addresses several software requirements,
(2) Has a high level of control,
(3) Is complex or error prone,
(4) Has definite performance requirements.
Critical modules should be tested as early as is possible. In addition, regression tests should
focus on critical module function.
Integration test work products: - An overall plan for software integration and a description of the specific tests are documented in a Test Specification. This work product
incorporates a test plan and a test procedure and becomes part of the software configuration.
Program builds (groups of modules) are created to correspond to each phase. The following
criteria and corresponding tests are applied for all test phases:
1. Interface integrity. Internal and external interfaces are tested as each module (or cluster) is
incorporated into the structure.
2. Functional validity. Tests designed to uncover functional errors are conducted.
3. Information content. Tests designed to uncover errors associated with local or global data
structures are conducted.
4. Performance. Tests designed to verify performance bounds established during software
design are conducted.
A history of actual test results, problems, or peculiarities is recorded in a Test Report that can
be appended to the Test Specification.
4. Validation testing
Validation testing begins at the culmination of integration testing, when individual components
have been exercised, the software is completely assembled as a package, and interfacing errors
have been uncovered and corrected. Validation can be defined in many ways, but a simple
(albeit harsh) definition is that validation succeeds when software functions in a manner that
can be reasonably expected by the customer. At this point a battle-hardened software developer
might protest: "Who or what is the arbiter of reasonable expectations?"
Reasonable expectations are defined in the Software Requirements Specification— a document
that describes all user-visible attributes of the software. The specification contains a section
called Validation Criteria. Information contained in that section forms the basis for a validation
testing approach.
Validation Test Criteria
Software validation is achieved through a series of black-box tests that demonstrate conformity
with requirements. A test plan outlines the classes of tests to be conducted and a test procedure
defines specific test cases that will be used to demonstrate conformity with requirements.
Both the plan and procedure are designed to ensure that all functional requirements are
satisfied, all behavioural characteristics are achieved, all performance requirements are
attained, documentation is correct, and human-engineering (usability) and other requirements are met (e.g.,
transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristics conform to specification and are accepted or
(2) A deviation from specification is uncovered and a deficiency list is created.
A deviation or error discovered at this stage in a project can rarely be corrected prior to scheduled
delivery. It is often necessary to negotiate with the customer to establish a method for resolving
deficiencies.
Configuration Review
An important element of the validation process is a configuration review. The intent of the
review is to ensure that all elements of the software configuration have been properly
developed, are catalogued, and have the necessary detail to bolster the support phase of the
software life cycle.
Alpha Testing
The alpha test is conducted at the developer's site by a customer. The software is used in a
natural setting with the developer "looking over the shoulder" of the user and recording errors
and usage problems. Alpha tests are conducted in a controlled environment.
Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike
alpha testing, the developer is generally not present. Therefore, the beta test is a "live"
application of the software in an environment that cannot be controlled by the developer. The
customer records all problems (real or imagined) that are encountered during beta testing and
reports these to the developer at regular intervals. As a result of problems reported during beta
tests, software engineers make modifications and then prepare for release of the software
product to the entire customer base.
5. System testing
A classic system-testing problem is “finger pointing.” This occurs when an error is uncovered,
and the developers of different system elements blame each other for the problem. Rather than
indulging in such nonsense, you should anticipate potential interfacing problems and (1) design
error-handling paths that test all information coming from other elements of the system, (2)
conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) record the results of tests to use as “evidence” if finger pointing does occur, and (4)
participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise
the computer-based system. Although each test has a different purpose, all work to verify that
system elements have been properly integrated and perform allocated functions.
Recovery Testing
Security Testing
Stress Testing
Performance Testing
Deployment Testing
Recovery Testing
Many computer-based systems must recover from faults and resume processing with little or
no downtime. In some cases, a system must be fault tolerant; that is, processing faults must not
cause overall system function to cease. In other cases, a system failure must be corrected within
a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies
that recovery is properly performed. If recovery is automatic (performed by the system itself),
re-initialization, check pointing mechanisms, data recovery, and restart are evaluated for
correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.
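A minimal sketch of an automatic-recovery check (the checkpoint format and the worker being tested are assumptions, not from the text): the test forces a failure part-way through processing and verifies that re-initialization from the last checkpoint produces the correct final result.

import json, os, tempfile

# Hypothetical worker that checkpoints its progress after each item.
def process(items, checkpoint_path, fail_at=None):
    done = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)                   # restart: reload prior state
    for item in items[len(done):]:
        if fail_at is not None and item == fail_at:
            raise RuntimeError("simulated crash") # forced failure
        done.append(item * 2)
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)                    # checkpointing
    return done

def test_recovery():
    path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
    items = [1, 2, 3, 4]
    try:
        process(items, path, fail_at=3)           # force the fault mid-run
    except RuntimeError:
        pass
    # Recovery: rerun and verify processing resumes from the checkpoint.
    assert process(items, path) == [2, 4, 6, 8]
    print("recovery test passed")

if __name__ == "__main__":
    test_recovery()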
Security Testing
Any computer-based system that manages sensitive information or causes actions that can
improperly harm (or benefit) individuals is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for
sport, disgruntled employees who attempt to penetrate for revenge, dishonest individuals who
attempt to penetrate for illicit personal gain.
Security testing attempts to verify that protection mechanisms built into a system will, in fact,
protect it from improper penetration. To quote Beizer [Bei84]: “The system’s security must, of
course, be tested for invulnerability from frontal attack—but must also be tested for
invulnerability from flank or rear attack.”
During security testing, the tester plays the role(s) of the individual who desires to penetrate
the system. Anything goes! The tester may attempt to acquire passwords through external
clerical means; may attack the system with custom software designed to break down any
defences that have been constructed; may overwhelm the system, thereby denying service to
others; may purposely cause system errors, hoping to penetrate during recovery; may browse
through insecure data, hoping to find the key to system entry.
Given enough time and resources, good security testing will ultimately penetrate a system. The
role of the system designer is to make penetration cost more than the value of the information
that will be obtained.
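As an illustration only (the LoginService class and its lockout policy are assumptions, not from the text), one elementary security test verifies that repeated password guessing is locked out rather than eventually succeeding:

# Hypothetical login service whose lockout protection is being verified.
class LoginService:
    def __init__(self, password, max_attempts=3):
        self._password, self._max = password, max_attempts
        self._failures = 0

    def login(self, attempt):
        if self._failures >= self._max:
            return "locked"                    # account is locked out
        if attempt == self._password:
            self._failures = 0
            return "ok"
        self._failures += 1
        return "denied"

def test_brute_force_is_locked_out():
    service = LoginService("s3cret", max_attempts=3)
    for guess in ("123456", "password", "letmein"):
        assert service.login(guess) == "denied"
    # Even the correct password must now be rejected: penetration by
    # guessing should cost more than the protected information is worth.
    assert service.login("s3cret") == "locked"
    print("lockout security test passed")

if __name__ == "__main__":
    test_brute_force_is_locked_out()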
Stress Testing
Earlier software testing steps resulted in thorough evaluation of normal program functions and
performance. Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: “How high can we crank this up before it
fails?”
Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume. For example, (1) special tests may be designed that generate ten
interrupts per second, when one or two is the average rate, (2) input data rates may be increased
by an order of magnitude to determine how input functions will respond, (3) test cases that
require maximum memory or other resources are executed, (4) test cases that may cause
thrashing in a virtual operating system are designed, (5) test cases that may cause excessive
hunting for disk-resident data are created. Essentially, the tester attempts to break the program.
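A hedged sketch of the idea (the bounded queue and the overload volume are assumptions, not from the text): the test drives a component far above its normal input volume and checks that it sheds load in a controlled way instead of crashing.

import queue, threading

# Hypothetical bounded work queue that normally receives modest traffic.
work = queue.Queue(maxsize=1000)

def producer(n_items):
    dropped = 0
    for i in range(n_items):
        try:
            work.put_nowait(i)        # abnormal rate: no pacing at all
        except queue.Full:
            dropped += 1              # overload must be handled, not crash
    return dropped

def consumer(stop):
    while not stop.is_set() or not work.empty():
        try:
            work.get(timeout=0.01)
        except queue.Empty:
            pass

def stress_test():
    stop = threading.Event()
    worker = threading.Thread(target=consumer, args=(stop,))
    worker.start()
    dropped = producer(100_000)       # volume far above the normal rate
    stop.set()
    worker.join()
    print(f"stress test finished: {dropped} items shed under overload")

if __name__ == "__main__":
    stress_test()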
A variation of stress testing is a technique called sensitivity testing. In some situations (the
most common occur in mathematical algorithms), a very small range of data contained within
the bounds of valid data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data combinations
within valid input classes that may cause instability or improper processing.
Performance Testing
For real-time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable. Performance testing is designed to test
the run-time performance of software within the context of an integrated system. Performance
testing occurs throughout all steps in the testing process. Even at the unit level, the performance
of an individual module may be assessed as tests are conducted. However, it is not until all
system elements are fully integrated that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation. That is, it is often necessary to measure resource
utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor
execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a
regular basis. By instrumenting a system, the tester can uncover situations that lead to
degradation and possible system failure.
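As a small illustration (the handle_request function and the 50 ms bound are assumptions, not from the text), software instrumentation for a performance test can be as simple as timing the operation over many runs and checking the result against the stated requirement:

import time, statistics

# Hypothetical operation whose response time must stay under 50 ms.
def handle_request(payload):
    return sorted(payload)

def performance_test(runs=200):
    payload = list(range(10_000, 0, -1))
    samples = []
    for _ in range(runs):
        start = time.perf_counter()            # instrumentation point
        handle_request(payload)
        samples.append((time.perf_counter() - start) * 1000.0)
    mean_ms = statistics.mean(samples)
    worst_ms = max(samples)
    print(f"mean {mean_ms:.2f} ms, worst {worst_ms:.2f} ms over {runs} runs")
    assert worst_ms < 50.0, "performance requirement (50 ms) not met"

if __name__ == "__main__":
    performance_test()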
Deployment Testing
In many cases, software must execute on a variety of platforms and under more than one
operating system environment. Deployment testing, sometimes called configuration testing,
exercises the software in each environment in which it is to operate. In addition, deployment
testing examines all installation procedures and specialized installation software (e.g.,
“installers”) that will be used by customers, and all documentation that will be used to introduce
the software to end users.
As an example, consider the Internet-accessible version of SafeHome software that would
allow a customer to monitor the security system from remote locations. The SafeHome
WebApp must be tested using all Web browsers that are likely to be encountered. A more
thorough deployment test might encompass combinations of Web browsers with various
operating systems (e.g., Linux, Mac OS, and Windows). Because security is a major issue, a
complete set of security tests would be integrated with the deployment test.
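A hedged sketch of a deployment-test matrix (the browser and operating-system lists and the launch_webapp stub are assumptions; a real deployment test would start the actual browser on the actual platform): the same check is parameterized over every environment combination the WebApp must support.

import itertools
import pytest

BROWSERS = ["Chrome", "Firefox", "Safari"]
OPERATING_SYSTEMS = ["Linux", "Mac OS", "Windows"]

# Hypothetical launcher standing in for a real browser/OS session.
def launch_webapp(browser, os_name):
    return {"browser": browser, "os": os_name, "login_page_loads": True}

@pytest.mark.parametrize("browser,os_name",
                         list(itertools.product(BROWSERS, OPERATING_SYSTEMS)))
def test_safehome_webapp_deployment(browser, os_name):
    session = launch_webapp(browser, os_name)
    assert session["login_page_loads"], f"failed on {browser}/{os_name}"

Each of the nine combinations becomes a separate test case, so a failure report immediately identifies the offending configuration.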
6. Debugging
In the context of software engineering, debugging is the process of fixing a bug in the software.
In other words, it refers to identifying, analysing, and removing errors. This activity begins
after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious task
because errors need to be resolved at all stages of debugging.
Debugging Process: Steps involved in debugging are:
• Problem identification and report preparation.
• Assigning the report to a software engineer to verify that the defect is genuine.
• Defect Analysis using modelling, documentation, finding and testing candidate flaws,
etc.
• Defect Resolution by making required changes to the system.
• Validation of corrections.
The debugging process will always have one of two outcomes:
1. The cause will be found and corrected.
2. The cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test case to help validate
that suspicion, and work toward error correction in an iterative fashion.
During debugging, we encounter errors that range from mildly annoying to catastrophic. As
the consequences of an error increase, the amount of pressure to find the cause also increases.
Pressure sometimes forces a software developer to fix one error and at the same time
introduce two more.
Debugging Approaches/Strategies:
1. Brute force: The system is studied for a longer duration in order to understand it.
This helps the debugger construct different representations of the system being debugged,
depending on the need. The system is also studied actively to find recent
changes made to the software.
2. Backtracking: Backward analysis of the problem, which involves tracing the program
backward from the location of the failure message in order to identify the region of
faulty code. A detailed study of that region is then conducted to find the cause of the defect.
3. Forward analysis: Tracing the program forward using breakpoints or print statements
at different points in the program and studying the results. The region where the wrong
outputs are obtained is the region that needs to be focused on to find the defect.
4. Using past experience: The software is debugged by drawing on experience with problems
of a similar nature. The success of this approach depends on the expertise of the debugger.
5. Cause elimination: It introduces the concept of binary partitioning. Data related to the
error occurrence are organized to isolate potential causes.
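A small illustration of the binary-partitioning idea behind cause elimination (the failing computation and the data values are assumptions, not from the text): the input data are repeatedly split in half, keeping whichever half still reproduces the error, until the offending element is isolated.

# Hypothetical computation that fails on one particular input value.
def compute(value):
    return 100 // value            # fails (ZeroDivisionError) when value == 0

def fails(data):
    try:
        for value in data:
            compute(value)
        return False
    except ZeroDivisionError:
        return True

def isolate_cause(data):
    # Binary partitioning: keep the half that still reproduces the error.
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        data = left if fails(left) else right
    return data[0]

if __name__ == "__main__":
    records = [7, 3, 12, 0, 9, 4]
    print("isolated cause:", isolate_cause(records))   # prints 0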
White-box testing includes various techniques, which are as follows:
1. Statement coverage: In this technique, the aim is to traverse all statements at least once.
Hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least
once. Since all lines of code are covered, this helps in pointing out faulty code.
2. Branch Coverage: In this technique, test cases are designed so that each branch from every
decision point is traversed at least once. In a flowchart, all edges must be traversed at least
once. In the example flowchart referred to here (not reproduced), four test cases are required so that all branches of all decisions, i.e., all edges of the flowchart, are covered.
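As an illustrative sketch in Python (the classify function and the chosen test values are assumptions), the difference between statement coverage and branch coverage can be seen on a single decision:

# Hypothetical function under test.
def classify(x):
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# Statement coverage: every line executes at least once.
# The single test classify(-1) already executes all statements.
assert classify(-1) == "negative"

# Branch coverage: every edge of each decision is taken at least once,
# so the false branch of "x < 0" must also be exercised.
assert classify(5) == "non-negative"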
3. Condition Coverage: In this technique, all individual conditions must be covered as shown
in the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, the test cases must make each of these conditions evaluate to both
TRUE and FALSE. One possible set would be:
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
• #TC1: X = 0, Y = 0
• #TC2: X = 0, Y = 5
• #TC3: X = 55, Y = 0
• #TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions then 2^n test cases would be required.
5. Basis Path Testing: In this technique, control flow graphs are made from code or flowchart
and then Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.
Cyclomatic Complexity: It is a measure of the logical complexity of the software and is used
to define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = the number of non-overlapping regions in the graph
Example:
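As an illustrative (hypothetical) case: suppose the flow graph for a small program has 6 nodes, 7 edges, and 2 predicate nodes. Then V(G) = P + 1 = 2 + 1 = 3, V(G) = E – N + 2 = 7 – 6 + 2 = 3, and the graph has 3 non-overlapping regions, so all three formulas agree. Three independent paths therefore exist, and at least three test cases should be designed, one forcing execution along each independent path.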
6. Loop Testing: Loops are widely used and are fundamental to many algorithms;
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For simple loops of size n, test cases are designed that:
• Skip the loop entirely
• Only one pass through the loop
• 2 passes
• m passes, where m < n
• n-1, n, and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and
we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests
are applied to each. If they are not independent, they are treated as nested loops. A brief sketch of simple-loop test design follows.
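As a brief sketch of simple-loop test design (the sum_first function and the loop bound n = 4 are assumptions, not from the text):

# Hypothetical loop with a fixed maximum of n = 4 iterations.
def sum_first(values, n=4):
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

# Simple-loop cases: skip the loop, one pass, two passes, m < n passes,
# and the n / n+1 boundary (the n+1 case checks the bound is enforced).
assert sum_first([]) == 0                        # zero passes
assert sum_first([5]) == 5                       # one pass
assert sum_first([1, 2]) == 3                    # two passes
assert sum_first([1, 2, 3]) == 6                 # m = 3 < n passes
assert sum_first([1, 2, 3, 4]) == 10             # exactly n passes
assert sum_first([1, 2, 3, 4, 5]) == 10          # n+1 items: extra item ignored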
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in optimization of the code, removing errors, and helps in removing extra lines
of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language
as opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.
Each column of the table (not reproduced here) corresponds to a rule, and each rule becomes a test case for testing. So there will be
4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a
software system.
6. Compatibility testing – The test case result depends not only on the product but also on
the infrastructure used to deliver its functionality. When the infrastructure parameters are changed,
the software is still expected to work properly. Some parameters that generally affect the compatibility of
software are:
1. Processor (e.g., Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32 bit or 64 bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
5. Usage: Black-box testing is done at higher levels of testing, i.e., system testing and acceptance testing. White-box testing is done at lower levels of testing, i.e., unit testing and integration testing.
6. Automation: Black-box testing is hard to automate due to the dependency of testers and programmers on each other. White-box testing is easy to automate.