Unit 03
Software Engineering Notes

1. Strategic approach to Software testing


• Testing is a set of activities that are planned in advance, i.e., before development starts, and conducted systematically.
• The software engineering literature defines various strategies for carrying out testing.
• All of these strategies provide a testing template.
Following are the generic characteristics of these testing templates:
• To perform testing successfully, the developer should conduct effective technical reviews.
• Testing begins at the component level and works outward toward the integration of the whole computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and, for large projects, by an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any strategy of testing.
Difference between Verification and Validation
1. Verification is the process of checking whether the software meets the specified requirements of a particular phase. Validation is the process of checking whether the software meets the requirements and expectations of the customer.
2. Verification evaluates an intermediate product. Validation evaluates the final product.
3. The objective of verification is to check whether the software is constructed according to the requirement and design specifications. The objective of validation is to check whether the specifications are correct and satisfy the business need.
4. Verification checks whether the outputs correspond to the inputs. Validation checks whether the product is accepted by the user.
5. Verification is done before validation. Validation is done after verification.
6. Plans, requirements, specifications, and code are evaluated during verification. The actual product or software is tested during validation.
7. Verification mainly involves manual checking of files and documents. Validation is computer-based checking, executing the developed program.

Strategy of testing
A strategy for software testing may be viewed in the context of the spiral.
The following figure shows the testing strategy:

Unit testing
Unit testing begins at the centre of the spiral and concentrates on each unit of the software as implemented in source code.
Integration testing
Integration testing focuses on the design and construction of the software architecture.
Validation testing
Validation testing checks that all requirements (functional, behavioural, and performance) are validated against the constructed software.
System testing
System testing confirms that the software and all other system elements are tested as a whole.
2. Strategic issues
The best strategy will fail if a series of overriding issues are not addressed. Tom Gilb argues
that a software testing strategy will succeed when software testers do the following:
Specify product requirements in a quantifiable manner long before testing commences.
Although the overriding objective of testing is to find errors, a good testing strategy also
assesses other quality characteristics such as portability, maintainability, and usability. These
should be specified in a way that is measurable so that testing results are unambiguous.
State testing objectives explicitly.
The specific objectives of testing should be stated in measurable terms. For example, test
effectiveness, test coverage, mean-time-to-failure, the cost to find and fix defects, remaining
defect density or frequency of occurrence, and test work-hours should be stated within the
test plan.
Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce overall
testing effort by focusing testing on actual use of the product.
Develop a testing plan that emphasizes “rapid cycle testing.”
Gilb [Gil95] recommends that a software team “learn to test in rapid cycles (2 percent of
project effort) of customer-useful, at least field ‘trialable,’ increments of functionality and/or
quality improvement.” The feedback generated from these rapid cycle tests can be used to
control quality levels and the corresponding test strategies.
Build “robust” software that is designed to test itself.
Software should be designed in a manner that uses antibugging techniques. That is, software
should be capable of diagnosing certain classes of errors. In addition, the design should
accommodate automated testing and regression testing.
Use effective technical reviews as a filter prior to testing.
Technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.
Conduct technical reviews to assess the test strategy and test cases themselves.
Technical reviews can uncover inconsistencies, omissions, and outright errors in the testing
approach. This saves time and also improves product quality.
Develop a continuous improvement approach for the testing process.
The test strategy should be measured. The metrics collected during testing should be used as
part of a statistical process control approach for software testing.

3. Test strategies for conventional software


There are many strategies that can be used to test software. At one extreme, you can wait until
the system is fully constructed and then conduct tests on the overall system in hopes of finding
errors. This approach, although appealing, simply does not work. It will result in buggy
software that disappoints all stakeholders. At the other extreme, you could conduct tests on a
daily basis, whenever any part of the system is constructed. This approach, although less
appealing to many, can be very effective. Unfortunately, some software developers hesitate to
use it. What to do? A testing strategy that is chosen by most software teams falls between the
two extremes. It takes an incremental view of testing, beginning with the testing of individual
program units, moving to tests designed to facilitate the integration of the units, and
culminating with tests that exercise the constructed system. Each of these classes of tests is
described in the sections that follow.
I. Unit Testing
The unit test focuses on the internal processing logic and data structures within the boundaries
of a component. This type of testing can be conducted in parallel for multiple components.
Unit-test considerations:
1. The module interface is tested to ensure that information flows properly into and out of the unit.
2. Local data structures are examined to ensure that data stored temporarily maintains its integrity during execution.
3. All independent paths are exercised to ensure that all statements in a module have been executed at least once.
4. Boundary conditions are tested to ensure that the module operates properly at its boundaries; software often fails at its boundaries (see the sketch after this list).
5. All error-handling paths are tested. If data do not enter and exit properly, all other tests are moot. Among the potential errors that should be tested when error handling is evaluated are:
(1) Error description is unintelligible,
(2) Error noted does not correspond to error encountered,
(3) Error condition causes system intervention prior to error handling,
(4) exception-condition processing is incorrect,
(5) Error description does not provide enough information to assist in the location of the cause
of the error.
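As an illustration of considerations 4 and 5, here is a minimal unit-test sketch using Python's unittest; the component letter_grade and its limits are invented for this example.

```python
import unittest

def letter_grade(score):
    # Hypothetical component under test: classifies a 0-100 score.
    if score < 0 or score > 100:
        raise ValueError(f"score out of range: {score}")
    return "pass" if score >= 40 else "fail"

class LetterGradeUnitTest(unittest.TestCase):
    def test_boundary_conditions(self):
        # Software often fails at its boundaries, so test exactly at them.
        self.assertEqual(letter_grade(0), "fail")
        self.assertEqual(letter_grade(39), "fail")
        self.assertEqual(letter_grade(40), "pass")
        self.assertEqual(letter_grade(100), "pass")

    def test_error_handling_paths(self):
        # The error noted should correspond to the error encountered.
        with self.assertRaises(ValueError):
            letter_grade(-1)
        with self.assertRaises(ValueError):
            letter_grade(101)

if __name__ == "__main__":
    unittest.main()
```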
Unit-test procedures: The design of unit tests can occur before coding begins or after source code has been generated. Because a component is not a stand-alone program, driver and/or stub software must often be developed for each unit test.
A driver is nothing more than a “main program” that accepts test-case data, passes such data to the component to be tested, and prints relevant results.
Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested. A stub may do minimal data manipulation, print verification of entry, and return control to the module undergoing testing.
Drivers and stubs represent testing “overhead.” That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product.
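A minimal sketch of a driver and a stub, assuming a hypothetical component compute_total that invokes a subordinate module get_tax_rate; both names are invented for this illustration.

```python
def get_tax_rate_stub(region):
    # Stub: replaces the subordinate module invoked by the component.
    # It prints verification of entry, does minimal data manipulation,
    # and returns control with a fixed, predictable value.
    print(f"stub: get_tax_rate entered with region={region!r}")
    return 0.25

def compute_total(amount, region, get_tax_rate=get_tax_rate_stub):
    # Component under test; the subordinate is injected so that the
    # stub can stand in for a module that may not exist yet.
    return amount * (1.0 + get_tax_rate(region))

def driver():
    # Driver: a "main program" that accepts test-case data, passes it
    # to the component, and prints relevant results.
    for amount, region, expected in [(100.0, "EU", 125.0), (0.0, "EU", 0.0)]:
        result = compute_total(amount, region)
        print(f"compute_total({amount}, {region!r}) = {result}, expected {expected}")

if __name__ == "__main__":
    driver()
```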

II. Integration Testing


Data can be lost across an interface; one component can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired major function. The objective of integration testing is to take unit-tested components and build a program structure that has been dictated by design. The program is constructed and tested in small increments, where errors are easier to isolate and correct. A number of different incremental integration strategies are:
a) Top-down integration: Top-down integration testing is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy.
Modules subordinate to the main control module are incorporated into the structure in either a
depth-first or breadth-first manner. The integration process is performed in a series of five
steps:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The top-down integration strategy verifies major control or decision points early in the test
process. Stubs replace low-level modules at the beginning of top-down testing. Therefore, no
significant data can flow upward in the program structure. As a tester, you are left with three
choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.
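A minimal sketch of the top-down steps, assuming a hypothetical main control module with one subordinate component: the stub is substituted first (step 1), then replaced by the actual component (steps 2 and 4), and the test is re-run after each integration (steps 3 and 5).

```python
def report_stub():
    # Stub substituted for a component directly subordinate to main_control.
    return "stub report"

def report_component():
    # The actual component that later replaces the stub, one at a time.
    return "real report"

def main_control(report=report_stub):
    # The main control module acts as the top of the hierarchy under test.
    return f"main produced: {report()}"

# Tests are conducted as each component is integrated; re-running the
# earlier test afterwards is a simple form of regression testing.
print(main_control())                          # exercised with the stub
print(main_control(report=report_component))   # stub replaced by the real component
```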

b) Bottom-up integration: Begins construction and testing with components at the lowest
levels in the program structure. Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is always available and the
need for stubs is eliminated. A bottom-up integration strategy may be implemented with the
following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Integration follows the pattern shown in the accompanying figure, where D denotes drivers and M denotes modules. Drivers are removed prior to the integration of modules.
Regression testing:-
Each time a new module is added as part of integration testing, the software changes. New data
flow paths are established, new I/O may occur, and new control logic is invoked. These changes
may cause problems with functions that previously worked flawlessly. Regression testing is
the re-execution of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
Regression testing may be conducted manually or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test cases and results for
subsequent playback and comparison. The regression test suite contains three different classes
of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large.
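A minimal sketch of the three classes of regression test cases, assuming a hypothetical function discounted_price belonging to the software being integrated.

```python
import unittest

def discounted_price(base, rate):
    # Existing function that previously worked flawlessly; the regression
    # suite re-executes these cases each time a new module is added.
    return round(base * (1.0 - rate), 2)

class RegressionSuite(unittest.TestCase):
    def test_representative_sample(self):
        # A representative test exercising an existing software function.
        self.assertEqual(discounted_price(10.0, 0.1), 9.0)

    def test_function_likely_affected_by_change(self):
        # Focuses on behaviour likely to be affected by the latest change.
        self.assertEqual(discounted_price(10.0, 0.0), 10.0)

    def test_changed_component(self):
        # Focuses on the component that has just been changed.
        self.assertEqual(discounted_price(0.0, 0.5), 0.0)

if __name__ == "__main__":
    unittest.main()
```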
Smoke testing: It is an integration testing approach that is commonly used when product software is developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project on a frequent basis. In essence, the smoke-testing approach encompasses the following activities (a sketch follows the list):
1. Software components that have been translated into code are integrated into a build. A build
includes all data files, libraries, reusable modules, and engineered components that are required
to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “showstopper” errors that have the
highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product is smoke tested daily. The
integration approach may be top down or bottom up.
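A minimal sketch of a daily smoke test, assuming a hypothetical product function process_order assembled into the current build; the test only attempts to expose showstopper errors, not to test exhaustively.

```python
def process_order(item, qty):
    # Stand-in for a product function included in today's build.
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"item": item, "qty": qty}

def smoke_test():
    # Exercise the build's basic function; any failure here is a
    # "showstopper" that must be fixed before the build is accepted.
    try:
        order = process_order("widget", 1)
        assert order == {"item": "widget", "qty": 1}
    except Exception as exc:
        print(f"SHOWSTOPPER: build fails its basic function: {exc}")
        raise
    print("Smoke test passed: build is stable enough for further testing.")

if __name__ == "__main__":
    smoke_test()
```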

Smoke testing provides a number of benefits when it is applied on complex, time critical
software projects:
• Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and
other show-stopper errors are uncovered early,
• The quality of the end product is improved. Smoke testing is likely to uncover functional
errors as well as architectural and component-level design errors.
• Error diagnosis and correction are simplified. Errors uncovered during smoke testing are
likely to be associated with “new software increments”—that is, the software that has just been
added to the build(s) is a probable cause of a newly discovered error.
• Progress is easier to assess. With each passing day, more of the software has been integrated
and more has been demonstrated to work. This improves team morale and gives managers a
good indication that progress is being made.

Strategic options: - The major disadvantage of the top-down approach is the need for stubs
and the attendant testing difficulties that can be associated with them. The major disadvantage
of bottom-up integration is that “the program as an entity does not exist until the last module
is added”.
Selection of an integration strategy depends upon software characteristics and,
sometimes, project schedule. In general, a combined approach or sandwich testing may be
the best compromise.
As integration testing is conducted, the tester should identify critical modules. A critical
module has one or more of the following characteristics:
(1) Addresses several software requirements,
(2) Has a high level of control,
(3) Is complex or error prone, or
(4) Has definite performance requirements.
Critical modules should be tested as early as is possible. In addition, regression tests should
focus on critical module function.
Integration test work products: The overall plan for integration of the software and a description of specific tests are documented in a Test Specification. This work product incorporates a test plan and a test procedure and becomes part of the software configuration.
Program builds (groups of modules) are created to correspond to each phase. The following
criteria and corresponding tests are applied for all test phases:
1. Interface integrity. Internal and external interfaces are tested as each module (or cluster) is
incorporated into the structure.
2. Functional validity. Tests designed to uncover functional errors are conducted.
3. Information content. Tests designed to uncover errors associated with local or global data
structures are conducted.
4. Performance. Tests designed to verify performance bounds established during software
design are conducted.
A history of actual test results, problems, or peculiarities is recorded in a Test Report that can
be appended to the Test Specification.

4. Validation testing
Validation testing begins at the culmination of integration testing, when individual components
have been exercised, the software is completely assembled as a package, and interfacing errors
have been uncovered and corrected. Validation can be defined in many ways, but a simple
(albeit harsh) definition is that validation succeeds when software functions in a manner that
can be reasonably expected by the customer. At this point a battle-hardened software developer
might protest: "Who or what is the arbiter of reasonable expectations?"
Reasonable expectations are defined in the Software Requirements Specification— a document
that describes all user-visible attributes of the software. The specification contains a section
called Validation Criteria. Information contained in that section forms the basis for a validation
testing approach.
Validation Test Criteria
Software validation is achieved through a series of black-box tests that demonstrate conformity
with requirements. A test plan outlines the classes of tests to be conducted and a test procedure
defines specific test cases that will be used to demonstrate conformity with requirements.
Both the plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioural characteristics are achieved, all performance requirements are attained, documentation is correct, and usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristics conform to specification and are accepted or
(2) A deviation from specification is uncovered and a deficiency list is created.
A deviation or error discovered at this stage of a project can rarely be corrected prior to scheduled
delivery. It is often necessary to negotiate with the customer to establish a method for resolving
deficiencies.
Configuration Review
An important element of the validation process is a configuration review. The intent of the
review is to ensure that all elements of the software configuration have been properly
developed, are catalogued, and have the necessary detail to bolster the support phase of the
software life cycle.
Alpha Testing
The alpha test is conducted at the developer's site by a customer. The software is used in a
natural setting with the developer "looking over the shoulder" of the user and recording errors
and usage problems. Alpha tests are conducted in a controlled environment.
Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike
alpha testing, the developer is generally not present. Therefore, the beta test is a "live"
application of the software in an environment that cannot be controlled by the developer. The
customer records all problems (real or imagined) that are encountered during beta testing and
reports these to the developer at regular intervals. As a result of problems reported during beta
tests, software engineers make modifications and then prepare for release of the software
product to the entire customer base.

5. System testing
A classic system-testing problem is “finger pointing.” This occurs when an error is uncovered,
and the developers of different system elements blame each other for the problem. Rather than
indulging in such nonsense, you should anticipate potential interfacing problems and (1) design
error-handling paths that test all information coming from other elements of the system, (2)
conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) record the results of tests to use as “evidence” if finger pointing does occur, and (4)
participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise
the computer-based system. Although each test has a different purpose, all work to verify that
system elements have been properly integrated and perform allocated functions.
Recovery Testing
Security Testing
Stress Testing
Performance Testing
Deployment Testing

Recovery Testing
Many computer-based systems must recover from faults and resume processing with little or
no downtime. In some cases, a system must be fault tolerant; that is, processing faults must not
cause overall system function to cease. In other cases, a system failure must be corrected within
a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies
that recovery is properly performed. If recovery is automatic (performed by the system itself),
re-initialization, checkpointing mechanisms, data recovery, and restart are evaluated for
correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.

Security Testing
Any computer-based system that manages sensitive information or causes actions that can
improperly harm (or benefit) individuals is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for
sport, disgruntled employees who attempt to penetrate for revenge, dishonest individuals who
attempt to penetrate for illicit personal gain.
Security testing attempts to verify that protection mechanisms built into a system will, in fact,
protect it from improper penetration. To quote Beizer [Bei84]: “The system’s security must, of
course, be tested for invulnerability from frontal attack—but must also be tested for
invulnerability from flank or rear attack.”
During security testing, the tester plays the role(s) of the individual who desires to penetrate
the system. Anything goes! The tester may attempt to acquire passwords through external
clerical means; may attack the system with custom software designed to break down any
defences that have been constructed; may overwhelm the system, thereby denying service to
others; may purposely cause system errors, hoping to penetrate during recovery; may browse
through insecure data, hoping to find the key to system entry.
Given enough time and resources, good security testing will ultimately penetrate a system. The
role of the system designer is to make penetration cost more than the value of the information
that will be obtained.

Stress Testing
Earlier software testing steps resulted in thorough evaluation of normal program functions and
performance. Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: “How high can we crank this up before it
fails?”
Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume. For example, (1) special tests may be designed that generate ten
interrupts per second, when one or two is the average rate, (2) input data rates may be increased
by an order of magnitude to determine how input functions will respond, (3) test cases that
require maximum memory or other resources are executed, (4) test cases that may cause
thrashing in a virtual operating system are designed, (5) test cases that may cause excessive
hunting for disk-resident data are created. Essentially, the tester attempts to break the program.
A variation of stress testing is a technique called sensitivity testing. In some situations (the
most common occur in mathematical algorithms), a very small range of data contained within
the bounds of valid data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data combinations
within valid input classes that may cause instability or improper processing.

Performance Testing
For real-time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable. Performance testing is designed to test
the run-time performance of software within the context of an integrated system. Performance
testing occurs throughout all steps in the testing process. Even at the unit level, the performance
of an individual module may be assessed as tests are conducted. However, it is not until all
system elements are fully integrated that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation. That is, it is often necessary to measure resource
utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor
execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a
regular basis. By instrumenting a system, the tester can uncover situations that lead to
degradation and possible system failure.
Deployment Testing
In many cases, software must execute on a variety of platforms and under more than one
operating system environment. Deployment testing, sometimes called configuration testing,
exercises the software in each environment in which it is to operate. In addition, deployment
testing examines all installation procedures and specialized installation software (e.g.,
“installers”) that will be used by customers, and all documentation that will be used to introduce
the software to end users.
As an example, consider the Internet-accessible version of SafeHome software that would
allow a customer to monitor the security system from remote locations. The SafeHome
WebApp must be tested using all Web browsers that are likely to be encountered. A more
thorough deployment test might encompass combinations of Web browsers with various
operating systems (e.g., Linux, Mac OS, and Windows). Because security is a major issue, a
complete set of security tests would be integrated with the deployment test.

6. Debugging
In the context of software engineering, debugging is the process of fixing a bug in the software.
In other words, it refers to identifying, analysing, and removing errors. This activity begins
after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious task
because errors need to be resolved at all stages of debugging.
Debugging Process: Steps involved in debugging are:
• Problem identification and report preparation.
• Assigning the report to a software engineer to verify that the defect is genuine.
• Defect Analysis using modelling, documentation, finding and testing candidate flaws,
etc.
• Defect Resolution by making required changes to the system.
• Validation of corrections.
The debugging process will always have one of two outcomes:
1. The cause will be found and corrected.
2. The cause will not be found.
Later, the person performing debugging may suspect a cause, design a test case to help validate
that suspicion and work toward error correction in an iterative fashion.
During debugging, we encounter errors that range from mildly annoying to catastrophic. As
the consequences of an error increase, the amount of pressure to find the cause also increases.
Pressure sometimes forces a software developer to fix one error and at the same time
introduce two more.
Debugging Approaches/Strategies:
1. Brute force: Study the system for a longer duration in order to understand it. This helps the debugger construct different representations of the system being debugged, depending on the need. The system is also studied actively to find recent changes made to the software.
2. Backtracking: Backward analysis of the problem, which involves tracing the program backward from the location of the failure message in order to identify the region of faulty code. A detailed study of the region is then conducted to find the cause of the defect.
3. Forward analysis: Tracing the program forward using breakpoints or print statements at different points in the program and studying the results. The region where the wrong outputs first appear is the region to focus on to find the defect (see the sketch after this list).
4. Past experience: Debugging the software using past experience with problems similar in nature. The success of this approach depends on the expertise of the debugger.
5. Cause elimination: This introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.
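A minimal sketch of forward analysis (approach 3), using an invented average function seeded with a defect; print statements at successive points show where the first wrong value appears.

```python
def average(values):
    total = 0
    for v in values:
        total += v
        print(f"checkpoint 1: after adding {v}, total={total}")  # values look correct here
    count = len(values) - 1                 # seeded defect: should be len(values)
    print(f"checkpoint 2: count={count}")   # the first wrong value appears here
    return total / count

# Expected 4.0 but prints 6.0; the checkpoint output isolates the faulty
# region to the line that computes `count`.
print(average([2, 4, 6]))
```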

7. White Box Testing


White box testing is also known as glass box testing, structural testing, clear box testing, open box testing, and transparent box testing. It tests the internal coding and infrastructure of the software, focusing on checking predefined inputs against expected and desired outputs. It is based on the inner workings of an application and revolves around testing the internal structure. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and to strengthen the security of the software.
The term “white box” is used because of the internal perspective of the system. The names clear box, white box, and transparent box all denote the ability to see through the software's outer shell into its inner workings.
Developers do white box testing: the developer tests every line of the code of the program. The developers perform the white box testing and then send the application or software to the testing team, which performs black box testing, verifies the application against the requirements, identifies bugs, and sends it back to the developers. The developers fix the bugs, do one more round of white box testing, and send it back to the testing team. Here, fixing a bug implies that the defect is removed and the particular feature works correctly in the application.
Working process of white box testing:
• Input: Requirements, functional specifications, design documents, source code.
• Processing: Performing risk analysis to guide the entire process.
• Proper test planning: Designing test cases so as to cover the entire code, then executing and repeating until error-free software is reached. The results are also communicated.
• Output: Preparing the final report of the entire testing process.

White box testing comprises various techniques, which are as follows:
1. Statement coverage: In this technique, the aim is to traverse every statement at least once; hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.
Statement Coverage Example
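A minimal code sketch of statement coverage, assuming a hypothetical apply_bonus function: a single test case that makes the decision true executes every statement at least once.

```python
def apply_bonus(salary, performance):
    bonus = 0                    # statement 1
    if performance > 8:          # statement 2 (decision)
        bonus = salary * 0.25    # statement 3
    return salary + bonus        # statement 4

# TC1 (performance > 8) drives execution through statements 1-4,
# achieving 100% statement coverage with one test case.
assert apply_bonus(1000, 9) == 1250.0
```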

2. Branch coverage: In this technique, test cases are designed so that each branch from every decision point is traversed at least once. In a flowchart, all edges must be traversed at least once.
In the flowchart example, 4 test cases are required so that all branches of all decisions are covered, i.e., all edges of the flowchart are covered.
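Continuing with apply_bonus from the statement-coverage sketch above: for branch coverage, the single true-case test is no longer sufficient, because the false branch of the decision must also be traversed.

```python
assert apply_bonus(1000, 9) == 1250.0   # TC1: decision true (bonus branch)
assert apply_bonus(1000, 5) == 1000     # TC2: decision false (no-bonus branch)
```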

3. Condition Coverage: In this technique, all individual conditions must be covered as shown
in the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, test cases are designed so that these conditions take both TRUE and FALSE values. One possible example would be:
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0
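A runnable rendering of the pseudocode above, as a hypothetical check function: TC1 and TC2 together make each individual condition evaluate to both TRUE and FALSE.

```python
def check(x, y):
    # Direct translation of the pseudocode: IF (X == 0 || Y == 0) PRINT '0'
    if x == 0 or y == 0:
        return "0"
    return "non-zero"

assert check(0, 55) == "0"   # TC1: X == 0 is TRUE,  Y == 0 is FALSE
assert check(5, 0) == "0"    # TC2: X == 0 is FALSE, Y == 0 is TRUE
```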

4. Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
• #TC1: X = 0, Y = 0
• #TC2: X = 0, Y = 5
• #TC3: X = 55, Y = 0
• #TC4: X = 55, Y = 5
Hence, four test cases are required for the two individual conditions.
Similarly, if there are n conditions, then 2^n test cases are required.

5. Basis Path Testing: In this technique, control flow graphs are made from code or flowchart
and then Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph.
2. Calculate the cyclomatic complexity.
3. Find the independent paths.
4. Design test cases corresponding to each independent path.
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.

Cyclomatic Complexity: It is a measure of the logical complexity of the software and is used
to define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the total number of nodes.
3. V(G) = the number of non-overlapping regions in the graph.
Example:

V(G) = 4 (Using any of the above formulae)


Number of independent paths = 4
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
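A small sketch with an invented classify function showing how the formulas agree: the function has two predicate nodes (the if and the elif), so V(G) = P + 1 = 3, and there are three independent paths, one per return statement.

```python
def classify(x):
    if x < 0:             # predicate node 1
        return "negative"
    elif x == 0:          # predicate node 2
        return "zero"
    else:
        return "positive"

# One test case per independent path, as basis path testing requires:
assert classify(-5) == "negative"   # path 1
assert classify(0) == "zero"        # path 2
assert classify(7) == "positive"    # path 3
```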

6. Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For a simple loop of size n, test cases are designed that (see the sketch after this list):
• Skip the loop entirely
• Make only one pass through the loop
• Make two passes
• Make m passes, where m < n
• Make n-1, n, and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and
we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests
are applied for each. If they’re not independent, treat them like nesting.
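A minimal sketch of the simple-loop cases for n = 5, assuming a hypothetical sum_first function whose loop makes exactly n passes over a list.

```python
def sum_first(values, n):
    total = 0
    for i in range(n):     # the simple loop under test
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]            # loop size n = 5
assert sum_first(data, 0) == 0    # skip the loop entirely
assert sum_first(data, 1) == 1    # only one pass through the loop
assert sum_first(data, 2) == 3    # two passes
assert sum_first(data, 3) == 6    # m passes, where m < n
assert sum_first(data, 4) == 10   # n - 1 passes
assert sum_first(data, 5) == 15   # n passes
try:
    sum_first(data, 6)            # n + 1 passes: exposes the boundary error
except IndexError:
    print("n + 1 passes correctly exposed an out-of-bounds access")
```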
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code, removing errors, and helps in removing extra lines
of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language
as opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.

8. Black Box Testing


Black box testing is a technique of software testing that examines the functionality of software without peering into its internal structure or coding. The primary source of black box testing is the specification of requirements stated by the customer.
In this method, the tester selects a function, gives it an input value to examine its functionality, and checks whether the function produces the expected output. If the function produces the correct output, it passes the test; otherwise, it fails. The test team reports the result to the development team and then tests the next function. After testing of all functions is complete, if severe problems remain, the software is given back to the development team for correction.

Black box testing can be done in the following ways:


1. Syntax-driven testing: This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or languages that can be represented by a context-free grammar. Here, test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly so
instead of giving all of them separately we can group them and test only one input of each
group. The idea is to partition the input domain of the system into several equivalence classes
such that each member of the class works similarly, i.e., if a test case in one class results in
some error, other members of the class would also result in the same error.
The technique involves two steps (see the sketch after this list):
1. Identification of equivalence classes: Partition any input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
2. Generating test cases: (i) Assign a unique identification number to each valid and invalid input class. (ii) Write test cases covering all valid and invalid classes, making sure that no two invalid inputs mask each other. For calculating the square root of a number, the equivalence classes would be:
(a) Valid inputs:
• A whole number that is a perfect square; the output will be an integer.
• A whole number that is not a perfect square; the output will be a decimal number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like “a”, “!”, “;”, etc.
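A minimal sketch of one test case per equivalence class for the square-root example, assuming a hypothetical safe_sqrt routine.

```python
import math

def safe_sqrt(x):
    # Hypothetical square-root routine used to illustrate the classes above.
    if not isinstance(x, (int, float)):
        raise TypeError("input must be a number")
    if x < 0:
        raise ValueError("input must be non-negative")
    return math.sqrt(x)

assert safe_sqrt(49) == 7.0                # valid: perfect square, integer-valued output
assert safe_sqrt(2) != int(safe_sqrt(2))   # valid: non-perfect square, decimal output
assert safe_sqrt(6.25) == 2.5              # valid: positive decimal
for bad in (-4, "a"):                      # invalid classes: negative number, character
    try:
        safe_sqrt(bad)
        assert False, "invalid input was accepted"
    except (ValueError, TypeError):
        pass
```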
3. Boundary value analysis: Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, the efficiency of testing improves and the probability of finding errors increases. For example, if the valid range is 10 to 100, then test the boundary values 10 and 100 as well as values just beyond them (9 and 101), apart from other valid and invalid inputs.
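A minimal sketch of boundary-value test inputs for the 10-to-100 range, assuming a hypothetical accept validator.

```python
def accept(x):
    # Hypothetical validator for the 10-to-100 range discussed above.
    return 10 <= x <= 100

assert accept(10)        # at the lower boundary
assert accept(100)       # at the upper boundary
assert not accept(9)     # just below the lower boundary
assert not accept(101)   # just above the upper boundary
assert accept(55)        # a nominal in-range value
```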
4. Cause effect Graphing – This technique establishes a relationship between logical input
called causes with corresponding actions called the effect. The causes and effects are
represented using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
For example, a cause-effect graph (shown in the accompanying figure) can be converted into a decision table. Each column of the table corresponds to a rule, and each rule becomes a test case for testing. So, in the example, there will be 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a
software system.
6. Compatibility testing: The result of a test case depends not only on the product but also on the infrastructure used to deliver its functionality. When the infrastructure parameters are changed, the software is still expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor (e.g., Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of the machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating system (Windows, Linux, etc.).

Black Box Testing Type


The following are the main categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Non-functional Testing (NFT)
Functional testing: It checks the system against its functional requirements.
Regression testing: It ensures that newly added code is compatible with the existing code; in other words, that a new software update has no impact on the existing functionality of the software. It is carried out after system maintenance operations and upgrades.
Non-functional testing: Also known as NFT, this is testing of the non-functional aspects of software. It focuses on the software's performance, usability, and scalability.
Advantages of Black Box Testing:
• The tester does not need programming skills or detailed knowledge of the implementation to carry out black box testing.
• It is efficient for testing larger systems.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used in finding the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing:
• There is a possibility of repeating the same tests while implementing the testing process.
• Without clear functional specifications, test cases are difficult to implement.
• It is difficult to execute the test cases because of complex inputs at different stages of
testing.
• Sometimes, the reason for the test failure cannot be detected.
• Some parts of the application may remain untested.
• It does not reveal the errors in the control structure.
• Working with a large sample space of inputs can be exhaustive and consumes a lot of
time.

Difference between Black Box Testing and White Box Testing

1. Basic: Black box testing is a software testing technique that examines the functionality of software without knowing its internal structure or coding. In white box testing, the internal structure of the software is known to the tester.
2. Also known as: Black box testing is also known as functional testing, data-driven testing, and closed-box testing. White box testing is also known as structural testing, clear box testing, code-based testing, and transparent testing.
3. Programming knowledge: In black box testing, less programming knowledge is required. In white box testing, programming knowledge is required.
4. Algorithm testing: Black box testing is not well suited for algorithm testing. White box testing is well suited and recommended for algorithm testing.
5. Usage: Black box testing is done at higher levels of testing, i.e., system testing and acceptance testing. White box testing is done at lower levels of testing, i.e., unit testing and integration testing.
6. Automation: Black box testing is hard to automate due to the mutual dependency of testers and programmers. White box testing is easy to automate.
7. Tested by: Black box testing is mainly performed by software testers. White box testing is mainly performed by developers.
8. Time consumption: Black box testing is less time-consuming; the time consumed depends on the availability of the functional specifications. White box testing is more time-consuming; designing test cases takes a long time due to lengthy code.
9. Base of testing: The base of black box testing is external expectations. The base of white box testing is the code, which is responsible for the internal working.
10. Exhaustiveness: Black box testing is less exhaustive than white box testing. White box testing is more exhaustive than black box testing.
11. Implementation knowledge: In black box testing, no implementation knowledge is required. In white box testing, implementation knowledge is required.
12. Aim: The main objective of black box testing is to check the business needs or the customer's requirements. The main objective of white box testing is to check code quality.
13. Defect detection: In black box testing, defects are identified only once the code is ready. In white box testing, there is a possibility of early detection of defects.
14. Testing method: Black box testing can be performed by a trial-and-error technique. White box testing can test data domains and data boundaries in a better way.
15. Types: The main types of black box testing are functional testing, non-functional testing, and regression testing. The types of white box testing are path testing, loop testing, and condition testing.
16. Errors: Black box testing does not find errors related to the code. White box testing detects hidden errors and also helps to optimize the code.