Software Testing Guide Book v0.1
Ajitha, Amrish Shah, Ashna Datye, Bharathy J, Deepa M G, James M, Jayapradeep J, Jeffin
Jacob M, Kapil Mohan Sharma, Leena Warrier, Mahesh, Michael Frank, Narendra N, Naveed M,
Phaneendra Y, Prathima N, Ravi Kiran N, Rajeev D, Sarah Salahuddin, Siva Prasad B, Shalini R,
Shilpa D, Subramanian D Ramprasad, Sunitha C N, Sunil Kumar M K, Usha Padmini K, Winston
George and Harinath P V
1. The Software Testing Guide Book
Foreword
Software Testing has gained phenomenal importance in recent years in the System
Development Life Cycle. Many learned people have worked on the topic and
provided various techniques and methodologies for effective and efficient testing.
Today, even though we have many books and articles on Software Test Engineering,
many people are misguided in understanding the underlying concepts of the subject.
The Software Testing Guide Book (STGB) is an open source project aimed at bringing
the technicalities of Software Testing into one place and arriving at a common
understanding.
This guide book has been authored by professionals who have been working on Testing
various applications. We wanted to bring out a base knowledge bank where Testing
enthusiasts can start to learn the science and art of Software Testing, and this is how
this book has come out.
This guide book does not prescribe specific methodologies to be followed while
Testing; instead, it provides the reader with a conceptual understanding of the subject.
Regards,
The SofTReL Team.
About SofTReL
The Software Testing Research Lab (SofTReL) is a non-profit organization dedicated to
the research and advancement of Software Testing.
The concept of having a common place for Software Testing research was formulated in
2001. Initially we named it ‘Software Quality and Engineering’. In March 2004, we
renamed it the ‘Software Testing Research Lab’ – SofTReL.
Professionals who are currently working in the industry and possess rich experience
in testing form the members of the Lab.
Visit http://www.softrel.org for more information.
Part I – Foundations of Software Testing
This section addresses the fundamentals of Software Testing and their practical
application in real life.
Part II – Software Testing for various Architectures
This section concentrates on testing applications under various architectures
such as Client/Server, Web, Pocket PC, Mobile and Embedded.
Part III – Platform Specific Testing
This section addresses testing C++ and Java applications using the white box testing approach.
Authors
The guide book has been authored by professionals who ‘Test’ everyday.
Ajitha - GrayLogic Corporation, New Jersey, USA
Amrish Shah - MAQSoftware, Mumbai
Ashna Datye - RS Tech Inc, Canada
Bharathy Jayaraman - Ivesia Solutions (I) Pvt Limited, Chennai
Deepa M G - Ocwen Technology Xchange, Bangalore
James M - CSS, Chennai
Jayapradeep Jiothis - Satyam Computer Services, Hyderabad
Jeffin Jacob Mathew - ICFAI Business School, Hyderabad
Kapil Mohan Sharma - Pixtel Communications, New Delhi
Mahesh - iPointSoft, Hyderabad
Michael Frank - USA
Narendra Nagaram - Satyam, Hyderabad
Naveed Mohammad – vMoksha, Bangalore
Phaneendra Y - Wipro Technologies, Bangalore
Prathima Nagaprakash – Wipro Technologies, Bangalore
Ravi Kiran N - Andale, Bangalore
Rajeev Daithankar - Persistent Systems Pvt. Ltd., Pune
Sarah Salahuddin - Arc Solutions, Pakistan
Siva Prasad Badimi - Danlaw Technologies, Hyderabad
Shalini Ravikumar - USA
Shilpa Dodla - Decatrend Technologies, Chennai
Subramanian Dattaramprasad - Impelsys India, Bangalore
Sunitha C N - Infosys Technologies, Mysore
Sunil Kumar M K - Yahoo India, Bangalore
Usha Padmini Kandala - Virtusa Corp, Massachusetts
Winston George - Raj TV Networks, Chennai
Harinath - SofTReL, Bangalore (Coordinator)
Intended Audience
This guide book is aimed at all Testing Professionals - from beginners to advanced
users. It provides a baseline understanding of the conceptual theory.
How to Contribute
This is an open source project. If you are interested in contributing to the book or to the
Lab, please do write in to contribute@softrel.org. We need your expertise in our
research.
Future Enhancements
Initially we are releasing Part I of the STGB; the second and third parts of the book
will follow. For update information on this project, do continue to visit
http://www.softrel.org/stgb.html
Copyrights
SofTReL does not claim to have originated the Testing methodologies, types and various
other concepts presented here. We have tried to present each theoretical concept of
Software Testing with a live example, for easier understanding of the subject and to
arrive at a common understanding of Software Test Engineering.
However, we did put in a few of our own proposed ways to achieve specific tasks, and
these are governed by The GNU Free Documentation License (GNU FDL). Please visit
http://www.gnu.org/doc/doc.html for the complete guidelines of the license.
2. What is Software Testing and Why is it Important?
A brief history of Software Engineering and the SDLC.
The software industry has evolved through four eras: the 1950s-60s, the mid-1960s to
the late 1970s, the mid-1970s to the mid-1980s, and the mid-1980s to the present. Each
era has its own distinctive characteristics, but over the years software has increased
in size and complexity. Several problems are common to almost all of the eras and are
discussed below.
The Software Crisis dates back to the 1960s, when the primary reason for the
situation was less-than-acceptable software engineering practice. In the early days
there was great interest in computers and a great deal of code was written, but there
were no established languages. In the early 1970s many computer programs started
failing, people lost confidence, and an industry crisis was declared. The various
reasons leading to the crisis included:
• Hardware advances outpacing the ability to build software for that hardware.
• The inability to build software in pace with the demands.
• Increasing dependency on software.
• The struggle to build reliable and high-quality software.
• Poor design and inadequate resources.
This crisis, though identified in the early years, persists to date, and we have
examples of software failures around the world. Software is generally considered a
failure if the project is terminated because of cost or schedule overruns, if the
project has experienced overruns in excess of 50% of the original estimate, or if the
software results in client lawsuits. Examples include failures of air traffic control
systems, medical software, and telecommunication software. These failures stem from
one or more of the reasons listed above, and also from bad software engineering
practices. The worst software practices include:
• No historical software-measurement data.
• Rejection of accurate cost estimates.
• Failure to use automated estimating and planning tools.
• Excessive, irrational schedule pressure and creep in user requirements.
• Failure to monitor progress and to perform risk management.
• Failure to use design reviews and code inspections.
To avoid these failures and thus improve the record, what is needed is a better
understanding of the process and better estimation techniques for cost, time and
quality measures. But what is a process? A process transforms inputs into outputs,
i.e. a product.
At present a large number of problems exist due to chaotic software processes, and
occasional success depends on individual effort. Therefore, to be able to deliver
successful software projects, a focus on the process is essential, since a focus on the
product alone is likely to miss scalability issues and improvements to the existing
system. A focus on the process helps in the predictability of outcomes, project
trends, and project characteristics. A software process is the set of activities,
methods and practices involving transformations that people use to develop and
maintain software.
A process needs to be managed well, and thus process management comes into play.
Process management is concerned with the knowledge and management of the software
process and its technical aspects, and with ensuring that the processes are performed
as expected and that improvements are made.
From this we conclude that a set of defined processes can possibly save us from
software project failures. It is nonetheless important to note that the process alone
cannot help us avoid all problems, because circumstances vary and the process has to
adapt to these varying needs. Importance must be given to the human aspect of software
development, since that alone has a great impact on the results; effective cost and
time estimates may go totally to waste if the human resources are not planned and
managed effectively. Secondly, the problems related to software engineering principles
may be resolved when the needs are correctly identified. Correct identification then
makes it easier to develop best practices, because something that is suitable for one
organisation may not be most suitable for another.
Therefore, to make a successful product, a combination of several things is required
under the umbrella of a well-defined process.
Having talked about the software process overall, it is important to identify the role
software testing plays in producing quality software and in steering the overall
process.
The Computer Society defines testing as follows: “Testing -- A verification method that
applies a controlled set of conditions and stimuli for the purpose of finding errors.
This is the most desirable method of verifying the functional and performance requirements.
Test results are documented proof that requirements were met and can be repeated.
The resulting data can be reviewed by all concerned for confirmation of capabilities.”
There may be many definitions of software testing, and many appeal to us from time to
time, but it is best to start by defining testing and then move on to suit our needs.
When you need to integrate third-party software into your existing software, this
demands testing of the purchased software against your requirements. The two systems
were designed and developed differently, so integration takes top priority during
testing. Regression Testing of the integrated software is also a must, to cross-check
that the two systems work together as per the requirements.
4.4 Procedure Control Systems
Procedure Control Systems are the ones that control the functions of another system.
"The process of designing a model of a real system and conducting experiments with
this model for the purpose of understanding the behavior of the system and/or
http://www.SofTReL.org 13
evaluating various strategies for the operation of the system"-- Introduction to
Simulation Using SIMAN, by C. D. Pegden, R. E. Shannon and R. P. Sadowski, McGraw-
Hill, 1990.
This state changes one discrete step at a time as events happen in the system.
Therefore, the actual design of the simulation involves making choices about which
entities to model, what attributes represent the entity state, which events to model,
how these events affect the entity attributes, and the sequence of the events. Examples
of such systems are simulated battlefield scenarios, highway traffic control systems,
multi-teller systems, computer networks, etc.
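To make this concrete, here is a minimal sketch of a discrete-event simulation loop in
Python. The single-teller queue, the rates and the event kinds are invented for
illustration; they are not drawn from any particular simulation package.

    import heapq
    import random

    def simulate(num_customers=5, arrival_rate=1.0, service_rate=0.8, seed=1):
        """Discrete-event simulation of a single-teller queue. The state
        (queue length) changes one discrete step at a time, as arrival
        and departure events are processed in time order."""
        random.seed(seed)
        events = []                              # priority queue of (time, kind)
        t = 0.0
        for _ in range(num_customers):           # schedule random arrivals
            t += random.expovariate(arrival_rate)
            heapq.heappush(events, (t, "arrival"))

        queue = 0                                # customers waiting or in service
        server_free_at = 0.0
        while events:
            now, kind = heapq.heappop(events)
            if kind == "arrival":
                queue += 1
                # Service starts when the teller becomes free.
                start = max(now, server_free_at)
                server_free_at = start + random.expovariate(service_rate)
                heapq.heappush(events, (server_free_at, "departure"))
            else:                                # departure
                queue -= 1
            print(f"t={now:6.2f}  {kind:9s}  queue={queue}")

    if __name__ == "__main__":
        simulate()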
Continuous Simulation Systems
If instead of using a model with discrete entities we use data with continuous values,
we end up with continuous simulation. For example, instead of trying to simulate
battlefield scenarios using discrete entities such as soldiers and tanks, we can model
the behavior and movements of troops using differential equations.
Social Simulation Systems
Social simulation is not a technique by itself but uses the various types of simulation
described above. However, because of the specialized application of those techniques for
social simulation it deserves a special mention of its own.
The field of social simulation involves using simulation to learn about and predict
various social phenomena such as voting patterns, migration patterns, economic
decisions made by the general population, etc. One interesting application of social
simulation is in a field called artificial life, which is used to obtain useful
insights into the formation and evolution of life.
One disadvantage is that the evaluation of the system is based on the "expert's"
opinion, which may differ from expert to expert. Also, if the system is very large, it
is bound to involve many experts; each expert may view it differently and give
conflicting opinions. This makes it difficult to determine the validity of the system.
Despite all these disadvantages, subjective testing is necessary for testing systems
with human interaction.
Objective Testing
Objective testing is mainly used in systems where the data can be recorded while the
simulation is running. This testing technique relies on the application of statistical and
automated methods to the data collected.
Statistical methods are used to provide an insight into the accuracy of the simulation.
These methods include hypothesis testing, data plots, principal component analysis and
cluster analysis.
4.12 Data Presentation
Data Presentation software stores data and displays it to the user on demand. An
example is a Content Management System. Suppose you have a web site in English, and
you also maintain the same web site in other languages. The user selects the language
he wishes to see, and the system displays the same web site in the chosen language.
You develop your web site in the various languages and store them on the system; the
system then displays whichever language the user chooses.
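As a toy illustration of this idea (the content store, page keys and strings below are
invented for the example and do not reflect any real CMS), the same page can be stored
per language and served according to the user's choice:

    # Toy content store: the same page stored in several languages.
    PAGES = {
        "home": {
            "en": "Welcome to our site.",
            "fr": "Bienvenue sur notre site.",
            "de": "Willkommen auf unserer Website.",
        },
    }

    def render(page, language, default="en"):
        """Return the stored page in the user-chosen language,
        falling back to the default language if it is unavailable."""
        translations = PAGES[page]
        return translations.get(language, translations[default])

    print(render("home", "fr"))   # Bienvenue sur notre site.
    print(render("home", "es"))   # no Spanish version: falls back to English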
Visibility
Visibility is our ability to observe the states and outputs of the software under test.
Features that improve visibility are:
• Access to Code
Developers must provide full viewing access to testers. Code, change records
and design documents should be provided to the testing team. Someone on the
testing team must know how to read code.
• Event logging
The events to log include user events, system milestones, error handling and
completed transactions. The logs may be stored in files, ring buffers in memory,
and/or serial ports. The things to be logged include a description of the event, a
timestamp, the subsystem, resource usage and the severity of the event. Logging
should be adjustable by subsystem and type. Logs report internal errors, help in
isolating defects, and give useful information about context, tests, customer usage
and test coverage.
• Error detection mechanisms
Data integrity checking and system-level error detection (e.g. Microsoft
Appviewer) are useful here. In addition, assertions and probes with the following
features are really helpful:
Code is added to detect internal errors.
Assertions abort on error.
Probes log errors.
Design by Contract theory - this technique requires that
assertions be defined for functions. Preconditions apply to inputs,
and violations implicate calling functions; postconditions apply
to outputs, and violations implicate called functions. This
effectively solves the oracle problem for testing (see the sketch
after this list).
• Resource Monitoring
Memory usage should be monitored to find memory leaks. The states of running
methods, threads or processes should be watched (profiling interfaces may be
used for this). In addition, the configuration values should be dumped.
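Returning to the Design by Contract point above, here is a minimal sketch of
contract-style checking using plain Python assertions. The integer-square-root routine
and its contract are illustrative assumptions, not taken from any specific Design by
Contract framework.

    import math

    def isqrt(n: int) -> int:
        """Integer square root with contract-style assertions."""
        # Precondition: a violation here implicates the *calling* code.
        assert isinstance(n, int) and n >= 0, f"precondition violated: n={n!r}"

        root = math.floor(math.sqrt(n))

        # Postcondition: a violation here implicates *this* function.
        assert root * root <= n < (root + 1) * (root + 1), "postcondition violated"
        return root

    print(isqrt(17))   # 4; assertions abort on error, as described above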
Control
Control refers to our ability to provide inputs and reach states in the software under
test.
The features that improve controllability are:
• Test Points
Test points allow data to be inspected, inserted or modified at points in the
software. They are especially useful for dataflow applications. In addition, a
pipes-and-filters architecture provides many opportunities for test points.
• Custom User Interface controls
Custom UI controls often raise serious testability problems with GUI test drivers.
Ensuring testability usually requires:
Adding methods to report the necessary information.
Customizing test tools to make use of these methods.
Getting a tool expert to advise developers on testability and to
build the required support.
Asking third-party control vendors about support by test
tools.
• Test Interfaces
Interfaces may be provided specifically for testing, e.g. Excel, Xconq, etc.
Existing interfaces may be able to support significant testing, e.g. InstallShield,
AutoCAD, Tivoli, etc.
• Fault injection
Error seeding - instrumenting low-level I/O code to simulate errors - makes it
much easier to test error handling. It can be applied at both the system and
application level (a small sketch follows this list).
• Installation and setup
Testers should be notified when installation has completed successfully. They
should be able to verify installation, programmatically create sample records
and run multiple clients, daemons or servers on a single machine.
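Returning to the fault injection point above, here is a minimal sketch of fault
injection at the application level using Python's standard unittest.mock. The
load_config function and its fallback behaviour are invented for the example; real
error seeding may instead instrument the low-level I/O layer, as described above.

    from unittest import mock

    def load_config(path):
        """Code under test: must degrade gracefully on I/O errors."""
        try:
            with open(path) as f:
                return f.read()
        except OSError:
            return ""            # documented fallback on read failure

    # Inject a simulated disk error instead of waiting for a real one.
    with mock.patch("builtins.open", side_effect=OSError("disk failure")):
        assert load_config("settings.cfg") == ""
        print("error-handling path exercised")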
A BROADER VIEW
Below is a broader set of characteristics (usually known as James Bach's
heuristics) that lead to testable software.
• Operability
The better it works, the more efficiently it can be tested.
The system should have few bugs, no bugs should block the execution of tests
and the product should evolve in functional stages (simultaneous development
and testing).
• Observability
What we see is what we test.
Distinct output should be generated for each input.
Current and past system states and variables should be visible
during testing.
All factors affecting the output should be visible.
Incorrect output should be easily identified.
Source code should be easily accessible.
Internal errors should be automatically detected (through self-testing
mechanisms) and reported.
• Controllability
The better we can control the software, the more the testing process can be
automated and optimized.
Check that:
all outputs can be generated and all code can be executed through
some combination of inputs;
software and hardware states can be controlled directly by the
test engineer;
inputs and output formats are consistent and structured;
tests can be conveniently specified, automated and reproduced.
• Decomposability
By controlling the scope of testing, we can more quickly isolate problems and
perform smarter testing.
The software system should be built from independent modules which can be
tested independently.
• Simplicity
The less there is to test, the more quickly we can test it.
The points to consider in this regard are functional (e.g. minimum set of
features), structural (e.g. architecture is modularized) and code (e.g. a coding
standard is adopted) simplicity.
• Stability
The fewer the changes, the fewer the disruptions to testing.
Changes to the software should be infrequent and controlled, and should not
invalidate existing tests. The software should recover well from failures.
• Understandability
The more information we have, the smarter we will test.
The testers should be able to understand well the design, changes to the design
and the dependencies between internal, external and shared components.
Technical documentation should be instantly accessible, accurate, well
organized, specific and detailed.
• Suitability
The more we know about the intended use of the software, the better we can
organize our testing to find important bugs.
Now we consider these in detail.
Requirements Analysis
• Invest in analysis at the beginning of the project - Having a clear, concise and
formal statement of the requirements facilitates programming,
communication, error analysis and test data generation.
Deciding the above issues is one of the test-related activities that should
be performed during this stage.
• Start developing the test set at the requirements analysis phase - Data should
be generated that can be used to determine whether the requirements have
been met. To do this, the input domain should be partitioned into classes of
values that the program will treat in a similar manner, and for each class a
representative element should be included in the test data. In addition, the
following should also be included in the data set: (1) boundary values, (2) any
non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.
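As a small sketch of such partitioning (the percentage field and its classes are
invented for the example), the input domain of a field accepting 1 to 100 might be
partitioned and sampled like this:

    # Partition the input domain of a field that accepts 1..100 and pick
    # one representative per class, plus boundary and special values.
    LOW, HIGH = 1, 100

    test_data = {
        "valid_representative": 50,     # one element per valid class
        "below_range": LOW - 1,         # invalid class: too small
        "above_range": HIGH + 1,        # invalid class: too large
        "lower_boundary": LOW,          # boundary values
        "upper_boundary": HIGH,
        "non_numeric": "abc",           # non-extreme value needing special handling
    }

    for name, value in test_data.items():
        print(f"{name:22s} -> {value!r}")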
Design
The design document aids in programming, communication, error analysis and test
data generation. The requirements statement and the design document should together
give the problem and the organization of the solution, i.e. what the program will do
and how it will do it.
• Analysis of the design to check its completeness and consistency - The total process
should be analyzed to determine that no steps or special cases have been
overlooked. Internal interfaces, I/O handling and data structures should
especially be checked for inconsistencies.
• Generation of test data based on the design - The tests generated should exercise
both the structure and the internal functions of the design - the data
structures, algorithms, functions, heuristics and general program structure.
Standard, extreme and special values should be included, and the expected
output should be recorded in the test data.
• Reexamination and refinement of the test data set generated at the requirements
analysis phase.
The first two steps should also be performed by a colleague, not only by the
designer/developer.
Programming/Construction
• Check the code for consistency with design - the areas to check include modular
structure, module interfaces, data structures, functions, algorithms and I/O
handling.
• Perform the Testing process in an organized and systematic manner with test runs
dated, annotated and saved. A plan or schedule can be used as a checklist to
help the programmer organize testing efforts. If errors are found and changes
made to the program, all tests involving the erroneous segment (including those
which resulted in success previously) must be rerun and recorded.
• Ask a colleague for assistance - Some independent party, other than the
programmer of the specific part of the code, should analyze the development
product at each phase. The programmer should explain the product to the party
who will then question the logic and search for errors with a checklist to guide
the search. This is needed to locate errors the programmer has overlooked.
• Use available tools - the programmer should be familiar with various compilers
and interpreters available on the system for the implementation language being
used because they differ in their error analysis and code generation capabilities.
• Apply Stress to the Program - Testing should exercise and stress the program
structure, the data structures, the internal functions and the externally visible
functions or functionality. Both valid and invalid data should be included in the
test set.
• Test one at a time - Pieces of code, individual modules and small collections of
modules should be exercised separately before they are integrated into the total
program, one by one. Errors are easier to isolate when the number of potential
interactions is kept small. Instrumentation - the insertion of some code into
the program solely to measure various program characteristics - can be useful
here. A tester should perform array bound checks, check loop control variables,
determine whether key data values are within permissible ranges, trace program
execution, and count the number of times a group of statements is executed.
• Measure testing coverage / When should testing stop? - If errors are still found
every time the program is executed, testing should continue. Because errors
tend to cluster, modules appearing particularly error-prone require special
scrutiny.
The metrics used to measure testing thoroughness include statement testing
(whether each statement in the program has been executed at least once),
branch testing (whether each exit from each branch has been executed at least
once) and path testing (whether all logical paths, which may involve repeated
execution of various segments, have been executed at least once). Statement
testing is the coverage metric most frequently used, as it is relatively simple to
implement; a short example contrasting statement and branch coverage follows
this list.
The amount of testing depends upon the cost of an error. Critical programs or
functions require more thorough testing than the less significant functions.
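To see why statement coverage is weaker than branch coverage, consider this small
invented example. A single test with a negative input executes every statement, yet
the False exit of the if is never taken; branch coverage demands a second test.

    def clamp_positive(x):
        """Return x, or 0 if x is negative."""
        result = x
        if x < 0:          # the branch under test
            result = 0
        return result

    # This one test executes every statement (100% statement coverage)...
    assert clamp_positive(-3) == 0

    # ...but branch coverage also requires the False exit of the if:
    assert clamp_positive(5) == 5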
Corrections, modifications and extensions are bound to occur even for small programs
and any time one is made, testing is required. Testing during maintenance is termed
regression testing. The test set, the test plan, and the test results for the original
program should exist. Modifications must be made to accommodate the program
changes, and then all portions of the program affected by the modifications must be
retested. After regression testing is complete, the program and test documentation must
be updated to reflect the changes.
4. The project duration is completed.
5. When the risk in the project is under acceptable limit.
Practically, I feel that the decision to stop testing is based on the level of risk
acceptable to management. As testing is a never-ending process, we can never assume
that 100% testing has been done; we can only minimize the risk of shipping the
product to the client with X amount of testing done. The risk can be measured by risk
analysis, but for a small-duration / low-budget / low-resources project, the risk can
be deduced simply by:
9. Verification Strategies
What is ‘Verification’?
Verification is the process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions imposed at the start
of that phase. [1]
9.1 Review
A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users, customers, or other interested parties
for comment or approval.
The main goal of reviews is to find defects. Reviews are a good complement to testing
and help to assure quality.
What are the various types of reviews?
Types of reviews include Management Reviews, Technical Reviews, Inspections,
Walkthroughs and Audits.
Management Reviews
Management reviews are performed by those directly responsible for the system, in
order to monitor progress, determine the status of plans and schedules, and confirm
requirements and their system allocation.
Such reviews support decisions about corrective actions, changes in the allocation of
resources, or changes to the scope of the project.
The participants of the review play the roles of Decision Maker, Review Leader,
Recorder, Management Staff, and Technical Staff.
Technical Reviews
The participants of the review play the roles of Decision maker, Review leader, Recorder,
Technical staff.
http://www.SofTReL.org 28
Requirements Review
A process or meeting during which the requirements for a system are presented to
project personnel, managers, users, customers, or other interested parties for comment
or approval. Types include system requirements review and software requirements
review.
Who is involved in a Requirements Review?
• The requirements review is led by product management. Members from every affected
department participate in the review.
Input Criteria
The requirements document is the essential document for the review. A checklist can
be used for the review.
Exit Criteria
Exit criteria include the filled and completed checklist with the reviewers' comments
and suggestions, and re-verification of whether they have been incorporated in the
documents.
Design Review
Input Criteria
The design document is the essential document for the review. A checklist can be used
for the review.
Exit Criteria
Exit criteria include the filled and completed checklist with the reviewers' comments
and suggestions, and re-verification of whether they have been incorporated in the
documents.
Code Review
Input Criteria
The source file is the essential document for the review. A checklist can be used for
the review.
Exit Criteria
Exit criteria include the filled and completed checklist with the reviewers' comments
and suggestions, and re-verification of whether they have been incorporated in the
documents.
9.2 Walkthrough
A static analysis technique in which a designer or programmer leads members of the
development team and other interested parties through a segment of documentation or
code, and the participants ask questions and make comments about possible errors,
violation of development standards, and other problems.
The walk-through shall be considered complete when
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walk-through output has been completed
9.3 Inspection
A static analysis technique that relies on visual examination of development products to
detect errors, violations of development standards, and other problems. Types include
code inspection; design inspection.
The participants in Inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector
All participants in the review are inspectors. The author shall not act as inspection
leader and should not act as reader or recorder. Other roles may be shared among the
team members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team
shall not participate in the inspection.
Additional reference material may be made available by the individuals responsible for
the software product when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection
meeting. The exit decision shall determine if the software product meets the inspection
exit criteria and shall prescribe any appropriate rework and verification. Specifically,
the inspection team shall identify the software product disposition as one of the
following:
a) Accept with no or minor rework. The software product is accepted as is, or with
only minor rework that would require no further verification.
b) Accept with rework verification. The software product is to be accepted after the
inspection leader or a designated member of the inspection team (other than the
author) verifies the rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection
shall examine the software product areas changed to resolve anomalies identified in the
last inspection, as well as side effects of those changes.
Testing types refer to different approaches towards testing a computer program, system
or product. The two major types of testing are black box testing and white box testing,
both of which are discussed in detail in this chapter. A minor type, termed grey box
testing or hybrid testing, is presently evolving; it combines the features of the two
major types.
Testing Techniques
Some of these and many others are discussed in the later sections of this chapter.
Testing types deal with what aspect of the computer software is tested, while
testing techniques deal with how a specific part of the software is tested.
That is, testing types determine whether we are testing the function or the structure
of the software. In other words, we may test each function of the software to see if it
is operational, or we may test the internal components of the software to see if its
internal workings are according to specification.
On the other hand, testing techniques refer to the methods, ways or calculations that
are applied to test a particular feature of the software (sometimes we test the
interfaces, sometimes the segments, sometimes loops, etc.).
White box testing is much more expensive than black box testing. It requires the source
code to be produced before the tests can be planned, and it is much more laborious in
the determination of suitable input data and in determining whether the software is
correct. The advice given is to start test planning with a black box test approach as
soon as the specification is available. White box planning should commence as soon as
all black box tests have been successfully passed, with the production of flowgraphs
and the determination of paths. The paths should then be checked against the black box
test plan and any additional required test runs determined and applied.
The consequences of test failure at this stage may be very expensive. A failure of a
white box test may result in a change which requires all black box testing to be
repeated and the white box paths to be re-determined. The cheaper option is to regard
the process of testing as one of quality assurance rather than quality control: the
intention is that sufficient quality will be put into all previous design and
production stages that testing can be expected to confirm there are very few faults
present (quality assurance), rather than testing being relied upon to discover any
faults in the software (quality control). A combination of black box and white box test
considerations is still not a completely adequate test rationale.
White box testing basically involves looking at the structure of the code. When you
know the internal structure of a product, tests can be conducted to ensure that the
internal operations perform according to the specification and that all internal
components have been adequately exercised. In other words, WBT tends to involve the
coverage of the specification in the code.
Code coverage is defined in terms of the types listed below. Loop testing is also a
part of WBT.
• Compound Condition Coverage – When there are multiple conditions, you must
test not only each direction but also each possible combination of conditions,
which is usually done by using a ‘Truth Table’ (see the example after this list).
• Basis Path Testing – Each independent path through the code is taken in a pre-
determined order. This point is discussed further in another section.
• Data Flow Testing (DFT) – In this approach you track specific variables
through each possible calculation, thus defining the set of intermediate paths
through the code, i.e. those based on each piece of data chosen to be tracked.
Even though the paths are considered independent, dependencies across
multiple paths are not really tested for by this approach. DFT does tend to
reflect dependencies, but mainly through sequences of data manipulation.
This approach tends to uncover bugs such as variables used but not initialized,
or declared but not used, and so on.
• Path Testing – Path testing is where all possible paths through the code are
defined and covered. This testing is actually extremely laborious and time
consuming.
• Loop Testing – In addition to the above measures, there are testing strategies based
on loop testing. These strategies relate to testing single loops, concatenated
loops, and nested loops. Loops are fairly simple to test unless dependencies exist
among loops or between a loop and the code it contains.
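As an illustration of the compound condition coverage mentioned at the top of this
list, the sketch below (an invented two-condition example) enumerates every row of
the truth table for the condition "a > 0 and b > 0" and picks one concrete input pair
per row:

    # Compound condition coverage for: if (a > 0) and (b > 0): ...
    # One concrete input pair per row of the truth table.
    cases = {
        (True, True): (1, 1),
        (True, False): (1, -1),
        (False, True): (-1, 1),
        (False, False): (-1, -1),
    }

    def under_test(a, b):
        return "both positive" if a > 0 and b > 0 else "not both positive"

    for (ca, cb), (a, b) in cases.items():
        assert (a > 0, b > 0) == (ca, cb)   # the inputs hit the intended row
        print(f"a>0={ca!s:5} b>0={cb!s:5} -> {under_test(a, b)}")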
What do we do in WBT?
In WBT, we use the control structure of the procedural design to derive test cases.
Using WBT methods, a tester can derive test cases that:
• Guarantee that all independent paths within a module have been exercised
at least once.
• Exercise all logical decisions on their true and false sides
• Execute all loops at their boundaries and within their operational bounds
• Exercise internal data structures to assure their validity.
White box testing (WBT) is also called Structural or Glass box testing.
Why WBT?
• Logic errors and incorrect assumptions are inversely proportional to the
probability that a program path will be executed. Errors tend to creep into our
work when we design and implement functions, conditions or controls that
are out of the mainstream of the program.
Skills Required
Theoretically speaking, all we need to do in WBT is to define all logical paths,
develop test cases to exercise them, and evaluate the results, i.e. generate test
cases to exercise the program logic exhaustively.
For this we need to know the program well: we should know the specification and
the code to be tested, and the related documents should be available to us. We must
be able to tell the expected status of the program versus its actual status at any
point during the testing process.
Limitations
white and black box testing techniques can be coupled to provide an approach that
validates the software interface while selectively assuring the correctness of the
software's internal workings.
It usually focuses on the functionality of the module.
Some people refer to black box testing as behavioral, functional, opaque-box, or
closed-box testing. While the term black box is the most popular in use, many people
prefer the terms "behavioral" and "structural". Behavioral test design is slightly
different from black-box test design because the use of internal knowledge is not
strictly forbidden, but it is still discouraged.
Personally, as a test engineer, I feel that there is a trade-off between the
approaches used to test a product, say white box and black box.
There are some bugs that cannot be found using only black box or only white box
testing. If the test cases are extensive and the test inputs are drawn from a large
sample space, it is always possible to find the majority of the bugs through black box
testing.
Tools used for Black Box testing:
Rational Software has been producing tools for automated black box and automated
white box testing for several years. Rational's functional regression testing tools capture
the results of black box tests in a script format. Once captured, these scripts can be
executed against future builds of an application to verify that new functionality hasn't
disabled previous functionality.
Advantages of Black Box Testing
- The tester can be non-technical.
- This testing is most likely to find the bugs that the end user will find.
- Testing helps to identify vagueness and contradictions in the functional
specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- Chances of repeating tests that have already been performed by the programmer.
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing
test cases is slow and difficult.
- Chances of having unidentified paths during the testing.
10.2.3 Boundary Value Analysis
BVA focuses on the boundary of the input space to identify test cases.
The rationale is that errors tend to occur near the extreme values of an input
variable.
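A minimal helper (invented for illustration) that generates the classic BVA probe set
for a single bounded variable - just outside, on, and just inside each boundary, plus
a nominal value:

    def boundary_values(lo, hi):
        """Classic BVA probe set for one variable bounded to [lo, hi]."""
        return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

    # Example: a 'percentage' field bounded to 1..100.
    print(boundary_values(1, 100))
    # -> [0, 1, 2, 50, 99, 100, 101]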
Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that
represent bounded physical quantities.
5. Independent Variables
o NextDate test cases derived from BVA would be inadequate: focusing
on the boundaries would not place emphasis on February or leap years.
o Dependencies exist among NextDate's Day, Month and Year variables.
o Test cases are derived without consideration of the function.
6. Physical Quantities
o An example of physical variables being tested: telephone numbers -
what faults might be revealed by numbers such as 000-0000, 000-0001,
555-5555, 999-9998 and 999-9999?
Using the ‘Program Specifications’ as an input, the Programmer prepares a ‘Unit Test
Cases’ document for that Unit. A ‘Unit Test Cases Checklist’ may be used to check the
completeness of the Unit Test Cases document.
The ‘Program Specifications’ and ‘Unit Test Cases’ are reviewed and approved by the
Quality Assurance Analyst or by a peer programmer.
Programmer writes code for the Unit.
The Programmer tests the Unit using the ‘Unit Test Cases’ document. Defects found are
recorded in the Defect Recording System by the Programmer. The Programmer then
corrects these defects and tests the Unit again using the same test cases document. If
more defects are found, he records and corrects them. This cycle goes on until all
Unit Test Cases pass. Unit Testing is then said to be complete for that Unit.
with every possible test case, leads to complete Unit Testing and thus gives an
assurance of a defect-free Unit at the end of the Unit Testing stage. So let's discuss
how to prepare a UTC.
Think of the following aspects while preparing Unit Test Cases:
Expected Functionality: Write test cases to test each functionality that is expected
from the Unit.
e.g. If an SQL script contains commands for creating one table and altering another
table then test cases should be written for testing creation of one table and
alteration of another.
It is important that User Requirements should be traceable to Functional
Specifications, Functional Specifications be traceable to Program Specifications and
Program Specifications be traceable to Unit Test Cases. Maintaining such
traceability ensures that the application fulfills User Requirements.
Input values:
o Every input value: Write test cases for each of the inputs accepted by the
Unit.
e.g. If a Data Entry Form has 10 fields on it, write test cases for all 10 fields.
o Validation of input: Every input has certain validation rule associated with
it. Write test cases to validate this rule. Also, there can be cross-field
validations in which one field is enabled depending upon input of another
field. Test cases for these should not be missed.
e.g. A combo box or list box has a valid set of values associated with it.
A numeric field may accept only positive values.
An email address field must contain an at sign (@) and a period (.).
A ‘Sales tax code’ entered by the user must belong to the ‘State’ specified by
the user.
o Boundary conditions: Inputs often have minimum and maximum possible
values. Do not forget to write test cases for them.
e.g. A field that accepts ‘percentage’ on a Data Entry Form should be able to
accept inputs only from 1 to 100.
o Limitations of data types: Variables that hold the data have their value limits
depending upon their data types. In case of computed fields, it is very
important to write cases to arrive at an upper limit value of the variables.
o Computations: If any calculations are involved in the processing, write test
cases to check the arithmetic expressions with all possible combinations of
values.
Output values: Write test cases to generate scenarios, which will produce all types
of output values that are expected from the Unit.
e.g. A Report can display one set of data if user chooses a particular option and
another set of data if user chooses a different option. Write test cases to check each
of these outputs.
Screen / Report Layout: Screen Layout or web page layout and Report layout must
be tested against the requirements. It should not happen that the screen or the
report looks beautiful and perfect, but user wanted something entirely different!
Path coverage: A Unit may have conditional processing which results in various
paths the control can traverse through. Test case must be written for each of these
paths.
Assumptions: A Unit may assume certain things for it to function. For example, a
Unit may need a database to be open. Then test case must be written to check that
the Unit reports error if such assumptions are not met.
Transactions: In the case of database applications, it is important to make sure that
transactions are properly designed and that in no way can inconsistent data get saved
in the database.
Abnormal terminations: Behavior of the Unit in case of abnormal termination
should be tested.
Error messages: Error messages should be short, precise and self-explanatory. They
should be properly phrased and should be free of grammatical mistakes.
UTC Document
Given below is a simple format for a UTC document.
Example:
Let's say we want to write UTC for a Data Entry Form (the form itself is not
reproduced here; it has an Item no. field and a Price field).
Given below are some of the Unit Test Cases for the above Form:
Test Case No.: 1
Test Case Purpose: Item no. is to start with ‘A’ or ‘B’.
Procedure:
1. Create a new record.
2. Type an Item no. starting with ‘A’.
3. Type an Item no. starting with ‘B’.
4. Type an Item no. starting with any character other than ‘A’ and ‘B’.
Expected Result:
2, 3. Should be accepted, and control should move to the next field.
4. Should not be accepted; an error message should be displayed and control
should remain in the Item no. field.
Actual Result: (filled in during test execution)

Test Case No.: 2
Test Case Purpose: Item Price is to be between 1000 and 2000 if the Item no. starts
with ‘A’.
Procedure:
1. Create a new record with an Item no. starting with ‘A’.
2. Specify a price < 1000.
3. Specify a price > 2000.
4. Specify a price = 1000.
5. Specify a price = 2000.
6. Specify a price between 1000 and 2000.
Expected Result:
2, 3. An error should be displayed and control should remain in the Price field.
4, 5, 6. Should be accepted, and control should move to the next field.
Actual Result: (filled in during test execution)
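The two test cases above could be automated along these lines. This is a hedged
sketch: validate_item_no and validate_price are hypothetical helpers standing in for
the form's actual validation logic.

    import unittest

    # Hypothetical validation helpers standing in for the form logic.
    def validate_item_no(item_no):
        return item_no.startswith(("A", "B"))

    def validate_price(item_no, price):
        if item_no.startswith("A"):
            return 1000 <= price <= 2000
        return True

    class DataEntryFormTests(unittest.TestCase):
        def test_item_no_must_start_with_a_or_b(self):       # UTC no. 1
            self.assertTrue(validate_item_no("A100"))
            self.assertTrue(validate_item_no("B200"))
            self.assertFalse(validate_item_no("C300"))

        def test_price_range_for_a_items(self):              # UTC no. 2
            self.assertFalse(validate_price("A100", 999))
            self.assertFalse(validate_price("A100", 2001))
            self.assertTrue(validate_price("A100", 1000))    # boundary
            self.assertTrue(validate_price("A100", 2000))    # boundary
            self.assertTrue(validate_price("A100", 1500))

    if __name__ == "__main__":
        unittest.main()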
UTC Checklist
The UTC checklist may be used while reviewing the UTC prepared by the programmer. As
with any other checklist, it contains a list of questions which can be answered as
either a ‘Yes’ or a ‘No’. The ‘Aspects’ list given in Section 4.3 above can be
referred to while preparing the UTC checklist.
e.g. Given below are some of the checkpoints in UTC checklist –
1. Are test cases present for all form field validations?
2. Are boundary conditions considered?
3. Are Error messages properly phrased?
Defect Recording
Defect Recording can be done on the same UTC document, in the ‘Actual Result’
column. This column can be duplicated for subsequent iterations of Unit Testing.
Defect Recording can also be done using tools such as Bugzilla, in which defects are
stored in a database.
Defect Recording needs to be done with care. It should indicate the problem in a
clear, unambiguous manner, and it should be easy to reproduce the defect from the
recorded defect information.
Conclusion
Exhaustive Unit Testing filters out defects at an early stage in the Development Life
Cycle. It proves to be cost effective and improves the quality of the software before
the smaller pieces are put together to form an application as a whole. Unit Testing
should be done sincerely and meticulously; the effort is paid back well in the long
run.
12.3.3 Usability Testing
Usability is the degree to which a user can successfully learn and use a product to
achieve a goal. Usability testing is the system testing which attempts to find any
human-factor problems. A simpler description is testing the software from a user's
point of view. Essentially it means testing software to prove/ensure that it is user-
friendly, as distinct from testing the functionality of the software. In practical
terms it includes ergonomic considerations, screen design, standardization, etc.
The idea behind usability testing is to have actual users perform the tasks for which the
product was designed. If they can't do the tasks or if they have difficulty performing the
tasks, the UI is not adequate and should be redesigned. It should be remembered that
usability testing is just one of the many techniques that serve as a basis for evaluating
the UI in a user-centered approach. Other techniques for evaluating a UI include
inspection methods such as heuristic evaluations, expert reviews, card-sorting,
matching test or Icon intuitiveness evaluation, cognitive walkthroughs. Confusion
regarding usage of the term can be avoided if we use ‘usability evaluation’ for the
generic term and reserve ‘usability testing’ for the specific evaluation method based on
user performance. Heuristic Evaluation, Usability Inspection and cognitive
walkthroughs do not involve real users.
It often involves building prototypes of parts of the user interface, having representative
users perform representative tasks and seeing if the appropriate users can perform the
tasks. In other techniques such as the inspection methods, it is not performance, but
someone's opinion of how users might perform that is offered as evidence that the UI is
acceptable or not. This distinction between performance and opinion about performance
is crucial. Opinions are subjective; whether a sample of users can accomplish what
they want or not is objective. Under many circumstances it is more useful to find out
whether users can do what they want to do, rather than asking someone's opinion.
1. Get a person who fits the user profile. Make sure that you are not getting
someone who has worked on the product.
2. Sit them down in front of a computer, give them the application, and tell them a
small scenario, like: “Thank you for volunteering to help make it easier for users
to find what they are looking for. We would like you to answer several questions.
There are no right or wrong answers. What we want to learn is why you make the
choices you do, what is confusing, why you choose one thing and not another, etc.
Just talk us through your search and let us know what you are thinking. We
have a recorder which is going to capture what you say, so you will have to tell
us what you are clicking on as you also tell us what you are thinking. Also, think
aloud when you are stuck somewhere.”
3. Now don’t speak anything. Sounds easy, but see if you actually can shut up.
4. Watch them use the application. If they ask you something, tell them you're not
there. Then shut up again.
5. Start noting all the things you will have to change.
6. Afterwards ask them what they thought and note them down.
7. Once the whole thing is done thank the volunteer.
• DRUM from Serco Usability Services is a tool which has been developed through
close cooperation between Human Factors professionals and software engineers
to provide a broad range of support for video-assisted observational studies.
USABILITY LABS
• The Usability Center (ULAB) is a full service organization, which provides a
"Street-Wise" approach to usability risk management and product usability
excellence. It has custom designed ULAB facilities.
• Lodestone Research has a usability-testing laboratory with state-of-the-art audio
and visual recording and testing equipment. All equipment has been designed to
be portable so that it can be taken on the road. The lab consists of a test room
and an observation/control room that can seat as many as ten observers. A-V
equipment includes two (soon to be 3) fully controllable SVHS cameras,
capture/feed capabilities for test participant's PC via scan converter and direct
split signal (to VGA "slave" monitors in observation room), up to eight video
monitors and four VCA monitors for observer viewing, mixing/editing
equipment, and "wiretap" capabilities to monitor and record both sides of
telephone conversation (e.g., if participant calls customer support).
• Online Computer Library Center, Inc provides insight into the usability test
laboratory. It gives an overview of the infrastructure as well as the process being
used in the laboratory.
END GOALS OF USABILITY TESTING
To summarise the goals, it can be said that usability testing makes the software more
user-friendly. The end result will be:
• Better quality software.
• Software is easier to use.
• Software is more readily accepted by users.
• Shortens the learning curve for new users.
Performance testing of a Web site is basically the process of understanding how the
Web application and its operating environment respond at various user load levels. In
general, we want to measure the Response Time, Throughput, and Utilization of the
Web site while simulating attempts by virtual users to simultaneously access the site.
One of the main objectives of performance testing is to maintain a Web site with low
response time, high throughput, and low utilization.
Response Time
Response Time is the delay experienced between the point when a request is made and
the server's response at the client is received. It is usually measured in units of time,
such as seconds or milliseconds. Generally speaking, Response Time increases as the
inverse of unutilized capacity. It increases slowly at low levels of user load, but
increases rapidly as capacity is utilized. Figure 1 demonstrates such typical
characteristics of Response Time versus user load.
The sudden increase in response time is often caused by the maximum utilization of
one or more system resources. For example, most Web servers can be configured to
start up a fixed number of threads to handle concurrent user requests. If the number of
concurrent requests is greater than the number of threads available, any incoming
requests will be placed in a queue and will wait for their turn to be processed. Any time
spent in a queue naturally adds extra wait time to the overall Response Time.
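A minimal way to observe response time from the client side, using only Python's
standard library (the URL is a placeholder for the site under test, and a single
sample like this ignores warm-up and variance):

    import time
    import urllib.request

    URL = "http://example.com/"   # placeholder: substitute the site under test

    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()            # include transfer time in the measurement
    elapsed = time.perf_counter() - start
    print(f"response time: {elapsed * 1000:.1f} ms")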
To better understand what Response Time means in a typical Web farm, we can divide
response time into many segments and categorize these segments into two major types:
network response time and application response time. Network response time refers to
the time it takes for data to travel from one server to another. Application response time
is the time required for data to be processed within a server. Figure 2 shows the
different response times in the entire process of a typical Web request.
Network Response Times N2 and N3 usually depend on the performance of the
switching equipment in the server farm. When traffic to the back-end database grows,
consider upgrading the switches and network adapters to boost performance.
Reducing application Response Times (A1, A2, and A3) is an art form unto itself
because the complexity of server applications can make analyzing performance data
and performance tuning quite challenging. Typically, multiple software components
interact on the server to service a given request. Latency can be introduced by
any of these components. That said, there are ways you can approach the problem:
• First, your application design should minimize round trips wherever possible.
Multiple round trips (client to server or application to database) multiply
transmission and resource acquisition Response time. Use a single round trip
wherever possible.
• You can optimize many server components to improve performance for your
configuration. Database tuning is one of the most important areas on which to
focus. Optimize stored procedures and indexes.
• Look for contention among threads or components competing for common
resources. There are several methods you can use to identify contention
bottlenecks. Depending on the specific problem, eliminating a resource contention
bottleneck may involve restructuring your code, applying service packs, or
upgrading components on your server. Not all resource contention problems can be
completely eliminated, but you should strive to reduce them wherever possible.
They can become bottlenecks for the entire system.
• Finally, to increase capacity, you may want to upgrade the server hardware (scaling
up), if system resources such as CPU or memory are stretched to their limits and have become
the bottleneck. Using multiple servers as a cluster (scaling out) may help to lessen
the load on an individual server, thus improving system performance and reducing
application latencies.
Throughput
Throughput refers to the number of client requests processed within a certain unit of
time. Typically, the unit of measurement is requests per second or pages per second.
From a marketing perspective, throughput may also be measured in terms of visitors
per day or page views per day, although smaller time units are more useful for
performance testing because applications typically see peak loads of several times the
average load in a day.
As one of the most useful metrics, the throughput of a Web site is often measured and
analyzed at different stages of the design, develop, and deploy cycle. For example, in the
process of capacity planning, throughput is one of the key parameters for determining
the hardware and system requirements of a Web site. Throughput also plays an
important role in identifying performance bottlenecks and improving application and
system performance. Whether a Web farm uses a single server or multiple servers,
throughput statistics show similar characteristics in reactions to various user load
levels. Figure 3 demonstrates such typical characteristics of throughput versus user
load.
In many ways, throughput and Response time are related, as different approaches to
thinking about the same problem. In general, sites with high latency will have low
throughput. If you want to improve your throughput, you should analyze the same
criteria as you would to reduce latency. Also, measurement of throughput without
consideration of latency is misleading because latency often rises under load before
throughput peaks. This means that peak throughput may occur at a latency that is
unacceptable from an application usability standpoint. This suggests that performance
reports should include a cut-off value for Response Time, such as: 250 requests/second
@ 5 seconds maximum Response Time.
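As a small illustration, the sketch below (in C, with made-up numbers rather than real
measurements) computes and prints such a report line, flagging whether the observed
maximum Response Time stays within the cut-off:

/* Sketch: a throughput report with a Response Time cut-off. */
#include <stdio.h>

int main(void)
{
    double elapsed_s   = 60.0;    /* duration of the measurement window   */
    int    completed   = 15000;   /* requests completed in that window    */
    double max_rt_s    = 4.2;     /* worst observed Response Time         */
    double cutoff_rt_s = 5.0;     /* usability cut-off from the report    */

    double throughput = completed / elapsed_s;
    printf("%.0f requests/second @ %.1f s maximum Response Time (%s)\n",
           throughput, max_rt_s,
           max_rt_s <= cutoff_rt_s ? "within cut-off" : "cut-off exceeded");
    return 0;
}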
Utilization
Utilization refers to the usage level of different system resources, such as the server's
CPU(s), memory, network bandwidth, and so forth. It is usually measured as a
percentage of the maximum available level of the specific resource. Utilization versus
user load for a Web server typically produces a curve, as shown in Figure 4.
Figure 5. An example of Response Time versus utilization
As Figure 5 demonstrates, monitoring the CPU or memory utilization alone may not
always indicate the true capacity level of the server farm with acceptable performance.
Applications
While most traditional applications are designed to respond to a single user at any time,
most Web applications are expected to support a wide range of concurrent users, from a
dozen to a couple thousand or more. As a result, performance testing has become a
critical component in the process of deploying a Web application. It has proven to be
most useful in (but not limited to) the following areas:
• Capacity planning
• Bug fixing
Capacity Planning
How do you know if your server configuration is sufficient to support two million
visitors per day with an average response time of under five seconds? If your company
is projecting a business growth of 200 percent over the next two months, do you know if
you need to upgrade your server or add more servers to the Web farm? Can your server
and application support a six-fold traffic increase during the Christmas shopping
season?
Capacity planning is about being prepared. You need to set the hardware and software
requirements of your application so that you'll have sufficient capacity to meet
anticipated and unanticipated user load.
One approach in capacity planning is to load-test your application in a testing (staging)
server farm. By simulating different load levels on the farm using a Web application
performance testing tool such as WAS, you can collect and analyze the test results to
better understand the performance characteristics of the application. Performance
charts such as those shown in Figures 1, 3, and 4 can then be generated to show the
expected Response Time, throughput, and utilization at these load levels.
In addition, you may also want to test the scalability of your application with different
hardware configurations. For example, load testing your application on servers with
one, two, and four CPUs respectively would help to determine how well the application
scales with symmetric multiprocessor (SMP) servers. Likewise, you should load test
your application with different numbers of clustered servers to confirm that your
application scales well in a cluster environment.
Although performance testing is as important as functional testing, it is often
overlooked. Since the requirements for the performance of the system are not as
straightforward as those for its functionality, achieving them correctly is more
difficult.
The effort of performance testing is addressed in two ways:
• Load testing
• Stress testing
Load testing
Load testing is a much-used industry term for the effort of performance testing. Here,
load means the number of users or the amount of traffic for the system. Load testing is
defined as testing to determine whether the system is capable of handling the
anticipated number of users.
In load testing, virtual users are simulated to exhibit real user behavior as closely as
possible. Even user think time (the time a user takes to think before inputting data) is
emulated. Load testing is carried out to judge whether the system is performing well
for the specified limit of load.
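The sketch below shows the idea in miniature: each POSIX thread plays one virtual
user that alternates requests with an emulated think time. It is only an illustration;
send_request() is a hypothetical stand-in for the real client call, and the user count
and timings are arbitrary.

/* Minimal load-test sketch: virtual users with think time. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define VIRTUAL_USERS 10   /* concurrent users to simulate     */
#define ITERATIONS     5   /* requests issued per virtual user */

/* Hypothetical request; replace with a real client call. */
static void send_request(int user)
{
    printf("user %d: request sent\n", user);
}

static void *virtual_user(void *arg)
{
    int id = *(int *)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        send_request(id);
        sleep(2);          /* emulated think time before the next input */
    }
    return NULL;
}

int main(void)
{
    pthread_t users[VIRTUAL_USERS];
    int ids[VIRTUAL_USERS];

    for (int i = 0; i < VIRTUAL_USERS; i++) {
        ids[i] = i;
        pthread_create(&users[i], NULL, virtual_user, &ids[i]);
    }
    for (int i = 0; i < VIRTUAL_USERS; i++)
        pthread_join(users[i], NULL);
    return 0;
}

Commercial and open source tools do essentially this at much larger scale, while also
collecting the Response Time, throughput and utilization metrics described earlier.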
The objective of load testing is to check whether the system performs well for the
specified load. The system may be capable of accommodating more than 1000
concurrent users, but validating that is not within the scope of load testing. No attempt
is made to determine how many more concurrent users the system is capable of
servicing. Table <##> illustrates the example specified.
Stress testing
Stress testing is another industry term for performance testing. Though load testing and
stress testing are used synonymously for performance-related efforts, their goals are
different.
Unlike load testing, where testing is conducted for a specified number of users, stress
testing is conducted for a number of concurrent users beyond the specified limit. The
objective is to identify the maximum number of users the system can handle before
breaking down or degrading drastically. Since the aim is to put more stress on the
system, the user's think time is ignored and the system is exposed to excess load.
Refer to Table <##>.
Let us take the same example of the online shopping application to illustrate the
objective of stress testing. Stress testing determines the maximum number of concurrent
users the online system can service, which can be beyond 1000 users (the specified
limit). However, there is a possibility that the maximum load that can be handled by the
system may be found to be the same as the anticipated limit. Table <##> illustrates the
example specified.
Stress testing also determines the behavior of the system as the user base increases. It
checks whether the system is going to degrade gracefully or crash at a shot when the
load goes beyond the specified limit.
Table <##> Load and stress testing of the illustrative example

Type of testing    Number of concurrent users
Load testing       • Testing for the anticipated user base (up to the
                     specified limit of 1000 concurrent users)
                   • Checks whether the system is capable of handling
                     load under the specified limit
Stress testing     • Testing beyond the anticipated user base
                   • Identifies the maximum load a system can handle
                   • Checks whether the system degrades gracefully or
                     crashes at a shot
Conducting performance testing manually is almost impossible, so load and stress tests
are carried out with the help of automated tools. Some of the popular tools used to
automate performance testing are listed below.
Table <##> Load and stress testing tools

Tool             Vendor
ANTS             Red Gate Software
OpenSTA          Open source
Astra LoadTest   Mercury Interactive Inc
WAPT             Novasoft Inc
SiteStress       Webmaster Solutions
Quatiumpro       Quatium Technologies
Easy WebLoad     PrimeMail Inc
Bug Fixing
Some errors may not occur until the application is under high user load. For example,
memory leaks can exacerbate server or application problems when sustaining high load.
Performance testing helps to detect and fix such problems before launching the
application. It is therefore recommended that developers take an active role in
performance testing their applications, especially at different major milestones of the
development cycle.
Regression Testing

Regression testing, as the name suggests, is used to check the effect of changes made
in the code.

Most of the time the testing team is asked to check last-minute changes in the code
just before a release to the client; in this situation the testing team needs to check
only the affected areas.

So, in short, for regression testing the testing team should get input from the
development team about the nature and amount of change in the fix, so that the
testing team can first check the fix and then the affected areas.

In fact, regression testing is the testing in which maximum automation can be done,
the reason being that the same set of test cases will be run on different builds multiple
times.

But again, the extent of automation depends on whether the test cases will remain
applicable over time. If the automated test cases do not remain applicable for some
amount of time, test engineers will end up wasting time on automation and not getting
enough out of it.
What is Regression testing?
Regression testing is retesting unchanged segments of the application. It involves
rerunning tests that have been previously executed to ensure that the same
results can be achieved currently as were achieved when the segment was last
tested.

It is the selective retesting of a software system that has been modified, to ensure
that any bugs have been fixed, that no other previously working functions
have failed as a result of the reparations, and that newly added features have not
created problems with previous versions of the software. Also referred to as
verification testing, regression testing is initiated after a programmer has
attempted to fix a recognized problem or has added source code to a program
that may have inadvertently introduced errors. It is a quality control measure to
ensure that the newly modified code still complies with its specified
requirements and that unmodified code has not been affected by the
maintenance activity.
One objective of regression testing is to verify that the data dictionary of data elements
that have been changed is correct.
Aim
• to judge if the intended functionalities are implemented
• to provide to the customer the feel of the software

A thorough understanding of the product is achieved by now. During this phase, the
test plan and test cases for the beta phase (the next stage) are created. The errors
reported are documented internally for the testers' and developers' reference. No issues
are usually reported and recorded in any of the defect management/bug trackers.
Beta testing
Software has reached the beta stage when most of the functionalities are operating.
The software is tested in the customer's environment, giving users the opportunity to
exercise the software and find the errors so that they can be fixed before product
release.

Beta testing is detailed testing and needs to cover all the functionalities of the product
as well as dependent-functionality testing. It also involves UI testing and
documentation testing. Hence it is essential that this is planned well and the task
accomplished. The test plan document, prepared before the testing phase starts,
clearly lays down the objectives, the scope of the test, the tasks to be performed, and
the test matrix, which lays down the schedule of testing.
Role of a Test Lead
• Provide Test Instruction Sheet that describes items such as testing objectives,
steps to follow, data to enter, functions to invoke.
• Provide feedback forms and comments.
Role of a tester
10. Try installing on different operating systems.
11. Try installing on a system having a non-compliant configuration, such as less
memory/RAM/HDD.
Exploratory testing is defined as simultaneous test design, test execution and bug
reporting. In this approach the tester explores the system (finding out what it is and
then testing it) without having any prior test cases or test scripts. For this reason it is
also called ad hoc testing, guerrilla testing or intuitive testing, though there are some
differences between them. In operational terms, exploratory testing is an
interactive process of concurrent product exploration, test design, and test execution.
The outcome of an exploratory testing session is a set of notes about the product,
failures found, and a concise record of how the product was tested. When practiced by
trained testers, it yields consistently valuable and auditable results. Every tester
performs this type of testing at one point or the other. This testing totally depends on
the skill and creativity of the tester. Different testers can explore the system in different
ways depending on their skills. Thus the tester has a very vital role to play in
exploratory testing.
This approach has also been advised by SWEBOK, since it might uncover bugs which
normal testing might not discover. A systematic approach
of exploratory testing can also be used where there is a plan to attack the system under
test. This systematic approach of exploring the system is termed Formalized exploratory
testing.
Exploratory testing is a powerful approach in the field of testing. Yet this approach has
not received the recognition it deserves; it is often misunderstood and has not gained
the respect it needs. In many situations it can be more productive than scripted
testing. The real fact is that all testers do practice this methodology at some time or
other, most often unknowingly!
Exploratory testing believes in concurrent phases of product exploration, test
design and test execution. It is categorized under Black-box testing. It is basically a
free-style testing approach where you do not begin with the usual procedures of
elaborate test plans and test steps. The test plan and strategy are, however, very much
in the tester's mind. The tester asks the right questions of the product/application and
judges the outcome. During this phase he is actually learning the product as he tests it. It is
interactive and creative. A conscious plan by the tester gives good results.
Human beings are unique and think differently, with a new set of ideas
emerging. A tester has the basic skills to listen, read, think and report. Exploratory
testing simply tries to exploit this and give it structure. The richness of this process
is limited only by the breadth and depth of our imagination and our insight into the
product under test.
Exploratory testing should not be confused with ad hoc testing either. Ad hoc
testing normally refers to a process of improvised, impromptu bug searching. By
definition, anyone can do ad hoc testing. The term "exploratory testing", coined by
Cem Kaner in Testing Computer Software, refers to a sophisticated, systematic,
thoughtful approach to ad hoc testing.
What is formalized ET
Using the systematic (i.e., formalized) approach, an outline of what to attack
first, its scope, the time to be spent, and so on is arrived at. The approach might
range from simple notes to more descriptive charters to somewhat vague scripts.
By using the systematic approach the testing can be more organized, focused on
the goal to be reached, thus solving the problem where pure ET might drift away
from the goal.

The formalized approach used for ET can vary depending on various criteria like
the resources, the time, the knowledge of the application available, and so on.
Depending on these criteria, the approach used to attack the system will also
vary. It may involve anything from creating outlines on a notepad to more
sophisticated ways of working with charters. Some of the formal approaches
used for ET can be summarized as follows.
For example, consider software that has been built to generate invoices
for customers depending on the number of units of power consumed. In
such a case exploratory testing can be done by identifying the domain of
the application. A tester who has experience of billing systems in the
energy domain would fit better than one who has none. A tester who has
knowledge of the application domain knows the terminology used as well
as the scenarios that would be critical to the system, and the ways in
which various computations are done. Such a tester would be familiar
with terms like line item, billing rate and billing cycle, and with the way
the invoice computation is done. He would explore the system best and
take less time. If the tester does not have the required domain knowledge,
then it would take time to understand the various workflows as well as
the terminology used. He might not be able to focus on the critical areas,
focusing instead on other areas.
Thus, by identifying the primary and secondary functions of the system,
testing can be done with more focus and effort given to primary functions
than to secondary ones.
Try to input 500 characters into the text box of the web application.
comfortable with any of the screens that he is working on. These aspects help
the end user to accept the system faster.
In the age of component development and maximum reusability,
developers try to pick up already developed components and integrate
them, thus achieving the desired result in a short time. In such cases it
helps if the tester explores the areas where the components are coupled.
The output of one component should be correctly sent to the other
component. Hence such scenarios or workflows need to be identified and
explored more. More focus should be given to those areas which are more
error prone.
Example: consider the online shopping application. The user adds the
items to his cart and proceeds to the payments details page. Here the
items added, their quantity etc should be properly sent to the next
module. If there is any error in any of the data transfer process, the pay
details will not be correct and the user will be billed wrong. There by
leading to a major error. In such a scenario, more focus is required in the
interfaces.
Record failures
In exploratory testing, we do the testing without having any documented
test cases. If a bug has been found, it is very difficult for us to test it after
the fix, because there are no documented steps to navigate to that
particular scenario. Hence we need to keep track of the flow required to
reach the point where a bug has been found. So while testing, it is
important that at least the bugs that have been discovered are
documented. By recording failures we are able to keep track of the work
that has been done. This helps even if the tester who was actually doing
ET is not available, since the document can be referred to and all the
bugs that have been reported, as well as the flows for reproducing them,
can be identified.
Example: consider the online shopping site. A bug has been found while
trying to add items of a given category to the cart. If the tester just
documents the flow as well as the error that has occurred, it will help the
tester himself, or any other tester, when the application is retested after
a fix.
Decompose the main task into smaller tasks, and the smaller ones into still
smaller activities.

It is always easier to work with smaller tasks when compared to large
tasks. This is very useful in doing ET, because the lack of test cases might
lead us down different routes. By having a smaller task, the scope as well
as the boundary are confined, which helps the tester to focus on his
testing and plan accordingly.

If a big task is taken up for testing, as we explore the system we might
get deviated from our main goal or task. It might be hard to define
boundaries if the application is a new one. With smaller tasks, the goal is
known and hence the focus and the effort required can be properly
planned.
Charter Summary:
o “Architecting the Charters”, i.e. test planning
o Brief information / guidelines on:
   - Mission: Why do we test this?
   - What should be tested?
   - How to test (approach)?
   - What problems to look for?
o Might include guidelines on:
   - Tools to use
   - Specific test techniques or tactics to use
   - What risks are involved
   - Documents to examine
   - Desired output from the testing
A charter can range from a simple one to a more descriptive one giving the
strategies and outlines for the testing process.
Or:

Test the application to see whether the report is generated for dates before
01/01/2000. Use the use case models for identifying the workflows.
is completed, each session is debriefed. The primary objective in the
debriefing is to understand and accept the session report. Another
objective is to provide feedback and coaching to the tester. The
debriefings help the manager to plan future sessions and also to
estimate the time required for testing similar functionality.
The time spent “on charter” and “on opportunity” is also noted.
Opportunity testing is any testing that doesn’t fit the charter of the
session. The tester is not restricted to his charter, and is hence allowed to
deviate from the specified goal if there is any scope for finding an error.
Session setup: Time required to set up the application under test.
Test design and execution: Time required to scan the product and test it.
Bug investigation and reporting: Time required to find bugs and report
them to the concerned people.
Data files
Test notes
Issues
Bugs
In procedural testing, the tester executes readily available test cases, which are
written based on the requirement specifications. Although those test cases were
executed completely, defects were still found in the software while doing
exploratory testing. However, just wandering through the product blindly,
exploring it without sight, was akin to groping in the dark: it did not help the
testers unearth all the hidden bugs, as they were not very sure about the areas
of the software that needed to be explored. A reliable basis was needed for
exploring the software. Thus Defect Driven Exploratory Testing (DDET) is the
idea of exploring a part of the product based on the results obtained during
procedural testing. After analyzing the defects found during the DDET process,
it was found that these were the most critical bugs, which were camouflaged in
the software and which, if present, could have made the software ‘Not fit for Use’.
There are some prerequisites for DDET:
o In-depth knowledge of the product.
o Procedural Testing has to be carried out.
o Defect Analysis based on Scripted Tests.
Advantages of DDET:
o Tester has clear clues on the areas to be explored.
o Goal-oriented approach, hence better results.
o No wastage of time.
Disadvantages of DDET:
o More prone to human error.
Mission
The goal of the testing needs to be understood before the work begins. This
could be the overall mission of the test project or a particular functionality/
scenario. The mission is achieved by asking the right questions about the product,
designing tests to answer these questions and executing tests to get the answers. Often
the tests do not completely answer the questions; in such cases we need to explore
further. The test procedure is recorded (and could later form part of the scripted
testing), along with the result status.
Tester
The tester needs to have a general plan in mind, though it may not be very
constrained. The tester needs the ability to design a good test strategy, execute
good tests, find important problems and report them. He simply has to think out of the
box.
Time
Time available for testing is a critical factor. Time falls short due to the following
reasons:
o Many a time in project life cycles, the time and resources required for creating
the test strategy, test plan and design, execution and reporting are overlooked.
Exploratory testing becomes useful here, since the test plan, design and execution
happen together.
o Testing is essential at short notice.
o A new feature is implemented.
o Change requests come in at a much later stage of the cycle, when much of the
testing is already done.
In such situations exploratory testing comes in handy.
Test Strategy
It is important to identify the scope of the test to be carried out. This depends on
the project's approach to testing. The test manager / test lead can decide the scope
and convey the same to the test team.
Test design and execution
The tester crafts the tests by systematically exploring the product. He defines his
approach, analyzes the product, and evaluates the risks.
Documentation
The written notes / scripts of the tester are reviewed by the test lead / manager.
These later turn into new test cases or updated test materials.
Exploratory testing fits almost any kind of testing project: projects with
rigorous test plans and procedures, as well as projects where testing is not
dictated completely in advance. The situations where exploratory testing could fit in are:
The basic rule is this: exploratory testing is called for any time the next test you should
perform is not obvious, or when you want to go beyond the obvious.
A good exploratory tester always asks himself: what is the best test I can perform now?
Good testers remain alert for new opportunities.
Advantages
Exploratory testing is advantageous when
• Rapid testing is essential
• Test case development time not available
• Need to cover high risk areas with more inputs
• Need to test software with little knowledge about the specifications
• Develop new test cases or improve the existing
• Drive out monotony of normal step – by - step test execution
Drawbacks
• Skilled tester required
• Difficult to quantify
Balancing Exploratory Testing With Scripted Testing
Exploratory testing relies on the tester and the approach he proceeds with. Pure
scripted testing does not undergo much change with time, and hence its power fades
away. In test scenarios wherein repeatability of tests is required, automated scripts
have an edge over the exploratory approach. Hence it is important to achieve a balance
between the two approaches and combine them to get the best of both.
That is, while there is value in the items on the right, we value the items on the left
more." - http://www.agilemanifesto.org/
1) Agile testers treat the developers as their customer and follow the agile
manifesto. The context-driven testing principles (explained later) act as a
set of guiding principles for the agile tester.
QA people seem to love documentation.
QA people want to see the written specification.
And where is testing without a PLAN?
The answer is maybe, but the roles and tasks are different.
In the first definition of Agile testing we described it as one following the Context driven
principles.
The context-driven principles, which act as guidelines for the agile tester, are:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project’s context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire
project, are we able to do the right things at the right times to effectively test our
products.
http://www.context-driven-testing.com/
Agile Development Methodologies
In a fast-paced environment such as Agile development, the question then arises:
what is the "role" of testing?

Testing is the headlight of the agile project, showing where the project stands
now and the direction in which it is headed. Testing provides the required and
relevant information to the teams to take informed and precise decisions.
The testers in agile frameworks get involved in much more than finding "software
bugs": anything that can "bug" the potential user is an issue for them. But testers
don't make the final call; it's the entire team that discusses a potential issue and
takes a decision on it.
A firm belief of Agile practitioners is that no testing approach by itself assures
quality; it's the team that does (or doesn't) achieve it. So there is a heavy emphasis
on the skill and attitude of the people involved.
Agile Testing is not a game of “gotcha”, it’s about finding ways to set goals rather
than focus on mistakes.
Test- First Programming
Pair Programming
Short Iterations & Releases
Refactoring
"User Stories"
Acceptance Testing
Test-First Programming:
Developers write unit tests before coding. It has been noted that this kind of
approach motivates the coding, speeds coding, and improves design, resulting
in better designs with less coupling and more cohesion.
It supports a practice called Refactoring (discussed later on).
Agile practitioners prefer Tests (code) to Text (written documents) for
describing system behavior. Tests are more precise than human language
and they are also a lot more likely to be updated when the design changes.
How many times have you seen design documents that no longer accurately
described the current workings of the software? Out-of-date design
documents look pretty much like up-to-date documents. Out-of-date tests
fail.
Many open source tools like xUnit have been developed to support this
methodology.
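As a minimal sketch of the test-first idea, using plain C assertions in place of an
xUnit framework: the test for a hypothetical add() function is written first, and the
implementation is then the simplest code that makes the test pass.

/* Test-first sketch: the test exists before the implementation. */
#include <assert.h>
#include <stdio.h>

static int add(int a, int b);   /* declared first; the test drives the code */

static void test_add(void)
{
    assert(add(2, 3) == 5);     /* written before add() is implemented */
    assert(add(-1, 1) == 0);
}

/* The simplest implementation that makes the tests pass. */
static int add(int a, int b) { return a + b; }

int main(void)
{
    test_add();
    printf("all tests passed\n");
    return 0;
}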
Refactoring:
Refactoring is the practice of changing a software system in such a way that
it does not alter the external behavior of the code yet improves its internal
structure.
Traditional development tries to understand how all the code will work
together in advance. This is the design. With agile methods, this difficult
process of imagining what code might look like before it is written is
avoided. Instead, the code is restructured as needed to maintain a
coherent design. Frequent refactoring allows less up-front planning of
design.
Agile methods replace high-level design with frequent redesign
(refactoring). But successful refactoring also requires a way of checking
that the behavior wasn't inadvertently changed. That's where the tests
come in.
Make the simplest design that will work, add complexity only when
needed, and refactor as necessary.
Refactoring requires unit tests to ensure that design changes (refactorings)
don’t break existing code.
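The sketch below illustrates that guard in miniature: a tax calculation is extracted
into a named helper, and an assertion checks that the refactored function behaves
exactly like the original. The function names and the 10% rate are illustrative
assumptions, not taken from any real system.

/* Refactoring sketch: behavior preserved, structure improved. */
#include <assert.h>
#include <stdio.h>

/* Original version, kept here only for comparison. */
static double total_price_old(double net) { return net + net * 0.10; }

/* Refactored version: the tax rule is extracted and named. */
static double tax(double net)         { return net * 0.10; }
static double total_price(double net) { return net + tax(net); }

int main(void)
{
    /* The guard: external behavior must be identical after refactoring. */
    assert(total_price(100.0) == total_price_old(100.0));
    printf("behavior preserved by refactoring\n");
    return 0;
}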
Acceptance Testing
Make up user experiences, or user stories, which are short descriptions of the
features to be coded.
Acceptance tests verify the completion of user stories.
Ideally they are written before coding.
With all these features and process included we can define a practice for Agile testing
encompassing the following features.
Looking deep into each of these practices we can describe each of them as:
Conversational Test Creation
Test case writing should be a collaborative activity involving the majority of the
team. As the customers will be busy, we should have someone
representing the customer.
Defining tests is a key activity that should include programmers and
customer representatives.
Don't do it alone.
Coaching Tests
A way of thinking about Acceptance Tests.
Turn user stories into tests.
Tests should provide Goals and guidance, Instant feedback and Progress
measurement
Tests should be specified in a format that is clear enough for users/
customers to understand and specific enough to be executable.
Specification should be done by example.
Exploratory Learning
Plan to explore, learn and understand the product with each iteration.
Look for bugs, missing features and opportunities for improvement.
We don’t understand software until we have used it.
We believe that Agile Testing is a major step forward. You may disagree, but regardless,
Agile Programming is the wave of the future. These practices will develop, and some of
the extreme edges may be worn off, but it is only growing in influence and attraction.
Some testers may not like it, but those who don't figure out how to live with it are
simply going to be left behind.

Some testers are still upset that they don't have the authority to block the release. Do
they think that they now have the authority to block the adoption of these new
development methods? They'll need to get on this ship if they want to try to keep it
from the shoals. Stay on the dock if you wish. Bon voyage!
Each API is supposed to behave the way it is coded; i.e., it is functionality specific.
APIs may offer different results for different types of input. The errors
or exceptions returned may also vary. However, once integrated within a product,
the commonly used functionality covers only a very minimal code path of the API, and
functionality/integration testing may cover only those paths. By considering
each API as a black box, a generalized approach to testing can be applied. But there
may exist some paths which are not tested and which lead to bugs in the application.
Applications can be viewed and treated as APIs from a testing perspective.
There are some distinctive attributes that make testing of APIs slightly different from
testing other common software interfaces such as GUIs.

Testing APIs requires a thorough knowledge of their inner workings - some APIs may
interact with the OS kernel, with other APIs, or with other software to offer their
functionality. Thus an understanding of the inner workings of the interface
would help in analyzing the call sequences and detecting the failures caused.
Adequate programming skills - API tests are generally in the form of sequences of
calls, namely, programs. Each tester must possess expertise in the programming
language(s) that are targeted by the API. This would help the tester to review and
scrutinize the interface under test when the source code is available.
Lack of Domain knowledge – Since the testers may not be well trained in using
the API, a lot of time might be spent in exploring the interfaces and their usage.
This problem can be solved to an extent by involving the testers from the initial
stage of development. This would help the testers to gain some understanding
of the interface and avoid having to explore it while testing.
Access to source code – The availability of the source code helps the tester to
understand and analyze the implementation mechanism used, and to identify
the loops or vulnerabilities that may cause errors. If the source code is not
available, the tester does not have a chance to find anomalies that may
exist in the code.
Testing of API calls can be done in isolation or in sequence to vary the order in which
the functionality is exercised and to make the API produce useful results from these
tests. Designing tests is essentially designing sequences of API calls that have a
potential of satisfying the test objectives. This in turn boils down to designing each call
with specific parameters and to building a mechanism for handling and evaluating
return values.
Thus designing of the test cases can depend on some of the general questions like
Which value should a parameter take?
What values together make sense?
What combination of parameters will make APIs work in a desired manner?
What combination will cause a failure, a bad return value, or an anomaly in the
operating environment?
Which sequences are the best candidates for selection? And so on.
By analyzing the problems listed above, a strategy needs to be formulated for testing the
API. The API to be tested would require some environment for it to work. Hence it is
required that all the conditions and prerequisites are understood by the tester. The next
step would be to identify and study its points of entry. The GUIs would have items like
menus, buttons, check boxes, and combo lists that would trigger the event or action to
be taken. Similarly, for APIs, the input parameters and the events that trigger the API
act as the points of entry. Subsequently, a chief task is to analyze the points of
entry as well as significant output items. The input parameters should be tested with
the valid and invalid values using strategies like the boundary value analysis and
equivalence partitioning. The fourth step is to understand the purpose of the routines,
the contexts in which they are to be used. Once all the parameter selections and
combinations are designed, different call sequences need to be explored.
3. Identify the combination of parameters – pick out the possible and applicable
parameter combinations with multiple parameters.
4. Identify the order to make the calls – deciding the order in which to make the calls
to force the API to exhibit its functionality.
5. Observe the output.
Mandatory pre-setters.
Behavioral pre-setters.
Mandatory Pre-setters
The execution of an API would require some minimal state and environment. These
types of initial conditions are classified under mandatory initialization (mandatory
pre-setters) for the API. For example, a non-static member function API requires an
object to be created before it can be called. This is an essential activity required for
invoking the API.
Behavioral pre-setters
To test the specific behavior of the API, some additional environmental state is
required. These types of initial conditions are called the behavioral pre-setters
category of initial conditions. These are optional conditions required by the API and
need to be set before invoking the API under test, thus influencing its behavior. Since
they influence the behavior of the API under test, they are considered additional
inputs over and above the parameters.

Thus to test any API, the environment required should also be clearly understood and
set up. Without this, the API under test might not function as required and would leave
the tester's job undone.
While there is no method that ensures this behavior will be tested completely, using
inputs that return quantifiable and verifiable results is the next best thing. The
different possible input values (valid and invalid) need to be identified and selected for
testing. Techniques like boundary value analysis and equivalence partitioning need to
be used when considering the input parameter values. The boundary values or limits
that would lead to errors or exceptions need to be identified. It would also help if the
data structures, and the other components that use these data structures apart from
the API, are analyzed. The data structures can be loaded by using the other
components, and the API can be tested while another component is accessing them.
Verify that the functionality of all dependent components is unaffected while the API
accesses and manipulates the data structures.
The availability of the source code to the testers would help in analyzing the various
inputs values that could be possible for testing the API. It would also help in
understanding the various paths which could be tested. Therefore, not only are testers
required to understand the calls, but also all the constants and data types used by the
interface.
The API needs to be tested taking into consideration combinations of different
parameters. The number of possible combinations of parameters for each call is typically
large. For a given set of parameters, if only the boundary values have been selected, the
number of combinations, while relatively diminished, may still be prohibitively large.
For example, consider an API which takes three parameters as input. The various
combinations of different values for these inputs need to be identified, as shown in
the sketch below.
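The following sketch (in C, with a hypothetical API and made-up limits) shows how
quickly such combinations grow: with just four boundary candidates per parameter
(below-minimum, minimum, maximum, above-maximum), three parameters already
yield 64 calls.

/* Sketch: cross product of boundary values for three parameters. */
#include <stdio.h>

/* Hypothetical API under test; stand-in behavior only. */
static int api_under_test(int a, int b, int c)
{
    return a + b + c;
}

int main(void)
{
    /* below-min, min, max, above-max for each parameter (illustrative) */
    int a_vals[] = { -1, 0, 100, 101 };
    int b_vals[] = { -1, 0,  50,  51 };
    int c_vals[] = { -1, 0,  10,  11 };
    int n = sizeof a_vals / sizeof a_vals[0];

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                printf("api(%d, %d, %d) = %d\n",
                       a_vals[i], b_vals[j], c_vals[k],
                       api_under_test(a_vals[i], b_vals[j], c_vals[k]));
    return 0;
}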
APIs can also be tested to check that there are no memory leaks after they are called.
This can be verified by continuously calling the API and observing the memory
utilization.
5. Observe the output: The outcome of an execution of an API depends upon the
behavior of that API, the test condition and the environment. The outcome of an API
can take different forms: some APIs generally return certain data or a status, while
others might not return anything, might wait for a period of time, trigger another
event, modify a certain resource, and so on.
The tester should be aware of the output that is expected for the API under test. The
outputs returned for the various input values (valid/invalid, boundary values, etc.)
need to be observed and analysed to validate whether they are as per the functionality.
All the error codes and exceptions returned for all the input combinations should be
evaluated.
API Testing Tools: There are many testing tools available. Depending on the level of
testing required, different tools could be used. Some of the API testing tools available
are mentioned here.
JVerify: This is from Man Machine Systems.
JVerify is a Java class/API testing tool that supports a unique invasive testing
model.The invasive model allows access to the internals (private elements) of any Java
object from within a test script. The ability to invade class internals facilitates more
effective testing at class level, since controllability and observability are enhanced. This
can be very valuable when a class has not been designed for testability.
JavaSpec: JavaSpec is SunTest's API testing tool. It can be used to test Java
applications and libraries through their API. JavaSpec guides the users through the
entire test creation process and lets them focus on the most critical aspects of testing.
Once the user has entered the test data and assertions, JavaSpec automatically
generates self-checking tests, HTML test documentation, and detailed test reports.
Assumptions:
1. The test engineer is supposed to test some API.
2. The APIs are available in the form of a library (.lib).
3. The test engineer has the API document.

By black-box testing of the API we mean that we have to test the API for its outputs. In
simple words, when we give a known input (parameters to the API), we also know the
ideal output. So we have to check the actual output against the ideal output.
For this we can write a simple C program that will do the following:
a) Take the parameters from a text file (this file will contain many such input
parameters).
b) Call the API with these parameters.
c) Match the actual and ideal output, and also check the parameters for good
values that are passed by reference (pointers).
d) Log the result.
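A minimal sketch of such a driver is shown below. api_square() stands in for the
library API under test, and the file names and the one-pair-per-line format
("input expected") are assumptions made only for illustration.

/* Sketch of the black-box driver described above. */
#include <stdio.h>

static int api_square(int x) { return x * x; }   /* stand-in for the API */

int main(void)
{
    FILE *in  = fopen("cases.txt", "r");    /* a) read "<input> <expected>" lines */
    FILE *log = fopen("result.log", "w");
    int input, expected;

    if (in == NULL || log == NULL) {
        fprintf(stderr, "cannot open input or log file\n");
        return 1;
    }
    while (fscanf(in, "%d %d", &input, &expected) == 2) {
        int actual = api_square(input);                /* b) call the API  */
        fprintf(log, "in=%d expected=%d actual=%d %s\n",
                input, expected, actual,
                actual == expected ? "PASS" : "FAIL"); /* c) match, d) log */
    }
    fclose(in);
    fclose(log);
    return 0;
}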
----------------------------------------------------------------------------------------------------------
Secondly, we have to test the integration of the APIs. For example, there are two APIs,
say:

Handle h = createcontext();

and, when the handle to the device is to be closed, the corresponding function:

Bool bishandledeleted = deletecontext(&h);

Then we have to call these two APIs and check whether the handle created by
createcontext() can be deleted by deletecontext(). This will ensure that these two APIs
are working fine together.
For this we can write a simple C program along the same lines as the driver above.
The example is oversimplified, but it works: we use this kind of test tool for extensive
regression testing of our API library.
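A sketch of that integration check might look as follows. The typedefs and the stub
bodies for createcontext() and deletecontext() are placeholders for the real library,
assumed here only so that the example is self-contained.

/* Integration sketch: a handle from createcontext() must be
   releasable by deletecontext(). */
#include <stdio.h>
#include <stdlib.h>

typedef void *Handle;
typedef int   Bool;

static Handle createcontext(void)      { return malloc(1); }              /* stub */
static Bool   deletecontext(Handle *h) { free(*h); *h = NULL; return 1; } /* stub */

int main(void)
{
    Handle h = createcontext();                 /* first API under test  */
    if (h == NULL) {
        printf("FAIL: createcontext\n");
        return 1;
    }
    Bool bishandledeleted = deletecontext(&h);  /* second API under test */
    printf("%s\n", (bishandledeleted && h == NULL)
                       ? "PASS: create/delete pair works"
                       : "FAIL: deletecontext");
    return 0;
}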
companies will have a practice of collecting the status on a daily or weekly basis.
This has to be mentioned clearly.
For testing at each level, we may have to address the requirements. One integration or
system test case may address multiple requirements.
1.11 Test Summary
The senior management may like to have test summary on a weekly or monthly basis. If
the project is very critical, they may need it on a daily basis also. This section must
address what kind of test summary reports will be produced for the senior management
along with the frequency.
The test strategy must give a clear vision of what the testing team will do for the whole
project for the entire duration. This document will/may be presented to the client also,
if needed. The person who prepares this document must be functionally strong in the
product domain, with very good experience, as this is the document that is going to
drive the entire team's testing activities. The test strategy must be clearly explained to
the testing team members right at the beginning of the project.
The plans are to be prepared by experienced people only. In all test plans, the ETVX
{Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the entry point
to that phase. For example, for unit testing, the coding must be complete and then only
one can start unit testing. Task is the activity that is performed. Validation is the way in
which the progress and correctness and compliance are verified for that phase. Exit
tells the completion criteria of that phase, after the validation is done. For example, the
exit criterion for unit testing is all unit test cases must pass.
http://www.SofTReL.org 96
of the times, the input units will be tested for their format, alignment, accuracy and the
totals. The UTP will clearly give the rules of what data types are present in the system,
their format and their boundary conditions. This list may not be exhaustive; but it is
better to have a complete list of these details.
When there are multiple modules present in an application, the sequence in which they
are to be integrated will be specified in this section. In this, the dependencies between
the modules play a vital role. If a unit B has to be executed, it may need the data that is
fed by unit A and unit X. In this case, the units A and X have to be integrated and
then, using that data, the unit B has to be tested. This has to be stated for the whole
set of units in the program. Given this correctly, the testing activities will lead to the
product, slowly building it, unit by unit, and then integrating the units.
Apart from the above sections, the following sections are addressed, very specific to
integration testing.
• Integration Testing Tools
• Priority of Program interfaces
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria
• Build/Refresh criteria {When multiple programs or objects are to be linked to
arrive at a single product, and one unit has some modifications, then the entire
product may need to be rebuilt and then loaded into the integration test
environment. When and how often the product is rebuilt and refreshed is to be
mentioned}.
2.3.1 What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally,
the system testing is based on the requirements. All requirements are to be verified in
the scope of system testing. This covers the functionality of the product. Apart from
this, any special testing that is performed is also stated here.
the system testing. Assume that all the rules which are applicable to system testing
can be applied to acceptance testing also.
Since this is just one level of testing done by the client for the overall product, it may
include test cases covering unit and integration test level details.
A sample test plan outline, along with descriptions, is shown below:
2. INTRODUCTION
11. TESTING TASKS
Functional tasks (e.g., equipment set-up)
Administrative tasks
13. RESPONSIBILITIES
Who does the tasks in Section 10?
What does the user do?
15. SCHEDULE
16. RESOURCES
18. APPROVALS
The schedule details of the various test passes, such as unit tests, integration tests and
system tests, should be clearly mentioned along with the estimated efforts.
messages generated by the IUT, exceptions, returned values, and resultant state
of the IUT and its environment. Test cases may also specify initial and resulting
conditions for other objects that constitute the IUT and its environment.”
What’s a scenario?
A scenario is a hypothetical story, used to help a person think through a complex
problem or system.
A scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c)
credible, (d) complex, and (e) easy to evaluate.
The primary objective of test case design is to derive a set of tests that have the highest
likelihood of discovering defects in the software. Test cases are designed based on the
analysis of requirements, use cases, and technical specifications, and they should be
developed in parallel with the software development effort.
A test case describes a set of actions to be performed and the results that are expected.
A test case should target specific functionality or aim to exercise a valid path through a
use case. This should include invalid user actions and illegal inputs that are not
necessarily listed in the use case. How a test case is described depends on several factors,
e.g. the number of test cases, the frequency with which they change, the level of
automation employed, the skill of the testers, the selected testing methodology, staff
turnover, and risk.
Test case ID - The test case id must be unique across the application
Test case description - The test case description must be very brief.
Test prerequisite - The test prerequisite clearly describes what should be present in
the system before the test can be executed.
Test Inputs - The test input is nothing but the test data that is prepared to be fed to
the system.
Test steps - The test steps are the step-by-step instructions on how to carry out the
test.
Expected Results - The expected results are the ones that say what the system must
give as output or how the system must react based on the test steps.
Actual Results – The actual results are the ones that record the output of the action for
the given inputs, or how the system reacts for the given inputs.
Pass/Fail - If the Expected and Actual results are same then test is Pass otherwise Fail.
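As an illustration only, the fields above can be captured in a simple record. The field
sizes are arbitrary, and the sample values are borrowed from the login example used
later in this section.

/* Sketch: the test case fields as a C record. */
#include <stdio.h>

struct test_case {
    char id[16];            /* unique across the application     */
    char description[128];  /* brief summary of the check        */
    char prerequisite[128]; /* state required before execution   */
    char inputs[128];       /* test data fed to the system       */
    char steps[256];        /* step-by-step instructions         */
    char expected[128];     /* what the system must produce      */
    char actual[128];       /* what the system actually produced */
    int  passed;            /* 1 if expected matches actual      */
};

int main(void)
{
    struct test_case tc = {
        "TC-001", "Valid login accepted", "Login screen is open",
        "Email=shilpa@yahoo.com, Username=shilpa",
        "Enter email and username, then submit",
        "Product details displayed", "Product details displayed", 1
    };
    printf("%s: %s -> %s\n", tc.id, tc.description,
           tc.passed ? "Pass" : "Fail");
    return 0;
}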
Test cases are classified into positive and negative test cases. Positive test cases are
designed to prove that the system accepts valid inputs and processes them correctly.
Suitable techniques for designing positive test cases are specification-derived tests,
equivalence partitioning and state-transition testing. Negative test cases are designed
to prove that the system rejects invalid inputs and does not process them. Suitable
techniques for designing negative test cases are error guessing, boundary value
analysis, internal boundary value testing and state-transition testing. The test case
details must be specified clearly enough that a new person can go through the test
cases step by step and execute them. Test cases are explained with specific examples
in the following sections.
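As a small illustration of how these techniques yield concrete cases, the sketch below
assumes a hypothetical validate_quantity unit that accepts order quantities from 1 to
99; the positive cases are drawn from the valid equivalence partition, and the negative
cases from boundary value analysis and error guessing:

    def validate_quantity(qty):
        """Hypothetical unit: accept an order quantity of 1 to 99 items."""
        return isinstance(qty, int) and 1 <= qty <= 99

    # Positive cases (equivalence partitioning): values from the valid
    # partition, including both boundaries.
    for qty in (1, 50, 99):
        assert validate_quantity(qty)

    # Negative cases (boundary value analysis and error guessing):
    # values just outside each boundary, plus an input of the wrong type.
    for qty in (0, 100, -1, "ten"):
        assert not validate_quantity(qty)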
For example, consider an online shopping application. At the user-interface level, the
client requests the web server to display product details by supplying an Email id and
Username. The web server processes the request and returns a response. For this
application we will design the unit, integration and system test cases.
Unit test cases are very specific to a particular unit. The basic functionality of the unit
must be understood from the requirements and the design documents. Generally, the
design document provides a lot of information about the functionality of a unit. The
design document has to be consulted before unit test cases (UTC) are written, because
it specifies how the system must behave for given inputs.
For example, in the online shopping application, if the user enters valid Email id and
Username values, let us assume the design document says the system must display
the product details and insert the Email id and Username into a database table. If the
user enters invalid values, the system must display an appropriate error message and
must not store them in the database.
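A unit test for this behaviour might look like the following sketch, written with
Python's unittest; the login function and the customers list are hypothetical stand-ins
for the application's real unit and database table:

    import unittest

    # Hypothetical stand-ins for the real unit and its database table.
    customers = []

    def login(email, username):
        """Show product details and store valid customers; reject invalid input."""
        if "@" in email and username.strip():
            customers.append((email, username))
            return "product details"
        return "error: invalid Email id or Username"

    class LoginUnitTest(unittest.TestCase):
        def setUp(self):
            customers.clear()

        def test_valid_input_displays_products_and_is_stored(self):
            self.assertEqual(login("shilpa@yahoo.com", "shilpa"),
                             "product details")
            self.assertIn(("shilpa@yahoo.com", "shilpa"), customers)

        def test_invalid_input_shows_error_and_is_not_stored(self):
            self.assertTrue(login("not-an-email", "").startswith("error"))
            self.assertEqual(customers, [])

    if __name__ == "__main__":
        unittest.main()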
http://www.SofTReL.org 103
Test conditions for the fields in the Login screen
Test prerequisite: The user should have access to the Customer Login screen.
http://www.SofTReL.org 104
correct Username”
Before designing integration test cases, testers should go through the integration test
plan, which gives a complete picture of how to write them. The main aim of integration
test cases is to test multiple modules together; by executing these test cases the tester
can find errors in the interfaces between the modules.
For example, in online shopping there are Catalog and Administration modules. In the
Catalog module the customer can browse the list of products and buy them online. In
the Administration module the admin can enter a product's name and the information
related to it.
http://www.SofTReL.org 105
1. Check for Login screen
   Steps / input: Enter values in the Email and Username fields
   (e.g. Email = shilpa@yahoo.com, Username = shilpa).
   Expected result: Inputs should be accepted.
   Backend verification: select email, username from Cus;
   Expected result: The entered Email and Username should be displayed
   at the SQL prompt.

2. Check for Product Information
   Steps / input: Click the product information link.
   Expected result: It should display the complete details of the product.

3. Check for Admin screen
   Steps / input: Enter values in the Product Id and Product name fields
   (e.g. Product Id = 245, Product name = Norton Antivirus).
   Expected result: Inputs should be accepted.
   Backend verification: select pid, pname from Product;
   Expected result: The entered Product Id and Product name should be
   displayed at the SQL prompt.
NOTE: The tester has to execute the above unit and integration test cases after coding,
and fill in the Actual Results and Pass/Fail columns. If a test case fails, a defect report
should be prepared.
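The backend verification above is performed by hand at the SQL prompt, but the same
check can be scripted. Here is a minimal sketch using Python's built-in sqlite3 module;
the Cus table and column names are taken from the test case above, while the schema
itself is an assumption:

    import sqlite3

    # Assumed schema for the Cus table used in the test case above.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Cus (email TEXT, username TEXT)")

    # In the real test this row is created through the UI; simulated here.
    conn.execute("INSERT INTO Cus VALUES (?, ?)",
                 ("shilpa@yahoo.com", "shilpa"))

    # Backend verification: the entered Email and Username should come back.
    row = conn.execute("SELECT email, username FROM Cus WHERE username = ?",
                       ("shilpa",)).fetchone()
    assert row == ("shilpa@yahoo.com", "shilpa")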
The system test cases are meant to test the system end-to-end, as per the
requirements. This is basically to make sure that the application works according to
the SRS. In system test cases (and generally in system testing itself) the testers are
supposed to act as end users, so system test cases normally concentrate on the
functionality of the system: inputs are fed through the system, and each and every
check is performed using the system itself. Normally, verifications done by checking
database tables directly or by running programs manually are not encouraged in
system testing.
The system test must focus on functional groups rather than on individual program
units. By the time system testing starts, it is assumed that the interfaces between the
modules are working fine (integration has passed).
Ideally the system test cases are a union of the functionality tested in unit testing and
integration testing, except that instead of probing the system's inputs and outputs
through the database or external programs, everything is tested through the system
itself. For example, in an online shopping application the catalog and administration
screens (program units) would have been independently unit tested, with the results
verified through the database. In system testing, the tester mimics an end user and
therefore checks the application through its own output.
There are occasions where some or many of the integration and unit test cases are
repeated in system testing, especially when units were earlier tested against test stubs
rather than against the real modules; during system testing those cases are
re-performed with real modules and real data.
18. Defect Management
But for a test engineer all of these are the same; the above definitions exist only for
documentation purposes or as an indication.
http://www.SofTReL.org 107
Defects can be classified as:
1. Conceptual bugs / Design bugs
2. Coding bugs
3. Integration bugs
4. GUI bugs
Test effectiveness: t / (t + Uat), where t = total number of defects reported during
testing and Uat = total number of defects reported during user acceptance testing.
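For instance, if testers report 90 defects during testing and users report 10 more
during acceptance testing, test effectiveness is 90 / (90 + 10) = 0.9; that is, testing
caught 90% of all known defects.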
User Acceptance Testing is generally carried out using the
Acceptance Test Criteria according to the Acceptance Test Plan.
http://www.SofTReL.org 109
References
• "An API Testing Method" by Alan A. Jorgensen and James A. Whittaker.
• "API Testing Methodology" by Anoop Kumar P., Novell Software Development (I)
Pvt. Ltd., Bangalore.
• "Why is API Testing Different" by Nikhil Nilakantan, Hewlett Packard, and
Ibrahim K. El-Far, Florida Institute of Technology.
• "Test Strategy & Test Plan Preparation" – training course attended at SoftSmith.
• "Designing Test Cases" by Cem Kaner, J.D., Ph.D.
• "Scenario Testing" by Cem Kaner, J.D., Ph.D.
• "Exploratory Testing Explained", v.1.3, 4/16/03, by James Bach.
• "Exploring Exploratory Testing" by Andy Tinkham and Cem Kaner.
• "Session-Based Test Management" by Jonathan Bach (first published in Software
Testing and Quality Engineering magazine, 11/00).
• "Defect Driven Exploratory Testing (DDET)" by Ananthalakshmi.
• Software Engineering Body of Knowledge v1.0
(http://www.sei.cmu.edu/publications)
• Unit testing guidelines by Scott Highet (http://www.Stickyminds.com)
• http://www.sasystems.com
• http://www.softwareqatest.com
• http://www.eng.mu.edu/corlissg/198.2001/KFN_ch11-tools.html
• http://www.ics.uci.edu/~jrobbins/ics125w04/nonav/howto-reviews.html
• IEEE Std 1028-1997, IEEE Standard for Software Reviews.
• "Effective Methods for Software Testing" by William E. Perry.
http://www.SofTReL.org 110