STA Lecture Notes

The document outlines the course structure for 'Software Testing and Automation' at S.A Engineering College, detailing objectives, outcomes, and course content for the academic year 2025-2026. It emphasizes the importance of software testing, including various testing methodologies, planning, execution, and automation tools like Selenium and TestNG. Additionally, it highlights the consequences of software errors in real-life scenarios and the benefits of thorough testing practices.

S.A ENGINEERING COLLEGE, CHENNAI – 77


(An Autonomous Institution Affiliated to Anna University)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
(CYBER SECURITY)

NAME OF FACULTY : KARTHICKA P G
YEAR/SEMESTER : III/VI
SUBJECT CODE/NAME : CB1610A SOFTWARE TESTING AND AUTOMATION
ACADEMIC YEAR : 2025-2026 (EVEN)

COURSE OBJECTIVES:
The students should be enabled:
 To understand the basics of software testing
 To learn how to do the testing and planning effectively
 To build test cases and execute them
 To focus on wide aspects of testing and understanding multiple facets of testing
 To get an insight about test automation and the tools used for test automation

COURSE OUTCOMES:
Upon the successful completion of the course, the students will be able to:
CB1610A.1 K2 Understand the basic concepts of software testing and the need for software testing
CB1610A.2 K3 Design test planning and the different activities involved in test planning
CB1610A.3 K4 Design effective test cases that can uncover critical defects in the application
CB1610A.4 K3 Carry out advanced types of testing
CB1610A.5 K3 Automate software testing using Selenium and TestNG

CO's    PO1   PO2   PO3   PO4   PO5   PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2
1       1     3     -     2     -     -    -    -    -    -     -     1     -     2
2       2     3     -     2     -     -    -    -    -    -     -     2     -     3
3       2     2     1     2     1     -    -    -    -    -     -     1     1     2
4       2     2     1     3     -     -    -    -    -    -     -     1     -     1
5       1     1     2     2     3     -    -    -    -    -     -     2     2     1
Avg.    1.6   2.2   1.33  2.2   2     -    -    -    -    -     -     1.4   1.5   1.8

1 - low, 2 - medium, 3 - high, '-' - no correlation

JUSTIFICATION FOR LEVEL OF PO MAPPING


CB1610A.1 Since it provides a basic understanding of engineering fundamentals related to software testing concepts, places strong emphasis on analysing the need for software testing and identifying software problems, and involves a moderate degree of investigation of defect identification principles, it is mapped to PO1, PO2 and PO4.

CB1610A.2 Since it involves the application of software engineering fundamentals in test planning, with requirement analysis and strategic test planning as core outcomes, and moderate investigation of testing strategies, it is mapped to PO1, PO2 and PO4.

CB1610A.3 Since it uses engineering fundamentals in test case development, analyses defects through the execution of test cases, covers the design of test cases and the interpretation of test results, and makes limited use of testing tools, it is mapped to PO1, PO2, PO3, PO4 and PO5.

CB1610A.4 Since there is broad application of engineering knowledge, evaluation of multiple testing perspectives, partial involvement in solution design, and strong investigation of complex testing scenarios, it is mapped to PO1, PO2, PO3 and PO4.

CB1610A.5 Since it involves a basic understanding of automation concepts, limited analysis in selecting automation strategies, design of automation scripts and frameworks, evaluation of automated test results, and extensive use of modern test automation tools, it is mapped to PO1, PO2, PO3, PO4 and PO5.

JUSTIFICATION FOR LEVEL OF PSO MAPPING


CB1610A.1 Since it involves understanding software reliability and secure testing practices, it has been mapped to PSO2.
CB1610A.2 Since there is strong emphasis on dependable and quality-driven software testing strategies, it has been mapped to PSO2.
CB1610A.3 Since it involves basic application of programming skills for testing and ensures the reliability and robustness of software systems, it has been mapped to PSO1 and PSO2.
CB1610A.4 Since it supports overall computing solution quality, it has been mapped to PSO1.
CB1610A.5 Since there is strong alignment with software development and automation skills, as well as an indirect contribution to secure software systems, it has been mapped to PSO1 and PSO2.
CB1610A SOFTWARE TESTING AND AUTOMATION L T P C
3 0 0 3

COURSE OBJECTIVES:
 To understand the basics of software testing
 To learn how to do the testing and planning effectively
 To build test cases and execute them
 To focus on wide aspects of testing and understanding multiple facets of testing
 To get an insight about test automation and the tools used for test automation

UNIT I FOUNDATIONS OF SOFTWARE TESTING 9


Why do we test Software?, Black-Box Testing and White-Box Testing, Software Testing Life Cycle,
V-model of Software Testing, Program Correctness and Verification, Reliability versus Safety,
Failures, Errors and Faults (Defects), Software Testing Principles, Program Inspections, Stages of
Testing: Unit Testing, Integration Testing, System Testing, Acceptance Testing.

UNIT II TEST PLANNING 9


The Goal of Test Planning, High Level Expectations, Intergroup Responsibilities, Test Phases, Test
Strategy, Resource Requirements, Tester Assignments, Test Schedule, Test Cases, Bug Reporting,
Metrics and Statistics, Risk-Based Testing and Risk Management.

UNIT III TEST DESIGN AND EXECUTION 9


Test Objective Identification, Test Design Factors, Requirement identification, Testable Requirements,
Modeling a Test Design Process, Modeling Test Results, Boundary Value Testing, Equivalence Class
Testing, Path Testing, Data Flow Testing, Test Design Preparedness Metrics, Test Case Design
Effectiveness, Model-Driven Test Design.

UNIT IV ADVANCED TESTING CONCEPTS 9

Performance Testing: Load Testing, Stress Testing, Volume Testing, Fail-Over Testing, Recovery
Testing, Configuration Testing, Compatibility Testing, Usability Testing, Testing the Documentation,
Security testing, Testing in the Agile Environment, Testing Web and Mobile Applications.

UNIT V TEST AUTOMATION AND TOOLS 9


Automated Software Testing, Automated Testing of Web Applications, Selenium: Introducing WebDriver and WebElements, Locating Web Elements, Actions on Web Elements, Different WebDrivers, Understanding WebDriver Events, TestNG: Understanding TestNG.xml, Adding Classes, Packages, Methods to Test, Test Reports, Case Studies.

TOTAL :45 PERIODS

COURSE OUTCOMES:
Upon completion of the course, the students will be able to:
CO1: Understand the basic concepts of software testing and the need for software testing
CO2: Design Test planning and different activities involved in test planning
CO3: Design effective test cases that can uncover critical defects in the application
CO4: Carry out advanced types of testing
CO5: Automate software testing using Selenium and TestNG

TEXTBOOKS
1. Yogesh Singh, "Software Testing", Cambridge University Press, 2012.
2. Unmesh Gundecha, Satya Avasarala, "Selenium WebDriver 3 Practical Guide", Second Edition, Packt Publishing, 2018.
3. Rex Black, Erik van Veenendaal, Dorothy Graham, "Foundations of Software Testing: ISTQB Certification", Cengage Learning.

REFERENCES
1. Glenford J. Myers, Corey Sandler, Tom Badgett, "The Art of Software Testing", 3rd Edition, John Wiley & Sons, Inc., 2012.
2. Ron Patton, "Software Testing", 2nd Edition, Sams Publishing, 2006.
3. Paul C. Jorgensen, "Software Testing: A Craftsman's Approach", Fourth Edition, Taylor & Francis Group, 2014.
4. Carl Cocchiaro, "Selenium Framework Design in Data-Driven Testing", Packt Publishing, 2018.
5. Elfriede Dustin, Thom Garrett, Bernie Gauf, "Implementing Automated Software Testing", Pearson Education, Inc., 2009.
6. Satya Avasarala, "Selenium WebDriver Practical Guide", Packt Publishing, 2014.
7. Varun Menon, "TestNG Beginner's Guide", Packt Publishing, 2013.
UNIT I
FOUNDATIONS OF SOFTWARE TESTING
Why do we test Software?, Black-Box Testing and White-Box Testing, Software Testing
Life Cycle, V-model of Software Testing, Program Correctness and Verification,
Reliability versus Safety, Failures, Errors and Faults (Defects), Software Testing
Principles, Program Inspections, Stages of Testing: Unit Testing, Integration Testing,
System Testing, Acceptance Testing
1.1 INTRODUCTION
Software testing is a method for finding out whether the software meets its requirements and is free of errors. It involves running software or system components, manually or automatically, in order to evaluate one or more of their characteristics. The aim of software testing is to find faults and unfulfilled requirements relative to the documented specifications.
Some prefer to describe the concept of software testing in terms of white box and black box testing. To put it simply, software testing is the process of validating the application under test.

1.1.1 : What is Software Testing


Software testing is the process of determining whether a piece of software is correct by taking into account all of its characteristics (reliability, scalability, portability, reusability and usability) and analyzing how its various components operate, in order to detect any bugs, faults or flaws.
Software testing delivers assurance of the software's fitness for use and offers an independent, objective view of the program. It entails testing each component that makes up the required services to see whether or not it satisfies the set criteria. Additionally, the procedure informs the customer about the software's quality.
In simple words, "Testing is the process of executing a program with the intent of finding faults."
Testing is required because failure of the program at any point, owing to a lack of testing, would be harmful. Software cannot be released to the end user without being tested.

1.1.2 : What is Testing


Testing is a collection of methods to evaluate an application's suitability for use in accordance with a predetermined script; however, testing is not able to detect every application flaw. The basic goal of testing is to find application flaws so that they may be identified and fixed. It merely shows that a product does not work in certain particular circumstances, not that it works correctly under all circumstances.
Testing offers comparisons between software behaviour and reference mechanisms, since such mechanisms may identify problems in the software. The comparison basis may include, but is not restricted to, previous iterations of the same or similar products, comparable products, interfaces of expected purpose, and pertinent standards or other criteria.
Testing includes both the analysis and execution of the code in different settings and environments, as well as whole-code analysis. In the present software development scenario, a testing team may be independent from the development team so that information obtained from testing may be utilized to improve the software development process.
The intended audience's adoption of the software, its user-friendly graphical user interface, its robustness under functional load tests, etc., are all factors in its success. For instance, the target markets for banking software and a video game are very different. As a result, an organization can determine whether a software product it produces will be useful to its customers and other audience members.

1.1.3 : Why Software Testing is Important? (What is the need of Software Testing?)
Software testing is a method for finding out whether the software meets its requirements and is free of errors. Software testing is a very expensive and critical activity, but releasing the software without testing is definitely more expensive and dangerous. We should try to find more errors in the early phases of software development. The cost of removing such errors will be very reasonable as compared to that of errors found in the later phases of software development. The cost to fix errors increases drastically from the specification phase to the test phase and finally to the maintenance phase, as shown in Figure 1.1.

Figure 1.1 Phase wise cost of fixing an error

If an error is found and fixed in the specification and analysis phase, it hardly costs anything. We
may term this as “1 unit of cost” for fixing an error during specifications and analysis phase. The
same error, if propagated to design, may cost 10 units and if, further propagated to coding, may
cost 100 units. If it is detected and fixed during the testing phase, it may lead to 1000 units of
cost. If it could not be detected even during testing and is found by the customer after release,
the cost becomes very high. We may not be able to predict the cost of failure for a life critical
system’s software. The world has seen many failures and these failures have been costly to the
software companies.
The fact is that we release software that still contains errors, even after doing sufficient testing. No software would ever be released by its developers if they were asked to certify that it is free of errors. Testing, therefore, continues up to the point where the cost of further testing is considered to significantly outweigh the returns.
1.1.4 : Consequences of errors in software in real-life situations
Software flaws may be costly or even dangerous, so examining instances where software defects led to financial and personal loss is instructive. History is replete with such examples:
 Over 300,000 traders in the financial markets were impacted after a software error caused the London Bloomberg terminal to crash in April 2015. It forced the government to delay a 3-billion-pound debt auction.
 Nissan recalled nearly 1 million vehicles from the market because the airbag sensor software was flawed. Two accidents attributed to this software flaw have been documented.
 Starbucks' POS system malfunctioned, forcing it to close nearly 60% of its locations in the United States and Canada. The shops at one point served free coffee because they could not process purchases.
 Due to a technical error, some of Amazon's third-party sellers had their product prices slashed to 1p. They suffered severe losses as a result.
 A weakness in Windows 10: due to a defect in the win32k system, users were able to bypass security sandboxes.
 In 2015, a software flaw rendered the F-35 fighter jet incapable of accurately detecting targets.
 On April 26, 1994, an Airbus A300 operated by China Airlines crashed due to a software error, killing 264 people.
 Three patients died and three others were badly injured, beginning in 1985, when a software glitch caused Canada's Therac-25 radiation treatment system to fail and deliver deadly radiation doses to patients.
 In May 1996, a software error led to the crediting of 920 million US dollars to the bank accounts of 823 customers of a large U.S. bank.
 In April 1999, a software error caused the failure of a $1.2 billion military satellite launch, one of the most expensive such accidents in history.

1.1.5 : What are the Benefits of Software Testing?


The following are advantages of employing software testing:
Cost effectiveness: One of the key benefits of software testing is that it is cost-effective. Timely testing of any IT project enables long-term financial savings: flaws found earlier in the software testing process are less expensive to fix.
Security: This is an important advantage of software testing. People are searching for reliable products, and testing helps eradicate risks and issues early.
Product quality: Any software product must meet its quality criteria. Testing guarantees that buyers get a high-quality product.
Customer satisfaction: Satisfying consumers is the primary goal of every product. The optimum user experience is ensured through UI/UX testing.

1.1.6 : Type of Software Testing


1. Manual testing:
The process of checking the functionality of an application against the customer's needs without the help of automation tools is known as manual testing. While performing manual testing on an application, we do not need specific knowledge of any testing tool; rather, we need a proper understanding of the product so that we can easily prepare the test documents.
Manual testing can be further divided into three types of testing, which are as follows:
 White box testing
 Black box testing
 Grey box testing

Figure : Types of Testing

2. Automation testing:
Automation testing is the process of converting manual test cases into test scripts with the help of automation tools or a programming language. With the help of automation testing, we can increase the speed of our test execution, because little human effort is required.
Manual Testing vs Automation Testing
 In manual testing, the test cases are executed by a human tester; in automation testing, they are executed by software tools.
 Manual testing is time-consuming; automation testing is faster.
 Manual testing takes up human resources; automation testing requires automation tools and trained employees.
 Exploratory testing is possible in manual testing, but not in automation testing.
 The initial investment for manual testing is lower; for automation testing it is higher.
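As a minimal sketch of the conversion described above, a manual test case such as "log in with valid credentials and expect success" can be turned into an automated script. The `login()` function here is a hypothetical stand-in for the application under test, not part of any real system:

```python
# Hypothetical application code under test: a toy authenticator that
# accepts exactly one known credential pair.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

# Automated versions of two manual test cases:
def test_valid_login():
    # Manual step "enter valid credentials" -> expect success
    assert login("admin", "secret") is True

def test_invalid_login():
    # Manual step "enter wrong password" -> expect rejection
    assert login("admin", "wrong") is False

if __name__ == "__main__":
    test_valid_login()
    test_invalid_login()
    print("all tests passed")
```

Once written, such scripts can be re-run on every build at no extra human cost, which is the source of automation's speed advantage noted above.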
1.2 WHITE-BOX TESTING , BLACK-BOX TESTING and GREY BOX TESTING
Black box testing (also called functional testing) is testing that ignores the internal mechanism of
a system or component and focuses solely on the outputs generated in response to selected inputs
and execution conditions. White box testing (also called structural testing and glass box testing)
is testing that takes into account the internal mechanism of a system or component.

1.2.1 What is White-Box Testing


 White box testing is a type of software testing that examines the internal structure and design of a program or application.
 The phrase "white box" is used because of the internal viewpoint of the system. The terms "clear box", "white box" and "transparent box" refer to the ability to see the software's inner workings through its outer layer.
 Developers carry it out before sending the program to the testing team, who then conduct black-box testing. Testing the infrastructure of the application is the primary goal of white-box testing. As it covers unit testing and integration testing, it is performed at the lower levels. Given that it primarily focuses on the code structure, paths, conditions and branches of a program or piece of software, it necessitates programming skills. The main objectives of white-box testing are to focus on the flow of inputs and outputs through the program and to enhance its security.
 It is also referred to as transparent testing, code-based testing, structural testing and clear box testing. It is a good fit, and is recommended, for testing algorithms.

1.2.1.1 Types of White Box Testing in Software Testing


The following are some common types of white box testing:
 Unit testing: Tests individual units or components of the software to ensure they
function as intended.
 Integration testing: Tests the interactions between different units or components
of the software to ensure they work together correctly.
 Functional testing: Tests the functionality of the software to ensure it meets the
requirements and specifications.
 Performance testing: Tests the performance of the software under various loads
and conditions to ensure it meets performance requirements.
 Security testing: Tests the software for vulnerabilities and weaknesses to ensure it
is secure.
 Code coverage testing: Measures the percentage of code that is executed during testing, to ensure that all parts of the code are tested.
 Regression testing: Tests the software after changes have been made to ensure
that the changes did not introduce new bugs or issues.
1.2.1.2 Techniques of White Box Testing
The following techniques are used for white box testing:
 Statement coverage: This testing approach involves going over every statement in
the code to make sure that each one has been run at least once. As a result, the code is
checked line by line.
 Branch coverage: This is a testing approach in which test cases are created to ensure
that each branch is tested at least once. This method examines all potential
configurations for the system.
 Path coverage: Path coverage is a software testing approach that defines and covers
all potential pathways. From system entrance to exit points, pathways are statements
that may be executed. It takes a lot of time.
 Loop testing: With the help of this technique, loops and the values in both independent and dependent code are examined. Errors often occur at the start and end of loops. This method includes testing simple loops, nested loops and concatenated loops.
 Basis path testing: Using this methodology, control flow diagrams are created from
code and subsequently calculations are made for cyclomatic complexity. For the
purpose of designing the fewest possible test cases, cyclomatic complexity specifies
the quantity of separate routes.
o Cyclomatic complexity is a software metric used to indicate the complexity of
a program. It is computed using the Control Flow Graph of the program.
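The basis path idea above can be sketched on a toy function. The `ticket_fee()` function below is an illustrative assumption, not taken from any real system; it has two decision points, so its cyclomatic complexity is 2 + 1 = 3, meaning at least three independent paths (and hence three test cases) are needed to cover every branch:

```python
# Toy function with cyclomatic complexity V(G) = 3
# (two decisions + 1), so basis path testing needs three test cases.
def ticket_fee(age: int) -> int:
    if age < 5:        # decision 1
        return 0       # path A: small children ride free
    if age < 18:       # decision 2
        return 10      # path B: minors pay a reduced fee
    return 20          # path C: adults pay the full fee

# One test case per independent path:
assert ticket_fee(3) == 0    # exercises path A
assert ticket_fee(12) == 10  # exercises path B
assert ticket_fee(30) == 20  # exercises path C
```

Running all three assertions achieves 100% branch coverage of the function; dropping any one of them would leave a branch unexecuted.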
1.2.1.3 Advantages of White Box Testing
 Complete coverage.
 Better understanding of the system.
 Improved code quality.
 Increased efficiency.
 Early detection of errors.

1.2.1.4 Disadvantages of White Box Testing


 This testing is very expensive and time-consuming.
 A redesign of the code requires the test cases to be written again.
 Missing functionalities cannot be detected.
 This technique can be very complex and at times not realistic.
 White-box testing requires a programmer with a high level of knowledge, due to the complexity of the level of testing that needs to be done.

1.2.2 What is Black Box Testing


Testing a system as a "black box" means testing it without any knowledge of how it operates internally, i.e., it is a form of testing performed with no knowledge of a system's internals. A tester provides inputs and monitors the outputs produced by the system under test. This allows for the identification of the system's response time, usability difficulties and reliability concerns, as well as how the system reacts to anticipated and unexpected user actions.
Because it tests a system from beginning to end, black box testing is a powerful testing method. A tester may imitate user activity to check whether the system fulfills its promises. A black box test assesses every important subsystem along the route, including the UI/UX, database, dependencies and integrated systems, as well as the web server or application server.

1.2.2.1 Black Box Testing Pros and Cons
Advantages:
1. Testers do not require technical knowledge, programming or IT skills.
2. Testers do not need to learn the implementation details of the system.
3. Tests can be executed by outsourced testers.
4. Low chance of false positives.
5. Tests have lower complexity, since they simply model common user behavior.
Disadvantages:
1. Difficult to automate.
2. Requires prioritization; it is typically infeasible to test all user paths.
3. Difficult to calculate test coverage.
4. If a test fails, it can be difficult to understand the root cause of the issue.
5. Tests may be conducted at low scale or in a non-production-like environment.

1.2.2.2 Types of Black Box Testing


Black box testing can be applied to three main types of tests: functional, non-functional and regression testing.
1. Functional Testing:
Black box testing may verify specific aspects or operations of the program under test; for instance, checking that the right user credentials allow login and that the wrong ones do not.
Functional testing concentrates on the most important features of the program and on how well the system works as a whole (system testing) with its essential components integrated.
2. Non-functional Testing:
 Beyond features and functionality, black box testing allows for the inspection of additional software attributes. A non-functional test examines "how", rather than "whether", the program can carry out a certain task.
 Black box testing may determine whether the software is:
a) usable and simple for its users to comprehend;
b) performant under expected or peak loads, and compatible with relevant devices, screen sizes, browsers or operating systems;
c) exposed to security flaws or common security threats.
3. Regression Testing:
Black box testing may be employed to determine whether a new software version displays a regression, i.e., a decrease in capabilities, from one version to the next. Regression testing may be used to evaluate both functional and non-functional features of the program, such as when a particular feature no longer functions as expected in the new version, or when a formerly fast operation becomes much slower in the new version.
1.2.2.3 Black Box Testing Techniques
1. Equivalence partitioning:
Testing professionals may organize potential inputs into "partitions" and test just one sample input from each category. For instance, if a system asks for a user's birth date and returns one answer for users under the age of 18 and a different response for users over 18, it is sufficient for testers to verify one birth date in the "under 18" group and one date in the "over 18" group.

2. Boundary value analysis:
Testers can determine whether a system responds differently around a certain boundary value. For instance, a particular field may only support values in the range 0 to 99. Testing personnel may concentrate on the boundary values (-1, 0, 99 and 100) to determine whether the system appropriately accepts and rejects inputs.
3. Decision Table Testing:
Numerous systems produce results depending on a set of conditions. Once rules that are combinations of conditions have been identified, each rule's outcome can be determined, and test cases may then be created for each rule.
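The first two techniques can be sketched together on the 0–99 field example. The `accepts()` validator below is a hypothetical stand-in for the field under test, chosen only to make the partitions and boundaries concrete:

```python
# Hypothetical validator for a field that accepts integers in [0, 99].
def accepts(value: int) -> bool:
    return 0 <= value <= 99

# Equivalence partitioning: one representative input per class.
assert accepts(-50) is False   # class: below the valid range
assert accepts(42) is True     # class: inside the valid range
assert accepts(150) is False   # class: above the valid range

# Boundary value analysis: values at and just beyond each edge.
assert accepts(-1) is False    # just below the lower boundary
assert accepts(0) is True      # on the lower boundary
assert accepts(99) is True     # on the upper boundary
assert accepts(100) is False   # just above the upper boundary
```

Seven test values here replace the 100 in-range and infinitely many out-of-range inputs, which is the point of both techniques: maximum defect-finding power per test case.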
1.2.3 Gray Box Testing:
 Gray box testing is a combination of the black box testing technique and the white box testing technique in software testing.
 Gray-box testing exercises the inputs and outputs of a program for testing purposes, but the test design is informed by knowledge of the code.
 Gray-box testing is well suited for web application testing because it factors in the high-level design, the environment and the interoperability conditions.

1.2.4 Differences between Black Box Testing, Gray Box Testing and White Box Testing:

 Granularity: Black box testing has low granularity; gray box testing has a medium level of granularity; white box testing has high-level granularity.
 Performed by: Black box testing is done by end-users and also by testers and developers; gray box testing is done by end-users (called user acceptance testing) and also by testers and developers; white box testing is generally done by testers and developers.
 Knowledge of internals: In black box testing, the internals are not required to be known; in gray box testing, the internals relevant to the testing are known; in white box testing, the internal code of the application and database is known.
 Basis of test cases: Black box testing is based on requirements, with test cases derived from the functional specifications, as the internals are not known; gray box testing provides better variety and depth in test cases on account of high-level knowledge of the internals; white box testing can exercise the code with a relevant variety of data.
 Approach: Black box testing validates the outputs for given inputs, the application being tested as a black box; gray box testing allows a better variety of inputs and the ability to extract test results from the database for comparison with expected results; white box testing involves structural testing and enables coverage of logic, decisions, etc. within the code.
 Other names: Black box testing is also called opaque-box testing, closed-box testing, input-output testing, data-driven testing, behavioral testing and functional testing; gray box testing is also called translucent box testing; white box testing is also called glass-box testing, clear-box testing, design-based testing, logic-based testing, structural testing and code-based testing.
 Design techniques: Black-box test design techniques include equivalence partitioning and error guessing; gray box techniques include matrix testing and regression testing; white-box techniques include control flow testing and data flow testing.
 Security: Black box testing provides resilience and security against viral attacks; gray box and white box testing do not.
1.3 SOFTWARE TESTING LIFE CYCLE
The Software Testing Life Cycle (STLC) is a systematic approach to testing a software
application to ensure that it meets the requirements and is free of defects. It is a process that
follows a series of steps or phases, and each phase has specific objectives and deliverables. The
STLC is used to ensure that the software is of high quality, reliable, and meets the needs of the
end-users. The main goal of the STLC is to identify and document any defects or issues in the
software application as early as possible in the development process. This allows for issues to be
addressed and resolved before the software is released to the public.
The stages of the STLC include Requirement Analysis, Test Planning, Test case
Development, Test Environment Setup, Test Execution and Test Closure. Each of these stages
includes specific activities and deliverables that help to ensure that the software is thoroughly
tested and meets the requirements of the end users.
Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to release
high-quality software that meets the needs of their customers, ultimately leading to customer
satisfaction and business success.

Phases of STLC:

1. Requirement Analysis: Requirement analysis is the first step of the Software Testing Life Cycle (STLC). In this phase the quality assurance team understands the requirements, i.e., what is to be tested. If anything is missing or not understandable, the quality assurance team meets with the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage include:
• Reviewing the software requirements document (SRD) and other related documents
• Interviewing stakeholders to gather additional information
• Identifying any ambiguities or inconsistencies in the requirements
• Identifying any missing or incomplete requirements
• Identifying any potential risks or issues that may impact the testing process
• Creating a requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a clear understanding of the
software requirements and should have identified any potential issues that may impact the
testing process. This will help to ensure that the testing process is focused on the most
important areas of the software and that the testing team is able to deliver high-quality results.
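The requirement traceability matrix (RTM) mentioned above can be sketched as a plain mapping from requirement IDs to the test cases that cover them. All IDs below are hypothetical placeholders:

```python
# Illustrative RTM: requirement IDs mapped to covering test case IDs.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],   # no covering test case yet -- a coverage gap
}

# The RTM makes coverage gaps easy to detect during requirement analysis:
uncovered = [req for req, cases in rtm.items() if not cases]
print("uncovered requirements:", uncovered)   # prints ['REQ-003']
```

Keeping the RTM current through later phases lets the team show, at any point, which requirements are still untested.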
2. Test Planning: Test planning is the phase of the software testing life cycle in which all testing plans are defined. In this phase, the manager of the testing team calculates the estimated effort and cost of the testing work. This phase starts once the requirement-gathering phase is completed.
The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and techniques
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities that
will be performed, and a clear understanding of the testing objectives, scope, and deliverables.
This will help to ensure that the testing process is well-organized and that the testing team is able
to deliver high-quality results.
3. Test Case Development: The test case development phase starts once the test
planning phase is completed. In this phase, the testing team writes down detailed test cases
and prepares the required test data. Once the test cases are prepared, they are reviewed by
the quality assurance team.
The activities that take place during the Test Case Development stage include:
• Identifying the test cases that will be developed
• Writing test cases that are clear, concise, and easy to understand
• Creating test data and test scenarios that will be used in the test cases
• Identifying the expected results for each test case
• Reviewing and validating the test cases
• Updating the requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a set of comprehensive and
accurate test cases that provide adequate coverage of the software or application. This will help
to ensure that the testing process is thorough and that any potential issues are identified and
addressed before the software is released.
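A written test case can be captured in a lightweight structure like the following sketch; the field names and values are illustrative, not a format mandated by the STLC.

```python
from dataclasses import dataclass

# A hypothetical shape for a written test case. The fields mirror the
# activities above: steps, test data, and an expected result, plus a link
# back to the requirement traceability matrix.
@dataclass
class WrittenTestCase:
    case_id: str
    requirement_id: str      # traceability link for the RTM
    steps: list
    test_data: dict
    expected_result: str

tc = WrittenTestCase(
    case_id="TC-001",
    requirement_id="REQ-001",
    steps=["Open login page", "Enter credentials", "Click Login"],
    test_data={"user": "alice", "password": "secret"},
    expected_result="User is redirected to the dashboard",
)
print(tc.case_id, "->", tc.requirement_id)
```

Keeping test cases in a structured form like this makes the review step and the RTM update mechanical rather than manual.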
4. Test Environment Setup: Test environment setup is a vital part of the STLC. The test
environment determines the conditions under which the software is tested. This is an
independent activity and can be started alongside test case development. The testing team is
not involved in this process; either the developer or the customer creates the testing environment.
5. Test Execution: Once test case development and test environment setup are complete,
the test execution phase begins. In this phase, the testing team executes the test cases
prepared in the earlier step.
The activities that take place during the test execution stage of the Software Testing Life
Cycle (STLC) include:
• Test execution: The test cases and scripts created in the test design stage are run
against the software application, and the results are collected and analyzed to identify any
defects or issues.
• Defect logging: Any defects or issues that are found during test execution are
logged in a defect tracking system, along with details such as the severity, priority, and
description of the issue.
• Test data preparation: Test data is prepared and loaded into the system for test
execution.
• Test environment setup: The necessary hardware, software, and network
configurations are set up for test execution.
• Test result analysis: The results of the test execution are analyzed to determine
the software's performance and identify any defects or issues.
• Defect retesting: Any defects that are identified during test execution are
retested to ensure that they have been fixed correctly.
• Test Reporting: Test results are documented and reported to the relevant
stakeholders.
It is important to note that test execution is an iterative process and may need to
be repeated multiple times until all identified defects are fixed and the software is
deemed fit for release.
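The execute / log / retest cycle described above can be sketched minimally as follows; the test cases and defect fields below are made-up stubs, not a real defect-tracking API.

```python
# A minimal sketch of one test-execution pass with defect logging.
# Each "test case" here is a stub callable that returns True on pass.
test_cases = {
    "TC-001": lambda: 2 + 2 == 4,         # passes
    "TC-002": lambda: "admin" == "user",  # fails -> logged as a defect
}

defect_log = []
for case_id, run in test_cases.items():
    if not run():
        # Fields mirror the defect-logging activity: severity, priority, etc.
        defect_log.append({"case": case_id, "severity": "high",
                           "priority": "P1", "status": "open"})

print(defect_log)  # one open defect, for TC-002
```

In a real project this loop repeats: defects are fixed, retested, and the log is driven toward zero open entries before release.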
6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC)
where all testing-related activities are completed and documented. The main objective of the
test closure stage is to ensure that all testing-related activities have been completed and that
the software is ready for release.
At the end of the test closure stage, the testing team should have a clear
understanding of the software’s quality and reliability, and any defects or issues that were
identified during testing should have been resolved. The test closure stage also includes
documenting the testing process and any lessons learned so that they can be used to
improve future testing processes
The main activities that take place during the test closure stage include:
• Test summary report: A report is created that summarizes the overall testing
process, including the number of test cases executed, the number of defects found, and the
overall pass/fail rate.
• Defect tracking: All defects that were identified during testing are tracked and
managed until they are resolved.
• Test environment clean-up: The test environment is cleaned up, and all test data
and test artifacts are archived.
• Test closure report: A report is created that documents all the testing-related
activities that took place, including the testing objectives, scope, schedule, and resources
used.
• Knowledge transfer: Knowledge about the software and testing process is shared
with the rest of the team and any stakeholders who may need to maintain or support the
software in the future.
• Feedback and improvements: Feedback from the testing process is collected and
used to improve future testing processes
1.4 V-MODEL OF SOFTWARE TESTING
The V-Model provides a systematic and visual representation of the software
development process. The V-Model is also referred to as the Verification and Validation
Model. Testing of the product is planned in parallel with the corresponding stage of
development.
Verification: It involves a static analysis method (review) done without executing code.
It is the process of evaluating the product development process to find whether the
specified requirements are met. Verification ensures that we build the product right.
Validation: It involves dynamic analysis methods (functional and non-functional); testing is
done by executing the code. Validation is the process of checking the software after
completion of the development process to determine whether it meets the customer's
expectations and requirements. Validation ensures that we build the right product.
So the V-Model contains Verification phases on one side and Validation phases on the
other. The Verification and Validation processes are joined by the coding phase at the bottom
of the V-shape; hence it is known as the V-Model.
(Diagram: the V-shape, with the Verification phases down the left arm and the Validation
phases up the right arm, joined at the Coding phase.)
The various phases of the Verification side of the V-model are:
1. Business requirement analysis: This is the first step where product requirements
are understood from the customer's side. This phase contains detailed communication to
understand customer's expectations and exact requirements.
2. System Design: In this stage, system engineers analyze and interpret the business
requirements of the proposed system by studying the user requirements document.
3. Architecture Design: This phase produces the high-level design, which typically
consists of the list of modules, a brief description of the functionality of each module, their
interface relationships, dependencies, database tables, architecture diagrams, technology
details, etc. Integration test planning is carried out in this phase.
4. Module Design: In the module design phase, the system breaks down into small
modules. The detailed design of the modules is specified, which is known as Low-Level
Design
5. Coding Phase: After designing, the coding phase starts. A suitable programming
language is chosen based on the requirements, and coding follows agreed guidelines and
standards. Before check-in to the repository, the final build is optimized for better
performance, and the code goes through many code reviews.

The various phases of the Validation side of the V-model are:
1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the
module design phase. These UTPs are executed to eliminate errors at code level or unit level.
A unit is the smallest entity which can independently exist, e.g., a program module. Unit
testing verifies that this smallest entity functions correctly when isolated from the rest of
the code/units.
2. Integration Testing: Integration Test Plans are developed during the Architectural
Design Phase. These tests verify that units created and tested independently can coexist
and communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase.
Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's
business team. System testing ensures that the expectations from the application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement
analysis part. It involves testing the software product in the user's environment. Acceptance
tests reveal compatibility problems with the other systems available in the user environment.
They also uncover non-functional problems, such as load and performance defects, in the
real user environment.

When to use V-Model?
 When the requirement is well defined and not ambiguous.
 The V-shaped model should be used for small to medium-sized projects
where requirements are clearly defined and fixed.
 The V-shaped model should be chosen when ample technical resources
with essential technical expertise are available.
Advantages (Pros) of V-Model:
1. Easy to understand.
2. Testing activities like planning and test design happen well before coding.
3. This saves a lot of time.
4. Avoids the downward flow of defects.
5. Works well for small projects where requirements are easily understood.
Disadvantages (Cons) of V-Model:
1. Very rigid and least flexible.
2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early prototypes of
the software are produced.
4. If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.
1.5 PROGRAM CORRECTNESS AND VERIFICATION
Program correctness is the condition or state of software in which it is able to perform
as expected, as per the user requirements. We discuss software correctness from two
perspectives, the operational and the symbolic approach. To show that a program is correct:
• from the operational perspective, we use testing;
• from the symbolic perspective, we use proof.
The two perspectives, and with them testing and proof, are tightly related, and we
make ample use of this relationship.
Testing a Simple Fragment (Version 1)
Knowing about the relationship between values and facts, we can formulate a
simple testing method for program fragments. The fragments have the following general
shape, consisting of three parts
initialize variables
carry out computation
check condition
The initialize variables part sets up the input values for the fragment. Usually, the
input values are chosen taking into account conditions on the inputs, e.g., to avoid
division by zero. The carry out computation part contains the "program". The check
condition part specifies a condition that determines whether the program is correct.
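The three-part shape can be written directly in code. As an illustration (the integer-division fragment below is our own example, not one from the text):

```python
# A Version 1 fragment: initialize -> compute -> check.

# initialize variables (inputs chosen to avoid division by zero)
dividend, divisor = 17, 5

# carry out computation: integer division with remainder
quotient = dividend // divisor
remainder = dividend % divisor

# check condition: the result must satisfy the division property
assert dividend == quotient * divisor + remainder and 0 <= remainder < divisor
print("check passed")
```

Running the fragment either passes silently (the check holds) or aborts at the assert, which is exactly the pass/fail signal the testing method needs.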
Testing a Simple Fragment (Version 2)
Instead of giving an initialization as in Version 1, we can also use an assume statement
to impose an initial condition for a test. We can specify,
assume initial condition on variables
specify computation
assert final condition on variables
The fragment terminates gently if the initial condition is not met, and aborts with an error
if the initial condition was met but the final condition is not. This way of specifying tests turns
out to be a foundation for deriving test cases: with this method, test cases can be developed
systematically.
Program Correctness
Following the preceding discussion, we base our notion of program correctness on
two variants:
• Pairs of initialize/check statements in program fragments and tests. These are
executable and can be evaluated during verification.
• Pairs of assume/assert statements in program fragments and tests. These are
executable and can be evaluated at run-time.
We call the first component (the initialize or assume statement) a pre-condition, and the
second component (the check or assert statement) a post-condition.
Program Verification
To demonstrate that a program is correct, we verify it. We consider two principal
methods for verifying programs.
 Proof
Using logical deduction, we show that any execution of the program starting in a
state satisfying the pre-condition terminates in a state satisfying its post-condition. In
other words, we show that the program is correct.
 Testing
Executing a program for specific states satisfying the pre-condition, we check
whether on termination a state is reached that satisfies the post-condition. It is up to us to
determine suitable pairs of states, called test cases. This approach does not show that a
program is correct. In practice, we assume that a program that has been subjected to a
sufficient number of tests is correct. This kind of reasoning is called induction: from a
collection of tests that confirm correctness for precisely those tests, we infer that this is
the case for all possible tests. Testing is a validation method: it is entirely possible that all
the tests we have provided appear to confirm correctness, but later we find a test case
that refutes the conclusion. Then either the program contains an error or the test case is wrong.
Verification Vs Validation

Verification:
• Refers to the set of activities that ensure the software correctly implements a specific function.
• Includes checking documents, designs, and code.
• Is static testing.
• Methods used: reviews, walkthroughs, inspections, and desk-checking.
• Checks whether the software conforms to its specifications or not.
• Verification asks: Are we building the product right?

Validation:
• Refers to the set of activities that ensure the software that has been built is traceable to
customer requirements.
• Includes testing and validating the actual product.
• Is dynamic testing.
• Methods used: black-box testing, white-box testing, and non-functional testing.
• Checks whether the software meets the requirements and expectations of the customer or not.
• Validation asks: Are we building the right product?
1.6 RELIABILITY VERSUS SAFETY
1.6.1 Software Reliability
Software reliability is a measure of how the software is capable of maintaining
its level of performance under stated conditions for a stated period of time. Software
reliability engineering involves much more than analyzing test results, estimating remaining
faults, and modeling future failure probabilities.
Although in most organizations software test is no longer an afterthought, management
is almost always surprised by the cost and schedule requirements of the test program, and
testing is often downgraded in favor of design activities. Often, adding a new feature will seem
more beneficial than performing a complete test of the existing features. A good software
reliability engineering program, introduced early in the development cycle, mitigates these
problems through the reliability program tasks below.
Reliability Program Tasks:
1. Reliability Allocation
Reliability allocation is the task of defining the necessary reliability of a software item.
The item may be a part of an integrated hardware/software system, may be a relatively
independent software application, or, more and more rarely, a standalone software program. In
any of these cases, the goal is either to bring system reliability within a strict constraint
required by a customer, or to optimize reliability within schedule and cost constraints.
2. Defining and Analyzing Operational Profiles
The reliability of software is strongly tied to the operational usage of an application,
much more strongly than the reliability of hardware. A software fault may lead to a system failure
only if that fault is encountered during operational usage. If a fault is not exercised in a specific
operational mode, it will not cause failures at all. It will cause failures more often if it is located in
code that is part of a frequently used "operation" (An operation is defined as a major logical task,
usually repeated multiple times within an hour of application usage). Therefore in software
reliability engineering, we focus on the operational profile of the software which weighs the
occurrence probabilities of each operation. Unless safety requirements indicate a modification of
this approach we will prioritize our testing according to this profile.
Software engineers have to complete the following tasks to generate a usable
operational profile:
• Determine the operational modes (high traffic, low traffic,
high maintenance, remote use, local use, etc)
• Determine operation initiators (components that initiate the operations
in the system)
• Determine and group "Operations" so that the list includes only operations
that are significantly different from each other (and therefore may present different
faults)
• Determine occurrence rates for the different operations
• Construct the operational profile based on the individual operation
probabilities of occurrence.
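Once occurrence rates are known, constructing the profile is just normalizing those rates into probabilities. The operations and rates below are invented for illustration:

```python
# A hedged sketch of building an operational profile from hypothetical
# occurrence rates (operations per hour) for each grouped operation.
occurrence_rates = {
    "process_order": 120.0,
    "query_status": 60.0,
    "generate_report": 15.0,
    "admin_maintenance": 5.0,
}

total = sum(occurrence_rates.values())
profile = {op: rate / total for op, rate in occurrence_rates.items()}

# Test effort is then allocated in proportion to these probabilities,
# most-frequent operations first.
for op, p in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{op:20s} {p:.3f}")
```

Here `process_order` dominates the profile (probability 0.6), so it would receive the largest share of the test cases unless safety requirements dictate otherwise.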
3. Test Preparation and Plan
Test preparation is a crucial step in the implementation of an effective software
reliability program. A test plan that is based on the operational profile on the one hand, and
subject to the reliability allocation constraints on the other, will be effective in achieving the
program's reliability goals in the least amount of time and cost.
Software Reliability Engineering is concerned not only with feature and regression test,
but also with load test and performance test. All these should be planned based on the activities
outlined above. The reliability program will inform and often determine the following test
preparation activities:
• Assessing the number of new test cases required for the current release
• New test case allocation among the systems (if multi-system)
• New test case allocation for each system among its new operations
• Specifying new test cases
• Adding the new test cases to the existing test cases from previous releases
4. Software Reliability Models
Software reliability engineering is often identified with reliability models, in
particular reliability growth models. These models, when applied correctly, are
successful at providing guidance to management decisions such as:
• Test schedule
• Test resource allocation
• Time to market
• Maintenance resource allocation
The application of reliability models to software testing results allows us to infer
the rate at which failures are encountered (depending on usage profile) and, more
importantly, the changes in this rate (reliability growth). The ability to make these
inferences depends critically on the quality of test results. It is essential that testing be
performed in such a way that each failure incident is accurately reported.
1.6.2 Software Safety
Software safety is about preventing a system from reaching dangerous states. As
systems and products become more and more dependent on software components, it is no
longer realistic to develop a system safety program that does not include the software
elements.
Does software fail?
We tend to believe that well written and well tested safety critical software would
never fail. Experience proves otherwise with software making headlines when it actually
does fail, sometimes critically. Software does not fail the same way as hardware does,
and the various failure behaviors we are accustomed to from the world of hardware are
often not applicable to software. However, software does fail, and when it does, it can be
just as catastrophic as hardware failures.
Safety-critical software
Safety-critical software is very different from both non-critical software and
safety-critical hardware. The difference lies in the massive testing program that such
software undergoes.
What are "software failure modes"?
Software, especially in critical systems, tends to fail where least expected.
Software does not "break" but it must be able to deal with "broken" input and conditions,
which often cause the "software failures". The task of dealing with abnormal conditions
and inputs is handled by the exception code dispersed throughout the program. Setting up
a test plan and exhaustive test cases for the exception code is by definition difficult and
somewhat subjective.
Failures can be due to:
• failed hardware
• timing problems
• harsh/unexpected environmental conditions
• multiple changes in conditions and inputs that are beyond what the
hardware is able to deal with
• unanticipated conditions during software mode changes
• bad or unexpected user input
Often the conditions most difficult to predict are multiple, coinciding,
irregular inputs and conditions.
Safety-critical software is usually tested to the point that no new critical failures are
observed. This of course does not mean that the software is fault-free at this point, only that
failures are no longer observed in test.
Why are the faults leading to these types of failures overlooked in test? They are faults
that are not tested for, for any of the following reasons:
• Faults in code that is not frequently used and therefore not well represented in
the operational profiles used for testing
• Faults caused by multiple abnormal conditions that are difficult to test
• Faults related to interfaces and controls of failed hardware
• Faults due to missing requirements
It is clear why these types of faults may remain outside of a normal, reliability focused, test
plan.
1.7 FAILURES, ERRORS AND FAULTS (DEFECTS)

Defect:
Defect:
A defect refers to a situation when the application is not working as per the requirement and the
actual and expected result of the application or software is not in sync with each other.
 A defect is an issue in the application code that can affect the whole program.
 It represents the inability of the application to meet the acceptance criteria, preventing
the software from performing the desired work.
 A defect can arise when a developer makes a major or minor mistake during the
development phase.
Error:
Error is a situation that happens when the development team or a developer fails to
understand a requirement definition, so that the misunderstanding gets translated into buggy
code. This situation is referred to as an error, and it is mainly a term used by developers.
 Errors are generated by wrong logic, syntax, or loops, and can impact the end-user
experience.
 An error is measured as the difference between the expected results and the actual results.
 Errors arise for several reasons, such as design issues, coding issues, or system
specification issues, and lead to issues in the application.
Fault:
Sometimes, due to factors such as a lack of resources or not following proper steps, a fault
occurs in software, meaning that the logic needed to handle errors was not incorporated in the
application. This is an undesirable situation, and it mainly happens due to invalid documented
steps or a lack of data definitions.
 It is an unintended behavior by an application program.
 It causes a warning in the program.
 If a fault is left untreated it may lead to failure in the working of the deployed code.
 A minor fault in some cases may lead to high-end error.
 There are several ways to prevent faults like adopting programming techniques,
development methodologies, peer review, and code analysis.
Failure:
Failure is the accumulation of several defects that ultimately lead to Software failure and results
in the loss of information in critical modules thereby making the system unresponsive. A failure
is the result of execution of a fault and is dynamic in nature. Generally, such situations happen
very rarely because before releasing a product all possible scenarios and test cases for the code
are simulated. Failure is detected by end-users once they face a particular issue in the software.
 Failure can happen due to human errors or can also be caused intentionally in the system by
an individual.
 It is a term that comes after the production stage of the software.
 It can be identified in the application when the defective part is executed.
Bug: An informal name given to a defect.
Defect: The difference between the actual outcomes and the expected outputs.
Error: A mistake made in the code, because of which we cannot execute or compile the code.
Fault: A state that causes the software to fail to accomplish its essential function.
Failure: The result of the execution of a fault; it is dynamic in nature.
1.8. SOFTWARE TESTING PRINCIPLES
Software testing is a process that involves exercising software or an application in
order to find faults or flaws. Following certain guidelines helps testers test software
without creating any problems, and also saves the test engineers' time and effort. The
seven software testing principles are given below:
1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence-of-errors is a fallacy
1. Testing shows the presence of defects:
• The test engineer puts the application through testing to find bugs and flaws. Testing
can only demonstrate the presence of defects in an application or program, never their
absence. The main goal of testing is to find, using a variety of methods and testing
techniques, any flaw that might prevent the product from fulfilling the client's needs, since
every test should be traceable back to a customer requirement.
• Testing reduces the number of flaws in a program, but this does not mean the
application is defect-free: software may appear bug-free despite extensive testing, and the
end-user may still run into flaws that were not discovered before deployment on the
production server.
2. Exhaustive testing is not possible:

It is impossible to test all the modules and their features with every effective and
ineffective combination of the input data: doing so would require an endless number of test
cases, and the majority of that hard labour would be unproductive. Instead of exhaustive
testing, we test a representative subset of inputs, prioritized according to the significance of
the modules.
3. Early testing:
• Here, early testing refers to the idea that all testing activities should begin in the
early stages of the requirement analysis stage of the software development life cycle in order
to identify the defects. If we find the bugs at an early stage, we can fix them right away,
which could end up costing us much less than if they are discovered in a later phase of the
testing process.
• Since testing relies on the requirement definition documents, starting early also
means that requirements which were mistakenly specified can be corrected before or
during the development process, rather than afterwards.
4. Defect clustering:
• Defect clustering means that during testing, most of the identified problems are
associated with a small number of modules. There are several possible explanations for
this, including intricate modules, difficult code, and more.
• According to the Pareto principle, such software roughly follows an 80/20 rule:
approximately eighty percent of the defects are found in twenty percent of the modules.
This allows us to locate the problematic modules, but the approach has limitations: if the
same tests are run repeatedly, they will not be able to spot any newly introduced flaws.
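Defect clustering can be made visible simply by counting defects per module; the module names and defect records below are fabricated for illustration.

```python
from collections import Counter

# Each entry is the module in which one logged defect was found.
# The data is made up to show a typical Pareto-like skew.
defects = ["payment", "payment", "payment", "payment", "auth",
           "payment", "auth", "payment", "reports", "payment"]

counts = Counter(defects)
total = sum(counts.values())
for module, n in counts.most_common():
    print(f"{module:10s} {n:2d}  ({n / total:.0%})")
# the 'payment' module alone accounts for 70% of these defects
```

A chart like this, refreshed each test cycle, tells the team which modules deserve deeper (and periodically revised) test coverage.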
5. Beware of the pesticide paradox:
This is based on the theory that when you use pesticide repeatedly on crops, insects will
eventually build up an immunity, rendering it ineffective. Similarly, with testing, if the same
tests are run continuously then – while they might confirm the software is working – eventually
they will fail to find new issues. It is important to keep reviewing your tests and modifying or
adding to your scenarios to help prevent the pesticide paradox from occurring – maybe using
varying methods of testing techniques, methods and approaches in parallel.
6. Testing is context dependent:
Testing is ALL about the context. The methods and types of testing carried out can
completely depend on the context of the software or systems – for example, an e-commerce
website can require different types of testing and approaches to an API application, or a database
reporting application. What you are testing will always affect your approach.
7. Absence-of-errors is a fallacy (myth):
If your software or system is unusable (or does not fulfil users' wishes) then it does not
matter how many defects are found and fixed; it is still unusable. In this sense, it is irrelevant
how issue- or error-free your system is: if the usability is so poor that users are unable to
navigate, or it does not match the business requirements, then it has failed, despite having few
bugs.
It is important, therefore, to run tests that are relevant to the system’s requirements. You
should also be testing your software with users – this can be done against early prototypes (at the
usability testing phase), to gather feedback that can be used to ensure and improve usability.
Remember, just because there might be a low number of issues, it does not mean your software
is shippable – meeting client expectations and requirements are just as important as ensuring
quality.
1.9. PROGRAM INSPECTIONS
Program or software inspection refers to a peer review of software to identify
bugs or defects in the early stages of the SDLC. It is a formal review that ensures the
documentation produced during a given stage is consistent with previous stages and
conforms to pre-established rules and standards.
Software inspection involves people examining the software product to discover defects
and inconsistencies. Since it doesn’t require system execution, inspection is usually done before
implementation.
Purpose / Advantages of software inspection:
Software inspection aims to identify software defects and deviations, ensuring the
product meets customer requirements, wants, and needs. Software inspection is designed to
uncover defects or bugs by examination, unlike testing, which finds them by executing the
software. The purposes can be given as below:
○ Identifying and resolving defects early
○ Enhancing code readability
○ Improving team collaboration
○ Enhancing code maintainability
○ Improving code efficiency
○ Enhancing security
○ Improving the overall quality of the software
Types of Software Inspections:
1. Document inspection: Here, the documents produced for a given phase are inspected,
further focusing on their quality, correctness, and relevance.
2. Code inspection: The code, program source files, and test scenarios are inspected and
reviewed.
Who are the key parties involved?
 Moderator: A facilitator who organizes and reports on the inspection.
 Author: The person who produced the work product being inspected.
 Reader: A person who guides the examination of the software product.
 Recorder: An inspector who logs all the defects found.
 Inspector: The inspection team member responsible for identifying the defects.
Software Inspection Process:
Software inspection involves six steps – Planning, Overview, Preparation, Meeting,
Rework, and Follow-up.
1. Planning
The planning phase starts with the selection of a group review team. A moderator plans
the activities performed during the inspection and verifies that the software entry criteria are
met.
2. Overview
The overview phase intends to disseminate information regarding the background of the
product under review. Here, a presentation is given to the inspector with some background
information needed to review the software product properly.
3. Preparation
In the individual preparation phase, the inspector collects all the materials needed for
inspection. Each reviewer studies the project individually and notes the issues they encounter.
4. Meeting
The moderator conducts the meeting to collect and review defects. Here, the reader reads
through the product line by line while the inspector points out the flaws. All issues are raised,
and suggestions may be recorded.
5. Rework
Based on meeting notes, the author changes the work product.
6. Follow-up
In the last phase, the moderator verifies if necessary changes are made to the software
product, compiling a defect summary report.
Disadvantages of Software Inspection:
 It is a time-consuming process.
 Software inspection requires discipline.
 Can be subject to bias
 Limited to detecting syntax errors
 Can be costly
1.10. STAGES OF TESTING / LEVELS OF TESTING

1.10.1 UNIT TESTING
Unit testing is a software development approach in which the smallest testable components,
or units, of an application are checked one by one. Unit tests are carried out by software
developers and sometimes by QA personnel. A unit is a single testable part of a software
system and is tested during the development phase of the application software. Unit testing's
primary goal is to isolate a piece of written code and verify that it functions as intended.
 Teams should run unit tests often, whether manually or, more frequently, automatically.
 Automated methods often create test cases using a testing framework. In addition to
presenting a summary of the test cases, these frameworks are configured to flag
and report any failed test cases.
Unit Test Lifecycle:
The life cycle of a unit test is to plan, implement, review and maintain
1. Review the code written: According to the unit test life cycle, you first outline the
requirements of your code and then attempt to create a test case for each of them,
reviewing the written code as you go.
2. Check in code to the repository: The reviewed unit is put into the repository for
further testing.
3. Check out code from the repository: Select the unit for which the testing has to be done.
4. Make suitable changes: When the time comes, make suitable changes to the unit after
analyzing each function or method. This gives the tester insight into what is going
on in that piece of code, for example:
 Parameters being passed in
 Code doing its job
 Code returning something
5. Execute the test and compare the expected and actual results: This phase of the
Unit testing life cycle involves developing a test by creating a test object, selecting
input values to execute the test, executing the test, and comparing the expected and
actual results
6. Fix the detected bugs in the code: Unit testing also gives developers peace of mind when
adding or modifying code: if they break something, they will be notified immediately
during testing. This way, problems can be fixed before they ever reach production and
cause issues for end users.
7. Re-execute the tests to verify them: Unit testing is a great way for developers to keep
track of their changes, which can be especially important for life cycle methods that may
not have a visual representation. Re-executing the tests after each change helps ensure
everything is still working as expected.
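The lifecycle above can be sketched with a minimal Python example using the standard `unittest` framework. The `apply_discount` function and its expected values are hypothetical, invented purely for illustration:

```python
import unittest

# Hypothetical unit under test (invented for illustration).
def apply_discount(price, percent):
    """Return the price reduced by percent; reject invalid input."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Parameters passed in, code does its job, code returns something.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Execute the tests and compare expected vs. actual results (lifecycle step 5).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()  # re-execute after every change (step 7)
```

Frameworks such as unittest both summarize the test cases and flag any failures, as described above.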
Unit testing advantages:
There are many advantages to unit testing, including the following:
 The earlier an issue is discovered, the less likely it is to compound into larger mistakes.
 Fixing issues as they arise is often less expensive than waiting until they become
serious.
 Debugging procedures are simplified.
 Developers can modify the codebase with confidence.
 Code can be reused by developers and transferred to new projects.
Unit testing disadvantages:
While unit testing is integral to any software development and testing strategy, there are
some aspects to be aware of. Disadvantages of unit testing include the following:
 Not all bugs will be found during tests.
 Unit testing does not identify integration flaws; it just checks data sets and their
functionality.
 To test one line of code, more lines of test code may need to be developed, which
might require additional time.
 To successfully apply unit testing, developers may need to pick up new skills, such
as how to utilize certain automated software tools.
1.10.2 INTEGRATION TESTING
● The second stage of the software testing process, after unit testing, is known as
integration testing. Integration testing is the process of inspecting various parts or units
of a software project to reveal flaws and ensure that they function as intended.
● Integration testing is the process of testing the interface between two software units
or modules. It focuses on determining the correctness of the interface.
● The purpose of integration testing is to expose faults in the interaction between
integrated units.
● The typical software project often comprises multiple software modules, many of
which were created by various programmers. Integration testing demonstrates to the
group how effectively these dissimilar components interact.
Why perform integration testing?
 Beyond the basic fact that all software programs must be tested before being made
available to the general public, there are several specific reasons to perform
integration testing.
 Incompatibility between program components can cause errors.
 Every software module must be able to communicate with the database, and
requirements can change as a result of customer feedback; such new requirements must
also be tested, even if earlier tests did not cover them.
 Every software developer has their own conceptual framework and coding logic.
Integration testing guarantees that these diverse elements work together flawlessly.
 Modules often interface with third-party APIs or tools, so integration testing is
needed to confirm that the data these tools receive is accurate.
 There may be hardware compatibility issues.
Types of Integration Testing:
Big Bang Integration Testing:
 All the modules of the system are simply put together and tested.
 This approach is practicable only for very small systems. If an error is found during
integration testing, it is very difficult to localize, as it may potentially belong to any of
the modules being integrated.
Bottom-Up Integration Testing:
 In bottom-up testing, the lowest-level modules are tested first and then combined with
progressively higher-level modules until all modules have been tested.
Top-Down Integration Testing:
 First, high-level modules are tested, then low-level modules, and finally the low-level
modules are integrated with the high-level ones to ensure the system works as intended.
Mixed/Sandwich Integration Testing:
 Mixed integration testing follows a combination of the top-down and bottom-up testing
approaches.
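As an illustration of testing the interface between two units, here is a hedged Python sketch of the top-down approach, where a stub (`InventoryStub`) stands in for a lower-level module that may not be finished yet. All names are hypothetical:

```python
# Hypothetical two-module system: an order module calling an inventory module.

class InventoryStub:
    """Stub standing in for the real inventory module (top-down integration)."""
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        # Reserve stock if enough is available.
        if self.stock.get(item, 0) >= qty:
            self.stock[item] -= qty
            return True
        return False

class OrderModule:
    """Higher-level module whose interface to inventory is under test."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        # The interface being exercised: order -> inventory.
        return "confirmed" if self.inventory.reserve(item, qty) else "rejected"

# Integration check across the interface between the two units.
orders = OrderModule(InventoryStub({"pen": 5}))
assert orders.place_order("pen", 3) == "confirmed"
assert orders.place_order("pen", 3) == "rejected"  # only 2 left in stock
```

The stub would later be replaced by the real inventory module and the same interface checks re-run.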
Advantages of Integration Testing
 Integration testing ensures that every integrated module functions correctly.
 Integration testing uncovers interface errors.
 Testers can initiate integration testing as soon as a module is completed; they do not
need to wait for every other module to be done and ready for testing.
 Testers can detect bugs, defects and security issues.
 Integration testing provides testers with a comprehensive analysis of the
whole system, dramatically reducing the likelihood of severe connectivity
issues.
Challenges of Integration Testing:
Unfortunately, integration testing has some difficulties to overcome as well.
 Questions will arise about how components from two distinct systems, produced
by two different suppliers, will impact and interact with one another during testing.
 Integrating new and old systems requires extensive testing and possible revisions.
 Integration testing needs to cover not just the integration links but the
environment itself, adding another level of complexity to the process.
1.10.3 SYSTEM TESTING
 System testing is a type of software testing done on a whole integrated system to
determine if it complies with the necessary criteria.
 Components that have passed integration testing are used as input during system
testing. The objective is to find any discrepancies between the integrated
components and the system as a whole.
 System testing finds flaws both in the integrated modules and in the whole system. Its
outcome is the observed behavior of the component or system during testing. System
testing is performed on the whole system under the guidance of functional
specifications, system requirement specifications, or both.
 The design, behavior, and customer expectations of the system are all tested during
system testing, which also exercises the system beyond the parameters specified in the
Software Requirements Specification (SRS).
 In essence, system testing is carried out by a testing team that is separate from the
development team, which helps to assess the system's quality objectively. It covers
both functional and non-functional aspects and is a form of black-box testing. System
testing is carried out after integration testing but before acceptance testing.
Process for system testing:
The steps for system testing are as follows:
1. Set up the test environment: Establish a test environment to enable higher-quality
testing.
2. Produce test cases: Produce the test cases for the testing.
3. Produce test data: Produce the data that will be put to the test.
4. Execute test cases: Test cases are executed once the test cases and the test data
have been produced.
5. Defect reporting: Flaws in the system are discovered and reported.
6. Regression testing: Performed to check that fixes have not introduced side effects
elsewhere in the system.
7. Log defects: In this stage, defects are logged and corrected.
8. Retest: If the first test is unsuccessful, the test is run again after the fix.
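Steps 2 through 8 can be illustrated with a small Python harness. The `system_under_test` function is a hypothetical stand-in for the integrated system, and the defect log drives the rework/retest loop; all names here are assumptions for illustration:

```python
# Hypothetical stand-in for the integrated system's login behaviour.
def system_under_test(username, password):
    return username == "admin" and password == "secret"

# Step 2/3: produce test cases and test data as (input data, expected result).
test_cases = [
    (("admin", "secret"), True),
    (("admin", "wrong"), False),
    (("", ""), False),
]

def run_suite(cases):
    """Step 4/5: execute every case and report mismatches as defects."""
    defects = []
    for case_id, (data, expected) in enumerate(cases, start=1):
        actual = system_under_test(*data)
        if actual != expected:
            defects.append((case_id, data, expected, actual))  # defect report
    return defects

# Steps 6-8: after fixes, the whole suite is re-run until the defect log is empty.
assert run_suite(test_cases) == []
```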
Main Types of System Testing:
Performance testing: evaluates the speed, scalability, stability, and dependability of
software applications and products.
Load testing: determines how a system or software product will behave under high loads.
Stress testing: examines the system's resilience under changing or extreme loads.
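A rough Python sketch of a load test follows, assuming a hypothetical `handle_request` stand-in for the real system (in practice this would be an HTTP call). It checks stability (all responses succeed) and measures throughput under concurrent simulated users:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical request handler standing in for the real system."""
    time.sleep(0.01)  # simulate processing latency
    return "ok"

def load_test(users, requests_per_user):
    """Fire users * requests_per_user requests through a pool of workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: handle_request(),
                                range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    throughput = len(results) / elapsed  # requests per second
    stable = all(r == "ok" for r in results)
    return stable, throughput

ok, rps = load_test(users=10, requests_per_user=5)
assert ok  # stability criterion: no failed responses under load
```

Increasing `users` until responses start failing or slowing down would turn this load test into a stress test.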
Advantages of system testing:
 Testers don't need programming experience to perform this testing.
 It tests the complete product or piece of software, allowing faults or flaws that
slipped through unit and integration testing to be found quickly.
 The testing environment resembles a real-world production or commercial setting.
 It addresses the technical and business needs of customers and uses various
test scripts to verify the system's full operation.
 After this testing, practically all potential flaws or faults will have been fixed,
allowing the development team to move on safely to acceptance testing.
Disadvantages of system testing:
 Because this testing involves checking the complete product or piece of software,
it takes longer than other testing methods.
 As it involves testing the complete piece of software, the cost will be
considerable.
1.10.4 ACCEPTANCE TESTING
Acceptance Testing is an important aspect of Software Testing, which guarantees that
software aligns with user needs and business requirements. Acceptance testing is a quality
assurance (QA) process. The major aim of this test is to evaluate the compliance of the system
with the business requirements and assess whether it is acceptable for delivery or not.
Acceptance Testing is the last phase of software testing performed after System Testing and
before making the system available for actual use.
Some situations when acceptance testing is usually performed are mentioned below:
 End of Development: After developers complete coding, acceptance testing is
performed to verify that all requirements are met.
 Before User Acceptance: Conducted before the software is released to end-users to
ensure it aligns with business objectives and user needs.
 Pre-Release: Performed as the final check to catch any last-minute issues or
defects before the software goes live.
Types of Acceptance Testing
 User Acceptance Testing (UAT)
o User acceptance testing is used to determine whether the product is
working for the user correctly.
 Business Acceptance Testing (BAT)
o BAT is used to determine whether or not the product meets the business goals
and purposes.
 Contract Acceptance Testing (CAT)
o Under CAT, a contract specifies that once the product goes live, the
acceptance test must be performed within a predetermined period and
must pass all the acceptance use cases.
 Regulations Acceptance Testing (RAT)
o RAT is used to determine whether the product violates any rules and
regulations defined by the government of the country where it is being
released.
 Operational Acceptance Testing (OAT)
o OAT is used to determine the operational readiness of the product
and is non-functional testing.
 Alpha Testing
o Alpha testing is used to evaluate the product in the development/testing
environment by a specialized team of testers, usually called alpha
testers.
 Beta Testing
o Beta testing is used to assess the product by exposing it to the real end-
users, typically called beta testers in their environment.
Advantages of Acceptance Testing
1. This testing helps the project team learn further requirements directly from the users,
as it involves the users in testing.
2. It brings confidence and satisfaction to the clients as they are directly involved in the
testing process.
Disadvantages of Acceptance Testing
1. Users should have basic knowledge about the product or application.
2. Sometimes, users don’t want to participate in the testing process.
3. The feedback for the testing takes a long time as it involves many users and the
opinions may differ from one user to another user.
UNIT II TEST PLANNING
The Goal of Test Planning, High Level Expectations, Intergroup Responsibilities, Test
Phases, Test Strategy, Resource Requirements, Tester Assignments, Test Schedule, Test
Cases, Bug Reporting, Metrics and Statistics, Risk- Based Testing and Risk
Management.
2.1 THE GOAL OF TEST PLANNING
The testing process can’t operate without communication. Performing your testing tasks
would be very difficult if the programmers wrote their code without telling you what it does,
how it works, or when it will be complete. Likewise, if you and the other software testers
don’t communicate what you plan to test, what resources you need, and what your schedule is,
your project will have little chance of succeeding. The software test plan is the primary means
by which software testers communicate to the product development team what they intend to
do. A test plan is a document that describes all testing-related activities.
The IEEE Standard 829–1998 for Software Test Documentation states that the
purpose of a software test plan is as follows:
A Software test plan is used to describe the scope, approach, resources, and
schedule of the testing activities. It is used to identify the items being tested, the features
to be tested, the testing tasks to be performed, the personnel responsible for each task,
and the risks associated with the plan.
The test plan is a by-product of the detailed planning process that’s undertaken to
create it. It’s the planning process that matters, not the resulting document. The ultimate
goal of the test planning process is to communicate the software test team’s intent, its
expectations, and its understanding of the testing that’s to be performed. The test lead,
test manager, and test engineers are responsible for creating the test plan.
2.1.1 Benefits of Test Plan : (Why are Test plans important ?)
● Defines Objectives: A test plan clearly outlines the testing objectives and the scope of
testing activities, ensuring that all team members understand what needs to be achieved.
● Structured Approach : It provides a systematic approach to testing, detailing the steps
and processes involved, which helps in organizing the testing effort.
● Resource Allocation : Helps in identifying the necessary resources, including
personnel, tools, and environments, ensuring they are available when needed.
● Identifies Risks : A test plan identifies potential risks and outlines mitigation strategies,
helping to address issues proactively rather than reactively.
● Contingency Plans : These include contingency plans for dealing with unexpected
events or issues that may arise during testing.
● Stakeholder Alignment : Facilitates communication among stakeholders, including
developers, testers, project managers, and clients, ensuring everyone is aligned on the
testing objectives, approach, and schedule.
● Documentation : Serves as a comprehensive document that can be referred to by all
team members, aiding in knowledge sharing and transparency.
● Resource Optimization : Helps in efficiently utilizing available resources,
including time and personnel, by providing a clear plan of action.
● Focus on Priorities : Ensures that testing efforts are focused on high-priority areas that
are critical to the success of the project.
Test Plan Attributes: (Test Plan Template) (Components of a Test Plan)
1. Objectives: The overall objective of the test is to find as many defects as possible and
to make the software bug-free. The test objective must be broken into components and
subcomponents. In every component, the following activities should be performed:
 List all the functionality and performance to be tested.
 Set goals and targets based on the application's features.
2. Scope : It consists of information that needs to be tested concerning an application. The
scope can be divided into two parts:
In-Scope: The modules that are to be tested rigorously.
Out of Scope: The modules that are not to be tested rigorously.
3. Testing Methodology : The testing methodology is decided based on the feature and
application requirements
4. Approach : The approach of testing different software is
different. It deals with the flow of applications for future reference.
5. Assumption: In this phase, certain assumptions will be made.
Example:
 The testing team will get proper support from the development team.
 The tester will get proper knowledge transfer from the development team.
 Proper resource allocation will be given by the company to the testing
department.
6. Risk : All the risks that can occur if an assumption is broken. For example, in the
case of a wrong budget estimate, the cost may overrun.
7. Mitigation Plan: If any risk is involved, the company must have a backup plan; the
purpose is to avoid errors.
8. Roles and Responsibilities : All the responsibilities and roles of every member of the
testing team have to be recorded.
9. Schedule : Records the start and end date of every testing-related activity, for
example, the dates on which test-case writing starts and ends.
10. Defect Tracking : Any defect found while testing must be passed to the developer
team. The following methods are used in the defect-tracking process:
Information Capture: Basic information is captured to begin the process.
Prioritize: The task is prioritized based on severity and importance.
Communication: Communication between the identifier of the bug and the fixer of the
bug.
Environment: Test the application against the relevant hardware and software.
11. Test Environments: The environment that the testing team will use, i.e., the list of
hardware and software.
12. Entry and Exit Criteria: Entry and exit criteria in a test plan are the conditions that
must be met before and after a test phase. They help ensure that testing is done
effectively and that the final product meets quality standards.
Entry criteria :
 Prerequisites: Conditions that must be met before testing can begin
 Test environment: The hardware, software, and network configuration needed to run tests
 Test data: The data needed to run tests
 Trained testers: Testers who are trained to run tests
 Resources: The resources needed to run tests
Exit criteria:
 Quality: Conditions that determine if the software meets quality standards
 Defect metrics: The acceptable level of defects based on their severity and priority
 Approved results: The results of tests that have been approved
 Go/No-Go decision: The decision to launch the product based on the results of exit criteria
Entry and exit criteria are important for:
 Risk mitigation: Ensuring that testing starts under optimal conditions
 Quality assurance: Ensuring that the product meets quality standards
 Resource management: Ensuring that testing efforts are focused on the most critical areas
 Project control: Providing a structured approach to managing the testing phase
13. Test Automation: It consists of the features that are to be automated and which features
are not to be automated.
14. Effort Estimation: This involves planning the effort that needs to be applied by every
team member.
15. Test Deliverables: It is the outcome from the testing team that is to be given to the
customers at the end of the project.
Before the testing phase :
 Test plan document.
 Requirement Traceability Matrix (RTM)
 Test case document.
 Test design specification.
During the testing phase :
 Test scripts.
 Test data.
 Error logs.
After the testing phase :
 Test Reports.
 Defect Report.
 Installation Report
16. Template: A template is followed for every kind of report prepared by the
testing team. All test engineers use only these templates in the project to
maintain the consistency of the product documentation.
2.1.2 : Types of Test Plan:
The following are the three types of test plans:
● Master Test Plan: This type of test plan includes multiple test strategies and has
multiple levels of testing. It goes into great depth on the planning and management of
testing at the various test levels and thus provides a bird’s eye view of the important
decisions made, tactics used, etc. It includes a list of tests that must be executed, test
coverage, the connection between various test levels, etc.
● Phase Test Plan: In this type of test plan, emphasis is on any one phase of testing. It
includes further information on the levels listed in the master testing plan. Information
like testing schedules, benchmarks, activities, templates, and other information that is
not included in the master test plan is included in the phase test plan.
● Specific Test Plan: This type of test plan is designed for specific types of testing,
especially non-functional testing, for example plans for conducting performance tests or
security tests.
2.1.3 : Steps for creating a Test Plan:
1. Analyze the product: This phase focuses on analyzing the product, Interviewing
clients, designers, and developers, and performing a product walkthrough. This stage
focuses on answering the following questions:
● What is the primary objective of the product?
● Who will use the product?
● What are the hardware and software specifications of the product?
● How does the product work?
2. Design the test strategy: The test strategy document is prepared by the manager and
details the following information:
● Scope of testing which means the components that will be tested and the ones
that will be skipped.
● Type of testing which means different types of tests that will be used in the
project.
● Risks and issues that will list all the possible risks that may occur during testing.
● Test logistics mentions the names of the testers and the tests that will be run by
them.
3. Define test objectives: This phase defines the objectives and expected results of the test
execution. Objectives include:
● A list of software features like functionality, GUI, performance standards, etc.
● The ideal expected outcome for every aspect of the software that needs testing.
4. Define test criteria: Two main testing criteria determine all the activities in the testing
project:
● Suspension criteria: Suspension criteria define the benchmarks for suspending
all the tests.
● Exit criteria: Exit criteria define the benchmarks that signify the successful
completion of the test phase or project. These are expected results and must
match before moving to the next stage of development.
5. Resource planning: This phase aims to create a detailed list of all the resources
required for project completion. For example, human effort, hardware and software
requirements, all infrastructure needed, etc.
6. Plan test environment: This phase is very important, as the test environment is where
the QAs run their tests. The test environments should be real devices, installed with real
browsers and operating systems, so that testers can monitor software behavior in real
user conditions.
7. Schedule and Estimation: Break down the project into smaller tasks and allocate time
and effort for each task. This helps in efficient time estimation. Create a schedule to
complete these tasks in the designated time with a specific amount of effort.
8. Determine test deliverables: Test deliverables refer to the list of documents, tools, and
other equipment that must be created, provided, and maintained to support testing
activities in the project.
2.1.4 : Best Practices for creating an effective Test Plan :
 Understand project requirements and map to test cases.
 Clearly state the objectives of the testing effort.
 Clearly define the scope of testing, outlining which features and functionalities will be
tested.
 Document the expected deliverables of the testing process.
 Define the test environment, detailing the hardware, software, and network
configurations.
 Identify potential risks associated with the testing process and the project.
 Create a detailed testing schedule with milestones and deadlines.
 Create a realistic and achievable testing schedule.
 Maintain flexibility to tweak the plan, if required.
 Include scope for retrospection and avoid pitfalls in the future.
 Define key metrics to be collected during testing.
 Conduct retrospectives to identify areas for improvement in the testing process.
2.1.5 : Example : Develop a Test Plan for an E-Commerce Web and Mobile Application
The purpose of this test plan is to outline the testing approach for an e-commerce web/mobile
application, such as www.amazon.in. The goal is to ensure that the application functions as
intended, meets user requirements, and provides a seamless shopping experience.
Step 1 : Analyze the product - The details of how the application works have to be analyzed.
Understand the users and their requirements.
Step 2 : Test strategy (Scope) : The scope of this test plan includes testing the core
functionalities of the e-commerce application, such as browsing products, adding items to the
cart, placing orders, and managing user accounts. It also covers testing across different
platforms, including web and mobile.
Step 3 : Test Objectives
1. Validate the functionality of the e-commerce application.
2. Verify that the application is user-friendly and provides a smooth shopping experience.
3. Ensure that the application is secure and protects user data.
4. Test the application's compatibility across different browsers and devices.
5. Evaluate the performance and scalability of the application under various load conditions.
6. Identify and report any defects or issues found during testing.
Step 4 : Test Approach / planning / Test Environment
1. Requirements Analysis:
 Review the functional and non-functional requirements of the e-commerce application.
 Identify testable features and prioritize them based on criticality.
2. Test Design:
 Create test scenarios and test cases for each identified feature.
 Include positive and negative test cases to cover different scenarios.
 Define test data and test environment setup requirements.
3. Test Execution:
 Execute test cases manually or using test automation tools.
 Log defects in the test management tool and track their status.
 Perform regression testing after each bug fix or application update.
4. Performance Testing:
 Design and execute performance tests to evaluate the application's response time, throughput,
and scalability.
 Simulate different user loads and monitor system resources during testing.
 Identify performance bottlenecks and suggest improvements if needed.
5. Security Testing:
 Conduct security testing to identify vulnerabilities and ensure the application protects user
data.
 Test for common security issues like SQL injection, cross-site scripting (XSS), and
authentication vulnerabilities.
 Implement security best practices and follow industry standards.
6. Compatibility Testing:
 Test the application on different browsers, versions, and mobile devices.
 Verify that the application functions as expected across various platforms.
Step 5 : Test resources
1. Web browsers: Chrome, Firefox, Safari, and Edge.
2. Mobile devices: iOS and Android.
3. Test management tool: Jira or any other preferred tool.
4. Test automation tool: Selenium WebDriver or any other preferred tool.
5. Load testing tool: JMeter or any other preferred tool.
Step 6 : Estimation
 Estimate the required time , no. of persons / effort and cost
Step 7 : Test Deliverables
1. Test plan document.
2. Test scenarios and test cases.
3. Test execution reports.
4. Defect reports with severity and priority.
5. Performance test reports.
6. Security test reports.
Conclusion : By following this test plan, we can ensure comprehensive testing of the e-
commerce web/mobile application (e.g., www.amazon.in). It covers functional, performance,
security, and compatibility testing, enabling the team to deliver a high-quality, reliable
application to end users.
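A hedged sketch of one functional (UAT-style) check for the add-to-cart flow named in the scope above: the `Cart` class is an assumption standing in for the real application code, not the actual implementation of any real site.

```python
# Hypothetical cart model used only to illustrate a functional acceptance check.
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        # Adding the same SKU again accumulates quantity.
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

def acceptance_add_to_cart():
    """Acceptance criterion: after adding the same product twice
    (1 unit, then 2 units), the cart shows 3 items in total."""
    cart = Cart()
    cart.add("BOOK-1")
    cart.add("BOOK-1", 2)
    return cart.total_items() == 3

assert acceptance_add_to_cart()
```

In a real project, checks like this would be automated against the live UI with a tool such as Selenium WebDriver, as listed under test resources.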
2.2 HIGH-LEVEL EXPECTATIONS
High-level expectations are fundamental ideas that must be agreed on by everyone on the
project team. They are the overall outcomes or results that stakeholders anticipate from the
testing process. They are crucial because they set clear quality standards and help align
testing activities with project objectives. They might be considered “too obvious” and
assumed to be understood by everyone, but a good tester knows never to assume anything.
Some of the high-level expectations are listed below:
 What’s the purpose of the test planning process and the software test plan?
Testers know the reasons for test planning, but do the programmers know? Do the technical
writers? Does management? More importantly, do they agree with and support the test
planning process?
 What product is being tested?
For example, say some software application v8.0 is to be tested. Is this v8.0 release planned
to be a complete rewrite or just a maintenance update? Is it one standalone program or
thousands of pieces? Is it being developed in house or by a third party? For the test effort to
be successful, there must be a complete understanding of what the product is, its magnitude,
and its scope.
 What are the quality and reliability goals of the product?
This area generates lots of discussion, but it's imperative that everyone agrees on what
these goals are. A sales rep will tell you that the software needs to be as fast as possible. A
programmer will say that it needs to have the coolest technology. Product support will tell
you that it can't have any crashing bugs. They can't all be right. How does a tester measure
fast and cool? And how does a tester tell the product support engineer that the software will
ship with crashing bugs? The testing team will be testing the product's quality and reliability,
so a tester needs the following to satisfy the high-level expectations:
 The result of the test planning process must be a clear, concise, agreed-on definition of
the product’s quality and reliability goals. The goals must be absolute so that there’s no
dispute on whether they were achieved. If the salespeople want fast, have them define the
benchmark—able to process 1 million transactions per second or twice as fast as competitor
XYZ running similar tasks. If the programmers want better technology, state exactly what
the technology is. As for bugs, you can’t guarantee that they’ll all be found. A tester can
state, however, that the goal is for the test automation to run 24 hours without crashing or
that all test cases will be run without finding a new bug, and so on.

 As the product’s release date approaches, there should be no disagreement about what the
quality and reliability goals are. Everyone should know about the goals of the software.
 If hardware is necessary for running the tests, where is it stored and how is it obtained?
 If external test labs are necessary, where are they located and how are they scheduled?
2.3 INTER-GROUP RESPONSIBILITIES
Inter-group responsibilities identify tasks and deliverables that potentially affect the test
effort. The test team's work is driven by many other functional groups: programmers,
project managers, technical writers, and so on. If the responsibilities aren't planned out, the
project, and specifically the testing, can become a comedy show with everyone pointing at
everyone else, resulting in important tasks being forgotten.
The types of tasks that need to be defined aren’t the obvious ones. The troublesome
tasks potentially have multiple owners or sometimes no owner or a shared responsibility. The
easiest way to plan these and communicate the plan is with a simple table (see Figure below)
The tasks run down the left side and the possible owners are across the top. An X
denotes the owner of a task and a dash (—) indicates a contributor. A blank means that the
group has nothing to do with the task.
Deciding which tasks to list comes with experience. Ideally, several senior members of
the team can make a good first pass at a list, but each project is different and will have its own
unique inter-group responsibilities and dependencies. A good place to start is to question
people about past projects and what they can remember of neglected tasks.

Figure : Use a table to help organize inter-group responsibilities.

What Will and Won’t Be Tested


Not everything included with a software product is necessarily tested. There may be
components of the software that were previously released and have already been tested.
Content may be taken as-is from another software company. An outsourcing company may
supply pre-tested portions of the product.
The planning process needs to identify each component of the software and make
known whether it will be tested. If it’s not tested, there needs to be a reason it won’t be
covered. It would be a disaster if a piece of code slipped through the development cycle
completely untested because of a misunderstanding.
2.4 TEST PHASES
To plan the test phases, the test team will look at the proposed development model and
decide whether unique phases, or stages, of testing should be performed over the course of the
project. In a code-and-fix model, there’s probably only one test phase. In the waterfall and
spiral models, there can be several test phases from examining the product spec to acceptance
testing. In the software testing life cycle, there are usually five phases of testing:
1. Static testing:
During static testing, developers work to avoid potential problems that might arise later.
Without executing the code, they perform manual or automated reviews of the supporting
documents for the software, such as requirement specifications, searching for any potential
ambiguities, errors or redundancies. The goal is to avoid defects before introducing them to
the software system.

2. Unit Testing :
The next phase of software testing is unit testing. During this phase, the software undergoes
assessments of its specific units, or its functions and procedures, to ensure that each works
properly on its own. The developers may use white box testing to evaluate the software's code
and internal structure, commonly before delivering the software for formal testing by testers.
Unit testing can occur whenever a piece of code undergoes change, which allows for quick
resolution of issues.

3. Integration Testing:
Integration testing involves testing all the units of a program as a group to find issues with
how the separate software functions interact with one another. Through integration testing,
the developers can determine the overall efficiency of the units as they run together. This
phase is important because the program's overall functionality relies on the units operating
simultaneously as a complete system, not as isolated procedures.

4. System Testing:
In the system testing phase, the software undergoes its first test as a complete, integrated
application to determine how well it carries out its purpose. For this, the developers pass the
software to independent testers who had no involvement in its development to ensure that the
testing results stem from impartial evaluations. System testing is vital because it ensures that
the software meets the requirements as determined by the client.

5. Acceptance Testing:
Acceptance testing is the last phase of software testing. Its purpose is to evaluate the
software's readiness for release and practical use. Testers may perform acceptance testing
alongside individuals who represent the software's target audience. Acceptance testing aims
to show whether the software meets the needs of its intended users and that any changes the
software experiences during development are appropriate for use. The representative
individuals are crucial to this phase because they can offer insight into what customers may
want from the software. Once the software passes acceptance testing, it moves on to
production.
The test planning process should identify each proposed test phase and make each phase
known to the project team. This process often helps the entire team to understand the overall
development model.

Entrance and Exit Criteria:


Two very important concepts associated with the test phases are the entrance and exit criteria.
Each phase must have Entry and Exit criteria defined for it. Entry and exit criteria
are conditions that define when testing can start and when it can end. They help ensure that
testing is conducted effectively and efficiently.

Entry criteria
 The test environment is set up
 Test data is available
 Test cases are available
 The test plan is approved
 Adequate testing resources are allocated
Exit criteria
 All planned test cases are completed
 Critical defects are resolved
 Performance benchmarks are met
 Stakeholders or project managers approve the results
 All critical processes are working
 Approved results and reports are available
Benefits of entry and exit criteria
 They help ensure that testing activities are conducted effectively and efficiently
 They define the scope of each testing phase
 They help establish a baseline for quality
 They help ensure that the product aligns with user expectations
 They help decide whether the product is ready for launch
Entry and exit criteria should be defined for each phase of software testing.
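The entry and exit conditions above can be expressed as a simple automated gate. The sketch below uses hypothetical status field names (any real test-management tool would expose its own); it returns True only when every exit criterion for a phase holds.

```python
def exit_criteria_met(status):
    """Return True when all exit criteria for a test phase are satisfied."""
    checks = [
        # all planned test cases completed
        status["planned_cases_run"] == status["planned_cases_total"],
        # critical defects resolved
        status["open_critical_defects"] == 0,
        # performance benchmarks met
        status["performance_benchmarks_met"],
        # stakeholders or project managers approve the results
        status["results_approved"],
    ]
    return all(checks)

phase = {
    "planned_cases_run": 120,
    "planned_cases_total": 120,
    "open_critical_defects": 0,
    "performance_benchmarks_met": True,
    "results_approved": True,
}
print(exit_criteria_met(phase))  # True
```

A symmetric `entry_criteria_met` check (environment set up, test data and test cases available, plan approved, resources allocated) could gate the start of the phase in the same way.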

2.5 TEST STRATEGY


The test strategy document is a high-level document that outlines the testing technique used in the
Software Testing Life Cycle and confirms the tests that will be performed on the product.

The test strategy describes the approach that the test team will use to test the software, both
overall and in each phase. If you were presented with a product to test, you'd need to decide
whether it's better to use black-box testing or white-box testing. If you decide to use a mix of both
techniques, when will you apply each, and to which parts of the software?
It might be a good idea to test some of the code manually and other code with tools and
automation. If tools will be used, do they need to be developed, or can existing commercial
solutions be purchased? If so, which ones? Maybe it would be more efficient to outsource the
entire test effort to a specialized testing company and require only a small in-house test crew to
oversee their work.
Deciding on the strategy is a complex task—one that needs to be made by very
experienced testers because it can determine the success or failure of the test effort. The test
strategy specifies the following details that are necessary while we write the test document:
○ What is the procedure to be used?
○ Which module is going to be tested?
○ Which entry and exit criteria apply?
○ Which type of testing needs to be implemented?
The Test strategy plan outlines test effort, domain, setups, and tools for function verification
and validation. It includes schedules, resource allocations, and employee utilization
information. The test strategy plan should be communicated to the entire team so that the
team will be consistent on approach and responsibilities.

2.5.1 : Components of Test Strategy ( Test Strategy in STLC )


1. Scope and Overview: This is the first section of the test strategy paper. Any product’s
overview includes information about who should approve, review, and use the document.
It contains:
a. An overview of the project,
b. Include information such as who will evaluate and approve the document.
c. Define the testing activities and phases that will be performed, as well as the
timetables that will be followed in relation to the overall project timelines stated
in the test plan
2. Testing Methodology: Specifies testing methods, procedures, roles, and duties of team
members. Includes change management process including modification request
submission, pattern usage, and request management activity.
3. Testing Environment Specification: Specifies test data requirements. Includes
instructions on producing test data. Give details of environments and setup requirements.
Includes strategies for backup and restoration.
4. Testing Tools: Define the tools for test management and automation that will be utilized
to execute the tests. Describe the test approach and tools needed for performance, load,
and security testing
5. Release Control : Release Control is a crucial component of the test strategy document.
It’s used to make sure that test execution and release management strategies are
established in a systematic way.
6. Risk Analysis: Describes potential project hazards. Establishes a defined risk
management strategy. Establishes a contingency plan for real-time hazards. Lists
potential dangers and provides detailed risk management plan. Provides a backup plan for
potential hazards
7. Review and Approvals : When all of the testing activities as stated in the test strategy
document are done, it is evaluated by the persons who are involved, such as:
a. System Administration Team.
b. Project Management Team.
c. Development Team.
d. Business Team.
2.5.2 Test Strategy Vs Test Plan
1. The test plan describes the scope, approach, resources, and schedule of the testing
activities; the test strategy outlines the testing technique used in the Software Testing
Life Cycle.
2. The test plan exists independently; the test strategy is often found as part of a test plan.
3. The test plan describes the details; the test strategy describes the general methodologies.
4. The test plan gives a broad idea of the project; the test strategy needs to be simple and
uncomplicated.
5. The test plan is primarily for the project team, including testers, developers, managers,
and stakeholders; the test strategy is aimed at management, project leads, and high-level
stakeholders.
2.5.3 : Types of Test Strategies
 Analytical strategy: The requirements are examined to determine the test
circumstances. Then tests are created, implemented, and run to ensure that the
requirements are met.
 Model-based strategy: The testing team selects an actual or anticipated circumstance
and constructs a model for it, taking into account inputs, outputs, processes, and
possible behavior
 Methodical strategy: In this case, test teams adhere to a quality standard (such as
ISO25000), checklists, or just a set of test circumstances. Specific types of testing
(such as security) and application domains may have standard checklists.
 Process-compliant strategy: The testers follow the methods or recommendations
established by the standards committee or a panel of enterprise specialists to
determine test conditions, identify test cases, and assemble the testing team
 Reactive strategy: Test charters are created based on the features and functionalities
that already exist. The outcomes of the testing by testers are used to update these test
charters.
 Regression-averse strategy: In this case, the testing procedures are aimed at
lowering the risk of regression for both functional and non-functional product aspects

2.6 RESOURCE REQUIREMENTS


Planning the resource requirements is the process of deciding what is necessary to
accomplish the testing strategy. A resource requirement is a detailed summary of all the types of
resources required to complete the project's tasks. Some of the resources are listed below:
 People. How many, what experience, what expertise? Should they be full-time, part-
time, contract, students?
 Equipment. Computers, test hardware, printers, tools.
 Office and lab space. Where will they be located? How big will they be? How will
they be arranged?
 Software. Word processors, databases, custom tools. What will be purchased, what
needs to be written?
 Outsource companies. Will they be used? What criteria will be used for choosing
them? How much will they cost?
 Miscellaneous supplies. Disks, phones, reference books, training material. What else
might be necessary over the course of the project?
● The specific resource requirements vary based on the project, team, and
company, so the test planning effort will need to carefully evaluate what will be needed to
test the software.
● It is often difficult or even impossible to obtain resources late in the project that
weren't budgeted for at the beginning, so it's imperative to be thorough when creating
the list. When test planners specify a resource, they describe it using four
characteristics:
 Description of resource
 Resource availability
 Time of resource when it will be available
 Duration of resource availability
 Broad classification of types of resources:
● Human Resources: Small projects require individual involvement; large projects
require a team. People are given roles such as manager, software developer, software
tester, engineer, etc.
● Reusable Components: Component Based Software Engineering emphasizes
reusability. Reusable components lower the cost of development of software
● Hardware and Software tools: Tools that are essential for project development.
Before beginning the Project development, this resource should be prepared to avoid
complications in the project.

2.7 TESTER ASSIGNMENTS


Tester assignments refer to the process of assigning individuals or teams to test specific components
or features of a product or system.
Once the test phases, test strategy, and resource requirements are defined, that information
can be used with the product specifications to break out the individual tester assignments. The
inter-group responsibilities dealt with what functional group (management, test,
programmers, and so on) is responsible for what high-level tasks. Planning the tester
assignments identifies the testers responsible for each area of the software and for each
testable feature. The following Table shows a greatly simplified example of a tester
assignment table for Windows WordPad

Table : High Level Tester assignments for word pad


A real-world responsibilities table would go into much more detail to assure that every
part of the software has someone assigned to test it. Each tester would know exactly what they
were responsible for and have enough information to go off and start designing test cases.

2.8 TEST SCHEDULE


A test schedule in software testing is a detailed plan that defines timelines and
milestones for testing. A timeline is a visual representation of events in order, while a
milestone is a specific point of progress in a project. The test schedule takes all the
information presented and maps it into the overall project schedule. This stage is often critical
in the test planning effort because a few highly desired features that were thought to be easy
to design and code may turn out to be very time-consuming to test.
Completing a test schedule as part of test planning will provide the product team and
project manager with the information needed to better schedule the overall project. They may
even decide, based on the testing schedule, to cut certain features from the product or
postpone them to a later release.
An important consideration with test planning is that the amount of test work typically
isn't distributed evenly over the entire product development cycle. Some testing occurs early
in the form of specification and code reviews, tool development, and so on. But the number
of testing tasks, the number of people, and the amount of time spent testing often increase
over the course of the project, with the peak coming a short time before the product is released.
The following figure shows what a typical test resource graph may look like:

Fig: The amount of test resources on a project typically increases over the course of the development schedule.

The effect of this gradual increase is that the test schedule is increasingly influenced
by what happens earlier in the project. If some part of the project is delivered to the test group
two weeks late and only three weeks were scheduled for testing, what happens? Does the
three weeks of testing now have to occur in only one week or does the project get delayed two
weeks? This might lead to working extra hours for extended periods of time in order to finish
a project or meet a deadline. This problem is known as schedule crunch.
Following Table is a test schedule that would surely get the team into a schedule
crunch.
Table : A Test Schedule Based on Fixed Dates

One way to help keep the testing tasks from being crunched is for the test schedule to avoid
absolute dates for starting and stopping tasks. If the test schedule instead uses relative dates
based on the entrance and exit criteria defined by the testing phases, it becomes clearer that
the testing tasks rely on some other deliverables being completed first. It’s also more
apparent how much time the individual tasks take.
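The relative-date idea can be sketched in a few lines. In the sketch below (task names and day offsets are illustrative, not from the text), every start and finish date is derived from the entrance event, so a slip in code delivery moves the whole plan consistently instead of silently crunching the test time.

```python
from datetime import date, timedelta

def schedule(code_delivery, tasks):
    """Map each task to (start, finish) dates relative to code delivery."""
    plan = {}
    for name, start_offset_days, duration_days in tasks:
        start = code_delivery + timedelta(days=start_offset_days)
        plan[name] = (start, start + timedelta(days=duration_days))
    return plan

tasks = [
    ("Test pass 1", 0, 14),   # starts when code is delivered, runs 2 weeks
    ("Test pass 2", 21, 14),  # starts 3 weeks after delivery
]
plan = schedule(date(2025, 3, 1), tasks)
print(plan["Test pass 1"])  # (datetime.date(2025, 3, 1), datetime.date(2025, 3, 15))
```

If delivery slips two weeks, calling `schedule()` with the new date shifts every task by the same amount, which makes the dependency on the entrance criterion explicit.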
The following Table shows an example of this.

Table : A Test Schedule Based on Relative Dates


The test schedule consists of the start date, the finish date, and responsibilities. A sample template
for the test schedule is given on the next page.
2.9 TEST CASES
 A test case is a set of inputs and expected outputs designed to validate a particular feature
of software functionality.
 A test case is also defined as a group of conditions under which a tester
determines whether a software application is working as per the customer's
requirements or not.
The test planning process will decide what approach will be used to write test cases, where the
test cases will be stored, and how they'll be used and maintained.
Importance of test cases (Why do we need Test cases?):
○ To ensure Quality
○ To have consistency in test execution
○ To have better test coverage
○ It depends on the process rather than a person
○ To avoid training for every new test engineer on the product
○ To use as proof to client for test areas covered

Test Case Process or Activities in the Test Case Development

1) Understanding the Requirements: The testing team needs to understand the
requirements specified in the SRS document and, based on that, prepare possible test
scenarios and use cases.

2) Test Case Design: After gathering all the requirements, the testing team designs the test
case that covers all the possible use cases. It mainly covers the functionality, performance, and
usability aspects of the software product.

3) Test Case Preparation: After designing test cases, the testing team prepares the
test documents which cover test case ID, description, preconditions, test steps, and expected
and actual results.

4) Review and Approval: The test documents should then be reviewed by
the stakeholders, project managers, developers' team, and the testing team.

5) Test Case Execution: Once test cases are reviewed, the testing team needs to execute
the test cases. They report the bugs or defects found during testing.

6) Test Case Maintenance: The testing team maintains the test cases regularly making
sure that all the test cases are up-to-date.
 Sample Test case (Template)

The figure above gives one template for a test case, and the figure below shows an
example in the form of an Excel table. Someone reviewing the test cases could quickly read that
information and then review the table to check its coverage.

Figure: Test cases can be presented in the form of matrix or table.


Other options for presenting test cases are simple lists, outlines, or even graphical
diagrams such as state tables or data flow diagrams. Remember, a tester should communicate
test cases to others and should use whichever method is most effective. Be creative, but stay
true to the purpose of documenting the test cases.
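As a sketch of the table/matrix idea, a test case can also be stored as a structured record and rendered as a table row. The field names below follow the template fields described above (ID, description, preconditions, steps, expected and actual results); the class itself is illustrative, not a standard format.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    description: str
    preconditions: str
    steps: list           # ordered test steps
    expected: str         # expected result
    actual: str = ""      # filled in during execution

    def as_row(self):
        """Render the case as one row of a test-case table."""
        return [self.case_id, self.description, self.expected, self.actual]

tc = TestCase(
    case_id="TC_001",
    description="Valid login with correct card and PIN",
    preconditions="User has a valid ATM card and PIN",
    steps=["Insert card", "Enter PIN", "Press Enter"],
    expected="Main menu is displayed",
)
print(tc.as_row()[0])  # TC_001
```

Storing cases as records like this keeps the matrix, list, and outline presentations interchangeable, since each is just a different rendering of the same fields.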
Best Practices for Writing Test Cases
• Keep test cases simple and transparent.
• Consider end user's perspective.
• Refer to specification document when preparing test cases.
• Ensure 100% test coverage.
• Avoid repeating test cases.
• Use test design techniques for maximum bug discovery
Types of test cases
• Function test cases : A functional test case is a set of instructions that outlines how to
test a specific function in a software application
• Integration test cases : Integration test cases are used to test the flow of data and
interfaces between test modules. They focus on the integration links between modules,
rather than the functionality of the modules themselves.
• System test cases : System test cases are a part of the software testing process that
involves testing the complete system
• Other types of test cases are: User Interface Test Cases, Performance Test Cases, Usability
Test Cases, Database Test Cases, Security Test Cases, and User Acceptance Test Cases.
Question : Model a Test Case Design Process for the ATM System
The test case design process for an ATM system involves breaking down the system's
functionality into smaller components, understanding the various use cases, and then developing
test cases to verify that the system behaves as expected. The primary objective is to validate that
the ATM system functions correctly in various scenarios and meets both functional and
non-functional requirements.

1. Understanding the Use Cases


For the ATM system, the common use cases might include:
 Login Authentication: User must authenticate using a card and PIN.
 Cash Withdrawal: User can withdraw money from their account.
 Balance Inquiry: User can check the balance of their account.
 Funds Transfer: User can transfer funds between accounts.
 Deposit: User can deposit cash or checks into their account.
 PIN Change: User can change the PIN.
 Session Timeout: If the user is inactive for too long, the session should time out.

2. Identifying Test Scenarios


For each of the use cases, we identify different test scenarios. Test scenarios are general
descriptions of what should be tested. Below are some example scenarios for each use case:
a. Login Authentication
 Correct card number.
 Incorrect card number.
 Correct PIN
 Incorrect PIN.
 Expired card.
 Blocked card.
b. Cash Withdrawal
 Sufficient balance.
 Insufficient balance.
 Valid amount requested.
 Invalid amount requested (e.g., non-multiple of the ATM's withdrawal limit).
 Transaction canceled midway.
 Network failure during transaction.
 ATM out of cash.
c. Balance Inquiry
 Correct account balance displayed.
 Account balance after withdrawal.
d. Funds Transfer
 Transfer within the same bank.
 Transfer between different banks.
 Insufficient funds for transfer.
 Maximum transfer limit exceeded.
 Incorrect account number.
e. Deposit
 Cash deposit successfully processed.
 Deposit of invalid items (e.g., checks or counterfeit money).
 ATM malfunction during deposit.
f. PIN Change
 Correct old PIN, valid new PIN.
 Incorrect old PIN.
 New PIN not matching the confirmation PIN.
 Invalid new PIN format (e.g., too short).
 PIN change failure due to system error.
g. Session Timeout
 User is inactive for a set period.
 User performs any operation before session times out.

3. Test Case Design


Once the scenarios are identified, we write detailed test cases. Each test case includes a
description, inputs, expected results, and any preconditions or postconditions.

Sample Test Cases


Test Case 1: Valid Login Authentication
 Test Case ID: TC_001
 Description: Verify that the user can log in successfully with a valid card number and
PIN.
 Preconditions: User has an active bank account, valid ATM card, and PIN.
 Test Steps:
1. Insert ATM card into the card reader.
2. Enter correct PIN.
3. Press "Enter" to authenticate.
 Expected Result: User is logged in successfully and the main menu is displayed.
 Postconditions: User is ready to make further transactions.

Test Case 2: Invalid PIN Entry


 Test Case ID: TC_002
 Description: Verify that the system rejects an invalid PIN entry.
 Preconditions: User has a valid ATM card.
 Test Steps:
1. Insert ATM card into the card reader.
2. Enter incorrect PIN.
3. Press "Enter" to authenticate.
 Expected Result: The system displays an error message stating "Invalid PIN" and
allows the user to try again.
 Postconditions: User can attempt to re-enter the correct PIN or exit.

Test Case 3: Cash Withdrawal with Sufficient Funds


 Test Case ID: TC_003
 Description: Verify that the user can withdraw money if the balance is sufficient.
 Preconditions: User is authenticated and has sufficient balance in their account.
 Test Steps:
1. Select the "Withdraw" option.
2. Enter the amount to withdraw (e.g., Rs.2000).
3. Confirm the withdrawal.
 Expected Result: The ATM dispenses the correct amount and the user’s balance is
updated.
 Postconditions: User receives cash and the balance is deducted.

Test Case 4: Insufficient Funds for Withdrawal


 Test Case ID: TC_004
 Description: Verify that the user cannot withdraw more money than the available
balance.
 Preconditions: User is authenticated and has insufficient balance.
 Test Steps:
1. Select the "Withdraw" option.
2. Enter an amount greater than the available balance (e.g., Rs.10000).
3. Confirm the withdrawal.
 Expected Result: The ATM displays an error message stating "Insufficient funds."
 Postconditions: No money is dispensed, and the balance remains unchanged.

Test Case 5: PIN Change


 Test Case ID: TC_005
 Description: Verify that the user can change their PIN.
 Preconditions: User is authenticated and wishes to change their PIN.
 Test Steps:
1. Select the "Change PIN" option.
2. Enter the correct old PIN.
3. Enter the new PIN and confirm.
4. Press "Enter" to confirm the change.
 Expected Result: The system accepts the new PIN, confirms the change, and the user is
logged out for security reasons.
 Postconditions: The user’s PIN is updated.
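Test Cases 3 and 4 above can also be expressed as automated checks. The `Account` class below is a hypothetical stand-in for the ATM's account service (not part of any real ATM system); the assertions mirror the expected results of TC_003 and TC_004.

```python
class Account:
    """Minimal stand-in for an ATM account (illustrative only)."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "Insufficient funds"   # TC_004 expected result
        self.balance -= amount
        return "Dispensed"                # TC_003 expected result

# TC_003: withdrawal with sufficient funds
acct = Account(balance=5000)
assert acct.withdraw(2000) == "Dispensed" and acct.balance == 3000

# TC_004: withdrawal with insufficient funds leaves the balance unchanged
acct = Account(balance=5000)
assert acct.withdraw(10000) == "Insufficient funds" and acct.balance == 5000
```

Writing the expected results as assertions like this is what lets a framework such as Selenium or TestNG later execute the same cases against the real user interface.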

4. Test Case Prioritization


Based on the risk and importance of the use cases, the test cases can be prioritized. Critical
functionalities like login authentication, cash withdrawal, and balance inquiry should be tested
first. Next, secondary functionalities like funds transfer and PIN change should be tested.
Non-critical features like session timeout can be tested later.

5. Test Data Preparation


For each test case, we need to prepare the test data:
 Valid data (e.g., correct card number, PIN, and sufficient balance).
 Invalid data (e.g., incorrect card number, PIN, insufficient funds).
 Boundary data (e.g., maximum withdrawal limit, minimum PIN length).
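The valid/invalid/boundary split above can be sketched as a small data-driven check. The 4-digit PIN rule below is an assumed requirement used only for illustration.

```python
def valid_pin(pin):
    """Assumed rule: a valid PIN is exactly four digits."""
    return pin.isdigit() and len(pin) == 4

# (input, expected) pairs covering valid, boundary, and invalid data
test_data = [
    ("1234", True),    # valid
    ("123", False),    # boundary: one digit too short
    ("12345", False),  # boundary: one digit too long
    ("12a4", False),   # invalid: non-digit character
]
for pin, expected in test_data:
    assert valid_pin(pin) == expected
```

Keeping the data separate from the check makes it cheap to add new boundary values without writing new test code.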

6. Execution and Reporting


Once the test cases are ready, the next step is to execute them on the ATM system and
document the results. Each test result should be categorized as Pass, Fail, or Blocked. If the test
case fails, the issue should be logged, and further debugging should take place.

7. Regression Testing
After any bug fixes or enhancements, regression testing should be performed to ensure that no
other part of the system is broken due to the changes made.

8. Performance and Security Testing


Besides functional testing, performance and security testing should be conducted:
 Performance testing: Test the ATM under different loads, such as multiple users
attempting to withdraw money simultaneously.
 Security testing: Check for vulnerabilities, such as card data being exposed or PIN
being compromised.

Conclusion : The test case design process for an ATM system ensures that all aspects of the
system, including usability, performance, security, and functionality, are thoroughly tested.
2.10 BUG LIFE CYCLE
A software bug is a defect in computer software. Bugs can occur due to the following reasons:
 Wrong Coding – The developer has coded the program incorrectly
 Missing Coding – The developer may not have written the code for that functionality
 Extra Coding – The developer would have added features that are not needed as per the
client’s specification
A Bug’s Life Cycle:

Figure : Software Bug Life Cycle


This example shows that when a bug is first found by a software tester, a report is
logged and assigned to a programmer to be fixed. This state is called the open state.
Once the programmer fixes the code, he assigns the report back to the tester and the bug
enters the resolved state.
The tester then performs a verification test to confirm that the bug is indeed fixed and, if it
is, closes the report. The bug then enters its final state, the closed state.
Detailed explanation of the Defect / Bug life cycle

Figure : This generic bug life-cycle state figure covers most of the possible situations
that can occur.
Steps involved in the bug life cycle:
1. The tester finds a defect.
2. The defect status is set to New.
3. The development manager then analyzes the defect.
4. The manager determines whether the defect is valid.
5. If the defect is not valid, the development manager assigns it the status Rejected.
6. If the defect is valid, it is checked whether the defect is in scope. If not, the defect status is changed to Deferred.
7. If the defect is in scope, the manager checks whether a similar defect was raised earlier. If yes, the defect is assigned the status Duplicate.
8. If the defect was not already raised, fixing of the code starts and the defect is assigned the status In-Progress.
9. Once the defect is fixed, the status is changed to Fixed.
10. The tester retests the code; if the test passes, the defect status is changed to Closed.
11. If the test fails again, the defect is assigned the status Reopened and assigned back to the developer.
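The steps above form a small state machine. The following is an illustrative sketch, not a real tracking tool: the state names follow the steps listed, while the class and method names are invented for this example.

```python
# A minimal sketch of the bug life cycle described above. Real trackers
# (Jira, Bugzilla) use similar but tool-specific workflows.

ALLOWED = {
    "New":         {"Rejected", "Deferred", "Duplicate", "In-Progress"},
    "In-Progress": {"Fixed"},
    "Fixed":       {"Closed", "Reopened"},
    "Reopened":    {"In-Progress"},
}

class Defect:
    def __init__(self, defect_id, summary):
        self.defect_id = defect_id
        self.summary = summary
        self.status = "New"          # step 2: a found defect starts as New
        self.history = ["New"]

    def move_to(self, new_status):
        # Reject transitions that the life cycle does not allow.
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)

bug = Defect("BUG-101", "Login button unresponsive")
bug.move_to("In-Progress")   # manager assigns a valid, in-scope defect
bug.move_to("Fixed")         # developer fixes the code
bug.move_to("Reopened")      # retest fails
bug.move_to("In-Progress")
bug.move_to("Fixed")
bug.move_to("Closed")        # retest passes
print(bug.history)
```

Encoding the allowed transitions as data makes it easy to see that a defect can only reach Closed through Fixed, exactly as the numbered steps describe.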
Benefits of Bug Lifecycle
 Deliver High-Quality Product: The defect lifecycle helps in identifying and fixing bugs
throughout the development process, which helps deliver a high-quality product.
 Better Communication: The defect lifecycle provides a structured process for logging,
tracking, and resolving defects, which provides better communication and collaboration
among team members.
 Early Issue Detection: The defect lifecycle process allows for early detection of defects,
enabling the team to understand patterns and trends, and take preventive measures for future
development.
Limitations in Bug Lifecycle
 Time-Consuming: Tracking and managing defects can be lengthy and complex.
 Resource Intensive: Requires dedicated resources for effective defect management.
 Delayed Releases: Extensive defect management might delay product release timelines.
2.11 Bug Reporting
Bug reporting in software testing is the process of identifying, documenting, and
communicating issues found in software applications during testing. It is a critical step in
ensuring that the final product meets quality standards and functions as intended.
2.11.1 : The fundamental principles for reporting a bug:
1. Report bugs as soon as possible : The earlier a bug is found, the more time remains
in the schedule to get it fixed. The relationship between the time a bug is reported and the
effort to fix it is shown in the following graph.
2. Effectively describe the bugs: The bug and its behavior should be clearly described.
3. Be nonjudgmental in reporting bugs: Bug reports should be written against the
product, not the person, and state only the facts.
4. Follow up on your bug reports: A great tester finds and logs lots of bugs but also
continues to monitor them through the process of getting them fixed.
All Bugs are not treated Equal:
The tester should classify the bugs and identify, in a short, concise way, what their impact
is. The common method for doing this is to give each bug a severity and a priority level. The
specifics of the method vary among companies, but the general concept is the same:
Severity indicates how bad the bug is; the likelihood and the degree of impact when the
user encounters the bug.
Priority indicates how much emphasis should be placed on fixing the bug and the
urgency of making the fix.
The following list of common classifications of severity and priority helps to understand the
difference between the two. Some companies use up to ten levels and others use just three.
No matter how many levels are used, though, the goals are the same.
Severity
1. System crash, data loss, data corruption, security breach
2. Operational error, wrong result, loss of functionality
3. Minor problem, misspelling, UI layout, rare occurrence
4. Suggestion
Priority
1. Immediate fix; blocks further testing; very visible
2. Must fix before the product is released
3. Should fix when time permits
4. Would like to fix but the product can be released as it is
Some examples: A data corruption bug that happens rarely might be classified as Severity 1,
Priority 3. A misspelling in the setup instructions might be classified as Severity 3, Priority 2.
If the software crashes as soon as the tester starts it up, it is classified as Severity 1, Priority 1.
If a tester thinks that a button should be moved a little further down on the page, it might be
classified as Severity 4, Priority 4.
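The classification above can be represented as simple lookup tables; the numeric levels map to the descriptions in the lists. The mapping below is an illustrative sketch of this four-level scheme, not a standard.

```python
# Illustrative severity/priority lookup based on the four-level scheme
# described above; companies use anywhere from three to ten levels.

SEVERITY = {
    1: "System crash, data loss, data corruption, security breach",
    2: "Operational error, wrong result, loss of functionality",
    3: "Minor problem, misspelling, UI layout, rare occurrence",
    4: "Suggestion",
}

PRIORITY = {
    1: "Immediate fix; blocks further testing; very visible",
    2: "Must fix before the product is released",
    3: "Should fix when time permits",
    4: "Would like to fix, but the product can be released as is",
}

def classify(severity, priority):
    """Return human-readable labels for a (severity, priority) pair."""
    return SEVERITY[severity], PRIORITY[priority]

# The examples from the text:
print(classify(1, 3))  # rare data corruption
print(classify(3, 2))  # misspelling in setup instructions
print(classify(1, 1))  # crash on startup
print(classify(4, 4))  # button placement suggestion
```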
The Standard: The Test Incident Report
The IEEE 829 Standard for Software Test Documentation defines a document called
the Test Incident Report, whose purpose is “to document any event that occurs during the
testing process which requires investigation.” In short, to log a bug.
The following list shows the areas that the standard defines, adapted and updated a
bit, to reflect more current terminology.
 Identifier. Specifies an ID that’s unique to this bug report that can be used to
locate and refer to it.
 Summary. Summarizes the bug into a short, concise statement of fact. References
to the software being tested and its version, the associated test procedure, test case,
and the test specification should also be included.
 Incident Description. Provides a detailed description of the bug with the following
information:
Date and time
Tester’s name
Hardware and software configuration used
Inputs
Procedure Steps
Expected results
Actual results
Attempts to reproduce and description of what was tried
Other observations or information that may help the programmer locate the bug
 Impact. The severity and priority as well as an indication of impact to the test plan,
test specs, test procedures, and test cases.
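The fields listed above map naturally onto a simple record structure. Below is a minimal sketch using a Python dataclass; the field and class names are adapted from the standard for illustration, not taken verbatim from it.

```python
from dataclasses import dataclass

@dataclass
class TestIncidentReport:
    """Minimal sketch of an IEEE 829-style Test Incident Report."""
    identifier: str        # unique ID for this bug report
    summary: str           # short, concise statement of fact
    date_time: str
    tester_name: str
    configuration: str     # hardware/software configuration used
    inputs: str
    procedure_steps: str
    expected_results: str
    actual_results: str
    severity: int
    priority: int
    observations: str = "" # anything that may help locate the bug

# A hypothetical filled-out report:
report = TestIncidentReport(
    identifier="TIR-042",
    summary="Calculator v1.2: divide by zero crashes the app (test case TC-17)",
    date_time="2025-03-14 10:30",
    tester_name="A. Tester",
    configuration="Windows 11, 16 GB RAM, build 1.2.0-rc1",
    inputs="8 / 0 =",
    procedure_steps="Start app; enter 8; press '/'; enter 0; press '='",
    expected_results="Error message 'Cannot divide by zero'",
    actual_results="Application crashes",
    severity=1,
    priority=1,
)
print(report.identifier, report.severity, report.priority)
```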
2.11.2 : Bug-Tracking Systems:
Bug Reporting and tracking can be Manual and Automatic.
Manual Bug Reporting and Tracking:
The following figure shows an example of the bug reporting document defined by the IEEE
829 standard. This one-page form can hold all the information necessary to identify and
describe a bug. It also contains fields that can be used to track a bug through its life cycle.
Once the form is filled out by the tester, it can be assigned to a programmer to be fixed.
The programmer has fields where he can enter information regarding the fix, including
choices for the possible resolutions. There’s also an area where, once the bug is resolved, the
tester can supply information about his efforts in retesting and closing out the bug. At the
bottom of the form is an area for signatures. The tester can put his name on the line to reflect
that a bug has been resolved to his satisfaction.
Fig: A sample bug report form shows how the details of a bug can be condensed to a single page of data.
The problem with paper forms is that they are difficult to store and handle. The
alternatives are spreadsheets and databases.
Automated Bug Reporting and Tracking:
The following figure shows a top-level view of the bug database of an automated bug
reporting tool. The individual bugs, with their IDs, titles, descriptions, assignees, statuses,
resolutions, etc., are shown in a simple listing. At a glance we can see who opened the bug,
who resolved it, and who closed it. We can also scroll through details that were entered
about the bug as it went through its life cycle.
Figure: The main window of a typical bug-reporting database shows what an automated system can
provide.
There may be a series of buttons that a tester can click to create (open) a new bug or to edit,
resolve, close, or reactivate (reopen) an existing bug.
2.11.3 : Benefits of a Good Bug Report
 Detailed Problem Description: It provides a clear and thorough explanation of the
issue encountered, helping developers understand the nature and scope of the problem.
 Enables Teamwork: It enables collaboration and shared understanding among team
members by providing a common reference point for discussing and addressing the bug.
 Saves Time and Money: By providing essential information upfront, a good bug report
reduces the time spent on debugging and troubleshooting, ultimately saving resources
and costs.
 Streamlines Development Procedures: It helps prioritize and allocate resources
effectively, leading to more efficient development processes and faster bug resolution.
2.11.4 : Best Practices for bug reporting
 Be clear and concise
 Include all relevant information
 Prioritize bugs
 Track the status of bugs
 Communicate with the development team
2.11.5 : Bug tracking tools
o Jira - Jira is one of the most widely used bug tracking tools. It is a commercial tool
from Atlassian used for bug tracking, project management, and issue tracking.
Jira includes features such as reporting, recording, and workflow management.
o Bugzilla - Bugzilla is an open-source tool used to help customers and clients keep
track of bugs. It is also used as a test management tool. It supports
various operating systems such as Windows, Linux, and Mac.
o BugNet - An open-source defect tracking and project issue management tool,
written in ASP.NET and C#, which supports the Microsoft SQL Server database.
o Trac - Trac is helpful in tracking issues for software development projects. It is
written in the Python programming language.
o Mantis - A web-based bug tracking system used to follow software defects. It is
implemented in the PHP programming language.
2.12 : METRICS AND STATISTICS
Metrics and statistics are the means by which the progress and success of the project, and
of the testing, are tracked.
Importance of Metrics and Statistics in Software Testing:
Test metrics and Statistics are essential in determining the software’s quality and performance.
Developers may use the right software testing metrics to improve their productivity.
 Early Problem Identification: By measuring metrics such as defect density and defect
arrival rate, testing teams can spot trends and patterns early in the development process.
 Allocation of Resources: Metrics identify regions where testing efforts are most needed,
which help with resource allocation optimization. By ensuring that testing resources are
concentrated on important areas, this enhances the strategy for testing as a whole.
 Monitoring Progress: Metrics are useful instruments for monitoring the advancement of
testing. They offer insight into the quantity of test cases that have been run, their completion
rate, and if the testing effort is proceeding according to plan.
 Continuous Improvement: Metrics offer input on the testing procedure, which helps to
foster a culture of continuous development.
Examples of test metrics are :
• Total bugs found daily over the course of the project
• List of bugs that still need to be fixed
• Current bugs ranked by how severe they are
• Total bugs found per tester
• Number of bugs found per software feature
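The example metrics above can be computed directly from bug records. The sketch below uses a small list of hypothetical bug dictionaries in place of a real tracker export; the field names are assumptions for illustration.

```python
from collections import Counter

# Hypothetical bug records; fields mirror a typical tracker export.
bugs = [
    {"id": 1, "tester": "Ana", "feature": "UI",           "severity": 2, "status": "Open"},
    {"id": 2, "tester": "Ana", "feature": "Integer math", "severity": 1, "status": "Open"},
    {"id": 3, "tester": "Raj", "feature": "UI",           "severity": 3, "status": "Closed"},
    {"id": 4, "tester": "Raj", "feature": "Printing",     "severity": 2, "status": "Open"},
    {"id": 5, "tester": "Mei", "feature": "UI",           "severity": 1, "status": "Closed"},
]

# List of bugs that still need to be fixed:
open_bugs = [b["id"] for b in bugs if b["status"] == "Open"]

# Current open bugs ranked by how severe they are (most severe first):
ranked = sorted((b for b in bugs if b["status"] == "Open"),
                key=lambda b: b["severity"])

# Total bugs found per tester and per software feature:
per_tester = Counter(b["tester"] for b in bugs)
per_feature = Counter(b["feature"] for b in bugs)

print("open:", open_bugs)
print("per tester:", dict(per_tester))
print("per feature:", dict(per_feature))
```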
Types of Software Testing Metrics:
Software testing metrics are divided into three categories:
1. Process Metrics: A project’s characteristics and execution are defined by process metrics.
These features are critical to the SDLC process’s improvement and maintenance. Eg :
Defect Metrics, Test Coverage, Test case effectiveness
2. Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics. Developers can improve the quality of their software
development by utilizing these features.
3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to
estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.
Common Project-Level Metrics:
The following figure shows an example pie chart that breaks out the bugs. In this chart,
the bugs are separated into the major functional areas of the software in which they were
found.
Three areas—the user interface, integer math, and floating-point math—make up 60
percent of all the bugs found. If the test effort has been consistent across the entire product,
there’s a good chance that these three areas are indeed buggy and probably still have more
bugs to find.
This data tells tester and management a great deal about the project and is a good
example of how lots of bug information can be distilled down to something simple and easily
understood.
Figure : A project-level pie chart shows how many bugs were found in each major functional area of the software.
Metrics and Statistics in Testing
The most frequently used feature of a bug-tracking database is performing queries to obtain
specific lists of bugs that you’re interested in. The following Figure shows a typical query
building window with a sample query ready to be entered.
Fig: Most bug-tracking databases have a means to build queries
The types of queries we can build are bounded only by the database’s fields and the
values they can hold. It is possible to answer just about any question we might have regarding
our testing and how it relates to the project.
For example, here’s a list of questions easily answered through queries:
• What are the IDs for the resolved bugs currently assigned to a tester for closing?
• How many bugs has a particular tester, say ‘X’, entered on this project? In the
previous week? Over the last month? Between April 1 and July 31?
• What bugs have Tester ‘X’ entered against the user interface that were resolved as
“won’t fix?”
• How many of my bugs were Severity 1 or Severity 2?
• Of all the bugs I’ve entered, how many were fixed? How many were deferred? How
many were duplicates?
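Such queries are just filters over the bug records. The sketch below answers a few of the questions above against a list of hypothetical bug dictionaries standing in for a real tracking database; all record values are invented for illustration.

```python
# Hypothetical bug-tracker records; a real database would answer the
# same questions with SQL or the tool's query builder.
bugs = [
    {"id": 10, "tester": "X", "severity": 1, "status": "Resolved",
     "resolution": "Fixed",     "area": "UI",   "entered": "2025-04-12"},
    {"id": 11, "tester": "X", "severity": 2, "status": "Resolved",
     "resolution": "Won't fix", "area": "UI",   "entered": "2025-05-03"},
    {"id": 12, "tester": "Y", "severity": 3, "status": "Open",
     "resolution": None,        "area": "Math", "entered": "2025-06-20"},
]

# IDs of resolved bugs (candidates for closing):
resolved_ids = [b["id"] for b in bugs if b["status"] == "Resolved"]

# How many bugs tester X entered between April 1 and July 31
# (ISO date strings compare correctly as plain strings):
x_spring = sum(1 for b in bugs
               if b["tester"] == "X"
               and "2025-04-01" <= b["entered"] <= "2025-07-31")

# Tester X's UI bugs resolved as "won't fix":
wont_fix = [b["id"] for b in bugs
            if b["tester"] == "X" and b["area"] == "UI"
            and b["resolution"] == "Won't fix"]

# Severity 1 or 2 bugs:
severe = [b["id"] for b in bugs if b["severity"] in (1, 2)]

print(resolved_ids, x_spring, wont_fix, severe)
```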
The results of the query will be a list of bugs as shown in the bug-tracking database window. By
exporting the data, we can pick and choose the exact fields we want to save to a file. If we’re
going to a meeting to discuss open bugs, we might want to save the bug ID number, its title,
priority, severity, and who it’s assigned to. Such a list might look like Table below.
Table : Open Bugs for Bug Committee Meeting
If we export the results of a query that includes the bug severity field, we can also
generate graphs such as the ones shown in the figures below.
Figure : A bug-tracking database can be used to create individualized graphs showing the details of your testing.
Figure : Different queries can generate different views of the bug data. In this case, you
can see how one tester’s bugs were resolved.
Thus, metrics and statistics in software testing help to analyze bugs, and this contributes
to the success of the project.
2.13: Risk-Based Testing and Risk Management:
Risk Based Testing (RBT) is a software testing approach based on the probability of risk.
It involves assessing risk based on software complexity, criticality to the business, frequency of use,
possible defect-prone areas, etc. Risk based testing prioritizes testing of the features and functions of the
software application that are more impactful and more likely to have defects.
Risk is the occurrence of an uncertain event with a positive or negative effect on the measurable
success criteria of a project. It could be events that have occurred in the past or current events or
something that could happen in the future. These uncertain events can have an impact on the cost,
business, technical and quality targets of a project.
Risks can be positive or negative.
 Positive risks are referred to as opportunities and help in business sustainability, for example
investing in a new project, changing business processes, or developing new products.
 Negative risks are referred to as threats, and recommendations to minimize or eliminate them
must be implemented for project success.
When to implement Risk based Testing
Risk based testing can be implemented in

 Projects having time, resource, budget constraints, etc.


 Projects where risk based analysis can be used to detect vulnerabilities to SQL injection
attacks.
 Security Testing in Cloud Computing Environments.
 New projects with high risk factors, such as lack of experience with the technologies used or
lack of business domain knowledge.
 Incremental and iterative models, etc.
Risk Management Process
Let us now understand the steps involved in the Risk Management Process.

Risk Identification
Risk identification can be done through risk workshops, checklists, brainstorming, interviewing,
Delphi technique, cause and effect diagrams, lessons learnt from previous projects, root cause analysis,
contacting domain experts and subject matter experts.
Risk Register is a spreadsheet which has a list of identified risks, potential responses, and root causes.
It is used to monitor and track the risks (both threats and opportunities) throughout the life of the
project. Risk response strategies can be used to manage positive and negative risks.
Risk breakdown structure plays an important role in risk planning. The Risk Breakdown structure
would help in identifying the risk prone areas and helps in effective evaluation and risk monitoring
over the course of the project. It helps in providing sufficient time and resources for risk management
activities. It also helps in categorizing many sources from which the project risks may arise.
Figure: Sample Risk Breakdown Structure
Risk Analysis (Includes Quantitative and Qualitative Analysis)
Once the list of potential risks has been identified, the next step is to analyze them and to filter the risks
based on significance. One qualitative risk analysis technique is the Risk Matrix (covered
in the next section). This technique is used to determine the probability and impact of each risk.
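A risk matrix scores each risk as probability × impact and filters on significance. Below is a minimal qualitative sketch; the 1–5 scales, the example risks, and the "significant" threshold are assumptions for illustration, not a standard.

```python
# Illustrative probability x impact risk matrix; the 1-5 scales and
# the "significant" threshold are assumptions, not a standard.
risks = [
    ("SQL injection in login form",   4, 5),  # (name, probability, impact)
    ("New team unfamiliar with tech", 3, 3),
    ("Printer driver edge case",      1, 2),
]

def exposure(probability, impact):
    """Qualitative risk score: probability (1-5) times impact (1-5)."""
    return probability * impact

# Rank risks by exposure and flag the significant ones for testing focus.
ranked = sorted(risks, key=lambda r: exposure(r[1], r[2]), reverse=True)
significant = [name for name, p, i in ranked if exposure(p, i) >= 9]

for name, p, i in ranked:
    print(f"{exposure(p, i):2d}  {name}")
print("Test first:", significant)
```

Risk-based testing would then allocate the most test effort to the items at the top of this ranking.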
Risk Response planning
Based on the analysis, we can decide if the risks require a response. For example, some risks will
require a response in the project plan while some require a response in the project monitoring, and
some will not require any response at all.
The risk owner is responsible for identifying options to reduce the probability and impact of the
assigned risks.
Risk mitigation is a risk response method used to lessen the adverse impacts of possible threats. This
can be done by eliminating the risks or reducing them to an acceptable level.
Risk Contingency
Contingency can be described as the possibility of an uncertain event whose impact is unknown or
unpredictable. A contingency plan, also known as an action plan or backup plan, covers worst-case
scenarios. In other words, it determines what steps should be taken when an unpredictable event
materializes.
Risk Monitoring and Control
The risk control and monitoring process is used to track the identified risks, monitor residual risks,
identify new risks, update the risk register, analyze the reasons for change, execute the risk response
plan, monitor risk triggers, and evaluate the effectiveness of responses in reducing risk.
This can be achieved by risk reassessments, risk audits, variance and trend analysis, technical
performance measurement, status update meetings, and retrospective meetings.
Benefits of Risk Based Testing

The benefits of Risk Based Testing are given below:
 Improved productivity and cost reduction


 Improved Market opportunity (Time to market) and On time delivery.
 Improved service performance
 Improved quality as all of the critical functions of the application are tested.
 Gives clear information on test coverage. Using this approach, we know what has and has not
been tested.
 Test effort allocation based on risk assessment is the most efficient and effective way to
minimize the residual risk upon release.
 Test result measurement based on risk analysis enables the organization to identify the residual
level of quality risk during test execution, and to make smart release decisions.
 Optimized testing with highly defined risk evaluation methods.
 Improved customer satisfaction – Due to customer involvement and good reporting and
progress tracking.
 Early detection of the potential problem areas. Effective preventive measures can be taken to
overcome these problems
 Continuous risk monitoring and assessment throughout the project’s entire lifecycle helps
identify and resolve risks and address issues that could endanger the achievement of overall
project goals and objectives.
UNIT III
TEST DESIGN AND EXECUTION
Test Objective Identification, Test Design Factors, Requirement identification, Testable
Requirements, Modeling a Test Design Process, Modeling Test Results, Boundary Value
Testing, Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design
Preparedness Metrics, Test Case Design Effectiveness, Model Driven Test Design, Test
Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life Cycle.
3.1 TEST OBJECTIVE IDENTIFICATION
 Test Objective Identification in software testing refers to the process of defining clear and specific goals
for a testing effort. It involves determining what aspects of the software need to be tested, what specific
functionalities should be validated and what quality attributes should be evaluated.
 Test Objective Identification also defines the goals that a test case is intended to achieve.
This is an important step in software testing as it helps to ensure that the test cases are
targeted at the correct areas of the software and they are effective in finding faults.
 Test Objective Identification phase is crucial for planning and designing effective test case
and test suites. It helps testers and stakeholders align their understanding of the testing
scope and expectations. This ensures that the testing effort is focused and purposeful. By
defining clear objectives, the testers can prioritize their testing activities and allocate
resources effectively.
 We cannot test the system comprehensively if we do not understand it. Therefore, the first
step in identifying the test objective is to read, understand, and analyze the functional
specification. It is essential to have a background familiarity with the subject area, the
goals of the system, business processes, and system users for a successful analysis.
 We have to understand the explicit requirements and also critically analyze requirements
to extract the inferred requirements that are embedded in the requirements.
 An inferred requirement is one that a system is expected to support but is not explicitly
stated. Inferred requirements need to be tested just like the explicitly stated requirements.
As an example, let us consider the requirement that the system must be able to sort a list of
items into a desired order. A basic test objective would merely verify that a sorted list is
produced, but several unstated requirements are not verified by that objective alone. Many
more test objectives can be identified for the requirement:
• Verify that the system produces the sorted list of items when an already sorted list of
items is given as input.
• Verify that the system produces the sorted list of items when a list of items with
varying length is given as input.
• Verify that the number of output items is equal to the number of input items.
• Verify that the contents of the sorted output records are the same as the input record
contents.
• Verify that the system produces an empty list of items when an empty list of items is
given as input.
• Check the system behavior and the output list by giving an input list containing one
or more empty (null) records.
• Verify that the system can sort a list containing a very large number of unsorted items.
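The objectives above translate directly into executable checks. The sketch below uses Python's built-in `sorted` as a stand-in for the system under test; in practice the same assertions would run against the real sorting component.

```python
import random

# Python's built-in sorted() stands in for the system under test.
def sort_items(items):
    return sorted(items)

unsorted_list = [3, 1, 2]

# An already-sorted input still yields a sorted list:
assert sort_items([1, 2, 3]) == [1, 2, 3]

# Lists of varying length are handled:
for n in range(5):
    assert sort_items(list(range(n, 0, -1))) == list(range(1, n + 1))

# The number of output items equals the number of input items:
assert len(sort_items(unsorted_list)) == len(unsorted_list)

# The output contents are the same records as the input contents:
assert sorted(sort_items(unsorted_list)) == sorted(unsorted_list)

# An empty input yields an empty output:
assert sort_items([]) == []

# Behavior with empty (null) records: observe what the system does.
try:
    sort_items([None, 1])
except TypeError:
    pass  # sorted() rejects mixed None/int; a real system must define this case

# A very large unsorted list is sorted correctly:
big = random.sample(range(1_000_000), 100_000)
assert sort_items(big) == sorted(big)

print("all sort objectives pass")
```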
The test objectives are put together to form a test group or a subgroup after they have been
identified. A set of (sub) groups of test cases are logically combined to form a larger group. A
hierarchical structure of test groups is called a test suite.
Figure - Test suite structure.
It is necessary to identify the test groups based on test categories and refine the test groups
into sets of test objectives. Individual test cases are created for each test objective within the
subgroups. Test groups may be nested to an arbitrary depth. They may be used to help system
test planning and execution.
5 Typical Objectives of Testing
Delivering quality products is the ultimate objective of testing. The various objectives of testing are:
 Identification of Bugs and Errors
 Delivering Quality Product
 Justification with Requirement
 Increasing Confidence in the Product
 Enhanced Growth
3.1.1 Key steps involved in test objective Identification :
 Requirement Analysis : Understanding the functional and non-functional requirements of the
software or system under test. This involves analyzing project documentation, user stories,
use cases, and other relevant sources.
 Risk assessment : Identifying potential risks and their impact on the software. This includes
evaluating the criticality of various functionalities and determining which areas require more
rigorous testing.
 Defining testing goals : Establishing specific goals for the testing effort, such as validating
certain functionalities, ensuring system stability under load, assessing performance metrics, or
confirming compliance with industry standards.
 Prioritization : Determining the order of testing activities based on factors like risk,
importance, dependencies, and project timelines. This helps allocate testing resources
efficiently and ensures that critical aspects are tested first.
 Test scope Definition : Clearly defining the boundaries of the testing effort, including which
components, modules, or functionalities will be covered and which ones are out of scope.
 Test Coverage planning : Identifying the necessary test types, such as functional,
performance, security, usability, etc., to ensure comprehensive coverage of the identified
objectives.
Once the test objectives have been identified, they should be documented in the test plan. This will
help to ensure that the test cases are developed and executed in a way that meets the specific goals
of the testing.
3.1.2 Some of the benefits of the test objective identification:
 It helps to ensure that the test cases are targeted at the correct areas of the software
 It helps to ensure that the test cases are effective in finding faults
 It helps to improve the efficiency of the testing process
 It helps to ensure that the software meets the requirements
3.1.3 Some of the tips for identifying the test objectives:
 Start by reviewing the software requirements
 Consider the risks associated with the software
 Think about the test environment
 Be specific and measurable
 Document the test objectives in the test plan
3.1.4 The objectives of software testing vary depending on the level of testing being performed:
 Unit testing - The goal of unit testing is to ensure that each unit of code performs as
expected and is free of bugs.
 Integration testing - The goal of integration testing is to identify issues with how different
units of the software interact.
 System testing - The goal of system testing is to verify that the complete software system
meets all of its requirements.
 Acceptance testing- The goal of acceptance testing is to ensure that the software is ready to
be delivered to the users.
The main objectives of software testing are to ensure that the software is reliable, efficient, and
meets the user's requirements.
3.2 TEST DESIGN FACTORS
Test design factors in software testing refer to the various considerations that influence
the design of test cases and test suites. Test design activities must be performed in a
planned manner in order to meet technical criteria, such as effectiveness, and economic
criteria, such as productivity. Therefore, the following factors need to be considered
during test design:
1. Coverage metrics,
2. Effectiveness,
3. Productivity,
4. Validation,
5. Maintenance, and
6. User skill.
 Coverage Metrics : Coverage metrics concern the extent to which the Device Under
Test (DUT) is examined by a test suite (or test case) designed to meet certain criteria.
Coverage metrics lend us two advantages.
1. First, these allow us to quantify the extent to which a test suite covers certain
aspects, such as functional, structural, and interface of a system.
2. Second, these allow us to measure the progress of system testing. The criteria
may be path testing, branch testing, or a feature identified from a requirement
specification.
Each test case is given an identifier to be associated with a set of requirements. This
association is done using the idea of a coverage matrix. A coverage matrix [Aij] is
generated for the above idea of coverage metrics. The general structure of the coverage
matrix [Aij] is shown in the table, where Ti stands for the ith test case, Nj stands for the
jth requirement to be covered, and Aij stands for coverage of the tested element Nj by the
test case Ti.
The complete set of test cases, that is, a test suite, and the complete set of tested
elements of the coverage matrix are identified as Tc = {T1, T2, ..., Tq} and Nc =
{N1, N2, ..., Np}, respectively.
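The coverage matrix [Aij] can be sketched as a table of 0/1 entries, with a coverage metric computed as the fraction of requirements exercised by at least one test case. The test and requirement names below are illustrative.

```python
# Illustrative coverage matrix A[i][j]: entry 1 means test Ti covers
# requirement Nj, following the [Aij] structure described above.
tests = ["T1", "T2", "T3"]               # Tc = {T1, ..., Tq}
requirements = ["N1", "N2", "N3", "N4"]  # Nc = {N1, ..., Np}

A = [
    [1, 1, 0, 0],   # T1 covers N1, N2
    [0, 1, 1, 0],   # T2 covers N2, N3
    [0, 0, 1, 0],   # T3 covers N3
]

# A requirement is covered if at least one test case exercises it.
covered = [any(A[i][j] for i in range(len(tests)))
           for j in range(len(requirements))]
coverage = sum(covered) / len(requirements)

for name, c in zip(requirements, covered):
    print(name, "covered" if c else "NOT covered")
print(f"requirement coverage: {coverage:.0%}")
```

Here N4 is uncovered, so the matrix immediately shows where a new test case is needed: exactly the progress-tracking use of coverage metrics described above.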
 Effectiveness : A structured test case development methodology must be used as much
as possible to generate a test suite. A structured development methodology
minimizes maintenance work and improves productivity. Careful design of test cases in
the early stages of test suite development ensures their maintainability as new
requirements emerge.
The correctness of the requirements is very critical in order to develop effective test
cases to reveal defects. Therefore, emphasis must be put on identification and analysis
of the requirements from which test objectives are derived.
 Productivity : Test cases are created based on the test objectives; following a structured
methodology improves the productivity of test case creation.
 Validation : Another aspect of test case production is validation of the test cases to
ensure that they are reliable. It is natural to expect that an executable test case meets
its specification before it is used to examine another system. This includes ensuring that
test cases have adequate error handling procedures and precise pass–fail criteria.
 Maintenance : We need to develop a methodology to assist the production, execution,
and maintenance of the test suite.
 User Skill : Another factor to be aware of is the potential users of the test suite. The test
suite should be developed with these users in mind; the test suite must be easy to
deploy and execute in other environments, and the procedures for doing so need to be
properly documented. A test suite production life cycle should consider all six factors
discussed above.
3.2.1 :Other Factors that are considered for designing Test Cases :
1. Correctness
2. Negatives
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility
Correctness : Correctness is the minimum requirement of software and the essential purpose of
testing. The tester may or may not know the inside details of the software module under test,
e.g., control flow, data flow, etc.
Negatives : In this factor we check what the product is not supposed to do.
User Interface : In UI testing we check the user interfaces. For example in a web page we may
check for a button. In this we check for button size and shape. We can also check the navigation
links.
Usability : Usability testing measures the suitability of the software for its users, and is directed at
measuring the efficiency of the software with which specified users can achieve specified goals in
particular environments.
Performance : In software engineering, performance testing is testing that is performed from one
perspective to determine how fast some aspect of a system performs under a particular workload.
Security: Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security testing
are the Confidentiality, Integrity, Authentication and Authorization.
Integration : Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface between them
is tested.
Reliability : Reliability testing is to monitor a statistical measure of software maturity over time
and compare this to a desired reliability goal.
Compatibility : Compatibility testing is a part of software's non-functional tests. This testing is
conducted on the application to evaluate the application's compatibility with the computing
environment. Browser compatibility testing can be more appropriately referred to as user
experience testing. This requires that the web applications are tested on various web browsers to
ensure that the Users have the same visual experience irrespective of the browsers through which
they view the web application.
3.3 REQUIREMENT IDENTIFICATION
Requirements are a description of the needs or desires of users that a system is
supposed to implement. There are two main challenges in defining requirements.
First is to ensure that the right requirements are captured, which is essential for
meeting the expectations of the users. Requirements must be expressed in such a form that the
users can easily review and confirm their correctness.
Second is to ensure that the requirements are communicated unambiguously to the
developers and testers so that there are no surprises when the system is delivered.

Requirement Life Cycle:

Figure - State transition diagram of requirement.


Figure shows a state diagram of a simplified requirement life cycle starting from the
submit state to the closed state. This transition model provides different phases of a
requirement, where each phase is represented by a state. This model represents the life of a
requirement from its inception to completion through the following states: submit,
open, review, assign, commit, implement, verification, and finally closed. At each of these
states certain actions are taken by the owner, and the requirement is moved to the next state
after the actions are completed.
The requirements traceability is the ability to describe and follow the life of a
requirement, in both forward and backward direction, i.e., from its origins, through its
development and specification, to its subsequent deployment and use, and through periods of
ongoing refinement and iteration in any of these phases.

A traceability matrix finds two applications:


(i) identify and track the functional coverage of a test and
(ii) identify which test cases must be exercised or updated when a system evolves.
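The two applications above can be sketched with a tiny traceability matrix in Python; the requirement and test case identifiers (R1, TC-1, …) are hypothetical illustration data:

```python
# A minimal requirements traceability matrix: each requirement maps to the
# test cases that verify it. Identifiers here are made up for illustration.
trace_matrix = {
    "R1": ["TC-1", "TC-2"],
    "R2": ["TC-2"],
    "R3": [],              # no covering test case -> a coverage gap
}

def functional_coverage(matrix):
    """Application (i): fraction of requirements covered by at least one test."""
    covered = [r for r, tcs in matrix.items() if tcs]
    return len(covered) / len(matrix)

def impacted_tests(matrix, changed_requirement):
    """Application (ii): test cases to re-exercise when a requirement changes."""
    return matrix.get(changed_requirement, [])

print(functional_coverage(trace_matrix))    # 2 of 3 requirements are covered
print(impacted_tests(trace_matrix, "R1"))   # ['TC-1', 'TC-2']
```

In practice such a matrix is maintained in the test factory database; the dictionary here only illustrates the two lookups.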
Submit State: A new requirement is put in the submit state to make it available to others. The
owner of this state is the submitter. A new requirement may come from different sources:
customer, marketing manager, and program manager.
A program manager oversees a software release starting from its inception to its
completion and is responsible for delivering it to the customer. A software release is a
version of the software that provides new features. Usually, the requirements are generated by the
customers and marketing managers.
The following fields are filled out when a requirement is submitted:
requirement_id: A unique identifier associated with the requirement.
priority: A priority level of the requirement—high or normal.
title: A title for the requirement.
description: A short description of the requirement.
product: Name of the product in which the requirement is desired.
customer: Name of the customer who requested this requirement.

Open State: In this state, the marketing manager is in charge of the requirement and
coordinates the following activities.
 Reviews the requirement to find duplicate entries. The marketing manager can move the
duplicate requirement from the open state to the decline state with an explanation and a
pointer to the existing requirement. Also, he or she may ensure that there are no
ambiguities in the requirement and, if there is any ambiguity, consult with the submitter
and update the description and the note fields of the requirement.
 Reevaluates the priority of the requirement assigned by the submitter and either accepts it
or modifies it. Determines the severity of the requirement. There are two levels of
severity defined for each requirement: normal and critical.
 The marketing manager may decline a requirement in the open state and terminate the
development process, thereby moving the requirement to the decline state with a proper
explanation.
The following fields may be updated by the marketing manager, who is the owner of the
requirement in the open state:
priority: Reevaluate the priority—high or normal—of this requirement.
severity: Assign a severity level—normal or critical—to the requirement.
decline_note: Give an explanation of the requirement if declined.
software_release: Suggest a preferred software release for the requirement.
Review State: The director of software engineering is the owner of the requirement in the
review state. The software engineering director reviews the requirement to understand it and
estimate the time required to implement this. The director thus prepares a preliminary version
of the functional specification for this requirement. This scheme provides a framework to map
the requirement to the functional specification which is to be implemented.
The director of software engineering can move the requirement from the review state
to the assign state by changing the ownership to the marketing manager. Moreover, the
director may decline this requirement if it is not possible to implement.
The following fields may be updated by the director:

eng_comment: Comments generated during the review are noted in this field.
time_to_implement: This field holds the estimated time in person-weeks to
implement the requirement.
attachment: An analysis document, if there is any, including figures and descriptions
that are likely to be useful in the future development of functional specifications.
eng_assigned: Name of the engineer assigned by the director to review the
requirement.

Assign State: The marketing manager is the owner of the requirement in the assign state. A
marketing manager assigns the requirement to a particular software release and moves the
requirement to the commit state by changing the ownership to the program manager, who
owns that particular software release. The marketing manager may decline the requirement
and terminate the development process, thereby moving the requirement to the decline state.
The following fields are updated by the marketing manager:
decline_note and software_release.
The former holds an explanation for declining, if it is moved to the decline state. On
the other hand, if the requirement is moved to the commit state, the marketing manager
updates the latter field to specify the software release in which the requirement will be
available.

Commit State: The program manager is the owner of the requirement in the commit state.
The requirement stays in this state until it is committed to a software release. The program
manager reviews all the requirements that are suggested to be in a particular release which is
owned by him.
The requirement may be moved to the implement state by the program manager after
it is committed to a particular software release. The test engineers must complete the review
of the requirement and the relevant functional specification from a testability point of view.
Next, the test engineers can start designing and writing test cases for this requirement.
The only field to be updated by the program manager, who is the owner of the
requirement in the commit state, is committed_release. The field holds the release number
for this requirement.

Implement State: The director of software engineering is the owner of the requirement in the
implement state. This state implies that the software engineering group is currently coding
and unit testing the requirement.
The following fields may be updated by the director, since he or she is the owner of
a requirement in the implement state:
decline_note: An explanation of the reasons the requirement is moved to decline state.

Verification State: The test manager is the owner of the requirement in the verification state.
The test manager verifies the requirement and identifies one or more methods for assigning a
test verdict: (i) testing, (ii) inspection, (iii) analysis, and (iv) demonstration.
If testing is a method for verifying a requirement, then the test case identifiers and
their results are provided. This information is extracted from the test factory. Inspection
means review of the code. Analysis means mathematical and/or statistical analysis.
Demonstration means observing the system in a live operation. A verdict is assigned to the
requirement by providing the degree of compliance information: full compliance, partial
compliance, or noncompliance.
The test manager may move the requirement to the closed state after it has been
verified and the value of the verification_status field set to “passed.”
The following are some of the fields that are updated by the test manager since he or
she is the owner of the requirement at the verification state:
decline_note: The reasons to decline this requirement.
verification_method: Can take one of the four values from the set {Testing, Analysis,
Demonstration, Inspection}.
verification_status: Can take one of the three values from the set {Passed, Failed,
Incomplete}, indicating the final verification status of the requirement.

Closed State: The requirement is moved to the closed state from the verification state by the
test manager after it is verified.

Decline State: In this state, the marketing department is the owner of the requirement. A
requirement comes to this state because of some of the following reasons:
• The marketing department rejected the requirement.
• It is technically not possible to implement this requirement and, possibly, there is
an associated EC ( Engineering Change) number.
• The test manager declines the implementation with an EC number.
The marketing group may move the requirement to the submit state after reviewing it
with the customer.
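The life cycle above can be encoded as a transition table so that illegal state moves are rejected. A minimal Python sketch follows; the states mirror the figure, while the dictionary encoding and the `move` helper are our own illustration:

```python
# Requirement life cycle as a transition table. Each state lists the states
# it may legally move to; decline may return to submit after customer review.
TRANSITIONS = {
    "submit":       {"open"},
    "open":         {"review", "decline"},
    "review":       {"assign", "decline"},
    "assign":       {"commit", "decline"},
    "commit":       {"implement"},
    "implement":    {"verification", "decline"},
    "verification": {"closed", "decline"},
    "decline":      {"submit"},
    "closed":       set(),      # terminal state
}

def move(requirement, new_state):
    current = requirement["state"]
    if new_state not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new_state}")
    requirement["state"] = new_state
    return requirement

# Walk one requirement from inception to completion.
req = {"requirement_id": "R-100", "state": "submit"}
for s in ("open", "review", "assign", "commit",
          "implement", "verification", "closed"):
    move(req, s)
print(req["state"])   # closed
```

Attempting a move not in the table (for example, reopening a closed requirement) raises an error, which is exactly the discipline the state model is meant to enforce.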

3.3.1 : Key aspects involved in Requirement Identification are :


1. Requirement Gathering – Involves collecting information about the software from various sources
2. Requirement Analysis- Gathered information should be analyzed to ensure clarity
3. Requirement Documentation – The identified requirements should be documented to serve as a
reference to the testing team
4. Requirement Prioritization – Determine the relative importance of each requirement based on
factors such as business values, risk, customer expectation, etc
5. Requirement Traceability – Should be able to track and link the test cases back to the specific
requirements
6. Requirement Validation – Involves confirming that the identified requirements reflect the needs of
the stakeholders
7. Requirement Change Management – Requirements can change through the software development
life cycle due to evolving business needs, customer feedback or other factors. So there should be
mechanisms to assess the impact of requirement changes and update the testing approach when
needed.

3.3.2. : Some of the Techniques used for Requirement Identification :


 Talking to the stakeholders
 Reviewing the Software Requirement Document
 Using Use cases
 Executing Exploratory testing

3.3.3 : Benefits of Requirement Identification:


 Helps to ensure that the test cases are targeted at the correct areas of the software
 Helps to ensure that the test cases are effective in finding defects
 Helps to improve the efficiency of the testing process
 Helps to ensure that the software meets the requirements
3.4 TESTABLE REQUIREMENTS
3.4.1 : System level testable Requirements :
System-level tests are designed based on the requirements to be verified. Testable
requirements are the requirements that can be tested to determine whether they have been
met. A test engineer analyzes the requirement, the relevant functional specifications, and the
standards to determine the testability of the requirement. The above task is performed in the
commit state. Testability analysis means assessing the static behavioral characteristics of the
requirement to reveal test objectives.
One way to determine whether a requirement description is testable is as follows:
• Take the requirement description X: the system must perform X.
• Then encapsulate the requirement description to create a test objective: Verify that the
system performs X correctly.
• Review this test objective by asking the question: Is it workable? In other words, find out if
it is possible to execute it assuming that the system and the test environment are available.
• If the answer to the above question is yes, then the requirement description is clear and
detailed for testing purpose. Otherwise, more work needs to be done to revise or supplement
the requirement description.
As an example, let us consider the following requirement: The software image must be
easy to upgrade/downgrade as the network grows. This requirement is too broad and vague to
determine the objective of a test case. In other words, it is a poorly crafted requirement. One
can restate the previous requirement as: The software image must be easy to
upgrade/downgrade for 100 network elements. Then one can easily create a test objective:
Verify that the software image can be upgraded/downgraded for 100 network elements. It
takes time, clear thinking, and courage to change things.

In addition to the testability of the requirements, the following items must be analyzed by
the system test engineers during the review:
• Safety: Have the safety-critical requirements been identified? The safety-critical
requirements specify what the system shall not do, including means for eliminating and
controlling hazards and for limiting any damage in the case that a mishap occurs.
• Security: Have the security requirements, such as confidentiality, integrity, and
availability, been identified?
• Completeness: Have all the essential items been completed? Have all possible
situations been addressed by the requirements? Have all the irrelevant items been omitted?
• Correctness: Are the requirements understandable and have they been stated
without error? Are there any incorrect items?
• Consistency: Are there any conflicting requirements?
• Clarity: Are the requirement materials and the statements in the document clear,
useful, and relevant? Are the diagrams, graphs, and illustrations clear? Have those been
expressed using proper notation to be effective? Do those appear in proper places?
• Relevance: Are the requirements pertinent to the subject?
• Feasibility: Are the requirements implementable?
• Verifiable: Can tests be written to demonstrate conclusively and objectively that the
requirements have been met?
• Traceable: Can each requirement be traced to the functions and data related to it so
that changes in a requirement can lead to easy reevaluation?
3.4.2 Functional Specification
A functional specification provides:
i. A precise description of the major functions the system must perform to fulfill the
requirements, a description of the implementation of the functions, and an explanation of
the technological risks involved
ii. External interfaces with other software modules
iii. Data flow, such as flowcharts and transaction sequence diagrams, describing
the sequence of activities
iv. Fault handling, memory utilization and performance estimates
The functional specification must be reviewed from the point of view of testability.
Common problems with functional specifications include lack of clarity, ambiguity, and
inconsistency.
The following are the Objectives that are kept in mind while reviewing a functional
specification:
• Correctness: Whenever possible, the specification parts should be compared directly
to an external reference for correctness.
• Extensible: The specification is designed to easily accommodate future extensions
that can be clearly envisioned at the time of review.
• Comprehensible: The specification must be easily comprehensible. By the end of
the review process, if the reviewers do not understand how the system works, the
specification or its documentation is likely to be flawed. Such specifications and
documentations need to be reworked to make them more comprehensible.
 Necessity: Each item in the document should be necessary.
 Sufficiency: The specification should be examined for missing or incomplete items.
All functions must be described as well as important properties of input and output
data such as volume and magnitude.
 Implementable: It is desirable to have a functional specification that is
implementable within the given resource constraints that are available in the target
environment such as hardware, processing power, memory, and network bandwidth.
 Efficient: The functional specification must optimize those parts of the solution that
contribute most to the performance of the system.
 Simplicity: In general, it is easier to achieve and verify requirements stated in the
form of simple functional specifications.
 Reusable Components: The specification should reuse existing components as
much as possible and be modular enough that the common components can be
extracted to be reused.
 Limitations: The limitations should be realistic and consistent with the requirements.

Benefits of testable requirements:


 Effective testing
 Early defect detection
 Improved Communication

Steps to ensure Requirements are testable :


 Clear and concise writing
 Use of examples
 Review by stakeholders
3.5 MODELING A TEST DESIGN PROCESS
Test design is the process of creating a strategic plan for test cases, scenarios, and conditions
to ensure it meets the performance and reliability of software or systems. It aims to ensure
test cases effectively uncover software defects and behave as expected under various
conditions. Test objectives are identified from a requirement specification, and one test case

Figure : State transition diagram of a test case.


is created for each test objective. Each test case is designed as a combination of modular
components called test steps. Test cases are clearly specified so that testers can quickly
understand, borrow, and reuse the test cases.
The Figure above illustrates the life-cycle model of a test case in the form of a state
transition diagram. The state transition model shows the different phases, or states, in the life
cycle of a test case from its inception to its completion through the following states: create,
draft, review, deleted, released, update, and deprecated. Certain actions are taken by the
“owner” of the state, and the test case moves to a next state after the actions are completed.
One can easily implement a database of test cases using the test case schema shown
in Table below. We refer to such a database of test cases as a test factory.

Table -Test Case Schema Summary


Create State A test case is put in this initial state by its creator, called the owner, who initiates the
design of the test case. The creator initializes the following mandatory fields associated with the
test case such as requirement_ids, tc_id, tc_title, originator_group, creator, and test_category. The
test case is expected to verify the requirements referred to in the requirement_ids fields. The
originator_ group is the group who found a need for the test. The creator may assign the test case to
a specific test engineer, including himself, by filling out the eng_assigned field, and move the test
case from the create to the draft state.
Draft State The owner of this state is the test group, that is, the system test team. In this state, the
assigned test engineer enters the following information: tc_author, objective, setup, test_steps,
cleanup, candidate_for_automation, automation_priority. After completion of all the mandatory
fields, the test engineer may reassign the test case to the creator to go through the test case. The test
case stays in this state until it is walked through by the creator. After that, the creator may move the
state from the draft state to the review state by entering all the approvers’ names in the
approver_names field.
Review and Deleted States The owner of the review state is the creator of the test case. The
owner invites test engineers and developers to review and validate the test case. They ensure
that the test case is executable, and the pass–fail criteria are clearly specified.
Action items are created for the test case if any field needs a modification. Action
items from a review meeting are entered in the review_actions field, and the action items are
executed by the owner to effect changes to the test case.
The test case moves to the released state after all the reviewers approve the changes. If
the reviewers decide that this is not a valid test case or it is not executable, then the test case is
moved to the deleted state. For a test case to be deleted, a review action item must explicitly call
for its deletion.
Released and Update States A test case in the released state is ready for execution, and it
becomes a part of a test suite. On the other hand, a test case in the update state implies that it is in
the process of being modified to enhance its reusability, being fine-tuned with respect to its pass–fail
criteria, and/or having the detailed test procedure fixed. For example, a reusable test case should be
parameterized rather than hard coded with data values.
Moreover, a test case should be updated to adapt it to changes in system functionality
or the environment.
One can improve the repeatability of the test case so that others can quickly
understand, borrow, and reuse it by moving a test case in the released–update loop a small
number of times.
Also, this provides the foundation and justification for the test case to be automated. A
test case should be platform independent. If an update involves a small change, the test
engineer may move the test case back to the released state after the fix. Otherwise, the test
case is subject to a further review, which is achieved by moving it to the review state. A test
case may be revised once every time it is executed.
Deprecated State An obsolete test case may be moved to a deprecated state. Ideally, if it has
not been executed for a year, then the test case should be reviewed for its continued
existence.
A test case may become obsolete over time because of the following reasons.
 First, the functionality of the system being tested has much changed, and due to a lack
of test case maintenance, a test case becomes obsolete.
 Second, as an old test case is updated, some of the requirements of the original test case
may no longer be fulfilled.
 Third, reusability of test cases tends to degrade over time as the situation changes. This
is especially true of test cases which are not designed with adequate attention to
possible reuse.
 Finally, test cases may be carried forward carelessly long after their original
justifications have disappeared. Nobody may know the original justification for a
particular test case, so it continues to be used.
Benefits of Modeling a test design process
 Improved Communication
 Improved Efficiency
 Improved Effectiveness

Challenges of modeling a test design process


 Complexity: The process can be so complex that it is difficult to create a model
 Time: It can take time to create a model
 Cost: It can be expensive to create a model

3.6 MODELING TEST RESULTS


Test engineers execute test cases from a selected test suite using different test methods. The
results of executing those test cases are recorded in the test factory database for gathering and
analyzing test metrics. A test suite schema can be used by a test manager to design a test suite
after a test factory is created. A test suite schema, as shown in Table below, is used to group
test cases for testing a particular release.

Table - Test Suite Schema Summary

The schema requires a test suite ID, a title, an objective, and a list of test cases to be managed
by the test suite. One also identifies the individual test cases to be executed (test cycles 1, 2, 3
and/or regression) and the requirements that the test cases satisfy.
The idea here is to gather a selected number of released test cases and repackage them
to form a test suite for a new project.
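The repackaging idea can be sketched as a filter over the test factory; the field names (tc_id, state, requirement_ids) mirror the schemas discussed, while the records themselves are hypothetical:

```python
# A toy test factory: only test cases in the released state are ready for
# execution and may be gathered into a suite for a new release.
test_factory = [
    {"tc_id": "TC-1", "state": "released",   "requirement_ids": ["R1"]},
    {"tc_id": "TC-2", "state": "released",   "requirement_ids": ["R1", "R2"]},
    {"tc_id": "TC-3", "state": "draft",      "requirement_ids": ["R3"]},
    {"tc_id": "TC-4", "state": "deprecated", "requirement_ids": ["R2"]},
]

def build_suite(factory, suite_id, title, objective):
    """Repackage released test cases into a suite (suite schema fields)."""
    selected = [tc["tc_id"] for tc in factory if tc["state"] == "released"]
    return {"suite_id": suite_id, "title": title,
            "objective": objective, "test_cases": selected}

suite = build_suite(test_factory, "TS-7", "Release 2.1 regression",
                    "Verify requirements R1 and R2 for release 2.1")
print(suite["test_cases"])   # ['TC-1', 'TC-2']
```

Draft and deprecated test cases are excluded automatically, matching the rule that only a released test case becomes part of a test suite.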
In a large, complex system with many defects, there are several possibilities of the
result of a test execution, not merely passed or failed. Therefore, we model the results of test
execution by using a state transition diagram as shown in Figure below, and the
corresponding schema is given in Table following the figure.
Figure : State transition diagram of test case result.
The above figure illustrates a state diagram of a test case result starting from the untested
state to four different states: passed, failed, blocked, and invalid.

Table : Test Result Schema Summary

 The execution status of a test case is put in its initial state of untested after designing or
selecting a test case.
 If the test case is not valid for the current software release, the test case result is moved to
the invalid state.
 In the untested state, the test suite identifier is noted in a field called test_suite_id. The
state of the test result, after execution of a test case is started, may change to one of the
following states:
passed, failed, invalid, or blocked.
 A test engineer may move the test case result to the passed state from the untested state
if the test case execution is complete and satisfies the pass criteria.
 If the test execution is complete and satisfied the fail criteria, a test engineer moves the
test result to the failed state from the untested state and associates the defect with the test
case by initializing the defect_ids field.
 The test case must be reexecuted when a new build containing a fix for the defect is
received. If the reexecution is complete and satisfies the pass criteria, the test result is
moved to the passed state.
 The test case result is moved to a blocked state if it is not possible to completely execute
it. If known, the defect number that blocks the execution of the test case is recorded in the
defect_ids field. The test case may be reexecuted when a new build addressing a blocked
test case is received.
 If the execution is complete and satisfies the pass criteria, the test result is moved to the
passed state. On the other hand, if it satisfies the fail criteria, the test result is moved to
the failed state. If the execution is unsuccessful due to a new blocking defect, the test
result remains in the blocked state and the new defect that blocked the test case is listed
in the defect_ids field.
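These transitions can be sketched as a small function that enforces the allowed moves and records defects; the field names (tc_id, test_suite_id, defect_ids) follow the schema above, while the data values are hypothetical:

```python
# Test result life cycle: a result starts untested and moves to passed,
# failed, blocked, or invalid; failed and blocked results record defects.
def record_result(result, verdict, defect_id=None):
    allowed = {
        "untested": {"passed", "failed", "blocked", "invalid"},
        "failed":   {"passed", "failed", "blocked"},   # reexecuted on a new build
        "blocked":  {"passed", "failed", "blocked"},   # reexecuted on a new build
    }
    if verdict not in allowed.get(result["state"], set()):
        raise ValueError(f"illegal move {result['state']} -> {verdict}")
    result["state"] = verdict
    if verdict in ("failed", "blocked") and defect_id:
        result.setdefault("defect_ids", []).append(defect_id)
    return result

r = {"tc_id": "TC-2", "test_suite_id": "TS-7", "state": "untested"}
record_result(r, "failed", defect_id="D-42")   # fails against the first build
record_result(r, "passed")                     # fix verified on a new build
print(r["state"], r["defect_ids"])             # passed ['D-42']
```

Note that passed and invalid are modeled as terminal here, while failed and blocked allow reexecution when a new build arrives, as described above.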
 The benefits of Modeling test results are : Improved Analysis, Improved decision
making and Improved Communication
 Challenges in Modelling test results :
• Complexity : Can be difficult to create a model if the results are complex
• Time : It can take time to create a model of the test results
• Cost : Can be expensive to create a model
 Common metric used to Model the test results:
• Number of defects found: Gives the count of defects found during testing
• Severity of defects: Defects can be classified as critical, major, or minor
• Time to find the defects: Can be used to identify the areas that are difficult to test
• Coverage: Gives the percentage of the software that was tested
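A quick sketch of computing these metrics from recorded results; the defect records and the executed/planned counts are made-up illustration data:

```python
from collections import Counter

# Hypothetical defect records gathered from the test factory database.
defects = [
    {"id": "D-1", "severity": "critical"},
    {"id": "D-2", "severity": "minor"},
    {"id": "D-3", "severity": "major"},
    {"id": "D-4", "severity": "minor"},
]

# Number of defects found, and their breakdown by severity.
by_severity = Counter(d["severity"] for d in defects)

# Coverage as the percentage of planned test cases actually executed
# (one simple interpretation of the coverage metric; numbers are made up).
executed, planned = 45, 50
coverage = 100.0 * executed / planned

print(len(defects))          # 4 defects found
print(by_severity["minor"])  # 2
print(coverage)              # 90.0
```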

3.7 BOUNDARY VALUE TESTING


Boundary Value Testing is a popular software testing technique in which test data are
chosen at boundary values, that is, between two opposite ends of an input range, where the
ends may be start and end, lower and upper, or minimum and maximum. This testing process
selects boundary values derived from the inputs at the different ends of the testing range.
This black-box testing strategy is applied after equivalence class partitioning: the partition
into classes takes place first, followed by testing at the boundaries of those partitions.
Example:
Let us assume a test case that takes the value of age from 21 to 65.

BOUNDARY VALUE TEST CASES

Invalid Test Case    Valid Test Cases                 Invalid Test Case
(Min − 1)            (Min, Min + 1, Max − 1, Max)     (Max + 1)
20                   21, 22, 64, 65                   66


Test Case Scenarios
1. Input: Enter the value of age as 20 (i.e., 21 − 1). Output: Invalid
2. Input: Enter the value of age as 21. Output: Valid
3. Input: Enter the value of age as 22 (i.e., 21 + 1). Output: Valid
4. Input: Enter the value of age as 65. Output: Valid
5. Input: Enter the value of age as 64 (i.e., 65 − 1). Output: Valid
6. Input: Enter the value of age as 66 (i.e., 65 + 1). Output: Invalid

Importance:
 This testing is of great use when a huge number of test cases is available, since
checking each value individually is impractical.
 The analysis of test data is done at the boundaries of partitioned data after equivalence
class partitioning happens and analysis is done.
 This process is a black-box testing technique that focuses on valid and invalid test
case scenarios and helps in finding the boundary values at the extreme ends without
discarding any test data that is valuable for testing purposes.
 It is also useful where lots of calculations are required for variable inputs and in a
variety of applications.
 The testing mechanism also helps in detecting errors or faults at the boundaries of the
partition that is a plus point as most errors occur at the boundaries before the
applications are submitted to the clients.

The following are the key steps involved in performing Boundary value testing:
 Identify the boundaries
 Identify the valid and invalid boundaries
 Select test cases
 Execute the test cases
 Analyze the results

 Guidelines for BVA:


o If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.
o If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and below minimum
and maximum are also tested.
o If internal program data structures have prescribed boundaries (e.g., an array has a
defined limit of 100 entries), be certain to design a test case to exercise the data structure
at its boundary
o If the input is a Boolean value (T/F) , test cases are designed to test both values.
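The first guideline above can be sketched as a small helper that derives the six boundary values for a range bounded by a and b, applied here to the age example (21 to 65):

```python
# For a range [a, b], test a and b plus the values just below and just above.
def boundary_values(a, b):
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# The age requirement from the worked example: valid ages are 21 to 65.
def is_valid_age(age):
    return 21 <= age <= 65

for v in boundary_values(21, 65):
    print(v, "valid" if is_valid_age(v) else "invalid")
# Prints: 20 invalid, then 21, 22, 64, 65 valid, then 66 invalid,
# matching the test case scenarios above.
```

Using a set inside `boundary_values` also handles narrow ranges gracefully (for example, a == b would not produce duplicate test values).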

Advantages of Boundary Value Analysis:


 Effective defect identification: BVA focuses on the edges or boundaries of input domains,
making it effective at identifying issues related to these critical points.
 Increased test Coverage : It provides comprehensive test coverage for values near the
boundaries, which are often more likely to cause errors.
 Simple : BVA is simple to understand and implement, making it suitable for both
experienced and inexperienced testers.
 Early defect Identification : It can detect defects in the early stages of development,
lowering the cost of later problem resolution.
Disadvantages of boundary value analysis:
 Limited Scope: BVA’s scope is limited to boundary-related defects; it may miss issues
that occur within the interior of the input domain.
 Combinatorial Explosion: BVA can result in a large number of test cases for systems with
multiple inputs, increasing the testing effort.
 Time Consuming: It can be time consuming, especially when dealing with complex input
ranges or multiple boundary conditions.
 Limited coverage of corner cases: While effective in many cases, BVA may not cover all
possible scenarios or corner cases.

3.8 EQUIVALENCE CLASS TESTING


Equivalence Partitioning or Equivalence Class Partitioning is a black-box testing
technique which can be applied to all levels of software testing like unit, integration, system,
etc. In this technique, input data units are divided into equivalent partitions that can be used to
derive test cases, which reduces the time required for testing because of the small number of
test cases.
 It divides the input data of software into different equivalence data classes.
 We can apply this technique where there is a range in the input field.
Example:
Let us consider the behavior of the Order Pizza text box below.

Pizza values 1 to 10 is considered valid. A success message is shown.


While value 11 to 99 are considered invalid for order and an error message will appear,
“Only 10 Pizza can be ordered”.
Here is the test condition:
1. Any number greater than 10 entered in the Order Pizza field (say 11) is
considered invalid.
2. Any number less than 1, that is 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say 100, is invalid.
We cannot test all the possible values because, if we did, the number of test cases would
be more than 100. To address this problem, we use the equivalence partitioning hypothesis,
where we divide the possible order values into groups or sets as shown below, within which
the system behavior can be considered the same.

The divided sets are called Equivalence Partitions or Equivalence Classes. Then we
pick only one value from each partition for testing. The hypothesis behind this technique
is that if one condition/value in a partition passes, all others will also pass. Likewise, if
one condition in a partition fails, all other conditions in that partition will fail.
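The pizza-order behaviour described above can be sketched in code. This is a minimal illustration only: the function name validate_pizza_order is an assumption, and the message strings simply mirror the example text.

```python
def validate_pizza_order(quantity):
    """Mimic the Order Pizza text box: 1..10 succeeds, anything else errors."""
    if 1 <= quantity <= 10:
        return "Success"                        # valid partition: 1..10
    return "Only 10 Pizza can be ordered"       # invalid partitions: <= 0 and >= 11

# One representative value is picked from each equivalence class:
assert validate_pizza_order(5) == "Success"                        # valid class 1..10
assert validate_pizza_order(11) == "Only 10 Pizza can be ordered"  # invalid class >= 11
assert validate_pizza_order(0) == "Only 10 Pizza can be ordered"   # invalid class <= 0
```

Three test cases cover all three partitions; by the hypothesis above, any other value in a partition is expected to behave like its representative.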
Guidelines for identifying the Equivalence Classes:
 Valid Equivalence Classes: Represent the inputs that are valid and expected to produce
the same behaviour.
 Invalid Equivalence Classes: Represent the inputs that are invalid and outside the
expected range.
 Special Equivalence Classes: Represent special or extreme conditions.

Advantages:
 It is process-oriented.
 It helps to decrease the overall test execution time.
 It reduces the set of test data.
Disadvantages:
 All necessary inputs may not be covered.
 This technique does not consider boundary value conditions.
 The test engineer might assume that the output for every value in a data set is right,
which leads to problems during the testing process.

Difference between Equivalence Partitioning and Boundary Value Analysis

Equivalence Partitioning | Boundary Value Analysis
Divides the input domain into groups or partitions, where each group is expected to behave in a similar way. | Focuses on testing values at the edges or boundaries of the input domain.
Suitable for inputs with a wide range of valid values, where values within a partition are expected to have similar behavior. | Effective when values near the boundaries of the input domain are more likely to cause issues.
Typically, one test case is selected from each equivalence class or partition. | Multiple test cases are created to test values at the boundaries, including just below, on, and just above the boundaries.
Provides broad coverage across input domains, ensuring that different types of inputs are tested. | Focuses on testing edge cases and situations where errors often occur.

3.9 PATH TESTING


Path Testing is a method used to design test cases. It is a structural testing method
that uses the source code of a program in order to find every possible executable
path. It helps to determine all faults lying within a piece of code. This method is designed to
execute all, or selected, paths through a computer program.
Any software program includes multiple entry and exit points. Testing each of these
points is challenging as well as time-consuming. In order to reduce redundant tests and
to achieve maximum test coverage, path testing is used.
Path Testing Process:
In the path testing method, the control flow graph of a program is designed to find a
set of linearly independent paths of execution. In this method, Cyclomatic Complexity is used
to determine the number of linearly independent paths and then test cases are generated for
each path.

1. Control Flow Graph:


Draw the corresponding control flow graph of the program in which all the executable
paths are to be discovered.
2. Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic complexity of the
program using the following formula.
McCabe's Cyclomatic Complexity = E - N + 2P
Where, E = Number of edges in the control flow graph
N = Number of vertices in the control flow graph
P = Number of connected components
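The formula is easy to sanity-check in code. A minimal sketch, where the edge, node, and component counts are taken from the worked example that follows in this section:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# The worked example in this section has 8 edges, 7 nodes, 1 connected component:
assert cyclomatic_complexity(8, 7, 1) == 3
```

The returned value is the number of linearly independent paths, and hence the number of basis test cases to design.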
3. Make Set:
Make a set of all the paths according to the control flow graph and calculate cyclomatic
complexity. The cardinality of the set is equal to the calculated cyclomatic complexity.
4. Create Test Cases:
Create a test case for each path of the set obtained in the above step.
Here we take a simple example to get a better idea of basis path testing.

Cyclomatic Complexity = E - N + 2P
= 8 - 7 + 2(1) = 3
In the above example, we can see there are a few conditional statements that are executed
depending on which condition is satisfied. Here there are 3 paths, or conditions, that need to be
tested to get the output:
Path 1: 1, 2, 3, 5, 6, 7
Path 2: 1, 2, 4, 5, 6, 7
Path 3: 1, 6, 7
Generation of Test Cases:
After the identification of independent paths, we may generate test cases that traverse
all independent paths at the time of executing the program. This process will ensure that each
transition of the control flow diagram is traversed at least once.
Test Case A B C PATH
1 50 55 52 A=B, PRINT 55
2 50 55 60 A=C, PRINT 60
3 40 ANY ANY PRINT 40
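The notes do not reproduce the program for this example. The sketch below is one plausible reconstruction that is consistent with the three paths and the test-case table above; the threshold of 50 and the function name pick_value are assumptions, not taken from the original.

```python
def pick_value(a, b, c):
    # Node 1: guard condition; when false, control jumps straight to the exit (path 1,6,7).
    if a >= 50:
        # Node 2: compare b and c.
        if b > c:
            result = b   # Node 3 -> Path 1: 1,2,3,5,6,7
        else:
            result = c   # Node 4 -> Path 2: 1,2,4,5,6,7
    else:
        result = a       # Path 3: 1,6,7
    return result        # Nodes 6,7: print result and exit

# One test case per independent path, matching the table above:
assert pick_value(50, 55, 52) == 55   # Test case 1: a = B, prints 55
assert pick_value(50, 55, 60) == 60   # Test case 2: a = C, prints 60
assert pick_value(40, 1, 2) == 40     # Test case 3: prints 40
```

Three test cases, one per basis path, are enough to exercise every edge of the control flow graph at least once.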

Path Testing Techniques


 Control Flow Graph: The program is converted into a control flow graph by representing
the code into nodes and edges.
 Decision to Decision path: The control flow graph can be broken into various Decision
to Decision paths and then collapsed into individual nodes.
 Independent paths: An Independent path is a path through a Decision to Decision path
graph that cannot be reproduced from other paths by other methods.

Advantages of path testing:


 The path testing method reduces the redundant tests.
 Path testing focuses on the logic of the programs.
 Path testing is used in test case design.
Disadvantages of Path Testing
1. A tester needs a good understanding of programming to execute the tests.
2. The test case increases when the code complexity is increased.
3. It will be difficult to create a test path if the application has a high complexity of code.

3.10 DATA FLOW TESTING


Data Flow Testing is a type of structural testing. It is a method that is used to find the test
paths of a program according to the locations of definitions and uses of variables in the
program. Furthermore, it is concerned with:
 Statements where variables receive values,
 Statements where these values are used or referenced.
Define/Reference Anomalies:
Define or reference anomalies in the flow of data are detected by examining the
associations between values and variables. These anomalies are:
 A variable is defined but not used or referenced,
 A variable is used but never defined,
 A variable is defined twice before it is used.
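The three anomalies can be illustrated with a short, deliberately faulty fragment; the function and variable names are purely illustrative.

```python
def anomalies():
    a = 1        # anomaly 1: 'a' is defined but never used or referenced
    c = b + 1    # anomaly 2: 'b' is used but never defined (NameError at runtime)
    d = 2        # anomaly 3: 'd' is defined here...
    d = 3        # ...and defined again before it is ever used
    return c + d

# Calling anomalies() raises NameError because 'b' is used but never defined.
```

Static data flow analysis flags all three patterns without running the code; only the second one actually crashes at runtime.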
Definitions:
To illustrate the approach of data flow testing, assume that each statement in the program
is assigned a unique statement number. For a statement number ‘n’:
DEF(v,n) = statement ‘n’ contains a definition of variable ‘v’
USE(v,n) = statement ‘n’ contains a use of variable ‘v’
Definition Use Path (denoted du-path) for a variable ‘v’ is a path between two nodes ‘m’ and
‘n’, where ‘m’ is the initial node in the path and the defining node for variable
‘v’ (denoted DEF(v, m)), and ‘n’ is the final node in the path and a usage
node for variable ‘v’ (denoted USE(v, n)).
Definition Clear Path (denoted dc-path) for a variable ‘v’ is a definition use path with initial
and final nodes DEF(v, m) and USE(v, n) such that no other node in the
path is a defining node of variable ‘v’.
The du-paths and dc-paths describe the flow of data across program statements from
statements where values are defined to statements where the values are used.
 A du-path for a variable ‘v’ may have many redefinitions of variable ‘v’ between the
initial node (DEF(v, m)) and final node (USE(v, n)).
 A dc-path for a variable ‘v’ will not have any definition of variable ‘v’ between the
initial node (DEF(v, m)) and final node (USE(v, n)).
 The du-paths that are not definition clear paths are potentially troublesome paths. They should
be identified and tested with top priority.
Identification of du and dc Paths
The various steps for the identification of du and dc paths are given as:
(i) Draw the program graph of the program.
(ii) Find all variables of the program and prepare a table for define / use status of
all variables using the following format:

(iii) Generate all du-paths from define/use variable table of step (ii) using the following format:

(iv) Identify those du-paths which are not dc-paths. Four testing strategies are used for this
Testing Strategies Using du-Paths:
We want to generate test cases which trace every definition to each of its uses and
every use back to each of its definitions. Some of the testing strategies are given as:
a. Test all du-paths:
All du-paths generated for all variables are tested. This is the strongest data flow
testing strategy, covering all possible du-paths.
b. Test all uses:
Find at least one path from every definition of every variable to every use of that
variable which can be reached by that definition.
For every use of a variable, there is a path from the definition of that variable to the
use of that variable.
c. Test all definitions:
Find paths from every definition of every variable to at least one use of that variable;
we may choose any strategy for testing.
As we go from ‘test all du-paths’ to ‘test all definitions’, the number of paths is
reduced. However, it is best to test all du-paths and give priority to those du-paths which are
not definition clear paths. The first strategy requires that each definition reaches all possible
uses through all possible du-paths, the second requires that each definition reaches all
possible uses, and the third requires that each definition reaches at least one use.
Generation of Test Cases:
After finding paths, test cases are generated by giving values to the input parameter.
We get different test suites for each variable.
Example: Let us consider the following program:
1. read x, y;
2. if (x > y)
3. a = x + 1
else
4. a = y - 1
5. print a;
Control flow graph ( Program graph) of above example:

Define/use of variables of above example:

Variable Defined at node Used at node


x 1 2, 3
y 1 2, 4
a 3, 4 5

The du-paths with beginning node and end node are given as:
Variable du-paths (begin, end)
x 1,2,3 and 1,2
y 1,2,4 and 1,2
a 3,5 and 4,5

The first strategy (best) is to test all du-paths, the second is to test all uses and the third is to
test all definitions. The du-paths as per these three strategies are given as:

Test all du-paths:

Testing Strategy | Paths | Definition Clear?
All du-paths and all-uses paths (both are the same in this example; 6 paths) | 1,2,3; 1,2; 1,2,4; 1,2; 3,5; 4,5 | Yes (all six paths)
All definitions (2 paths) | 1-2-3-5; 1-2-4-5 | Yes (both paths)

Test cases for data flow paths are given below:

S.No. | x | y | a | Expected Output | Remarks
1 | 20 | 10 | 21 | 21 | 1,2,3
2 | 20 | 10 | 21 | 21 | 1,2
3 | 15 | 25 | 24 | 24 | 1,2,4
4 | 15 | 25 | 24 | 24 | 1,2
5 | 20 | 10 | 21 | 21 | 3,5
6 | 15 | 25 | 24 | 24 | 4,5
Test cases for all definitions:

S.No. | x | y | a | Expected Output | Remarks
1 | 20 | 10 | 21 | 21 | 1-2-3-5
2 | 15 | 25 | 24 | 24 | 1-2-4-5
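The five-statement example program can be transcribed directly into runnable code, with one assertion per all-definitions path (the function name compute is an assumption for illustration):

```python
def compute(x, y):
    if x > y:          # statement 2: uses x and y
        a = x + 1      # statement 3: defines a
    else:
        a = y - 1      # statement 4: defines a
    return a           # statement 5: uses a

# du-paths for 'a' are (3, 5) and (4, 5); one test case exercises each:
assert compute(20, 10) == 21   # covers path 1-2-3-5 (x > y branch)
assert compute(15, 25) == 24   # covers path 1-2-4-5 (else branch)
```

Both du-paths for 'a' are definition clear here, so the two test cases above satisfy all three strategies (all du-paths, all uses, all definitions) for this small program.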

Advantages of Data Flow Testing:


Data Flow Testing is used to find the following issues:
 a variable that is used but never defined,
 a variable that is defined but never used,
 a variable that is defined multiple times before it is used.

Disadvantages of Data Flow Testing


 Time consuming and costly process
 Requires knowledge of programming languages

3.11 TEST DESIGN PREPAREDNESS METRICS


Management may be interested to know the progress, coverage, and productivity
aspects of the test case preparation work being done by a team of test engineers. Hence
metrics are used.
Test Metrics are used by the management and others involved in a software project to
(i) know if a test project is progressing according to schedule and if
more resources are required, and
(ii) plan their next project more accurately.

The following metrics can be used to represent the level of preparedness of test design.
Preparation Status of Test Cases (PST): A test case can go through a number of phases, or
states, such as draft and review, before it is released as a valid and useful test case. Thus, it is
useful to periodically monitor the progress of test design by counting the test cases lying in
different states of design—create, draft, review, released, and deleted. It is expected that all the
planned test cases that are created for a particular project eventually move to the released state
before the start of test execution.

Average Time Spent (ATS) in Test Case Design: It is useful to know the amount of time it
takes for a test case to move from its initial conception, that is, create state, to when it is
considered to be usable, that is, released state. This metric is useful in allocating time to the
test preparation activity in a subsequent test project. Hence, it is useful in test planning.
Number of Available Test (NAT) Cases: This is the number of test cases in the released
state from existing projects. Some of these test cases are selected for regression testing in the
current test project.
Number of Planned Test (NPT) Cases: This is the number of test cases that are in a test
suite and ready for execution at the start of system testing. This metric is useful in scheduling
test execution. As testing continues, new, unplanned test cases may be required to be
designed. A large number of new test cases compared to NPT suggests that the initial planning
was not accurate.
Coverage of a Test Suite (CTS): This metric gives the fraction of all
requirements covered by a selected number of test cases or a complete test suite. The CTS is a
measure of the number of test cases needed to be selected or designed to have good coverage
of system requirements.

3.12 TEST CASE DESIGN EFFECTIVENESS


The objectives of the test case design effectiveness metric are to
(i) measure the “defect revealing ability” of the test suite and
(ii) use the metric to improve the test design process.
During system-level testing, defects are revealed due to the execution of planned test
cases. In addition to these defects, new defects are found during testing for which no test
cases had been planned. The new test cases designed for these new defects are called
test case escapes (TCE).
Test case escapes occur because of deficiencies in the test design process. This
happens because the test engineers get new ideas while executing the planned test cases.
A metric commonly used in the industry to measure test case design effectiveness is
the test case design yield (TCDY), defined as

TCDY = NPT / (NPT + TCE) × 100%

where NPT – Number of Planned Test cases
TCE – number of Test Case Escapes
The TCDY is also used to measure the effectiveness of a particular testing phase. For
example, the system integration manager may want to know the TCDY value for his or her
system integration testing.
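With NPT and TCE as defined above, the yield is a one-line computation. A quick sketch; the example numbers (180 planned test cases, 20 escapes) are made up:

```python
def tcdy(npt, tce):
    """Test case design yield: planned tests as a percentage of all tests needed."""
    return 100.0 * npt / (npt + tce)

# E.g., 180 planned test cases and 20 test case escapes give a 90% yield:
assert tcdy(180, 20) == 90.0
```

A low TCDY means many test cases had to be invented during execution, pointing to gaps in the original test design process.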

Factors that contribute to the effectiveness to test case design:


1. Test Coverage: Effective test design should ensure comprehensive coverage of the system’s
requirements, functionalities and critical paths
2. Test Case Relevance: Test cases should be relevant to the system being tested.
3. Clear Objectives: Each test case should have a specific purpose.
4. Test Case Independence: Test cases should be designed so that they are independent of each other.
5. Test Data Accuracy: Test cases should have accurate and valid test data.
6. Reproducibility: Effective test cases should be reproducible.
7. Test Case Efficiency: Test cases should be efficient at detecting defects.

Some of the other metrics for assessing test case design:


 Defect Detection Rate: Measures the percentage of test cases that successfully identify
defects.
 Test Case Effectiveness Ratio: Compares the number of test cases that detect defects to the
total number of executed test cases.
 Code Coverage: Measures the percentage of code covered by the executed test cases

Best Practices for test case design :


 Start by understanding the requirements of the software application
 Use a variety of test case design techniques
 Prioritize the test cases
 Trace the test cases back to the requirements
 Review the test cases with the development team
 Execute the test cases and track the results
3.13 MODEL DRIVEN TEST DESIGN
Model Driven Test Design (MDTD) is built on the idea that designs become more
effective and efficient if the designers can raise the level of abstraction. This approach breaks
testing down into a series of small tasks that simplify test generation. Test
designers then isolate their tasks and work at a higher level of abstraction, using mathematical
engineering structures to design test values independently of the details of the software or
design artifacts, test automation, and test execution.

Figure : Model-driven test design.


The model driven test design process is illustrated in Figure above, which shows test
design activities above the line and other test activities below.
 The starting point is a software artifact. This could be program source, a UML diagram,
natural language requirements, or even a user manual.
 A criteria-based test designer uses that artifact to create an abstract model of the
software in the form of an input domain, a graph, logic expressions, or a syntax
description.
o Criteria-based test designers design test values to satisfy coverage criteria.
o They require knowledge of discrete math, programming, and testing.
o This role requires a traditional Computer Science degree.
 Then a coverage criterion is applied to create test requirements.
o Coverage criteria give structured, practical ways to search the input space.
o Testers search a huge input space, aiming to find the fewest inputs that will reveal
the most problems.
 A human-based test designer uses the artifact to consider likely problems in the
software, then creates requirements to test for those problems.
o Human-based test designers design test values based on domain knowledge of
the program.
o This relies on human knowledge of testing.
o The designer must have knowledge of the user interface.
o This role requires almost no traditional CS degree.
 These requirements are sometimes refined into a more specific form, called the test
specification. For example, if edge coverage is being used, a test requirement specifies
which edge in a graph must be covered. A refined test specification would be a complete
path through the graph.
 Once the test requirements are refined, input values that satisfy the requirements must be
defined. This brings the process down from the design abstraction level to the
implementation abstraction level. These are analogous to the abstract and concrete tests in
the model-based testing literature. The input values are augmented with other values
needed to run the tests (including values to reach the point in the software being tested, to
display output, and to terminate the program).
 The test cases are then automated into test scripts (when feasible and practical), run on
the software to produce results, and results are evaluated. It is important that results from
automation and execution be used to feed back into test design, resulting in additional or
modified tests.
 This process has two major benefits:
o First, it provides a clean separation of tasks between test design, automation,
execution and evaluation.
o Second, raising our abstraction level makes test design much easier. Instead of
designing tests for a messy implementation or complicated design model, we design at
an elegant mathematical level of abstraction. This is exactly how algebra and calculus
have been used in traditional engineering for decades.

Figure - Example method, CFG, test requirements and test paths.


The Figure illustrates this process for unit testing of a small Java method. The Java
source is shown on the left, and its control flow graph is in the middle. This is a standard
control flow graph with the initial node marked as a dotted circle and the final nodes marked as
double circles.
The first step in the MDTD process is to take this software artifact, the indexOf()
method, and model it as an abstract structure. The control flow graph from Figure 3.6 is
turned into an abstract version. This graph can be represented textually as a list of edges, initial
nodes, and final nodes, as shown in Figure above under Edges. If the tester uses edge-pair
coverage, six requirements are derived. For example, test requirement #3, [2, 3, 2], means the
subpath from node 2 to 3 and back to 2 must be executed. The Test Paths box shows three
complete test paths through the graph that will cover all six test requirements.
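This coverage check can be done mechanically once the graph, requirements, and paths are written down as data. The graph below is a hypothetical stand-in with a loop between nodes 2 and 3, similar in shape to the figure's control flow graph (the actual indexOf() graph is in the figure, which is not reproduced here):

```python
# Hypothetical edge-pair requirements for a graph with a 2<->3 loop.
edge_pairs_required = [(1, 2, 3), (2, 3, 2), (3, 2, 3), (2, 3, 4), (3, 2, 5), (1, 2, 5)]

# Three complete test paths through the hypothetical graph.
test_paths = [
    [1, 2, 5],
    [1, 2, 3, 2, 5],
    [1, 2, 3, 2, 3, 4],
]

def pairs_covered(path):
    """All length-3 subpaths (edge pairs) executed by a test path."""
    return {tuple(path[i:i + 3]) for i in range(len(path) - 2)}

covered = set()
for p in test_paths:
    covered |= pairs_covered(p)

# The three test paths together satisfy all six edge-pair requirements.
assert covered >= set(edge_pairs_required)
```

The same pattern, listing the requirements a criterion derives and checking them against concrete paths, works for node, edge, and edge-pair coverage alike.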
Some of the models used for model-based testing (MBT) are:
 Use case Models : Describe the different ways in which the users will interact with the
software
 Data Flow Models: Describe the flow of data through the software
 State Machine Models : Describe the different states of the software

Advantages of Model Driven Test Design:


 Increased Coverage
 Improved Efficiency
 Reduced defects

Disadvantages of Model Driven Test Design:


 Modeling Complexity
 Tool Support
 Required Skill

UNIT IV
ADVANCED TESTING CONCEPTS
Performance Testing: Load Testing, Stress Testing, Volume Testing, Fail-Over Testing,
Recovery Testing, Configuration Testing, Compatibility Testing, Usability Testing,
Testing the Documentation, Security testing, Testing in the Agile Environment, Testing
Web and Mobile Applications.
4.1 PERFORMANCE TESTING
Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload. Performance Testing is the process of
analyzing the quality and capability of a product. It is a testing method performed to
determine the system’s performance in terms of speed, reliability, scalability and stability
under varying workloads. Performance testing is also known as Perf Testing.
E.g.: During peak shopping seasons, such as New Year or Diwali, thousands of users may be
trying to access the site simultaneously. Performance testing can simulate this scenario by
creating virtual users to mimic real-life usage patterns.

Performance Testing Attributes:


• Speed: It determines whether the software product responds rapidly.
• Scalability: It determines the amount of load the software product can handle at a
time.
• Stability: It determines whether the software product is stable in case of varying
workloads.
• Reliability: It determines whether the software product performs consistently and dependably under the given conditions.

Objectives of Performance Testing (Goals / Advantages):


• It uncovers what needs to be improved before the product is launched in the market.
• It helps to make software stable and reliable.
• It helps to evaluate the performance and scalability of a system or application under
various loads and conditions.
• It helps to identify bottlenecks, measure system performance, and ensure that the
system can handle the expected number of users or transactions.
• It also helps to ensure that the system can handle the expected load in a production
environment.

Some important types of Performance Testing:


1. Load testing: It checks the product’s ability to perform under anticipated user loads. The
objective is to identify performance congestion before the software product is launched in
the market.
2. Stress testing: It involves testing a product under extreme workloads to see whether it
handles high traffic or not. The objective is to identify the breaking point of a software
product.
3. Volume testing: In volume testing, a large amount of data is saved in a database and the
overall software system’s behaviour is observed. The objective is to check the product’s
performance under varying database volumes.
4. Scalability Testing: In scalability testing, the software application’s effectiveness is
determined by scaling it up to support an increase in user load.
How to conduct Performance Testing?

Disadvantages of Performance testing :


 High costs
 Difficulty in simulating real-world scenarios
 Need for specialized knowledge and tools

4.2 LOAD TESTING


Load testing determines the behavior of the application when multiple users use it at
the same time. It is the response of the system measured under varying load conditions.
• The load testing is carried out for normal and extreme load conditions.
• Load testing is a type of performance testing that simulates a real-world load on a
system or application to see how it performs under stress.
• The goal of load testing is to identify bottlenecks and determine the maximum
number of users or transactions the system can handle.
• It is an important aspect of software testing as it helps ensure that the system can
handle the expected usage levels and identify any potential issues before the system is
deployed to production.
During load testing, various scenarios are simulated to test the system’s behavior under
different load conditions. This can include simulating a high number of concurrent users,
simulating numerous requests, and simulating heavy network traffic. The system’s
performance is then measured and analyzed to identify any bottlenecks or issues that may
occur.
Some examples of load testing:
 Users trying to download a large number of files.
 A server running multiple applications.
Some Load Testing Techniques:
1. Spike testing: Testing the system’s ability to handle sudden spikes in traffic.
2. Soak testing: Testing the system’s ability to handle a sustained load over a prolonged
period of time.
Objectives of Load Testing:
 Evaluation of Scalability: Assess the system’s ability to handle growing user and
transaction demands. Find the point at which the system begins to function badly.
 Planning for Capacity: Describe the system’s ability to accommodate anticipated
future increases in the number of users, transactions and volume of data.
 Determine bottlenecks: Identify and localize bottlenecks in the application’s or
infrastructure’s performance. Finding the places where the system’s performance can
suffer under load is part of this.
 Analysis of Response Time: For crucial transactions and user interactions, tracking
and evaluating response times.
 Finding Memory Leaks: Find and fix memory leaks that may eventually cause a
decline in performance.
Load Testing Process:
1. Test Environment Setup: Firstly create a dedicated test environment setup for
performing the load testing. It ensures that testing would be done in a proper way.
2. Load Test Scenario: In second step load test scenarios are created. Then load testing
transactions are determined for an application and data is prepared for each
transaction.
3. Test Scenario Execution: The load test scenarios created in the previous step are
now executed. Different measurements and metrics are gathered to collect the
information.
4. Test Result Analysis: The results of the testing performed are analyzed and various
recommendations are made.
5. Re-test: If the test fails, it is performed again in order to obtain correct
results.

Metrics or parameters of Load Testing:


Metrics are used to know the performance of the system under load in different circumstances.
They tell how accurately load testing is working for different test cases. Metrics are usually
gathered after the load test scripts/cases are prepared. Some of them are listed below.
1. Average Response Time
It tells the average time taken to respond to the requests generated by the clients,
customers, or users. It also shows the speed of the application, depending on the time taken
to respond to all the requests generated.
2. Error Rate
The error rate, expressed as a percentage, denotes the ratio of the number of errors that
occurred during the requests to the total number of requests.
3. Throughput
This metric measures the bandwidth consumed during the tests, i.e., the amount of data
flowing between the user/client and the application server while requests are processed. It is
measured in kilobytes per second.
4. Requests Per Second
It tells how many requests are generated to the application server per second. The
requests could be for anything: images, documents, web pages, articles, or any
other resources.
5. Concurrent Users
This metric counts the users who are actively present at a particular time. It keeps
track of those who are visiting the application at any given time, even without raising any
request in the application.
6. Peak Response Time
Peak response time measures the longest time taken to handle a request. It helps in finding
the duration of the peak (longest) request-response cycle and in identifying which resource is
taking the longest to respond.
Load Testing Tools:
 WebLoad: It is a performance testing tool designed to simulate user load on web
applications and measure their behavior under various conditions.
 NeoLoad: It is a performance testing tool used to simulate user traffic and measure
how well applications handle load and stress
 LoadNinja: It is a cloud-based performance testing tool that enables users to simulate
real-world user loads on their applications

Advantages of Load Testing


1. Identifying bottlenecks: Load testing helps identify bottlenecks in the system such as
slow database queries, insufficient memory, or network congestion.
2. Improved scalability: By identifying the system’s maximum capacity, load testing helps
ensure that the system can handle an increasing number of users or transactions over time.
3. Improved reliability: Load testing helps identify any potential issues that may occur
under heavy load conditions, such as increased error rates or slow response times.
4. Reduced risk: By identifying potential issues before deployment, load testing helps reduce
the risk of system failure or poor performance in production.
5. Cost-effective: Load testing is more cost-effective than fixing problems that occur in
production.
6. Improved user experience: By identifying and addressing bottlenecks, load testing helps
ensure that users have a positive experience when using the system.

Disadvantages of Load Testing


1. Resource-intensive: Load testing can be resource-intensive, requiring significant hardware
and software resources to simulate a large number of users or transactions.
2. Complexity: Load testing can be complex, requiring specialized knowledge and expertise
to set up and execute effectively.
3. Limited testing scope: Load testing is focused on the performance of the system under
stress, and it may not be able to identify all types of issues or bugs.
4. Inaccurate results: If the load test scenarios do not accurately simulate real-world usage, the
results of the test may not be accurate.
5. Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage, and
it’s hard to predict how users will interact with the system.

4.3 STRESS TESTING


Stress Testing is a software testing technique that determines the robustness of
software by testing beyond the limits of normal operation. Stress testing is particularly
important for critical software but is used for all types of software. Stress testing emphasizes
robustness, availability, and error handling under a heavy load rather than what is correct
behavior under normal situations.
Stress testing is defined as a type of software testing that verifies the stability and
reliability of the system. This test particularly determines the system on its robustness and
error handling under extremely heavy load conditions. It even tests beyond the normal
operating point and analyses how the system works under extreme conditions. Stress testing
is performed to ensure that the system would not crash under crunch situations. Stress testing
is also known as Endurance Testing or Torture Testing.

Need (Purpose / Goal ) for Stress Testing:


• To accommodate the sudden surges in traffic: It is important to perform stress
testing to accommodate abnormal traffic spikes. For example, when there is a sale
announcement on the e-commerce website there is a sudden increase in traffic. Failure
to accommodate such needs may lead to a loss of revenue and reputation.
• To Display error messages in stress conditions: Stress testing is important to check
whether the system is capable to display appropriate error messages when the system
is under stress conditions.
• To check if the system works under abnormal conditions: Stress testing checks
whether the system can continue to function in abnormal conditions.
• To Analyze the behavior of the application after failure: The purpose of stress
testing is to analyze the behavior of the application after failure; the software
should display the appropriate error messages while it is under extreme conditions.
• To Uncover Security Weakness: Stress testing helps to uncover the security
vulnerabilities that may enter into the system during the constant peak load and
compromise the system.
• To Ensure data integrity: Stress testing helps to determine the application's data
integrity throughout the extreme load, which means that the data should be in a
dependable state even after a failure.

Stress Testing Process:


The stress testing process is divided into 5 steps:

1. Planning the stress test: This step involves gathering the system data, analyzing the
system, and defining the stress test goals.
2. Create Automation Scripts: This step involves creating the stress testing automation
scripts and generating the test data for the stress test scenarios.
3. Script Execution: This step involves running the stress test automation scripts and
storing the stress test results.
4. Result Analysis: This phase involves analyzing stress test results and identifying the
bottlenecks.
5. Tweaking and Optimization: This step involves fine-tuning the system and
optimizing the code with the goal of meeting the desired benchmarks.
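The five steps above can be sketched in miniature. The following Python sketch is illustrative only (the `hit_endpoint` function, worker counts, and timings are assumptions; a real stress test would issue HTTP requests through a tool such as JMeter): it ramps up many concurrent "users" (script execution) and records latencies for later result analysis.

```python
import threading
import time
import random

def hit_endpoint():
    """Stand-in for one request to the system under test (hypothetical)."""
    start = time.perf_counter()
    # Simulate a variable service time; replace with a real call in practice.
    time.sleep(random.uniform(0.001, 0.005))
    return time.perf_counter() - start

results = []
lock = threading.Lock()

def worker(requests_per_worker):
    # Each worker plays one simulated user issuing a burst of requests.
    for _ in range(requests_per_worker):
        latency = hit_endpoint()
        with lock:
            results.append(latency)

# Step 3 (script execution): ramp up many concurrent workers.
threads = [threading.Thread(target=worker, args=(20,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Step 4 (result analysis): look for latency degradation under load.
print(f"requests: {len(results)}")
print(f"max latency: {max(results):.4f}s")
```

Step 5 (tweaking and optimization) would then compare these numbers against the desired benchmarks and repeat the run after each fix.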

Example of stress testing: A web server may be stress tested using scripts, bots, and various
tools to observe the performance of a web site during peak loads. These test runs generally
last under an hour, or continue until a limit in the amount of data that the web server can
tolerate is found.

Types of Stress Testing:


1. Server-client Stress Testing: Server-client stress testing also known as distributed
stress testing is carried out across all clients from the server.
2. Product Stress Testing: Product stress testing concentrates on discovering defects
related to data locking and blocking, network issues, and performance congestion in a
software product.
3. Transactional Stress Testing: Transaction stress testing is performed on one or more
transactions between two or more applications. It is carried out for fine-tuning and
optimizing the system.
4. Systematic Stress Testing: Systematic stress testing is integrated testing that is used
to perform tests across multiple systems running on the same server. It is used to
discover defects where one application data blocks another application.
5. Analytical Stress Testing: Analytical or exploratory stress testing is performed to
test the system with abnormal parameters or conditions that are unlikely to happen in
a real scenario. It is carried out to find defects in unusual scenarios like a large
number of users logged at the same time or a database going offline when it is
accessed from a website.
6. Application Stress Testing: Application stress testing also known as product stress
testing is focused on identifying the performance bottleneck, and network issues in a
software product.

Stress Testing Tools:


1. JMeter: Apache JMeter is an open-source, pure Java-based stress testing tool that is
used to stress test websites.
2. LoadNinja: LoadNinja is a stress testing tool that enables users to develop codeless
load tests.
3. WebLoad: WebLoad is a stress testing tool that combines performance, stability, and
integrity as a single process for the verification of mobile and web applications.
4. NeoLoad: NeoLoad is a powerful performance testing tool that simulates large
numbers of users and analyzes the server's behavior.
5. SmartMeter: SmartMeter is a user-friendly tool that helps to create simple tests
without coding. It has a graphical user interface and requires no plugins.

Metrics of Stress Testing:


Metrics are used to evaluate the performance of the system under stress, and they are usually
collected at the end of the stress scripts or tests. Some of the metrics are given below.
1. Pages Per Second: Number of pages requested per second and number of pages loaded per second.
2. Pages Retrieved: Average time taken to retrieve all information from a particular page.
3. Bytes Retrieved: Average time taken to retrieve the first byte of information from the page.
4. Transaction Response Time: Average time taken to load or perform transactions
between the applications.
5. Transactions per Second: Counts the number of transactions completed successfully per
second, along with the number of failures that occurred.
6. Failure of Connection: Counts the number of times that the client faced a connection
failure.
7. Failure of System Attempts: Counts the number of failed attempts.
8. Rounds: Counts the number of test conditions executed successfully by the clients
and keeps track of the number of rounds failed.
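Several of these metrics can be derived directly from raw per-transaction logs. The sketch below assumes a hypothetical record format `(success, response_time_seconds)` and an assumed test duration; the values are illustrative, not measured data.

```python
# Deriving stress-test metrics from raw result records (illustrative values).
# Each record is (success, response_time_seconds).
records = [
    (True, 0.12), (True, 0.15), (False, 2.00),
    (True, 0.11), (False, 1.80), (True, 0.14),
]

total_duration = 10.0  # wall-clock seconds the test ran (assumed)

successes = [r for ok, r in records if ok]
failures = sum(1 for ok, _ in records if not ok)

# Metric 5: successful transactions per second, plus the failure count.
transactions_per_second = len(successes) / total_duration
# Metric 4: average transaction response time over successful transactions.
avg_response_time = sum(successes) / len(successes)

print(f"TPS: {transactions_per_second:.2f}")
print(f"failures: {failures}")
print(f"avg response: {avg_response_time:.3f}s")
```

The same pattern extends to the other counters (connection failures, rounds) by filtering the log on the corresponding event type.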

Advantages of Stress Testing


 Determines the behavior of the system
 Ensure failure does not cause security issues
 Makes system function in every situation
 Improving Decision Making
 Increasing Stakeholder confidence
Disadvantages of Stress Testing
 Manual stress testing is complicated
 Good scripting knowledge required.
 Need for external resources
 Additional tool required in case of open-source stress testing tool
4.4 Difference between Load Testing and Stress Testing
• Load Testing is a type of performance testing that determines the performance of an
application under real-life load conditions, whereas Stress Testing is performed to test the
robustness of the system or software application under extreme load.
• In load testing the load limit is at the threshold of a break, whereas in stress testing the
load limit is above the threshold of a break.
• Load testing involves a huge number of users, whereas stress testing involves too many
users and too much data.

4.5 VOLUME TESTING


Volume Testing is a type of software testing which is carried out to test a software
application with a certain amount of data. The amount of data used in volume testing
could be a database size, or it could be the size of an interface file that is the subject of
volume testing. While testing the application with a specific database size, the database is
extended to that size and then the performance of the application is tested. When an
application needs to interact with an interface file, this could involve either reading from or
writing to the file. A sample file of the required size is created, and the functionality of the
application is then tested with that file in order to test the performance.
In volume testing the software is subjected to a huge volume of data. It is basically
performed to analyze the performance of the system by increasing the volume of data in the
database. Volume testing is performed to study the impact on response time and the behavior
of the system when the volume of data in the database is increased. Volume Testing is also
known as Flood Testing.

Objectives (Goals) of Volume Testing


• To recognize the problems that may arise with large amounts of data.
• To check the system’s performance by increasing the volume of data in the database.
• To find the point at which the stability of the system reduces.
• To identify the capacity of the system or application.
• To create the scaling plans

Volume Testing Attributes


• System’s Response Time: During volume testing, the response time of the system
or the application is tested. It is also checked whether the system responds within a
finite time. If the response time is too large, the system is redesigned.
• Data Loss: During volume testing, it is also verified that there is no data loss. If
there is data loss, some key information might be missing.
• Data Storage: During volume testing, it is also checked whether the data is stored
correctly. If the data is not stored correctly, it is restored accordingly in the
proper place.
• Data Overwriting: In volume testing, it is checked whether data is overwritten
without prior information being given to the developer. If so, the developer is notified.
 Various Difficulties that Volume Testing Faces:

Volume testing has to handle big volumes of data. It is difficult to maintain a database that
has a strong structure, and reviewing the data types and the connections between them is also difficult.

Example of volume testing:


A company might artificially grow its database to a certain size and test how the application
performs. Similarly, a media company might test its digital asset management system by
uploading thousands of large video and image files.
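As a minimal sketch of the first example, the snippet below artificially grows an in-memory SQLite database and measures query response time at that volume. The table name, row count, and query are illustrative assumptions, scaled far down from a real volume test.

```python
import sqlite3
import time

# Sketch of a volume test: grow a database artificially, then time a query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Load a large volume of rows (a real test would use far more data).
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    [(i % 500,) for i in range(100_000)],
)
conn.commit()

# Measure the response time of a representative query at this data volume.
start = time.perf_counter()
(count,) = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount > 250"
).fetchone()
elapsed = time.perf_counter() - start

print(f"matching rows: {count}, query time: {elapsed:.4f}s")
```

Repeating the measurement at increasing row counts reveals the point at which response time degrades, which is exactly the objective listed above.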

Advantages of Volume Testing


• Volume testing is helpful in saving maintenance cost
• Volume testing is also helpful in a rapid start for scalability plans.
• Volume testing also helps in early identification of bottlenecks.
• Volume testing ensures that the system is capable of real world usage.

Disadvantages of Volume Testing


• A larger number of skilled resources is needed to carry out this testing.
• It is sometimes difficult to prepare test cases.
• It is a time-consuming technique, since it requires a lot of time to decide on the
volume of data and the test scenarios.
• It is a bit costly.
• It is not possible to have the exact breakdown of memory used in the real-world
application.

Volume testing tools


HammerDB - An open-source tool used in the global database industry.
DbFit - DbFit tests can be used as existing executable documentation of system behavior.
JdbcSlim - Used by developers, testers, and business users who know the SQL language.
NoSQLMap - Designed to automatically inject attacks in order to evaluate threats.

4.6 FAIL-OVER TESTING


Software products/services are tested multiple times before delivery to ensure that they
provide the required service. Testing before delivery does not guarantee that no problem will
occur in the future. Sometimes the software application fails due to some unwanted
event, such as network issues or server-related problems. Failover testing aims to
address these types of failures. Suppose that a PC turns off due to some technical issue;
on restarting, when we open the browser, a pop-up appears saying "Do you want to
restore all pages?" On clicking restore, all tabs are restored. The process of ensuring such
restoration after failure is known as FAILOVER TESTING.
What is Failover Testing :
Failover testing is a technique that validates whether a system can allocate extra resources
and back up all the information and operations when the system fails abruptly for some
reason. This test determines the ability of a system to handle critical failures and make use
of extra servers.
It is preferred that this testing be performed on servers. Active-active and active-
passive standby are the two most common configurations. The two techniques achieve
failover in very different ways, but both are used to improve the server's
reliability. For example, if we have three servers and one of them fails due to heavy load,
then two situations can occur: either the failed server restarts on its own, or, when the
failed server cannot be restarted, the remaining servers handle the load. Such situations
are tested during this test.

Factors to be considered before Performing Failover Testing:


1. The budget has to be the first thing to be taken into consideration before thinking
about performing the Failover test.
2. The budget is connected to the frameworks that might crash or break down under
pressure/load.
3. Always keep in mind how much time it will take to fix all the issues caused by the
failure of the system.
4. Note down the most likely failures and organize the outcomes according to how much
harm is caused by the failure.

Factors to be considered while Performing Failover Testing:


1. Keep a plan of measures to be taken after performing a test.
2. Focus on the execution of the test plan.
3. Set up a benchmark so that performance requirements can be achieved.
4. Prepare a report concerning issue requirements and/or requirements of the asset.

Working of Failover testing:

1. Consider the factors: Before performing failover testing, consider factors like
budget, time, team, technology, etc.
2. Analysis on failover reasons and design solutions: Determine probable failure
situations that the system might experience. Examine the causes of failure, including
software bugs, hardware malfunctions, network problems, etc. It provides fixes for
any flaws or vulnerabilities found in the failover procedure.
3. Testing failover scenarios: It develops extensive test cases to replicate various
failover scenarios. This covers both unplanned failovers (system or component
failures) and scheduled failovers (maintenance). Test cases ought to address many
facets of failover, such as load balancing, user impact, network rerouting, and data
synchronization.
4. Executing the test plan: To reduce the impact on production systems, carry out the
failover test plan in a controlled setting. Keep an eye on how the system behaves
during failover to make sure it satisfies the recovery point and recovery time
objectives.
5. Detailed report on failover: Keep a record of the failover testing findings, including
any problems you ran across, how long it took to failover and how it affected
customers or services. Assess problems according to their severity and offer
suggestions for improvements.
6. Necessary actions based on the report: Distribute the report on the failover test to
all concerned parties, such as project managers, developers, and system
administrators. Determine what needs to be done and prioritize it based on the
report’s conclusions. This might involve fixing found flaws in the system, updating
failover setups or improving the documentation.
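Steps 3 and 4 above can be sketched with a toy client that switches to a backup server when the primary fails and records the failover time. The `Server` class and server names are hypothetical stand-ins for real machines.

```python
import time

class Server:
    """Hypothetical server that can be marked healthy or down."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def send_with_failover(servers, request):
    """Try each server in priority order; measure how long failover takes."""
    start = time.perf_counter()
    for server in servers:
        try:
            response = server.handle(request)
            failover_time = time.perf_counter() - start
            return response, failover_time
        except ConnectionError:
            continue  # this server failed: fail over to the next one
    raise RuntimeError("all servers down")

primary = Server("primary")
backup = Server("backup")

# Normal operation: the primary serves the request.
resp, _ = send_with_failover([primary, backup], "req-1")
print(resp)

# Simulated unplanned failover (step 3): the primary goes down.
primary.healthy = False
resp, t = send_with_failover([primary, backup], "req-2")
print(resp, f"(failover in {t:.6f}s)")
```

In a real test plan the measured failover time would be compared against the recovery time objective, as described in step 4.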

Benefits of Failover Testing:


1. Determines Vulnerabilities and Weaknesses.
2. Verifies Redundancy Procedures
3. Improving the User Experience
4. Encourages Compliance
5. Encourages Continuous Improvement

Challenges in Failover Testing


 Complex System Dependencies: Modern software is often a web of interconnected
services, making failover scenarios complex.
 Data Synchronization Issues: Ensuring data remains consistent across primary and
backup systems can be tricky.
 Resource Allocation: Backup systems need to be as robust as primary systems, which
can be resource-intensive.
 Network Configuration: Failover testing often involves intricate network setups that can
be difficult to manage.

● Example of Failover Testing: A bank needs to ensure its online banking system can
handle server failures without affecting customer transactions. The testing team
simulates a server crash in the primary data center while monitoring how quickly
transactions are redirected to a backup server in a different location. The failover
process takes 30 seconds, during which some transactions are lost. The team
implements improvements. After re-testing, the failover time is reduced to 5 seconds,
meeting the bank's requirements.

4.7 RECOVERY TESTING


Recovery testing is a type of system testing which aims at testing whether a
system can recover from failures or not. The technique involves failing the system and
then verifying that the system recovery is performed properly.
To ensure that a system is fault-tolerant and can recover well from failures, recovery
testing is important to perform. A system is expected to recover from faults and resume its
work within a pre-specified time period. Recovery testing is essential for any mission-critical
system, for example, the defense systems, medical devices, etc. In such systems, there is a
strict protocol that is imposed on how and within what time period the system should recover
from failure and how the system should behave during the failure.
A system or software should be recovery tested for failures like:
• Power supply failure
• The external server is unreachable
• Wireless network signal loss
• Physical conditions, etc.
Steps to be performed before executing a Recovery Test
1. Recovery Analysis – It is important to analyze the system’s ability to allocate extra
resources like servers or additional CPUs. This would help to better understand the
recovery-related changes that can impact the working of the system. Also, each of the
possible failures, their possible impact, their severity, and how to perform them
should be studied.
2. Test Plan preparation – Designing the test cases keeping in mind the environment
and results obtained in recovery analysis.

3. Test environment preparation – Designing the test environment according to the


recovery analysis results.
4. Maintaining Back-up – Information related to the software, like various states of the
software and database should be backed up. Also, if the data is important, then the
backing up of the data at multiple locations is important.
5. Recovery personnel Allocation – For the recovery testing process, it is important to
allocate recovery personnel who are aware and educated enough for the recovery
testing being conducted.
6. Documentation – This step emphasizes documenting all the steps performed
before and during the recovery testing so that the system can be analyzed for its
performance in case of a failure.

Examples of Recovery Testing


• When a system is receiving some data over a network for processing purposes, we can
simulate software failure by unplugging the system power. After a while, we can
plug the system in again and test its ability to recover and continue receiving the data
from where it stopped.
• Another example could be when a browser is working with multiple sessions: we can
simulate software failure by restarting the system. After restarting the system, we can
check if it recovers from the failure and reloads all the sessions it was previously
working on.
• While downloading a movie over a Wifi network, if we move to a place where there is
no network, then the downloading process will be interrupted. Now to check if the
process recovers from the interruption and continues working as before, we move
back to a place where there is a Wifi network. If the downloading resumes, then the
software has a good recovery rate.
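The resume-after-interruption behaviour in these examples can be sketched with a hypothetical checkpointed download loop: progress is saved after each chunk, a crash is simulated mid-way, and the second run resumes from the checkpoint instead of starting over. All names and the crash point are illustrative.

```python
# Sketch of recovery behaviour: a "download" that checkpoints progress and
# resumes from the last checkpoint after a simulated failure.
data = list(range(10))          # chunks to download
checkpoint = {"done": 0}        # persisted state (a file or DB in practice)
received = []

def download(crash_at=None):
    """Process chunks from the last checkpoint; optionally crash mid-way."""
    for i in range(checkpoint["done"], len(data)):
        if crash_at is not None and i == crash_at:
            raise RuntimeError("simulated failure (e.g. network loss)")
        received.append(data[i])
        checkpoint["done"] = i + 1  # save progress after each chunk

# First attempt fails part-way through.
try:
    download(crash_at=6)
except RuntimeError:
    pass

# Recovery: restart and verify work resumes where it stopped, not from zero.
download()
print(received)  # each chunk appears exactly once, in order
```

A recovery test would assert exactly this property: after the failure, no chunk is lost and none is duplicated.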

Types of Recovery Testing


• Database Recovery Testing: Evaluate the system’s capacity to recover from
corrupted or malfunctioning databases. In order to test how well the system can
restore the database to a consistent and useful condition, it involves intentionally
destroying or damaging it.
• Load and Stress Recovery Testing: Determine how effectively the system bounces
back from variables that affect performance, including heavy loads or stressful
situations. It helps in determining if the system is capable of handling higher loads
and in the event that it cannot, how soon it will resume normal operation after the load
is dropped.
• Crash Recovery Testing: Determine how well the system bounces back from a
hardware or software failure. To make sure the system can resume regular operations
without losing data, it can involve unexpected shutdowns, abrupt power failures or a
sudden halt of services.
• Security Recovery Testing: Examine the system’s resilience to security lapses,
illegal access, and other security-related events by conducting security recovery
testing. It guarantees that the system can recover from security breaches and helps
discover loopholes in the security procedures, reducing the impact of any
unauthorized access.
• Environment Recovery Testing: Examine the software’s ability to adjust to changes
in dependencies or configurations in the environment. It guarantees that in the event
of modifications to the underlying structure or environmental circumstances, the
system can recover and go on operating as anticipated.
Advantages of Recovery Testing
• Improves the quality of the system by eliminating the potential flaws in the system
so that the system works as expected.
• Recovery testing is also referred to as Disaster Recovery Testing. A lot of companies
have disaster recovery centers to make sure that if any of the systems is damaged or
fails for some reason, then there is a backup to recover from the failure.
• Risk elimination is possible as the potential flaws are detected and removed from the
system.
• Improved performance as faults are removed, and the system becomes more reliable
and performs better in case a failure occurs.

Disadvantages of Recovery testing


• Recovery testing is a time-consuming process as it involves multiple steps and
preparations before and during the process.
• The recovery personnel must be trained, as the process of recovery testing takes
place under their supervision. The tester needs to be trained to ensure that recovery
testing is performed in the proper way, and should have enough data and backup
files to perform recovery testing.
• The potential flaws or issues are unpredictable in a few cases. It is difficult to point
out the exact reason for them; however, since the quality of the software must be
maintained, random test cases are created and executed to ensure such potential
flaws are removed.

Recovery testing Vs Failover testing


• Recovery testing validates a system's ability to recover from failures, whereas failover
testing verifies the system's capacity to switch to a backup system in the event of a failure.
• Recovery testing aims to ensure data integrity and system restoration with minimal
downtime, whereas failover testing aims to maintain continuous operation by switching to a
backup system.

4.8 CONFIGURATION TESTING


Configuration Testing is the type of Software Testing that verifies the
performance of the system under development against various combinations of
software and hardware, to find out the best configuration under which the system
can work without any flaws or issues while matching its functional requirements.
What is Configuration Testing?
Configuration Testing is the process of testing the system under each configuration of
the supported software and hardware. Here, the different configurations of hardware and
software mean the multiple operating system versions, various browsers, various supported
drivers, distinct memory sizes, different hard drive types, various types of CPU, etc.
The various configurations are
1. Operating System : Win XP, Win 7 32/64 bit, Win 8 32/64 bit, Win 10, etc.
2. Database Configuration: Oracle, DB2, MySQL, MSSQL Server, Sybase etc.
3. Browser Configuration: IE 8, IE 9, FF 16.0, Chrome, Microsoft Edge etc.

Objectives of Configuration Testing:


1. Adaptability to Different Configurations: Check that the program’s basic features
work consistently and dependably in all configurations. Testing the behavior of the
program with different setups and settings is part of this process.
2. Evaluation of Stability: Examine the software’s stability under various
configurations. Find and fix any configuration-specific problems that might be
causing crashes, unstable systems or strange behavior.
3. Testing the User Experience: Assess the value and consistency of the user
experience across various setups. Make sure that the graphical user interface (GUI) of the
software adjusts to various screen sizes, resolutions, and display settings.
4. Security Throughout Configurations: To make sure that sensitive data is kept safe,
test the software’s security features in various setups. Determine and fix any
vulnerabilities that might be configuration-specific.
5. Compatibility of Networks: Examine the software’s behavior with various network
setups. Evaluate its compatibility with various network types, speeds and latency.
6. Data Compatibility: Check if the program can manage a range of data
configurations, such as those from diverse sources, databases and file formats. Verify
the consistency and integrity of the data across various setups.

Configuration Testing Process:

Types of Configuration Testing:


1. Software Configuration Testing: Software configuration testing is done on the
Application Under Test with various operating system versions, various browser
versions, etc. It is time-consuming, as it takes a long time to install and uninstall
the various software packages to be used for testing.
2. Hardware Configuration Testing: Hardware configuration testing is typically
performed in labs where physical machines are used with various hardware connected
to them.

Step to Design the Test Cases to Run on Each Configuration:


1. Select and set up the next test configuration from the list.
2. Start the software.
3. Load in the file configtest.doc.
4. Confirm that the file is displayed correctly.
5. Print the document.
6. Confirm that there are no errors and that the printed document matches the standard.
7. Log any discrepancies as a bug.
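Step 1 implies iterating over every supported combination. A common way to enumerate those combinations is a cross-product of the platform lists; in the sketch below the configuration values are taken from the examples earlier in this section, and `run_config_test` is a hypothetical stand-in for steps 2-7.

```python
import itertools

# Enumerate test configurations as a cross-product of supported platforms.
operating_systems = ["Win 10", "Win 8 64-bit", "Win 7 32-bit"]
browsers = ["Chrome", "Microsoft Edge", "FF 16.0"]
databases = ["MySQL", "Oracle"]

def run_config_test(os_name, browser, db):
    """Stand-in for steps 2-7: start the software, load configtest.doc,
    verify display and printing, and log any discrepancy as a bug."""
    return {"config": (os_name, browser, db), "passed": True}

results = [
    run_config_test(os_name, browser, db)
    for os_name, browser, db in itertools.product(
        operating_systems, browsers, databases
    )
]

print(f"configurations tested: {len(results)}")  # 3 * 3 * 2 = 18
```

The cross-product makes the cost of configuration testing explicit: every platform added to any list multiplies the number of test runs.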
Execute the Tests on Each Configuration:
We need to run the test cases and carefully log the results, then report them to the team
and to the hardware manufacturers if necessary. It's often difficult and time-consuming to
identify the specific source of configuration problems. A tester needs to work closely with the
programmers and white-box testers to isolate the cause and decide whether the bugs found are
due to the software or to the hardware.
If the bug is specific to the hardware, consult the manufacturer’s website for
information on reporting problems to them. They may ask you to send copies of your test
software, your test cases, and supporting details to help them isolate the problem.
Rerun the Tests Until the Results Satisfy Your Team - Initially a few configurations might
be tried, then a full test pass, then smaller and smaller sets to confirm bug fixes. Eventually
you will get to a point where there are no known bugs, or where the bugs that still exist are
in uncommon or unlikely test configurations. At that point, you can call your configuration
testing complete.
Advantages:
1. Improved User Experience
2. Cost-Effective
3. System Stability

Disadvantages:
1. Increased Complexity
2. Resource Intensive
3. Time-Consuming

4.9 COMPATIBILITY TESTING


Compatibility testing aims to check the developed software application's functionality on
various software and hardware platforms, networks, browsers, etc. Compatibility testing is
software testing which comes under the non-functional testing category, and it is performed
on an application to check its compatibility (running capability) on different
platforms/environments. This testing is done only when the application becomes stable.
Compatibility testing is very important from a product production and implementation point
of view, as it is performed to avoid future issues regarding compatibility. Developers, testers,
product managers, and customers are all involved in compatibility testing.

Types of Compatibility Testing:


1. Software:
• Testing the compatibility of an application with an Operating System like Linux,
Mac, Windows
• Testing compatibility on Database like Oracle SQL server, MongoDB server
• Testing compatibility on different devices like in mobile phones, computers.
Types based on Version Testing:
There are two types of compatibility testing based on version testing
 Forward compatibility testing: When the behavior and compatibility of a software
or hardware is checked with its newer version then it is called as forward
compatibility testing.
 Backward compatibility testing: When the behavior and compatibility of a software
or hardware is checked with its older version then it is called as backward
compatibility testing.
2. Hardware: Checking compatibility with a particular size of
• RAM, ROM, Hard Disk, Memory Cards, Processor,Graphics Card
3. Smartphones / Mobiles : Checking compatibility with different mobile platforms like
android, iOS etc.
4. Network: Checking network compatibility with different
• Bandwidth
• Operating speed
• Capacity
5. Browser: Examines the application compatibility with several browsers including FireFox,
Google Chrome, IE, etc
6. Devices: Checking if the software is compatible with various devices including Bluetooth, USB
port, printers, Scanners, etc

How to perform Compatibility testing? (Eg)


 Testing the application in the same environment but with different versions. For
example, to test the compatibility of the Facebook application on your Android mobile, first
check for compatibility with Android 9.0 and then with Android 10.0 for the same
version of the Facebook app.
 Testing the application in the same version but in different environments. For
example, to test the compatibility of the Facebook application on your Android mobile, first
check for compatibility with a lower version of the Facebook application on Android
10.0 (or a version of your choice) and then with a higher version of the Facebook application
on the same version of Android.
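The forward/backward distinction defined earlier can be sketched with a hypothetical versioned file reader: backward compatibility means the current software reads data from older versions, and forward compatibility means it fails cleanly (rather than crashing or corrupting data) on data from a newer version. The version numbers and reader logic are assumptions for illustration.

```python
# Sketch of version-based compatibility checks (all values illustrative).
SUPPORTED_VERSIONS = {1, 2, 3}   # versions the current software understands

def read_file(format_version):
    """Return parsed content if the version is supported, else a clear error."""
    if format_version in SUPPORTED_VERSIONS:
        return f"parsed v{format_version} content"
    raise ValueError(f"unsupported format version {format_version}")

# Backward compatibility: current software reads files from an older version.
assert read_file(1) == "parsed v1 content"

# Forward compatibility: a file from a newer version should fail cleanly.
try:
    read_file(4)
    forward_ok = False
except ValueError:
    forward_ok = True

print("backward: ok, forward: graceful error" if forward_ok
      else "forward check failed")
```

The same pattern applies whether the "version" is an app release, an OS level, or a data format: the test enumerates version pairs and asserts the expected behaviour for each direction.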
Who is Involved in Compatibility Testing?
Here are the key individuals responsible for performing compatibility tests in software testing:

 QA Engineers: Primarily responsible for designing, executing, and analyzing test cases across
different hardware, software, and platforms.
 Developers: Collaborate with QA to identify and fix compatibility issues.
 Product Managers: Define compatibility requirements and prioritize testing efforts.
 End-users: Can provide valuable feedback on real-world compatibility issues.

Why compatibility testing is important? (Advantages)


1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. Reduced risk of errors
4. Cost-effective

Compatibility testing defects: (Disadvantages)


1. Variety of user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to overlapping of content.
Tools for Compatibility Testing :
Lambda Test - The tool ensures that your application is able to function efficiently on all
desktop as well as mobile browsers.
Cross Browser Testing - A compatibility testing tool that enables manual, visual, as well as
Selenium tests across a number of mobile and desktop browsers.
Browser Stack- With BrowserStack, you can test websites on different Android and iOS devices
across various browsers.
Browser Shots -BrowserShots enable testing websites across any operating system and browser.
Browserling – One of the cost-effective compatibility testing tools, which enables easy, live,
and interactive browser compatibility testing.

4.10 USABILITY TESTING


Usability Testing is the testing done from an end user’s perspective to determine if the
system is easily usable. Usability testing is generally the practice of testing how easy a
design is to use on a group of representative users. Usability testing is also referred to as
User Experience.
A very common mistake in usability testing is conducting a study too late in the
design process. If you wait until right before your product is released, you won’t have the
time or money to fix any issues – and you’ll have wasted a lot of effort developing your
product the wrong way.
The following qualities are tested in Usability testing to ensure user friendliness
 Easy to understand
 Easy to access
 Look and feel
 Faster to access
 Effective navigation

Phases of Usability Testing [ usability Testing Process]


There are five phases in usability testing which are followed by the system when usability
testing is performed. These are given below:

1. Prepare your product or design to test: The first phase of usability testing is
choosing a product and then making it ready for usability testing.
2. Find your participants: Generally, the number of participants that you need is based
on several case studies. Mostly, five participants can find almost as many usability problems
as you'd find using many more test participants.
3. Write a test plan: The main purpose of the plan is to document what you are going to
do, how you are going to conduct the test, what metrics you are going to capture, the number of
participants you are going to test, and what scenarios you will use.
4. Take on the role of the moderator: The moderator plays a vital role that involves
building a partnership with the participant. To be an effective moderator, derive most of the
research findings by observing the participant's actions and gathering verbal feedback.
5. Present final report: This phase generally involves combining your results into an
overall score and presenting it meaningfully to your audience.

Why do we need Usability Testing?


When software is ready, it is important to make sure that the user experience with the product
is seamless. It should be easy to navigate and all the functions should work properly;
otherwise, the competitor’s website will win the race. Therefore, usability testing is performed.
The objective of usability testing is to understand customers’ needs and requirements and
also how users interact with the product (software). With the test, all the features, functions,
and purposes of the software are checked.
The primary goals of usability testing are – discovering problems (hidden issues) and
opportunities, comparing benchmarks, and comparison against other websites. The
parameters tested during usability testing are efficiency, effectiveness, and satisfaction.
Factors Affecting Cost of Usability Testing:
The testing cost will depend on the following factors:
1. No. of participants for testing.
2. Number of Days needed for testing.
3. Which type of testing.
4. The size of the team used for testing.
A tester should also remember to budget for usability testing and build it into the
product plan. The other factors that are needed are as follows:
• Rental cost: If you do not already own the equipment, you will need to budget for it,
and you will also need to allot a location for the testing, for example a rented
conference room where all the sessions are conducted.
• Recruiting costs: Consider how and where you will recruit your participants. You
may need to engage a recruiting team to schedule participants based on your
requirements.
• Participant compensation: You will need to compensate the participants for their
time and travel, which is also important when finalizing the testing budget.

Techniques and Methods of Usability Testing:


There are various types of usability testing which, when performed, lead to efficient software.
A few of the most widely used are discussed here.
1. Guerilla Testing
It is a type of testing where testers wander to public places and ask random users about the
prototype. It is the best way to perform usability testing during the early phases of the
product development process. Users mostly spare 5–10 minutes and give instant feedback on
the product. Also, the cost is comparatively low as you don’t need to hire participants. It is
also known as corridor or hallway testing.
2. Usability Lab
Usability lab testing is conducted in a lab environment where moderators (who ask for
feedback on the product) hire participants and ask them to take a survey on the product. This
test is performed on a tablet/desktop. The participant count can be 8-10 which is a bit costlier
than guerrilla testing as you need to hire participants, arrange a place, and conduct testing.
3. Screen or Video Recording
In screen or video recording testing, the screen is recorded while the user acts (navigates
and uses the product). This testing describes how the user’s mind works while using a product.
It typically involves the participation of almost 10 users for 15 minutes each. It helps in
describing the issues users may face while interacting with the product.

Generally, there are two studies in usability testing –


1. Moderated – the moderator guides the participant through the changes required in the
product (software).
2. Unmoderated – there is no moderator (no human guidance); participants get a set of
tasks or questions to work through on their own.
An example of usability testing: asking a group of users to complete a task on a new e-
commerce website, like "find a pair of blue jeans in size medium," while observing their actions
and asking follow-up questions to understand whether they encounter any difficulties navigating
the site, finding the product, or completing the purchase, allowing the designers to identify
areas for improvement in the user experience.
Advantages of Usability Testing
 Meet the User’s Expectations
 Avoid design flaws
 Product becomes Efficient
Disadvantages of Usability Testing
 Expensive and time consuming
 Usability Test Outcomes are Arguable
 Selecting a Target Group Can be Tricky

4.11 TESTING THE DOCUMENTATION


The work of a software tester isn’t constrained to just testing the software. It is the
tester’s duty to cover all the parts that make up the entire software product. Hence
ensuring the documentation is correct is also a tester’s job.

Type of Software Documentation:


Here’s a list of software components that can be classified as documentation:
• Packaging text and graphics - This includes the box, carton, wrapping, and so on. The
documentation might contain screen shots from the software, lists of features, system
requirements, and copyright information.
• Marketing material, ads, and other inserts - These are all the pieces of paper you usually
throw away, but they are important tools used to promote the sale of related software, add-
on content, service contracts, and so on. The information for them must be correct for a
customer to take them seriously.
• Warranty/registration- This is the card that the customer fills out and sends in to register
the software. It can also be part of the software, being displayed onscreen for the user to
read, acknowledge, and complete online.
• EULA- It stands for End User License Agreement. This is the legal document that the
customer agrees to that says, among other things, that he won’t copy the software nor sue
the manufacturer if he’s harmed by a bug.
• Labels and stickers- These may appear on the media, on the box, or on the printed
material. There may also be serial number stickers and labels that seal the EULA envelope.
The Fig below is an example of a disk label and all the information that needs to be
checked.

Figure Sample documentation on the disk label for the software tester to check.
• Installation and setup instructions. Sometimes this information is printed directly on the
discs, but it also can be included on the CD content
• User’s manual. The usefulness and flexibility of online manuals has made printed manuals
much less common than they once were. Most software now comes with a small, concise
“getting started”–type manual with the detailed information moved to online format. The
online manuals can be distributed on the software’s media or on a website or on both.
• Online help- Online help often gets intertwined with the user’s manual, sometimes even
replacing it. Online help is indexed and searchable, making it much easier for users to find
the information they’re looking for.
• Tutorials, wizards, and CBT (Computer Based Training). These tools blend programming
code and written documentation. They’re often a mixture of both content and high-level,
macro-like programming and are often tied in with the online help system. A user can ask a
question and the software then guides him through the steps to complete the task.
• Samples, examples, and templates -An example of these would be a word processor with
forms or samples that a user can simply fill in to quickly create professional-looking results.
A compiler could have snippets of code that demonstrate how to use certain aspects of the
language.
• Test cases- Numerous test cases are used to test the software. They are recorded in a document.
• Test plan – The strategy of testing is clearly laid out in the test plan.
• Test data – The test data are used when writing test cases and for comparing against the test
case findings.

The Importance of Documentation Testing (Advantages)


• It improves usability.
• It improves reliability.
• It lowers costs. The reason is that users who are confused or run into unexpected problems
will call the company for help, which is expensive. Good documentation can prevent these
calls by adequately explaining and leading users through difficult areas.

What to Look for When Reviewing Documentation:


Table 4.1 is a simple checklist to use as a basis for building your documentation test cases.

Table : A Documentation Testing Checklist


Drawbacks of the Test Documents
 Maintaining the documents is tiresome work
 If the documentation is inadequate, the quality of the application will suffer
 Sometimes it can be very expensive and cost more than it’s worth

4.12 SECURITY TESTING


Security testing is used to discover the weaknesses, risks, or threats in a
software application; it helps us stop attacks from outsiders and
ensure the security of our software applications.
The primary objective of security testing is to find all the potential ambiguities and
vulnerabilities of the application so that the software does not stop working. Performing
security testing helps us identify all the possible security threats and also helps the
programmer to fix those errors.
It is a testing procedure, which is used to define that the data will be safe and also
continue the working process of the software.

Principle of Security testing:

Availability: Data must be maintained by an authorized person, with a guarantee that the data
and communication services will be ready to use whenever we need them.
Integrity: Here we protect data from being changed by unauthorized persons. The primary
objective of integrity is to permit the receiver to verify the data that is given by the
system. Integrity systems regularly use some of the same fundamental approaches as
confidentiality structures, and also verify that correct data is conveyed from one
application to another.
Authorization: This is the process of determining that a client is permitted to perform an
action and to receive services. An example of authorization is access control.

Confidentiality: It is a security process that protects the leak of the data from the outsider's
because it is the only way where we can make sure the security of our data.
Authentication: The authentication process involves confirming the identity of a person,
or tracing the origin of an artifact, before allowing access to private information or the
system.
Non-repudiation: This is used as a reference in digital security. It is a way of assuring
that the sender of a message cannot deny having sent the message and that the
recipient cannot deny having received it.
The non-repudiation is used to ensure that a conveyed message has been sent and
received by the person who claims to have sent and received the message.

Key Areas in Security Testing:


While performing the security testing on the web application, we need to concentrate on the
following areas to test the application:

System software security: In this, we will evaluate the vulnerabilities of the application
based on different software such as Operating system, Database system, etc.
Network security: In this, we will check the weakness of the network structure, such as
policies and resources.
Server-side application security: We will do the server-side application security to ensure
that the server encryption and its tools are sufficient to protect the software from any
disturbance.
Client-side application security: In this, we will make sure that any intruders cannot operate
on any browser or any tool which is used by customers.

Types of Security testing:


Security Scanning: Security scanning can be done with both automated and manual testing.
This scanning is used to find vulnerabilities or unwanted file modifications in a
web-based application, website, network, or file system. It then delivers results
that help us reduce those threats. The security scanning a system needs depends
on the structure it uses.

Risk Assessment: To moderate the risk of an application, we go for risk assessment. Here
we explore the security risks that can be detected in the organization. Risks can
be divided into three levels: high, medium, and low. The primary
purpose of the risk assessment process is to assess the vulnerabilities and control the
significant threats.
Vulnerability Scanning: This uses an application to identify and generate a list of
all the systems connected to a network, including desktops, servers, laptops, virtual
machines, printers, switches, and firewalls. Vulnerability scanning can be performed
by an automated application and identifies the software and systems that have
known security vulnerabilities.

Penetration testing: Penetration testing is a security implementation where a cyber-security


professional tries to identify and exploit weaknesses in the computer system. The primary
objective of this testing is to simulate attacks, find the loopholes in the system, and
thereby protect it from intruders who could take advantage of them.
Security Auditing: Security auditing is a structured method for evaluating the security
measures of the organization. In this, we will do the inside review of the application and the
control system for the security faults.
Ethical hacking: Ethical hacking is used to discover the weakness in the system and also
helps the organization to fix those security loopholes before the hacker exposes them. The
ethical hacking helps us to improve the security posture of the organization, because
ethical hackers use the same tricks, tools, and techniques that malicious hackers
would use, but with the approval of an authorized person.
The objective of ethical hacking is to enhance security and to protect the systems from
malicious users' attacks.
Posture Assessment: It is a combination of ethical hacking, risk assessments, and security
scanning, which helps us to display the complete security posture of an organization.

Security testing tools:


We have various security testing tools available in the market, which are as follows:
SonarQube, ZAP, Netsparker, Arachni, IronWASP

4.13 TESTING IN THE AGILE ENVIRONMENT


o Agile Testing is a type of software testing that follows the principles of agile software
development to test the software application.
o All members of the project team along with the special experts and testers are
involved in agile testing.
o Agile testing is not a separate phase and it is carried out with all the development phases
i.e. requirements, design and coding, and test case generation. Agile testing takes place
simultaneously throughout the Development Life Cycle.
o Agile testers participate in the entire development life cycle along with development
team members and the testers help in building the software according to the customer
requirements and with better design and thus code becomes possible.
o Agile Testing has shorter time frames called iterations or loops. This methodology is
also called the delivery-driven approach because it provides a better prediction of
workable products in a shorter duration.

Agile Testing Principles ( Features / Advantages )


 Quick feedback : In Agile Testing, the testing team gets to know the product
development and its quality in each and every iteration. This continuous feedback
minimizes the feedback response time, and the fixing cost is also reduced.
 Continuous Testing: Agile testing is not a different phase. It is performed alongside
the development phase. It ensures that the features implemented during that iteration are
actually done. Testing is not kept pending for a later phase.
 Involvement of all members: Agile testing involves each and every member of the
development team and the testing team. It includes various developers and experts.
 Less Documentation: Agile testers use reusable checklists to suggest tests and focus
on the essence of the test rather than the incidental details.
 Clean code: The defects that are detected are fixed within the same iteration. This
ensures clean code at any stage of development.
 Constant response: Agile testing helps to deliver responses or feedback on an ongoing
basis.Thus, the product can meet the business needs.
 Customer satisfaction: As the customers are exposed to the product throughout the
development process in agile testing, the customer can modify the requirements, and update
the requirements and the tests can also be changed as per the changed requirements.
 Test-driven: In agile testing, the testing needs to be conducted alongside the
development process to shorten development time, whereas in the traditional process
testing is implemented after the implementation, or once the software is developed.

Agile Testing Life Cycle


The agile testing life cycle has 5 different phases:
1. Impact Assessment: This is the first phase of the agile testing life cycle also known as
the feedback phase where the inputs and responses are collected from the users and
stakeholders. This phase supports the test engineers to set the objective for the next phase in
the cycle.
2. Agile Testing Planning: In this phase, the developers, customers, test engineers, and
stakeholders team up to plan the testing process schedules, regular meetings, and
deliverables.
3. Release Readiness: This is the third phase in the agile testing lifecycle where the test
engineers review the features which have been created entirely and test if the features are
ready to go live or not and the features that need to be sent again to the previous development
phase.
4. Daily Scrums: This phase involves the daily morning meetings to check on testing and
determine the objectives for the day. The goals are set daily to enable test engineers to
understand the status of testing.
5. Test Agility Review: This is the last phase of the agile testing lifecycle that includes
weekly meetings with the stakeholders to evaluate and assess the progress against the goals.
Agile Testing Strategies:
Agile testing has four strategies or stages that help to enhance the quality of the product:

1. Iteration 0
It is the first stage of the testing process and the initial setup is performed in this stage. The
testing environment is set in this iteration.
• This stage involves executing the preliminary setup tasks such as finding people for
testing, preparing the usability testing lab, preparing resources, etc.
• The business case for the project, boundary situations, and project scope are verified.
• Important requirements and use cases are summarized.
• Initial project and cost valuation are planned.
• Risks are identified.
2. Construction Iteration
It is the major phase of testing, and most of the work is performed in this phase. It is a set
of iterations to build an increment of the solution. This process is divided into two types:
a. Confirmatory testing: This type of testing concentrates on verifying that the system
meets the stakeholders’ requirements as described to the team to date, and is performed by
the team. It is further divided into 2 types of testing:
Agile acceptance testing: It is the combination of acceptance testing and functional
testing. It can be executed by the development team and the stakeholders.
Developer testing: It is the combination of unit testing and integration testing and
verifies both the application code and database schema.
b. Investigative testing: Investigative testing detects the problems that are skipped or
ignored during confirmatory testing. In this type of testing, the tester determines the
potential problems in the form of defect stories. It focuses on issues like integration
testing, load testing, security testing, and stress testing.

3. Release End Game


This phase is also known as the transition phase. It includes full system testing
and acceptance testing. To complete the testing stage, the product is tested more rigorously
than while it was in construction iterations. In this phase, testers work on the defect stories.
This phase involves activities like:
• Training end-users.
• Support people and operational people.
• Marketing of the product release.
• Back-up and restoration.
• Finalization of the system and user documentation.

4. Production
It is the last phase of agile testing. The product is finalized in this stage after the removal of
all defects and issues raised.
Agile Testing Quadrants
The whole agile testing process is divided into four quadrants:

1. Quadrant 1 (Automated)
The first agile quadrant focuses on the internal quality of code, which contains the test cases
and test components that are executed by the test engineers. All test cases are
technology-driven and used for automation testing. Throughout the first agile quadrant, the
following testing can be executed:
• Unit testing / Component testing.
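A Quadrant 1 automated unit test can be sketched with Python's built-in unittest framework. The apply_discount function below is a hypothetical component under test, invented for illustration:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically, as a CI pipeline would on every iteration.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because such tests run automatically in every iteration, defects are fixed within the same iteration, supporting the "clean code" principle described earlier.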

2. Quadrant 2 (Manual and Automated)


The second agile quadrant focuses on the customer requirements that are provided to the
testing team before and throughout the testing process. The test cases in this quadrant are
business-driven and are used for manual and automated functional testing. The following
testing will be executed in this quadrant:
• Pair testing.
• Testing scenarios and workflow.
• Testing user stories and experiences like prototypes.

3. Quadrant 3 (Manual)
The third agile quadrant provides feedback to the first and the second quadrants. This quadrant
involves executing many iterations of testing; the resulting reviews and responses are then
used to strengthen the code. The test cases in this quadrant are typically executed
manually. The testing that can be carried out in this quadrant are:
• Usability testing.
• Collaborative testing.
• User acceptance testing.
• Pair testing with customers.

4. Quadrant 4 (Tools)
The fourth agile quadrant focuses on the non-functional requirements of the product like
performance, security, stability, etc. Various types of testing are performed in this quadrant to
deliver non-functional qualities and the expected value. The testing activities that can be
performed in this quadrant are:
• Non-functional testing such as stress testing, load testing, performance testing, etc.
• Security testing.
• Scalability testing.
• Infrastructure testing.
• Data migration testing.
Challenges During Agile Testing (Disadvantages):
Changing requirements: Sometimes during product development, changes in the
requirements or specifications occur. When they occur near the end of a sprint, the
changes are moved to the next sprint and thus become an overhead for developers and
testers.
Inadequate test coverage: In agile testing, testers sometimes miss critical test cases
because of continuously changing requirements and continuous integration. This problem
can be solved by keeping track of test coverage through analysis of the agile test metrics.
Tester’s availability: Sometimes the testers don’t have adequate skills to perform API and
Integration testing, which results in missing important test cases.
Less Documentation: In agile testing, there is less or no documentation which makes the
task of the QA team more tedious.
Performance bottlenecks: Sometimes developers build products without understanding the
end-user requirements, following only the specification requirements, resulting in
performance issues in the product.
Skipping essential tests: In agile testing, sometimes agile testers, due to time constraints and
the complexity of the test cases, put some of the non-functional tests on hold. This may
cause some bugs later that may be difficult to fix.

4.14 TESTING WEB AND MOBILE APPLICATIONS


Web Applications
Web applications refer to computer programs that run in a web browser. Commonly built
with the help of HTML5, CSS and JavaScript, web applications offer more interactivity than
websites and can be accessed via a desktop or laptop. The classic examples of web
applications include webmail, online stores and web banking.

Mobile Applications
A mobile application is a program that was built to be used on mobile devices (smartphones,
tablets and various wearables). Mobile apps are not as straightforward as desktop web apps
and fall into three varieties: mobile web, native and hybrid apps.
Mobile web applications
A mobile web application is a program that can be accessed via a mobile browser, meaning
that you don’t have to download them to your device to start using them. Like web apps,
mobile web applications are usually built using JavaScript, CSS and HTML5; however, there
is no standard software kit. Contrary to other mobile applications, web apps for mobile use
are easier to build and test, but they’re usually much more primitive in terms of functionality.
Native applications
Native mobile applications run on the device itself, so you have to download them before
using them. Since they are platform- specific, native mobile apps are built using specific
languages and integrated development environments (IDEs). For example, Android native
applications are developed using Java and Android Studio or Eclipse IDE. At the same time,
to build an app for an Apple device, you’ll need to use Objective-C or Swift and the XCode
IDE. Native apps are secure, integrate with the hardware perfectly and have the best UI/UX
experience.
Hybrid applications
Hybrid apps combine the characteristics of native and mobile web apps. Built with the help of
the “standard web” stack (JavaScript, CSS and HTML5), they are then wrapped in a native
environment, so you can use the same code for different platforms. While running on your
mobile browser, hybrid applications are downloadable and have access to your camera, GPS,
contact list, etc. Though such applications are easier to build and maintain, they are slower
and offer less advanced functionality than their native counterparts.
Types of Mobile App Testing and Web App Testing
Whether it comes to testing web or mobile applications, the aim is to ensure that an app is
user-friendly and functions properly under different circumstances. Furthermore, both
application testing varieties include common types of testing listed below:
• Functional testing
• User-interface testing
• Usability testing
• Configuration and Compatibility testing
• Security testing
• Performance testing
• Database testing

1. Functional Testing
Functional testing involves checking of the specified functionality of a web
application. Functional test cases for web applications may be generated using boundary
value analysis, equivalence class testing, decision table testing and many other techniques.
Example: Let us consider an eCommerce application that sells products such as computers,
mobile phones, cameras, etc. The home page of this web application is given in the figure
below:

Figure : Homepage of online shopping web application


Table below presents some sample functional test cases of the order process form of an
online shopping website.

Table: Sample functional test cases of order process of an online shopping webapplication
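The boundary value analysis technique mentioned above can be sketched in a few lines of Python. The 1–10 quantity limit on the order form is a hypothetical rule chosen for illustration:

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary value analysis: test just below, at, and just
    above each boundary of the valid input range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_quantity(qty: int, lo: int = 1, hi: int = 10) -> bool:
    """Hypothetical rule: the order form accepts quantities from 1 to 10."""
    return lo <= qty <= hi

# Derive the boundary test inputs and exercise the validation rule with each.
for qty in boundary_values(1, 10):
    print(qty, "accepted" if is_valid_quantity(qty) else "rejected")
```

The values 0 and 11 must be rejected while 1, 2, 9 and 10 must be accepted; a mismatch at any boundary reveals a classic off-by-one defect in the form's validation.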

2. User-interface Testing
User interface testing tests that the user interaction features work correctly. These
features include hyperlinks, tables, forms, frames and user interface items such as text fields,
radio buttons, check boxes, list boxes, combo boxes, command buttons and dialog boxes.
User interface testing ensures that the application handles mouse and keyboard events
correctly and displays hyperlinks, tables, frames, buttons, menus, dialog boxes, error message
boxes, and toolbars properly.
2.1. Navigation Testing
Navigation testing investigates the proper functioning of all the internal and external links.
Navigation testing must ensure that websites provide consistent, well-organized links and
should also provide alternative navigation schemes such as search options and site maps. The
placement of navigation links on each page must be checked. Search based navigation facility
must also be thoroughly tested and search items should be consistent across one page to
another. All the combinations of keywords and search criteria must be verified in navigation
testing. Table below presents test cases for navigation testing for an online shopping website

Table :Navigation testing test cases for online shopping website


Manual checking of hyperlinks can be very time consuming. There are various online
tools available for checking broken links, accuracy and availability of links and obtaining
advice on search engines. Some tools for navigation testing include Link checker, Dead Links,
LinkTiger.
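The first step such link-checking tools perform, gathering every hyperlink on a page, can be sketched with only Python's standard library; the page fragment below is hypothetical. A full navigation test would then request each collected URL and flag any non-200 responses as broken links:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every hyperlink target on a page -- the first step of a
    navigation test, before each URL is requested and its status checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical navigation fragment from an online shopping site.
page = """
<nav>
  <a href="/home">Home</a>
  <a href="/products">Products</a>
  <a href="https://example.com/support">Support</a>
  <a name="anchor-only">No target</a>
</nav>
"""
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', '/products', 'https://example.com/support']
```

Note that the anchor without an href is correctly ignored; distinguishing internal links (relative paths) from external ones lets the test apply different availability checks to each.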

2.2 Form Based Testing


Websites that include forms need to ensure that all the fields in the form are working
properly. Form-based testing involves the following issues:
1. Proper navigation from one field of the form to another using the tab key.
2. Ensuring that the data entered in the form is in a valid format.
3. Checking that all the mandatory fields are entered in the form.
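The mandatory-field and format checks described above can be sketched as a server-side validation routine. The field names and format rules below are hypothetical, chosen to resemble an order form:

```python
import re

# Hypothetical order-form rules: each mandatory field and the format it must follow.
RULES = {
    "name":  re.compile(r"^[A-Za-z ]{2,50}$"),      # 2-50 letters/spaces
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "pin":   re.compile(r"^\d{6}$"),                 # 6-digit postal code
}

def validate_form(form: dict) -> list[str]:
    """Return a list of error messages; an empty list means the form passes."""
    errors = []
    for field, pattern in RULES.items():
        value = form.get(field, "").strip()
        if not value:
            errors.append(f"{field}: mandatory field is missing")
        elif not pattern.fullmatch(value):
            errors.append(f"{field}: invalid format")
    return errors

print(validate_form({"name": "Asha Rao", "email": "asha@example.com", "pin": "600077"}))  # []
print(validate_form({"name": "A", "email": "not-an-email"}))
```

A form-based test suite would submit both valid and deliberately malformed data and assert that exactly the expected error messages appear.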

3. Usability Testing
Usability testing refers to the procedure employed to evaluate the degree to which the software
satisfies the specified usability criteria.
4. Configuration and Compatibility Testing
One of the significant challenges of web testing is that it must ensure the proper
functioning of a web application on all the supported platforms and suitable environments.
Configuration testing determines the behaviour of the software with respect to various
configurations whereas compatibility testing determines whether the web application behaves
as expected with respect to various supported configurations.
5. Security Testing
Security is the procedure used to protect information from various threats. It is very
important to protect sensitive and critical information and data while communicating over the
network. The user wants implementation of a safeguard to protect personal, sensitive and
financial information. We want data to be accurate, reliable and protected against
unauthorized access.
Security involves various threats such as unauthorized users, malicious users, message
sent to an unintended user, etc. The primary requirement of security includes:
i. Authentication: Is the information sent from an authenticated user?
ii. Access Control: Is data protected from unauthorized users?
iii. Integrity: Does the user receive exactly what is sent?
iv. Delivery: Is the information delivered to the intended user?
v. Reliability: What is the frequency of a failure? How much time does the network take to
recover from a failure? What measures are taken to counter catastrophic failure?
6. Performance Testing
The goal of performance testing is to evaluate the application’s performance with respect
to real world scenarios. The following issues must be addressed during performance testing:
i. Performance of the system during peak hours (response time, reliability and
availability).
ii. Points at which the system performance degrades or system fails.
iii. Impact of the degraded performance on the customer loyalty, sales and profits.
6.1 Load Testing
Load testing involves testing the web application under real world scenarios by
simulating numerous users accessing the web application simultaneously. It tests the web
application by providing it maximum load.
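The idea of simulating numerous simultaneous users can be sketched with a thread pool. The handler below is a stub standing in for the web application; a real load test would issue HTTP requests (or use a dedicated tool such as JMeter) instead:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id: int) -> float:
    """Stand-in for one user's request; a real load test would call the
    web application over HTTP and measure the actual response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server takes ~10 ms to respond
    return time.perf_counter() - start

# Simulate 50 users hitting the application at the same time.
with ThreadPoolExecutor(max_workers=50) as pool:
    response_times = list(pool.map(simulated_request, range(50)))

print(f"requests: {len(response_times)}")
print(f"max response time: {max(response_times):.3f}s")
```

A load test then compares the collected response times against the agreed performance targets; stress testing extends the same idea by raising the load beyond the expected maximum and sustaining it for long periods.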
6.2 Stress Testing
Stress testing involves execution of a web application with more than maximum and
varying loads for long periods.
7. Database Testing
In web applications, many applications are database driven, for example, e-commerce
related websites or business-to-business applications. It is important for these applications to
work properly and provide security to the user’s sensitive data such as personal details and
credit card information. Testing data-centric web applications is important to ensure their
error-free operation and increased customer satisfaction.
For example, consider purchasing items from an online store. If the user performs a search based on some keywords and price preferences, a database query is created by the database server. Suppose that, due to a programming fault, the query does not consider the price preferences given by the customer; this will produce erroneous results. These kinds of faults must be detected and removed during database testing.
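The price-preference fault described above can be caught by a data-centric test. A minimal sketch using an in-memory SQLite database; the table, data and query are illustrative, not taken from any real store:

```python
import sqlite3

def search_items(conn, keyword, max_price):
    """Query under test: must honour both the keyword and the price preference."""
    return conn.execute(
        "SELECT name, price FROM items WHERE name LIKE ? AND price <= ?",
        (f"%{keyword}%", max_price),
    ).fetchall()

# Set up a throwaway database with known test data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("red shirt", 15.0), ("red jacket", 80.0), ("blue shirt", 12.0)])

# Database test: every result must match the keyword AND the price limit.
results = search_items(conn, "red", 50.0)
assert all(price <= 50.0 for _, price in results)
assert all("red" in name for name, _ in results)
print(results)  # a faulty query that ignored price would also return the 80.0 jacket
```

The assertions fail if the query ignores the price filter, which is exactly the class of fault the example in the text describes.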
Table: Sample test cases based on a user operation in an online shopping website.
Difference between Mobile App and Web App
1. Mobile apps are software programs that are used on mobile devices, whereas web apps are software programs that are used on a computer.
2. New mobile applications can be downloaded from an app store, whereas web applications are updated on the website itself.
3. It is not easy to create a responsive design for small-screen devices such as mobile phones and tablets, whereas it is easy to code a responsive design for large-screen devices such as desktops and laptops.
4. Mobile applications are developed for a broader range of users, whereas web applications are developed for a narrower range of users.
5. Mobile storage capacity is less than that of a desktop or laptop, which has a larger storage capacity.
Tools for Mobile App and Web App Testing
The reasons for choosing the right tool for test automation are a higher level of test coverage, better reliability and faster test execution.
Tools for Web App Testing
 Selenium
Selenium is a powerful open-source automated testing framework that consists of Selenium IDE, Selenium WebDriver and Selenium Grid. Selenium supports multiple programming languages for script creation, allows users to record and re-run saved scripts, and works well across different browsers and operating systems.
Tools for Mobile App Testing
 Appium
Appium is a black-box mobile app testing tool. Based on Selenium, it is an open-source tool for testing hybrid, web and native Android and iOS mobile applications.
 Espresso
Espresso is a UI quality assurance framework designed by Google for white box testing.
Since it was created to test Android native applications, Espresso tests can be written in
Java and Kotlin, the programming languages used to develop Android applications.
Comparison of Selenium, Appium and Espresso:
Platform type: desktop browsers (Selenium); both Android and iOS (Appium); Android (Espresso)
App type: web (Selenium); native, web and hybrid (Appium); native, web and hybrid (Espresso)
Areas to test: functional, regression (Selenium); functional, regression, UI (Appium); UI (Espresso)
Scripting language: Java, C#, Perl, Python, JavaScript, Ruby, PHP (Selenium); Java, C#, Python, PHP, Ruby, JavaScript (Appium); Java (Espresso)
License: open-source (all three)
UNIT V
TEST AUTOMATION AND TOOLS
Automated Software Testing, Automate Testing of Web Applications, Selenium:
Introducing Web Driver and Web Elements, Locating Web Elements, Actions on Web
Elements, Different Web Drivers, Understanding Web Driver Events, TestNG:
Understanding TestNG.xml, Adding Classes, Packages, Methods to Test, Test Reports.
5.1 AUTOMATED SOFTWARE TESTING
Automated software testing is the method of automatically reviewing and
validating software products, such as web and mobile applications. This process ensures
that they meet all predefined quality standards for code style, functionality, and user
experience. Test automation replaces manual human activity with systems. Even though tests, such as regression or functional testing, can be done manually, automating the process reduces the time taken to perform them. Moreover, automation frees testers to spend more time on exploratory testing while scripts handle the repetitive checks, thus increasing the overall test coverage.
5.1.1 : Evolution of Automated Testing (Generations of Automation)
Automation in software testing has evolved significantly over the years, and different
generations of automation reflect the advancement of tools, methodologies, and practices.
1st Generation: Manual Testing + Scripted Automation
 Era: 1950s - 1980s
 Overview: The first generation of automation was focused on transitioning manual
testing into the digital realm using basic scripts.
 Characteristics:
o Manual Testing Dominates: Testing was mostly manual, and scripts were
created for repetitive tasks.
o Tools Used: Early automation tools were primarily focused on record-and-
playback methods (e.g., WinRunner, QTP).
o Focus: The goal was to automate repetitive test execution and reduce human
error.
o Challenges: Tools were rudimentary and offered limited flexibility. Automation
was largely used for regression testing rather than complex test cases.
2nd Generation: Functional Test Automation
 Era: 1990s - Early 2000s
 Overview: The second generation expanded on the first by incorporating functional
automation that was more integrated into development and testing processes.
 Characteristics:
o More Advanced Tools: Tools like Selenium (launched in 2004) and JUnit
emerged. These tools supported better integration with continuous integration
(CI) systems.
o Test Coverage Expansion: Focused on increasing test coverage, especially for
functional and regression testing.
o Scripting Languages: More advanced scripting languages (e.g., Java, Python)
began to be used, making the automation more adaptable.
o Better Integration: Testing tools integrated better with the development
environment, improving collaboration between developers and testers.
o Challenges: Test maintenance became harder with the growth of code bases.
The scripts were often fragile and required frequent updates.
3rd Generation: Continuous Integration & Continuous Testing
 Era: 2000s - 2010s
 Overview: Automation evolved alongside agile methodologies and DevOps practices.
Continuous testing (CT) became an important part of the software development lifecycle.
 Characteristics:
o CI/CD Integration: Automation tools were integrated into CI/CD pipelines,
making tests run automatically every time code changes are committed.
o Test Automation Frameworks: Frameworks like JUnit, TestNG, and Appium
became more sophisticated, enabling more complex test strategies.
o Parallel Execution: Parallel testing techniques and distributed testing became
more prevalent.
o Shift-left Testing: Testing was performed earlier in the software lifecycle,
allowing bugs to be caught early.
o Challenges: Increased focus on automated regression testing, leading to high
maintenance of test scripts. Achieving stable automation across large systems
was still a challenge.
4th Generation: AI/ML-Driven Automation
 Era: 2015 - Present
 Overview: The latest generation of automation testing is heavily influenced by artificial
intelligence (AI) and machine learning (ML) algorithms.
 Characteristics:
o AI and ML in Test Creation: Tools are using AI/ML to create, optimize, and
maintain test scripts, reducing the need for manual test creation.
o Self-Healing Automation: AI-powered tools can automatically adjust and fix
tests when application elements change (e.g., name, attributes).
o Predictive Analysis: ML models predict areas of code that are likely to break
based on past behavior, helping focus testing efforts.
o Natural Language Processing (NLP): Tools like Testim.io and mabl are
utilizing NLP to allow testers to write tests in plain language.
o Autonomous Testing: Some tools are moving toward fully autonomous
testing, requiring minimal human input.
o Challenges: Complexity increases as testers need a deep understanding of AI/ML
techniques. Tools still require quality data and fine-tuning.
5th Generation: Autonomous Test Generation & Execution
The emerging fifth generation extends AI/ML-driven automation toward fully autonomous test generation and execution, where tests are created, run, and healed with little or no human scripting.
Each generation of test automation has focused on reducing human intervention, increasing efficiency, and improving the ability to detect issues early. The future promises even more intelligent and adaptive systems, where testing is largely automated and self-healing.
5.1.2 : Purpose of Automation Testing:
Automation testing serves several important purposes in the software development lifecycle.
Let's explore some key reasons why organizations embrace automation testing:
1. Increased Test Coverage: Automation testing enables a broader scope of test coverage.
Organizations can leverage a test automation platform and use it to design test scripts to
cover various scenarios and test cases, ensuring thorough validation of software functionality.
With automated tests, organizations can achieve higher levels of test coverage, resulting in
improved software reliability.
2. Consistency and Reusability: Automation testing ensures consistent test execution by
removing the element of human error. Using test automation platform to automate testing,
you can reuse test scripts across multiple test cycles and different software versions. This
reusability not only saves time but also promotes consistency in testing, enabling accurate
comparison of results over time.
3. Early Detection of Defects: Automation testing enables early detection of defects. By running automated tests at different stages, such as during integration or regression testing, potential bugs can be identified and addressed promptly. Early defect detection reduces the cost of fixing them.
5.1.3 : Kinds of Tests that should be Automated:
While testing an application/software, testers cannot automate all processes involved in
the testing cycle. Some tests need human supervision and involvement to get better
results. Using test automation platforms to automate testing is not an alternative to
manual testing but helps and supports the entire testing team by reducing the workload.
In order to determine whether a test is suitable for automation, testers can check if it fits the following criteria:
• The tests are highly repetitive and take a long period of time to perform manually
• The testing path is predictable, as it has been verified earlier through manual testing
• The tests involve frequently used features that introduce high-risk conditions
• The tests require multiple datasets and run on several different hardware or software platforms and configurations
• The tests are not possible to perform manually, e.g., thousands of concurrent users trying to log in at the same time
If a test meets the criteria mentioned above, you can consider automating it with a test automation platform.
5.1.4 : Stages (Phases) of Automation Testing Life Cycle (ATLC)
Automation testing life cycle is a multi-stage process that consists of the tasks necessary to
identify and introduce an automation test tool, write and run test cases, develop test designs, and
build and manage test data and environment. Six phases are important.
1. Deciding The Scope of Test Automation
The first stage of automation testing life cycle aims to discover the feasibility of automation. It is
essential to perform a feasibility analysis on the manual test cases that helps automation
engineers to design the test scripts.
We address the following in the first stage -
 Which components of the applications can be automated?
 Which tests can be automated and how to automate them?
 Factors like cost, team size and capabilities must also be considered.
Feasibility checks like Test Case Automation feasibility and AUT Automation feasibility should
be performed before starting the test automation.
Fig. Stages of the Automation Testing Life Cycle (flow): Deciding the Scope of Test Automation → Choosing the Right Automation Tool → Plan, Design, and Strategy → Set-up Test Environment → Test Script & Execution → Test Analysis & Reporting
2. Choosing The Right Automation Tool
While choosing an automation tool, the technologies being used in the project, the familiarity of
the tool with the team, intuitiveness, and flexibility must be considered. For example, if you are
looking for an automated browser compatibility testing tool then the variety of browsers offered
is a critical deciding parameter.
We must do a comparative study of automation tools before making a decision. Some of the frequently used automation tools nowadays are Selenium, Appium, Katalon Studio, Cucumber, SoapUI, Worksoft, Test Studio, LambdaTest, TestComplete and Testimony.
3. Plan, Design, and Strategy
Selecting a test automation framework is the first and foremost thing to do in the Test Strategy
phase of Automation Testing Life Cycle.
The team of test engineers design a test architecture to describe the test program structure and
the way test procedures are managed.
We consider the following things when planning this phase :
 Gather all manual test cases from the test management tool to identify which test
cases need to be automated.
 Identify which framework to be used after understanding the pros and cons of the
testing tools.
 Build a test suite for Automation test cases in the selected tool for test management.
 Ensure to mention the background, risk, and dependency between the tool
and application in the test plan.
 Seek approval on the test strategy from clients or stakeholders.
4. Set-Up Test Environment
Key areas for the Test Environment setup :
 Check for the required software, licenses and hardware.
 Maintain a checklist of automation tools and their configurations.
 Test data – The test environment should be populated with data similar to production data.
 Front-End Running Environment – Availability of a front-end running environment to perform load testing for analyzing the capability of handling web traffic.
 Checklist of all the systems, modules and applications to be put under test.
 Availability of the staging environment.
 Test across various operating systems, browsers and browser versions.
 Test your web applications on low and high network bandwidths to observe the difference in rendering time.
 Document all the Configuration/Installation/User manuals in a central repository.
 Planning the scheduled use of the test environment.
5. Test Script & Execution
Once we introduce the test environment, the next step is to develop and execute the test scripts.
 Create scripts based on project requirements.
 Use a consistent approach throughout the process.
 Scripts must be reusable, simple, and structured so that anyone can understand them.
 Perform proper code reviews and reporting to get better insights and maintain quality throughout the process.
 We must incorporate the following during test executions -
 Test cases should cover all functional aspects.
 They should cover all platforms and environments.
 They must be processed in batches to save time and effort.
 Always document bug reports so that functional errors do not slip through.
Evaluating and documenting test results for further reference is done in this stage of the ATLC.
6. Test Analysis & Reporting
In this phase, we gather the test automation results and share them with the team, stakeholders
and client. Test results must be easy to understand for everyone involved. Proper filters must be
used in the report.
For maintenance, test cases are updated and automated regularly as per functional or UI changes or new testing criteria.
Conclusion
The 6 stages of automation testing life cycle are crucial for ensuring the effective
implementation of automated testing.
5.1.5 : Various Types of Automated Software Testing:
1. Unit Testing:
The testing of each unit of the software application is known as unit testing. As it is the first
level of testing, you can use test automation platforms to automate it. This type of testing is used to validate individual unit components and their performance. Primarily, unit testing is performed during the development phase.
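As a minimal illustration, a unit test exercises one function in isolation. The `apply_discount` function below is a made-up example, and the test uses Python's built-in unittest module:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # The unit must reject out-of-range inputs, not silently compute.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the script can continue after the test run.
    unittest.main(argv=["discount_tests"], exit=False, verbosity=2)
```

Because each test touches only one small function with no external dependencies, such tests run in milliseconds, which is why unit tests form the cheap, broad base of automated testing.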
2. Smoke Testing:
Smoke testing is usually done on a software build received from the development team. The focus of the smoke tests is to check whether the build is stable or not. If the software passes this test, then testers can proceed with further testing.
3. Integration Testing:
Integration testing is the testing process that is performed after unit testing. This test ensures that
units or individual components of the software are tested in a group and work well as a whole.
This test is used to detect defects at the time of interaction between integrated components or
units.
4. Regression Testing
Regression testing is both a functional and non-functional type of testing. It verifies that code changes do not impact the software's existing functionality. This testing ensures that the software works fine with new functionality, bug fixes, or code changes in existing features.
With HeadSpin’s test automation platform, testing teams can perform regression automation
testing for their apps/websites. HeadSpin's Regression Intelligence is a powerful comparison
tool for analyzing degradation across new app builds, OS releases, feature additions, locations,
and more. Using the test automation platform, testers can also compare build over build, location
over location, network over network, and device over device performance of their apps/websites.
5. API Testing
The application programming interface (API) is the connection between all the other systems
that software needs to function. This testing verifies all APIs. API testing is mainly used to test
the programming interfaces' functionality, reliability, performance, and security.
While executing API testing with the HeadSpin Platform, the API usage monitoring feature will
help testers keep track of how their APIs are being used by applications or track the impact of
3rd party APIs on application performance.
6. Security Testing
Security testing is also functional and non-functional in nature. It detects the weaknesses and
threats in the software. This testing can block the attacks from hackers and ensure the security of
the software.
7. Performance Testing
Performance testing records the system performance of the software in terms of responsiveness
and stability under a specific workload. The main parameters checked under this testing include
the software's speed, robustness, and reliability.
8. Acceptance Testing
Acceptance testing is used to check how end users will respond to the final software product.
Usually, this is the last type of testing used before a software/application is released.
5.1.6 : Automated Software Testing Tools
Automated testing tools have become indispensable for delivering quality software efficiently. These tools speed up the testing process and enhance its accuracy. Some automated software testing tools are listed below:
 Selenium - Selenium supports multiple languages and browsers, focusing on web
application testing.
 TestComplete - TestComplete supports desktop, mobile, and web applications.
 JUnit - JUnit facilitates unit testing with simplicity and ease of use.
 Cypress - Cypress is a modern web testing tool designed to work exclusively with
web applications.
 Robot Framework - Robot Framework is designed for acceptance testing
 Cucumber - Cucumber allows for the specification of application behavior in plain
language. This makes tests easy to read and understand.
5.1.7 : Types of Test Automation Frameworks:
(Five ways in which automation can aid in software testing)
In the test automation process, testing frameworks play a crucial role. These frameworks include
guidelines for testers/developers in coding standards, repository management, and handling of
test data. The main focus of these frameworks is to reduce maintenance costs and testing efforts
and achieve a high return on investment for the testing teams.
The different types of test automation frameworks are given below:
1. Linear Automation Framework
The linear test automation framework guides testers to create test functions without writing code, and the steps in this framework are given in sequential order. While testing with this framework, testers record every step and play the script back automatically to repeat the test.
Advantages of a linear framework:
 There is no need to write custom code, so expertise in test automation is not necessary.
 This is one of the fastest ways to generate test scripts since they can be easily recorded in a
minimal amount of time.
 The test workflow is easier to understand for any party involved in testing since the scripts
are laid out in a sequential manner.
Disadvantages:
 The scripts developed using this framework aren’t reusable.
 Maintenance is considered a hassle because any changes to the application will require a
lot of rework.
2. Modular-based Testing Framework
In the modular-based testing framework, testers need to divide the application/software under
test into separate units or sections. These separate units or sections are tested in isolation.
Individual test scripts are created for each part, and after testing, all parts are combined to build
larger tests that represent various test cases.
Advantages of a Modular Framework:
 If any changes are made to the application, only the module and its associated individual test script will need to be fixed.
 Creating test cases takes less effort because test scripts for different modules can be reused.
Disadvantages of a Modular Framework:
 Data is still hard-coded into the test script since the tests are executed separately, so you can’t
use multiple data sets.
 Programming knowledge is required to set up the framework.
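A modular framework can be sketched as one small test function per module, which larger test cases then combine. The login and cart modules below are hypothetical stand-ins for real application units:

```python
# Hypothetical application modules under test.
def login(user, password):
    return user == "alice" and password == "secret"

def add_to_cart(cart, item):
    cart.append(item)
    return cart

# One independent test script per module.
def test_login_module():
    assert login("alice", "secret")
    assert not login("alice", "wrong")

def test_cart_module():
    assert add_to_cart([], "book") == ["book"]

# Larger test case built by combining the module scripts.
def test_purchase_flow():
    test_login_module()
    test_cart_module()

test_purchase_flow()
print("all modular tests passed")
```

If the login module changes, only `test_login_module` needs updating, while the combined flow test is reused unchanged, which is the maintenance advantage described above.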
3. Data-driven Framework
In the data-driven framework, the test data are separated from script logic, and testers can
store all the data externally. With this framework, whenever testers need to test
application/software multiple times with different data sets, they can use the data stored
in external data sources.
Advantages of a Data-Driven Framework:
 Tests can be executed with multiple data sets.
 Multiple scenarios can be tested quickly by varying the data, thereby reducing the number of
scripts needed.
Disadvantages:
 A highly experienced tester who is proficient in various programming languages is needed to properly utilize this framework design.
 Setting up a data-driven framework takes a significant amount of time.
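In a data-driven framework the same script runs once per external data row. A minimal sketch where the data set is kept in CSV form, separate from the test logic; the `login` function is a stand-in for the application under test:

```python
import csv, io

# External test data (would normally live in a separate .csv or Excel file).
TEST_DATA = """username,password,expected
alice,secret,pass
alice,wrong,fail
bob,secret,fail
"""

def login(user, password):
    """System under test (stand-in for a real login)."""
    return user == "alice" and password == "secret"

def run_data_driven_tests(data):
    """Run the single test script once for every external data row."""
    results = []
    for row in csv.DictReader(io.StringIO(data)):
        outcome = "pass" if login(row["username"], row["password"]) else "fail"
        results.append(outcome == row["expected"])
    return results

print(run_data_driven_tests(TEST_DATA))   # one verdict per data row
```

Adding a new scenario means adding one CSV row rather than a new script, which is why this design reduces the number of scripts needed.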
4. Keyword-driven Framework
In the keyword-driven framework, keywords are stored in an external data table along with the test data. These keywords represent the various actions that are performed to test the GUI of an application.
Advantages of Keyword-Driven Frameworks:
 Minimal scripting knowledge is needed.
 A single keyword can be used across multiple test scripts, so the code is reusable.
Disadvantages:
 The initial cost of setting up the framework is high.
 It is time-consuming and complex.
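A keyword-driven framework maps keywords from an external table to implementation functions, so test cases are written as rows of keywords rather than code. The keywords, actions and URL below are illustrative; in practice the actions would drive a real GUI through a tool such as Selenium:

```python
# Shared state standing in for the application's GUI.
state = {"url": None, "fields": {}}

# Keyword implementations (the "action library").
def open_page(url):
    state["url"] = url

def enter_text(field, value):
    state["fields"][field] = value

def click_button(name):
    state["fields"]["clicked"] = name

KEYWORDS = {"open": open_page, "type": enter_text, "click": click_button}

# External keyword table: one test step per row, written without code.
TEST_STEPS = [
    ("open", "https://example.com/login"),
    ("type", "username", "alice"),
    ("type", "password", "secret"),
    ("click", "Login"),
]

# Driver script: interpret each row by dispatching on its keyword.
for keyword, *args in TEST_STEPS:
    KEYWORDS[keyword](*args)

print(state)
```

Because the same `open`/`type`/`click` keywords can appear in any number of test tables, the action code is written once and reused everywhere, at the cost of building the dispatcher up front.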
5. Hybrid Testing Framework
The hybrid framework is a combination of the frameworks already mentioned. This type of framework is used to leverage the advantages of some frameworks while mitigating the weaknesses of others.
5.1.8 Advantages of Automation Testing
 Simplifies Test Case Execution
 Improves Reliability of Tests.
 Increases amount of test coverage
 Minimizing Human Interaction
 Earlier detection of defects
5.1.9 Disadvantages of Automation Testing
 High initial cost
 100% test automation is not possible
 Not possible to automate all testing types
 Programming knowledge is required
 Complex process
5.1.10 : Test Automation Pyramid
The Test Automation Pyramid is a concept that helps guide the organization and strategy of automated tests in software development. The pyramid consists of three main levels, representing different types of tests that should be automated at various stages of development:
1. Unit Tests (Bottom Layer)
 Focus: Tests small, isolated pieces of code (functions, methods, classes).
 Purpose: To ensure that individual units of code work correctly in isolation.
 Speed: Fast execution because they run on a small scope of code.
 Example: Testing if a function returns the expected result for a given input.
 Characteristics:
o Cheap to write and run.
o Should cover most of the codebase.
o Focuses on the internal logic of the application.
2. Service/Integration Tests (Middle Layer)
 Focus: Tests the interaction between different components, services, or systems (e.g.,
database, external APIs).
 Purpose: To ensure that different parts of the application work together as expected.
 Speed: Slower than unit tests because they involve multiple components or external
systems.
 Example: Testing if a web service correctly integrates with a database.
 Characteristics:
o Focus on testing system interactions.
o May involve some level of external dependencies, which could make tests more
complex and slower.
3. End-to-End (E2E) Tests (Top Layer)
 Focus: Tests the full system from start to finish, simulating real-world user interactions.
 Purpose: To verify that the application as a whole behaves correctly from the
user's perspective.
 Speed: Slow execution, as they simulate user interactions with the entire system.
 Example: Testing if a user can successfully log in and complete a purchase on an e-
commerce website.
 Characteristics:
o Focuses on overall system behavior and user experience.
o Expensive to write and maintain.
o Often run less frequently (e.g., on specific builds or milestones).
By following the pyramid approach, test automation becomes a structured and balanced part of
your software development lifecycle, helping you catch issues early without slowing down the
release process.
5.2 AUTOMATE TESTING OF WEB APPLICATIONS
Web Application Testing:
 Web Application Testing ensures that a web app works as expected across browsers, devices, and platforms.
 Testing a web application is a highly crucial and essential part of software development. It is a practice that can be automated using a combination of different tools.
 Automation eventually reduces the need for human intervention and leads to incredible speed, reliability, and efficiency.
 It can be implemented using various types of software automation testing tools suited to the purpose, eventually increasing the performance of the application and enhancing its user interface.
5.2.1 Types of Web App Testing that can be automated:
 Functional Testing
A single end-user can make the whole system crash in minutes, even after unit,
integration, and performance tests have passed. This usually happens because the user
does something the developers did not expect. The purpose of functional testing is
therefore to ensure that the functionality of the software works as intended for an end-
user. It tests this through the UI of the application. Examples of functional tests in a web
application UI include testing:
 The login to your web application is successful across browsers and devices
 The web application is interacting as intended with external databases and
syncing successfully
 Invoices are being sent and received with the correct information and securely
 Buttons, text fields, menus, etc., are working as per the requirements
 Usability Testing (UI/UX Testing)
Usability testing focuses on design aspects rather than functional aspects, assessing the
user experience, and how user-friendly the web application is.
Key aspects include:
 User Interface Evaluation: Analyzing the layout, design, and navigability.
 User Experience Testing: Assessing the ease of learning and using the application.
 Accessibility Testing: Ensuring the application is accessible to all users,
including those with disabilities.
 Regression Testing
Regression testing is critical whenever updates or changes are made to the
application. It ensures that new code doesn't negatively impact existing functionality.
This type of testing:
 Verifies Existing Functionality: Ensures that previous functions still operate as intended
after modifications.
 Identifies Unintended Consequences: Catches any new bugs introduced by recent changes.
 End-to-End Testing
End-to-end testing examines the complete functionality of the web application from start to
finish, emulating real user scenarios. It aims to ensure that all components of the application
work together seamlessly. This involves:
 Workflow Testing: Ensuring all the integrated parts of the application interact
correctly.
 Data Integrity Testing: Confirming that data maintains its integrity throughout all
transactions.
 Cross-Browser Testing
With the variety of browsers available, browser-based testing ensures that the web application
performs consistently across different browsers and their versions. This testing type:
 Ensures Compatibility: Verifies that the application functions correctly on various browsers.
 Identifies Browser-Specific Issues: Highlights any layout or functional issues unique to certain browsers.
 Performance Testing
Performance testing evaluates the web application’s stability and responsiveness under various
conditions. This includes:
 Load Testing: Assessing the application's ability to handle high volumes of users.
 Stress Testing: Determining the application's breaking point and how it recovers from failure.
 Speed Testing: Measuring response times and the speed of page loading under normal conditions.
5.2.2 Steps involved in automating Web application testing
 Planning - Identify the test cases that need to be automated and select the right tools.
 Development – Writing code to execute the test cases
 Execution – Execute the test cases
 Reporting – Report on the results of the automated tests
5.2.3 Web app test automation best practices
(How to select a right automation tool for Web app testing)
Before you start automating your web application tests, make sure you draft a test automation
strategy to keep you on track. Things to keep in mind before you start automation are:
 What are the specific requirements of your web application?
 What types of tests do you need to automate?
 Which test automation tool best suits your requirements and goals, as well as the
resources on your team?
 How much maintenance will automation require?
As a first rule of thumb, start small, and once you’re comfortable, start scaling your automation efforts. No one wants to end up with hundreds of automated test cases that are impossible to maintain. Rather, think of automation like a bell curve: automate too little, and tool costs and onboarding effort will outweigh the return; automate too much, and the time spent changing or maintaining tests starts to exceed the time saved. Ideally, find the sweet spot in the middle where the return is highest.
Successful web application testing requires effective test automation processes, clear
communication within the team, an efficient strategy, and an automation tool that doesn’t impair
testers, but enables them.
The following are some considerations to help to choose the appropriate automation tool:
 Application type and Technology
 Test requirements
 Programming language
 Learning Curve
 Integration and Extensibility
 Cost and licensing
 Maintenance and support
 Team Collaboration
5.2.4 Why to automate Web application testing / Benefits of Web Application testing
There are so many reasons to automate web application testing including
 Increased Speed
 Reduced Cost
 Improved Quality
 Increased Confidence
5.2.5 Challenges (Disadvantages) of Web Application testing
 Initial investment is more
 High Maintenance
 Complex Process
 Lack of Expertise
5.2.6 Web Application Automated test tools:
Some of the tools for web application automated testing are given below:
a. Katalon Studio
Katalon Studio is an automation testing solution that provides users with a comprehensive set of features for testing web, mobile, and API applications.
Advantages:
 It is convenient and accessible to different types of testers.
 It is flexible and easy to use, with quick, powerful (robust) features.
Disadvantages:
 It only supports Java.
 It is not an open-source tool.
Cucumber: Cucumber is a free, open-source testing framework that helps users write automated tests in plain English. It's used for behavior-driven development (BDD).
Advantages:
 It is an open-source automated software testing tool.
 It helps in writing acceptance tests for our web applications.
Disadvantages:
 Integration, and its dependency on plugins for report generation, can be
challenging.
 Every time a new attribute or feature is added, all existing steps have to be
revisited and validated to see if they can still be used.
c. Selenium
Selenium is an open-source testing tool that automates web application testing
across browsers and operating systems. It can be used for a variety of test types, including
system testing, regression testing, and performance testing.
Advantages:
 This tool is open-source and widely supports all languages and frameworks.
 It comes with heavy library packages.
 It supports cross-browser automation, API automation, and database automation.
 Testers can use it for regression, exploratory testing, and quick reproduction of bugs.
 It is highly known for its flexibility with ease of implementation.
Disadvantages:
 Test maintenance in Selenium can become cumbersome and even expensive
at times.
 Selenium requires above-average coding skills.
 It supports only web applications.
 Technical support and its reliability can be a problem.
5.3 SELENIUM: INTRODUCING WEB DRIVER AND WEB ELEMENTS
Working of Selenium:
Selenium is a powerful tool for controlling web browsers through programs. It works with
all major browsers and operating systems, and its scripts can be written in various languages
such as Python, Java, and C#. Selenium has four major components: Selenium IDE, Selenium
RC, Selenium WebDriver, and Selenium Grid.
1. Selenium IDE
Selenium IDE is a Firefox add-on that allows users to record, edit, debug, and play back
tests captured in the Selenese format, which was introduced in the Selenium Core
version. It also provides us with the ability to convert these tests into the Selenium RC or
Selenium WebDriver format. We can use Selenium IDE to do the following:
 Create quick and simple scripts using record and replay, or use them in exploratory testing
 Create scripts to aid in automation-aided exploratory testing
 Create macros to perform repetitive tasks on Web pages
2. Selenium RC (Remote control)
Selenium Remote Control (RC) was one of the earliest Selenium tools,
preceding WebDriver. It allowed testers to write automated web application tests in various
programming languages like Java, C#, Python, etc. The key feature of Selenium RC was its
ability to interact with web browsers using a server, which acted as an intermediary between
the testing code and the browser. Its architecture is complex and has limitations. One must
have good programming language skills while working with Selenium RC.
3. Selenium WebDriver
Selenium WebDriver is the successor of Selenium RC (Remote Control), which has been
officially deprecated. Selenium WebDriver accepts commands using the JSON-Wire
protocol (also called Client API) and sends them to a browser launched by the specific
driver class (such as ChromeDriver, FirefoxDriver, or IEDriver). This is implemented
through a browser-specific browser driver. It works with the following sequence:
1. The driver listens to the commands from Selenium
2. It converts these commands into the browser's native API
3. The driver takes the result of native commands and sends the result back to Selenium.
We can use Selenium WebDriver to do the following:
 Create robust, browser-based regression automation
 Scale and distribute scripts across many browsers and platforms
 Create scripts in your favourite programming language.
Features of Selenium WebDriver
 Cross platform support
 APIs for different languages
 Support for different frameworks
 Easy to use
4. Selenium Grid
Selenium Grid is a Server that allows us to run tests on browser instances running
on remote machines and in parallel, thus spreading a load of testing across several
machines. We can create a Selenium Grid, where one server runs as the Hub, managing a
pool of Nodes.
Selenium Grid enables us to execute tests in parallel on multiple machines by
managing different types of browsers, their versions, and operating system
configurations centrally.
Two key concepts in Selenium automation are the WebDriver and WebElements.
5.3.1 WebElements
A web page is composed of many different types of HTML elements, such as links,
textboxes, dropdown buttons, a body, labels, and forms. These are called WebElements
in the context of WebDriver. Together, these elements on a web page will achieve the
user functionality.
For example, let's look at the HTML code of the login page of a website:
<html>
<body>
<form id="loginForm">
<label>Enter Username: </label>
<input type="text" name="Username"/>
<label>Enter Password: </label>
<input type="password" name="Password"/>
<input type="submit"/>
</form>
<a href="forgotPassword.html">Forgot Password ?</a>
</body>
</html>
In the preceding HTML code, there are different types of WebElements, such as <html>,
<body>, <form>, <label>, <input>, and <a>, which together make a web page provide the
Login feature for the user. Let's analyze the following WebElement:
<label>Enter Username: </label>
Here, <label> is the start tag of the WebElement label. Enter Username: is the text
present on the label element. Finally, </label> is the end tag, which indicates the end of a
WebElement. Similarly, take another WebElement:
<input type="text" name="Username"/>
In the preceding code, type and name are the attributes of the WebElement input with the text
and Username values, respectively. UI-automation using Selenium is mostly about locating
these WebElements on a webpage and executing user actions on them.
5.4 LOCATING WEB ELEMENTS
Selenium offers a number of built-in locator strategies to uniquely identify an element. One can
locate an element in 8 different ways. Here is a list of locating strategies for Selenium in Python.
Locators | Description
By.ID | The first element with the id attribute value matching the location will be returned.
By.NAME | The first element with the name attribute value matching the location will be returned.
By.XPATH | The first element with the XPath syntax matching the location will be returned.
By.LINK_TEXT | The first element with the link text value matching the location will be returned.
By.PARTIAL_LINK_TEXT | The first element with the partial link text value matching the location will be returned.
By.TAG_NAME | The first element with the given tag name will be returned.
By.CLASS_NAME | The first element with the matching class attribute name will be returned.
By.CSS_SELECTOR | The first element with the matching CSS selector will be returned.
Eg : 1
<html>
<body>
<form id="loginForm">
<input name="username" type="text" />
<input name="password" type="password" />
<input name="continue" type="submit" value="Login" />
</form>
</body>
</html>
We can locate elements using the following commands:
login_form = driver.find_element(By.ID, 'loginForm')
 The first element with the id attribute value matching the location will be returned. If
no element has a matching id attribute, a NoSuchElementException will be raised.
element = driver.find_element(By.NAME, 'username')
 The first element with the name attribute value matching the location will be
returned. If no element has a matching name attribute, a NoSuchElementException
will be raised
Eg : 2
<html>
<body>
<h1>Welcome</h1>
<p>Are you sure you want to do this?</p>
<a href="continue.html">Continue</a>
<a href="cancel.html">Cancel</a>
</body>
</html>
Now, after you have created a driver, you can locate an element using:
login_form = driver.find_element(By.LINK_TEXT, 'Continue')
login_form = driver.find_element(By.PARTIAL_LINK_TEXT, 'Conti')
login_form = driver.find_element(By.TAG_NAME, 'h1')
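To make the XPath strategy concrete, here is a small sketch using only the Python standard library. This is illustrative only: real Selenium evaluates the locator inside the browser, and `xml.etree` supports just a subset of XPath.

```python
# Illustrative sketch: resolving an XPath-style locator against the
# Eg. 1 login form with the standard library. This mimics what
# find_element(By.XPATH, ...) matches; it is NOT the Selenium API.
import xml.etree.ElementTree as ET

page = """
<html>
<body>
<form id="loginForm">
<input name="username" type="text" />
<input name="password" type="password" />
<input name="continue" type="submit" value="Login" />
</form>
</body>
</html>
"""

root = ET.fromstring(page.strip())
# Comparable in spirit to: driver.find_element(By.XPATH, "//input[@name='username']")
username_input = root.find(".//input[@name='username']")
print(username_input.get("type"))  # -> text
```

In real Selenium, the same element could also be matched with By.CSS_SELECTOR using `#loginForm input[name='username']`.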
5.5 ACTIONS ON WEB ELEMENTS
To test an application, one needs to perform a number of user actions on it. To perform
operations on the web application such as double-clicking or selecting from drop-down boxes,
the Actions class is required. The Actions class is a facility provided by Selenium for handling
keyboard and mouse events. It handles advanced user interactions, such as mouse movements,
keyboard inputs, and context-click (right-click) actions. It gives more control and flexibility in
automated testing scenarios, since it can simulate intricate user interactions that are difficult to
accomplish with simple WebDriver commands.
Methods of Action Class
 click(WebElement element): The click() function is for clicking on a web element. The
purpose of this technique is to mimic a left click on a designated web element. It is
frequently used to interact with clickable items like checkboxes, buttons, and links.
 doubleClick(WebElement element): doubleClick() helps do a double click on a web
element. A specific web element can be double-clicked using the DoubleClick technique. It
is frequently employed in situations when a double click is necessary to start a process.
 contextClick(WebElement element): contextClick() lets you right-click on a web element.
This technique mimics a context-click, or right-click, on a designated web element. It comes
in useful when engaging with context menus and initiating right-click operations.
 moveToElement(WebElement element): moveToElement() moves the mouse pointer to
the middle of a web element. The mouse pointer is moved to the center of the designated
web element using the moveToElement function. Hovering over components that display
hidden options or activate dropdown menus is typical usage for it.
 dragAndDrop(WebElement source, WebElement target): dragAndDrop() allows
dragging one element and dropping it onto another. By dragging an element from its present
place and dropping it onto another element, you can execute a drag-and-drop operation using
this approach. It can be used to simulate user operations like rearranging objects or
transferring components between containers.
Example of the Actions class in Selenium:
Actions actions = new Actions(driver);
WebElement element = driver.findElement(By.id("elementId"));
actions.click(element).build().perform();
There are only 5 basic commands that can be executed on an element:
i. click (applies to any element)
ii. send keys (only applies to text fields and content editable elements)
iii. clear (only applies to text fields and content editable elements)
iv. submit (only applies to form elements)
v. select (see Select List Elements)
i. Click
The element click command is executed on the center of the element. If the center of the element
is obscured for some reason, Selenium will return an element click intercepted error.
Eg :
WebElement element = driver.findElement(By.id("buttonId"));
element.click();
ii. Send keys
The element send keys command types the provided keys into an editable element. Typically,
this means an element is an input element of a form with a text type or an element with
a content-editable attribute. If it is not editable, an invalid element state error is returned.
Eg :
WebElement element = driver.findElement(By.id("inputId"));
element.sendKeys("Hello");
iii. Clear
The element clear command resets the content of an element. This requires an element to
be editable, and resettable. Typically, this means an element is an input element of a form with
a text type or an element with a content-editable attribute. If these conditions are not met, an
invalid element state error is returned.
Eg :
WebElement element = driver.findElement(By.id("inputId"));
element.clear();
iv. submit() method
The submit() action can be taken on a Form or on an element, which is inside a Form element.
This is used to submit a form of a web page to the server hosting the web application.
Eg :
WebElement form = driver.findElement(By.id("formId"));
form.submit();
v. select
Selenium provides the Select class to interact with dropdowns and select options. We can
create an instance of the Select class, locate the drop-down element, and then use methods like
selectByValue( ), selectByIndex( ), etc. to choose the desired option.
Eg :
WebElement dropdown = driver.findElement(By.id("dropdownId"));
Select sel = new Select(dropdown);
sel.selectByIndex(1);
Mouse and Keyboard Methods of the Actions Class

The Actions class is useful mainly for mouse and keyboard actions. In order to perform such
actions, Selenium provides the following methods.
Mouse Actions in Selenium:
1. doubleClick(): Performs double click on the element
2. clickAndHold(): Performs long click on the mouse without releasing it
3. dragAndDrop(): Drags the element from one point and drops to another
4. moveToElement(): Shifts the mouse pointer to the center of the element
5. contextClick(): Performs right-click on the mouse

Keyboard Actions in Selenium:
1. sendKeys(): Sends a series of keys to the element
2. keyUp(): Performs key release
3. keyDown(): Performs keypress without release
Different WebElements will have different actions that can be taken on them. For
example, in a textbox element, we can type in some text or clear the text that is already typed
in it. Similarly, for a button, we can click on it, get the dimensions of it, and so on, but we
cannot type into a button, and for a link, we cannot type into it.
So, though all the actions are listed in one WebElement interface, it is the test script
developer's responsibility to use the actions that are supported by the target element. In case we
try to execute the wrong action on a WebElement, we don't see any exception or error thrown
and we don't see any action get executed; WebDriver ignores such actions silently.
5.5.1 Getting element properties and attributes

There are various methods to retrieve values and properties from the WebElement interface.

1. The getText() method:
It will return visible text if the element contains any text on it, otherwise it will return
nothing. The API syntax for the getText() method is as follows:
java.lang.String getText()
2. The getCssValue() method
The getCssValue method can be called on all the WebElements. This method is used to fetch a
CSS property value from a WebElement. CSS properties can be font-family, background-color,
color, and so on. This is useful when you want to validate the CSS styles that are applied to your
WebElements through your test scripts.
Eg: System.out.println("Font of the box is: " + searchBox.getCssValue("font-family"));
3. The getLocation() method
The getLocation method can be executed on all the WebElements. This is used to get the relative
position of an element where it is rendered on the web page. This position is calculated relative
to the top-left corner of the web page of which the (x, y) coordinates are assumed to be (0, 0).
This method will be of use if your test script tries to validate the layout of your web page.
Eg : System.out.println("Location of the box is: " + searchBox.getLocation());
4. The getSize() method
The getSize method can also be called on all the visible components of HTML. It will return the
width and height of the rendered WebElement.
The code for that is as follows:
System.out.println("Size of the box is: " + searchBox.getSize());
5. The getTagName() method
The getTagName method can be called from all the WebElements. This will return the HTML tag
name of the WebElement.
5.5.2 Checking the WebElement state
There are methods to check whether the WebElement is displayed in the browser window,
whether it is editable, and, if the WebElement is a radio button or checkbox, whether it is
selected or unselected. Let's see how we can use the methods available in the
WebElement interface.
1. The isDisplayed() method
The isDisplayed action verifies whether an element is displayed on the web page and can be
executed on all the WebElements. The API syntax for the isDisplayed() method is as follows:
boolean isDisplayed()
The preceding method returns a Boolean value specifying whether the target element is
displayed on the web page. The following is the code to verify whether the Search box is
displayed, which obviously should return true in this case:
WebElement searchBox = driver.findElement(By.name("q"));
System.out.println("Search box is displayed: " + searchBox.isDisplayed());
The preceding code uses the isDisplayed() method to determine whether the element is
displayed on a web page. The preceding code returns true for the Search box:
Search box is displayed: true
2. The isEnabled() method
The isEnabled action verifies whether an element is enabled on the web page and can be
executed on all the WebElements. The API syntax for the isEnabled() method is as follows:
boolean isEnabled()
System.out.println("Search box is enabled: " + searchBox.isEnabled());
3. The isSelected() method
The isSelected method returns a boolean value if an element is selected on the web page and can
be executed only on a radio button, options in select, and checkbox WebElements. When
executed on other elements, it will return false.
System.out.println("Search box is selected: " + searchBox.isSelected());
5.6 DIFFERENT WEB DRIVERS
The WebDriver implementations for Mozilla Firefox, Google Chrome, Microsoft Internet
Explorer, Microsoft Edge, and Safari are given below. With WebDriver becoming a W3C
specification, all of the major browser vendors now support WebDriver natively.
1. Firefox Driver
The new driver for Firefox is called Geckodriver. The Geckodriver provides the HTTP
API described by the W3C WebDriver Protocol to communicate with Gecko browsers, such as
Firefox. It translates calls into the Firefox Remote Protocol (Marionette) by acting as a proxy
between the local and remote ends.
Using Headless Mode
Headless mode is a very useful way to run Firefox for automated testing with Selenium
WebDriver. In headless mode, Firefox runs as normal, except that you don't see the UI
components. This makes Firefox faster and tests run more efficiently, especially in CI
(Continuous Integration) environments. During execution, you will not see the Firefox window
on the screen, but the test will be executed in headless mode.
2. Chrome Driver
The ChromeDriver is the WebDriver implementation for Google Chrome. It enables
Selenium to automate interactions with the Chrome browser. It works similar to the Geckodriver
and implements the W3C WebDriver protocol.
Using Headless Mode
Similar to Firefox, we can run tests in headless mode with ChromeDriver. This makes Chrome
tests run faster and tests run more efficiently, especially in the CI (Continuous Integration)
environment. We can run Selenium tests in headless mode by configuring Chrome's options.
3. Internet Explorer Driver:
In order to execute test scripts on the Internet Explorer browser, we need WebDriver's
InternetExplorerDriver.
4. Edge Driver
Microsoft Edge is the latest web browser launched with Microsoft Windows 10. Microsoft Edge
was one of the first browsers to implement the W3C WebDriver standard and provides built-in
support for Selenium WebDriver.
5. Safari Driver
Apple provides SafariDriver built into the browser. In order to use it with Selenium
WebDriver, we have to enable the Allow Remote Automation option in Safari's Develop
menu.
Each WebDriver has its own specific configuration requirements, such as downloading and
setting up the correct driver and ensuring compatibility with the browser version used. The
WebDriver implementations act as intermediaries between the Selenium WebDriver API and
the respective web browsers. This enables integration and automation of browser actions for
testing purposes.
Comparison of different WebDrivers in Selenium

Web driver | Supported Browser | Features
Firefox Driver | Firefox | Flexible, extensible, and supports a variety of add-ons
Chrome Driver | Chrome | Fast, stable, and easy to use
Internet Explorer Driver | Internet Explorer | Supports older versions of IE
Edge Driver | Microsoft Edge | New and improved WebDriver for Edge
Safari Driver | Apple Safari | Supports the latest versions of Safari
How to choose the right driver?
Browser Compatibility: Ensure the driver matches the browser version.
Application Under Test: Different drivers perform better with different technologies (e.g.,
JavaScript frameworks).
Features Required: Support for headless mode, handling alerts, etc.
Performance and Execution Speed: Headless drivers typically offer faster execution.
Supported OS Environments: Consider where tests will be executed.
What are the challenges and limitations in WebDrivers?
Browser-Specific Issues: Different browsers may render pages differently.
Frequent Updates: Need to regularly update WebDrivers to avoid incompatibility.
Performance Variations: Execution speed can differ between drivers.
Limitations: Certain elements (like CAPTCHAs) may not be automatable.
5.7 : UNDERSTANDING WEB DRIVER EVENTS

What are Web Driver Events?
Web Driver Events are events that are fired by the Web Driver API. These events can be used to
monitor the state of the browser and to react to changes in the browser's state.
Why are Web Driver Events Important?

Web Driver Events can be used to:
 Monitor the state of the browser: Web Driver Events can be used to monitor the state
of the browser, such as the page title, the URL and the visibility of elements. This
information can be used to verify that the browser is in the correct state.
 React to changes in the browser's state: Web Driver Events can be used to react to
changes in the browser's state. For example, we could use a Web Driver Event to be
notified when a new page is loaded.
 Log events: Web Driver Events can be used to log events. This can be useful for
debugging and for tracking the progress of an automation script.
How to use WebDriver Events?
 To use WebDriver Events, we need to implement the WebDriverEventListener interface.
This interface defines a number of methods that are called when certain events occur.
 Once we have implemented the WebDriverEventListener interface, we can register our
listener with the WebDriver. This can be done using the register() method.
 Once our listener is registered, it will be notified when certain events occur. The events
that are notified depend on the implementation of the WebDriverEventListener interface.
 WebDriver Events are a powerful tool that can be used to monitor the state of the
browser and to react to changes in the browser's state. They can be used for a variety of
purposes, such as verifying that the browser is in the correct state, reacting to changes in
the browser's state, and logging events.
 Understanding WebDriver events in Selenium involves being aware of the various events
that occur during test execution and how they can be utilized to enhance test automation.
WebDriver events provide hooks or listeners that allow testers to observe and interact with
different stages of the automation process. Here are the key aspects to understand about
WebDriver events in Selenium:
Event listeners:
 Selenium WebDriver provides an event-driven architecture that allows the registration of
event listeners. Event listeners are objects that implement specific interfaces, such as
WebDriverEventListener or EventListener, to handle different types of events during test
execution.
Types of WebDriver events:
WebDriver events cover various stages and actions during test execution. Some of the
commonly used events include:
 beforeNavigateTo: Triggered before navigating to a new URL
 afterNavigateTo: Triggered after successfully navigating to a new URL.
 beforeNavigateBack: Triggered before navigating back in the browser history.
 afterNavigateBack: Triggered after successfully navigating back in the browser history.
 beforeNavigateForward: Triggered before navigating forward in the browser history.
 afterNavigateForward: Triggered after successfully navigating forward in the browser
history.
 beforeFindBy: Triggered before locating a web element on the page.
 afterFindBy: Triggered after successfully locating a web element on the page.
 beforeClickOn: Triggered before clicking on a web element.
 afterClickOn: Triggered after successfully clicking on a web element.
 beforeChangeValueOf: Triggered before changing the value of a web element.
 afterChangeValueOf: Triggered after successfully changing the value of a web element.
 beforeScript: Triggered before executing JavaScript code.
 afterScript: Triggered after successfully executing JavaScript code.
 onException: Triggered when an exception occurs during test execution.
 These are just a few examples of the available WebDriver events. Testers can choose to
implement listeners for specific events that are relevant to their testing needs.
Implementing event listeners:
 To utilize WebDriver events, we need to create a custom event listener by implementing
the appropriate interface. For example, implementing the WebDriverEventListener
interface allows us to override the methods associated with different WebDriver events.
Inside these methods, we can define custom actions or assertions based on the event
being triggered.
Registering event listeners:
 After creating the event listener implementation, we need to register it with the
WebDriver instance using the register() method. This ensures that the event listener is
actively listening for events during test execution.
 Example (Java):
WebDriver driver = new ChromeDriver();
EventFiringWebDriver eventDriver = new EventFiringWebDriver(driver);
MyEventListener eventListener = new MyEventListener(); // Custom event listener implementation
eventDriver.register(eventListener);
 In the example above, the EventFiringWebDriver class is used to wrap the original
WebDriver instance. This allows the event listener to intercept WebDriver events.
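The wrap-and-notify idea behind EventFiringWebDriver can be sketched in a few lines of plain Python; all class and method names below are invented for illustration and are not part of the Selenium API:

```python
# Minimal sketch of the listener pattern used by EventFiringWebDriver:
# a wrapper fires "before" and "after" hooks around each driver action.
class ClickListener:
    def __init__(self):
        self.log = []

    def before_click(self, element):
        self.log.append("before click: " + element)

    def after_click(self, element):
        self.log.append("after click: " + element)


class EventFiringDriver:
    """Wraps a driver-like object and notifies the listener around actions."""
    def __init__(self, listener):
        self.listener = listener

    def click(self, element):
        self.listener.before_click(element)   # beforeClickOn hook
        # ... a real driver would perform the actual click here ...
        self.listener.after_click(element)    # afterClickOn hook


listener = ClickListener()
driver = EventFiringDriver(listener)
driver.click("loginButton")
print(listener.log)  # -> ['before click: loginButton', 'after click: loginButton']
```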
Customizing event listeners:
 Event listeners can be customized to perform specific actions based on the events they
handle. For example, we can take screenshots on onException events, log messages on
beforeNavigateTo and afterNavigateTo events, or validate element visibility on
beforeClickOn and afterClickOn events.
Example (Java):
public class MyEventListener implements WebDriverEventListener {
// Implement methods for desired events
@Override
public void beforeClickOn(WebElement element, WebDriver driver) {
// Perform custom actions before clicking on an element
System.out.println("About to click on element: " + element);
}
@Override
public void afterClickOn(WebElement element, WebDriver driver) {
// Perform custom actions after clicking on an element
System.out.println("Clicked on element: " + element);
}
// Implement other event methods as needed
}
 The example above demonstrates a custom event listener that logs messages before and
after clicking on web elements.
By utilizing WebDriver events and implementing custom event listeners, testers can gain
more control and insight into the test execution process. This allows for custom actions,
logging, error handling, and validation based on specific events during test automation,
leading to enhanced reporting and improved debugging capabilities.
5.8 TestNG
TestNG is an open-source automated testing framework; where NG means Next Generation.
The design goal of TestNG is to cover a wider range of test categories: unit, functional, end-to-
end, integration, etc., with more powerful and easy-to-use functionalities.
5.8.1 UNDERSTANDING TestNG.XML
TestNG.xml file is a configuration file that helps in organizing our tests. It allows testers to
create and handle multiple test classes, define test suites and tests.
It makes a tester's job easier by controlling the execution of tests: all the test
cases are put together and run under one XML file. Without this concept, it is
difficult to work with TestNG.
Advantages Of TestNG.xml
 It provides parallel execution of test methods.
 It allows the dependency of one test method on another test method.
 It helps in prioritizing our test methods.
 It allows grouping of test methods into test groups.
 It supports the parameterization of test cases using @Parameters annotation.
 It helps in Data-driven testing using @DataProvider annotation.
 It has different types of assertions that help in validating the expected results with the
actual results.
 It has different types of HTML reports, Extent reports, etc. for a better and clear
understanding of our test summary.
 It has listeners that help in creating logs.
Concepts Used in TestNG.xml
#1) A Suite is represented by one XML file. It can contain one or more tests and is defined by
the <suite> tag.
Example: <suite name="Search Suite">
#2) A Test is represented by <test> and can contain one or more TestNG classes.
Example: <test name="Search Test">
#3) A Class is a Java class that contains TestNG annotations. Here it is represented by the
<class> tag and can contain one or more test methods. Example:
<classes>
<class name="com.example.SearchTest"/>
</classes>
#4) Method : This defines a test method. A test method is a Java method that is annotated with
@Test annotation
#5) Listeners: Listeners are used to listen to events during test execution.
TestNG.xml Example:
<?xml version="1.0" encoding="UTF-8"?>
<suite parallel="false" name="Test Suite">
<test name="Test">
<classes>
<class name="com.example.SearchTest"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Steps to create a TestNG XML file and execute it:
1. Create a new XML file: Use a text editor or an XML editor to create a new XML file.
Give it a meaningful name, such as "testng.xml".

2. Define XML structure: Define the basic structure of the XML file by adding the root
element. The root element in TestNG XML is typically <suite>.

<suite name="Test Suite">
<!-- Add test configurations and test tags here -->
</suite>
3. Add test configurations: Within the <suite> element, it can add various test
configurations. These configurations may include details such as test parameters, test
environment setup or other global settings.
<suite name="Test Suite">
<parameter name="browser" value="chrome" />
<!-- Add more configurations as needed -->
</suite>
4. Define test tags: Within the <suite> element, it can define one or more <test> tags to
represent individual tests or test groups. Each <test> tag can have a unique name and
may contain one or more <classes> or <packages> tags to specify the test classes or
packages to be executed.
<suite name="Test Suite">
<test name="Test 1">
<classes>
<class name="com.example.tests.TestCase1"/>
<class name="com.example.tests.TestCase2"/>
</classes>
</test>
<!-- Add more tests as needed -->
</suite>
5. Save the XML file: Save the testng.xml file with the defined structure and
configurations.
6. Execute TestNG XML file: To execute the TestNG XML file, we can use various
methods depending on the environment and tools.

a. Command line: Open a command prompt or terminal, navigate to the project
directory, and use the TestNG command line to execute the testng.xml file.
$ java org.testng.TestNG testng.xml
(this assumes the TestNG jar is on the classpath)
b. IDE integration: Many integrated development environments (IDEs) provide
built-in support for executing TestNG XML files. In your IDE, import the project,
right-click on the testng.xml file, and select the option to run or execute it as a
TestNG test.
7. Build automation tools: If you are using build automation tools like Maven or Gradle, you
can configure the build script to execute the TestNG XML file as part of the build
process.
These steps provide a basic overview of creating and executing a TestNG XML file. The
specific details may vary based on your project setup, environment, and testing requirements.
It's recommended to refer to the TestNG documentation or the documentation of your specific
IDE or build automation tool for more detailed instructions on creating and executing TestNG
XML files.
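As a quick sanity check, a suite file like the ones in the steps above can be parsed with a short standard-library script (a hypothetical helper, not part of TestNG) to list the classes it will run:

```python
# Illustrative: parse a testng.xml-style suite file and list the test
# classes it declares. This only inspects the XML; it does not run tests.
import xml.etree.ElementTree as ET

testng_xml = """<?xml version="1.0" encoding="UTF-8"?>
<suite name="Test Suite" parallel="false">
  <test name="Test 1">
    <classes>
      <class name="com.example.tests.TestCase1"/>
      <class name="com.example.tests.TestCase2"/>
    </classes>
  </test>
</suite>
"""

# Parse as bytes: ElementTree rejects str input that carries an
# encoding declaration.
suite = ET.fromstring(testng_xml.encode("utf-8"))
class_names = [c.get("name") for c in suite.iter("class")]
print(suite.get("name"), class_names)
```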
5.9 : ADDING CLASSES, PACKAGES, METHODS TO TEST:

5.9.1 : ADDING CLASSES :
To add classes in a testng.xml file, we need to follow these general steps:
1. Determine the purpose: Understand the purpose of adding classes in the testng.xml file. Are
we adding classes for test cases, test data, configuration, or any other specific use?
2. Define XML structure: Decide on the XML structure or schema for representing the
classes. Determine the elements, attributes and hierarchy that will be used to define the classes.
3. Open the XML file: Open the testng.xml file using a text editor or an XML editor. Make
sure you have the necessary permissions to modify the file.
4. Locate the appropriate section: Identify the section in the XML file where you want to add
the classes. This could be an existing section or a new section specifically designated for
classes.
5. Add XML elements: Add XML elements to represent the classes within the appropriate
section. Use the defined XML structure or schema to ensure consistency and clarity.
6. Set attributes: If needed, set attributes for the class elements to provide additional
information or metadata about the classes. These attributes could include class names,
identifiers, descriptions or any other relevant details.
7. Specify class properties: Within each class element, specify the properties or characteristics
of the class. This could include details like class names, access modifiers, methods, variables or
any other relevant information.
8. Save the XML file: Once you have added the classes, save the testng.xml file.
 It's important to note that the specific steps for adding classes to a testng.xml file can
vary depending on the context and the intended use of the file. The above steps provide a
general guideline, but the actual implementation may differ based on the XML structure,
tool or framework you are using for testing.
 If you are working with a specific testing framework or tool, refer to its documentation or
guidelines to understand the recommended approach for adding classes to the testng.xml
file associated with that framework or tool.
 Here are the steps on how to add classes in testng.xml:
1. Create a new XML file and save it as "testng.xml".
2. Open the XML file in a text editor.
3. Add the following code to the XML file.
XML
<suite name="MyTestSuite">
<test name="MyTest">
<classes>
<class name="com.example.MyTestClass1"/>
<class name="com.example.MyTestClass2"/>
</classes>
</test>
</suite>
4. Save the XML file.
5. Run the tests by running the "testng.xml" file from the command line or from an IDE.
 The code in the XML file defines a test suite called "MyTestSuite" that contains one
test, "MyTest", which runs the classes "com.example.MyTestClass1" and
"com.example.MyTestClass2".
 To add more classes to the XML file, you simply need to add more "class" elements inside
the "classes" element. For example, to add a class called "com.example.MyTestClass3" to the
XML file, you would add the following code:

XML
<class name="com.example.MyTestClass3"/>
 Once you have added the classes to the XML file, you can run the tests by running the
"testng.xml" file from the command line or from an IDE.
 Here are some additional tips for adding classes to testng.xml:
 The name of the class element must match the fully qualified name of the Java class.
 The order in which the classes are defined in the XML file determines the order in which
the tests will be executed.
 You can use the "groups" element to select which groups of tests are run. This can be
helpful for running tests in parallel or filtering out tests you do not want to run.
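As an illustration, a group filter in testng.xml might look like the sketch below. The group name "smoke" is an assumption; it would have to be defined in the test classes via @Test(groups = "smoke"):

```xml
<suite name="MyTestSuite">
  <test name="SmokeOnly">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.MyTestClass1"/>
    </classes>
  </test>
</suite>
```

Only methods belonging to the included group are executed; everything else in the listed classes is skipped.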

5.9.2 : Adding Packages


 In software testing, packages in XML are used to group together classes and methods
that are related to a specific functionality or feature. This can be helpful for organizing
the tests and for running them in parallel.
 To add packages to an XML file for software testing, you can use the <packages> element.
The <packages> element contains a list of <package> elements, each of which defines a
package name. For example, the following XML code defines two packages:

XML
<packages>
<package name="com.example.mypackage1"/>
<package name="com.example.mypackage2"/>
</packages>
 Once you have added the packages to the XML file, you can run the tests by running the XML
file from the command line or from an IDE.
 To define packages in a testng.xml file, you can follow these steps:

1. Determine the organization strategy: Decide on the organization strategy for your test cases.
Identify how you want to group and categorize them using packages. Consider factors such as
functionality, modules, features or any other meaningful criteria.
2. Define XML structure: Determine the XML structure or schema for representing
packages in the testng.xml file. Decide on the appropriate XML elements, attributes and
hierarchy that will be used to define packages.
3. Open the XML file: Open the testng.xml file using a text editor or an XML editor. Ensure
that you have the necessary permissions to modify the file.
4. Identify the appropriate section: Identify the section in the XML file where you want to add
the packages. This section could be specifically designated for packages or any other section
suitable for organizing test cases.
5. Add package elements: Within the appropriate section, add XML elements to represent the
packages. Use the defined XML structure or schema to ensure consistency.
6. Set package attributes: For each package element, set the necessary attributes to define the
package. This could include attributes like package names, identifiers, descriptions or any other
relevant metadata.
7. Nest packages if required: If your organization strategy involves nested or hierarchical
packages, create the necessary nested package elements within the appropriate parent packages.
8. Associate test cases: Associate the relevant test cases with their respective packages. You can
use XML elements or attributes to reference or include test cases within the package elements.
9. Save the XML file: Once you have defined the packages and associated test cases, save the
testng.xml file.

 The above steps provide a general guideline, but the actual implementation may
differ based on the requirements of your specific testing environment.
 Here are some of the benefits of using packages in XML for software testing:
• Improved organization: Packages can help to improve the organization of the tests by
grouping together classes and methods that are related to a specific functionality or
feature.
• Parallel execution: Packages can be used to run tests in parallel, which can help to
improve the performance of the testing process.
• Filtering: Packages can be used to filter out tests that you do not want to run.
Overall, packages in XML can be a valuable tool for organizing and running tests in
software testing.

5.9.3 : Adding Methods to Test


 When it comes to testing XML files, there are several methods you can employ depending
on your specific needs and goals. Here are a few common approaches:
1. Manual inspection: This is the simplest method and involves visually inspecting the
XML file to ensure its structure, content and any defined rules are correct. You can use a
text editor or an XML-specific tool to review the file.
2. XML validators: XML validation tools automatically check the XML file against a
specified schema or Document Type Definition (DTD). This method ensures that the
XML conforms to the defined rules and structure. Examples of XML validators include
XMLSpy, Xerces and XMLStarlet.
3. Unit testing: If you are using XML as part of a larger software system, you can write unit tests
specifically designed to validate the XML processing logic. These tests can verify that
the XML is parsed correctly, data is extracted properly and any transformations or
manipulations produce the expected results.
4. XML schema testing: If your XML uses a schema definition, you can write test cases that
cover different scenarios based on the schema's rules. These tests can check for valid and
invalid inputs, edge cases and boundary conditions to ensure the XML behaves as
expected.
5. XPath testing: XPath is a query language used to navigate XML documents. You can write
XPath expressions to select specific elements or attributes within the XML and validate
that the results match your expectations. XPath testing is particularly useful when you need
to extract data from XML files or verify specific values.
6. Integration testing: In scenarios where XML files are exchanged between different
systems or services, integration testing can be performed to verify the end-to-end flow.
This involves testing the XML generation, transmission and consumption by the
receiving system, ensuring proper data exchange and handling.
7. Performance testing: If you are dealing with large XML files or high-volume XML
processing, performance testing can help identify bottlenecks, optimize processing times,
and ensure the system can handle the expected load. Performance testing tools can
simulate various workloads and measure XML processing performance.
• By applying these testing methods to your XML file, you can ensure its correctness, validate
its adherence to defined rules, verify data extraction, test integration scenarios and assess
performance characteristics.
 The testing methods you choose depend on the specific requirements and context of your
XML usage. It's often beneficial to employ a combination of these techniques to
thoroughly test XML files and their related processes.
• Suppose you have an XML file that represents a collection of books. Each book has attributes
such as title, author, publication year, and price. Here's a sample XML file:
xml
<library>
<book>
<title>The Great Gatsby</title>
<author>F. Scott Fitzgerald</author>
<year>1925</year>
<price>10.99</price>
</book>
<book>
<title>To Kill a Mockingbird</title>
<author>Harper Lee</author>
<year>1960</year>
<price>12.99</price>
</book>
</library>
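As a concrete illustration of XPath testing, the sample library above can be queried with Java's built-in javax.xml.xpath API, with no external dependencies. This is a minimal sketch; in a real suite these checks would live inside TestNG @Test methods rather than main.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class LibraryXPathTest {
    // The sample library XML from above, inlined so the example is self-contained
    static final String XML =
          "<library>"
        + "<book><title>The Great Gatsby</title><author>F. Scott Fitzgerald</author>"
        + "<year>1925</year><price>10.99</price></book>"
        + "<book><title>To Kill a Mockingbird</title><author>Harper Lee</author>"
        + "<year>1960</year><price>12.99</price></book>"
        + "</library>";

    // Parse the document and evaluate an XPath expression against it,
    // returning the result converted to a string
    public static String evaluate(String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expression, doc);
    }

    public static void main(String[] args) throws Exception {
        // Select the title of the book published in 1960
        System.out.println(evaluate("/library/book[year='1960']/title")); // To Kill a Mockingbird
        // Count the book elements in the library
        System.out.println(evaluate("count(/library/book)")); // 2
    }
}
```

A test method would simply assert that each expression returns the expected value, which is exactly the "validate that the results match your expectations" step described above.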

5.10 : Test Reports


Reporting is the most important part of any test execution, as it helps the user understand the
result of the test execution, point of failure, and the reasons for failure.

TestNG generates multiple reports as part of its test execution. These reports mainly include:
 TestNG HTML report: This is the default report that is generated and is the most
commonly used. It provides a detailed overview of the test execution, including the
test cases that were run, their results and any errors or failures that occurred.
 TestNG email-able report: This is a formatted version of the HTML report that is
optimized for sending as an email attachment.
 TestNG report XML: This is an XML version of the report that can be used for
further processing or analysis.
 JUnit report XML: This is an XML version of the report that is compatible
with JUnit, another popular testing framework.

The following is an example of a testng.xml file that generates test reports:


<?xml version="1.0" encoding="UTF-8"?>
<suite name="My Test Suite">
<test name="My Test">
<classes>
<class name="com.example.MyTest"/>
</classes>
</test>
</suite>

This file will generate the following reports:


test-output/index.html: The TestNG HTML report.
test-output/emailable-report.html: The TestNG email-able report.
test-output/report.xml: The TestNG report XML.
test-output/junitreport.xml: The JUnit report XML.

To view the reports, you can open them in a web browser. The HTML reports provide
a graphical overview of the test execution, while the XML reports can be used for
further processing or analysis.

The test report in XML format consists of three main sections:


<metadata>, <summary>, and <testcases>

1. The <metadata> section contains metadata about the test execution. It includes a
timestamp (<timestamp>) indicating when the test report was generated, the tester's
name (<tester>), and the version of the software being tested (<version>).
2. The <summary> section provides an overview of the test execution. It includes the
total number of tests executed (<totalTests>), the number of tests that passed
(<passedTests>), the number of tests that failed (<failedTests>), the number of tests
that were skipped (<skippedTests>), and the overall duration of the test execution
(<duration>).
3. The <testcases> section contains individual <testcase> elements for each test case
executed. Each <testcase> element includes information such as the test case name
(<name>), the status of the test case (<status>), the duration of the test case execution
(<duration>), and any log messages generated during the test case execution (<logs>).
4. The <logs> element contains a list of <log> elements that capture log messages
related to the test case. Each <log> element has a level attribute to indicate the log
level, such as "info", "error", or "warning".
5. The <error> element, if present, captures details about the error that occurred during
the test case execution. It includes an error message (<message>) and a stack trace
(<stacktrace>).

• By using this XML structure, you can generate comprehensive test reports that capture
metadata, summary information, individual test case details, logs and error information,
providing valuable insights into the software testing process.
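The structure described above can be sketched as follows. The element names follow the description; the root element name <testreport> and all values are illustrative assumptions:

```xml
<testreport>
  <metadata>
    <timestamp>2025-01-15T10:30:00</timestamp>
    <tester>Jane Doe</tester>
    <version>1.4.2</version>
  </metadata>
  <summary>
    <totalTests>3</totalTests>
    <passedTests>2</passedTests>
    <failedTests>1</failedTests>
    <skippedTests>0</skippedTests>
    <duration>12.5</duration>
  </summary>
  <testcases>
    <testcase>
      <name>LoginTest</name>
      <status>failed</status>
      <duration>2.1</duration>
      <logs>
        <log level="error">Login button not found</log>
      </logs>
      <error>
        <message>NoSuchElementException</message>
        <stacktrace>at com.example.LoginTest.test(LoginTest.java:42)</stacktrace>
      </error>
    </testcase>
  </testcases>
</testreport>
```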
5.11 Case Studies:
Case Study 1: E-commerce Website Testing

One of the most common use cases for Selenium is testing e-commerce websites. Testing
teams can use Selenium to simulate user interactions such as clicking on buttons, filling out
forms, and navigating through the website. This level of automation not only saves time but
also ensures that the website functions correctly across different browsers and devices.
Industry reports frequently credit automated testing tools like Selenium with substantially
higher conversion rates on e-commerce sites, since fewer functional defects reach production.
This shows the significant impact that Selenium can have on the overall success of an online business.

Challenges Faced:

 Dynamic web elements that change frequently


 Cross-browser compatibility testing
 Handling pop-ups and alerts

Solutions:

Testers can use dynamic locators like XPath or CSS selectors to identify web elements that
change dynamically. Cross-browser testing can be automated using Selenium Grid, which
allows testers to run tests on multiple browsers simultaneously. Handling pop-ups and alerts
can be done using Selenium's Alert API.
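The idea behind a dynamic locator can be illustrated outside the browser with Java's built-in XPath engine: instead of addressing an element by a brittle auto-generated id, address it by a stable attribute. The page fragment below is hypothetical; in Selenium the same expression would be passed to driver.findElement(By.xpath(...)).

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class DynamicLocatorSketch {
    // Hypothetical page fragment: the id attributes change on every deployment,
    // but the name attributes are stable
    static final String PAGE =
          "<form>"
        + "<input id='rnd-123' name='username' type='text'/>"
        + "<input id='rnd-456' name='submit' type='button'/>"
        + "</form>";

    // Locate an input by its stable 'name' attribute and return its current id
    public static String idOf(String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(PAGE.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath()
            .evaluate("//input[@name='" + name + "']/@id", doc);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(idOf("username")); // rnd-123
    }
}
```

The locator //input[@name='username'] keeps matching even when the auto-generated id changes, which is exactly why attribute-based XPath or CSS selectors are preferred for dynamic elements.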

Case Study 2: Banking Application Testing

Another critical use case for Selenium is testing banking applications. With sensitive user
data at stake, it is vital to ensure that banking applications are thoroughly tested for security
vulnerabilities and functional correctness. Selenium's robust testing capabilities make it an
ideal choice for testing complex banking applications.
Teams that test banking applications with Selenium commonly report markedly fewer defects
escaping to production than with manual testing alone. This highlights the importance of using
automation tools like Selenium for critical applications where accuracy is paramount.

Challenges Faced:

 Handling OTP (One-time password) authentication


 Testing multi-step workflows
 Security testing

Solutions:

Testers can use Selenium's Actions class to simulate keyboard inputs for handling OTP
authentication. Multi-step workflows can be tested using Selenium's test suite capabilities to
sequence test cases. Security testing can be automated using Selenium plugins like OWASP
ZAP to identify vulnerabilities.
In conclusion, Selenium is a versatile and powerful automation tool that can significantly
improve the efficiency and effectiveness of testing processes. By leveraging Selenium's
capabilities, testing teams can overcome challenges faced in automation testing and ensure
the quality of web applications.
By incorporating Selenium into their testing strategies, companies can benefit from faster
testing cycles, reduced manual errors, and improved test coverage. With the right techniques
and approaches, testers can harness the full potential of Selenium and achieve successful
testing outcomes.

Selenium in Action: Real-world Case Studies and Solutions

This is the part where we explore real-world case studies and solutions that showcase the best
practices for optimizing Selenium scripts.

The Power of Selenium

Selenium is a versatile tool that offers a wide range of features for automating web
application testing. With Selenium, developers can write test scripts in various programming
languages such as Java, Python, and C#, making it accessible to a wide audience.
Additionally, Selenium supports multiple browsers, including Chrome, Firefox, and Safari,
allowing developers to test their applications across different environments with ease.
One of the key benefits of using Selenium is its ability to run tests in parallel, reducing the
overall testing time and improving efficiency. By leveraging Selenium's parallel execution
capabilities, developers can run multiple tests simultaneously, speeding up the testing process
and providing faster feedback on the application's performance.
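Parallel execution is switched on in the TestNG suite file that drives the Selenium tests. A minimal sketch (the class name and thread count are illustrative):

```xml
<suite name="ParallelSuite" parallel="tests" thread-count="2">
  <test name="ChromeRun">
    <classes>
      <class name="com.example.SearchTest"/>
    </classes>
  </test>
  <test name="FirefoxRun">
    <classes>
      <class name="com.example.SearchTest"/>
    </classes>
  </test>
</suite>
```

With parallel="tests", each <test> runs in its own thread; setting parallel to "classes" or "methods" instead changes the unit of parallelism.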

Real-world Case Studies

Case Study 1: E-commerce Website

A leading e-commerce company was facing challenges with regression testing due to the
complexity of its web application. Using Selenium, the company was able to automate the
testing process and significantly reduce the time spent on regression testing. By leveraging
Selenium's capabilities for interacting with web elements and performing actions like clicking
buttons and filling out forms, the company was able to achieve faster and more reliable test
results.
// Sample Selenium code snippet for interacting with web elements
WebElement searchBox = driver.findElement(By.id("search"));
searchBox.sendKeys("Product Name");
WebElement searchButton = driver.findElement(By.id("searchButton"));
searchButton.click();
Case Study 2: Software as a Service (SaaS) Platform

A SaaS platform was struggling with cross-browser compatibility issues, leading to customer
complaints and revenue loss. By implementing Selenium for automated cross-browser
testing, the platform was able to identify and fix compatibility issues across different
browsers and ensure a seamless user experience for all customers. Selenium's ability to
handle multiple browsers simultaneously helped the platform streamline its testing process
and deliver a more reliable product to its users.

Best Practices for Optimizing Selenium Scripts


 Use an Object-Oriented Approach: Organize your test scripts using a modular, object-
oriented design to improve code reusability and maintainability.
 Implement Wait Strategies: Use explicit and implicit waits to ensure that your test
scripts wait for elements to load properly before interacting with them.
 Use Page Object Model: Separate your test logic from the page structure by
implementing the Page Object Model, which helps improve test maintainability and
readability.
 Optimize Test Execution: Run your test scripts in parallel to reduce testing time and
improve efficiency, especially for large test suites.

By following these best practices, developers can optimize their Selenium scripts and
streamline the testing process, leading to faster feedback on the application's performance and
improved overall quality.
In conclusion, Selenium is a powerful tool that offers real-world solutions for automating
web application testing. By leveraging Selenium's features and best practices, developers can
optimize their test scripts and achieve faster, more reliable results. As software development
services continue to evolve, Selenium remains a valuable asset for ensuring the quality and
efficiency of modern web applications.
