Manual Testing Study Material

The document outlines a comprehensive manual testing training program, detailing various topics such as software testing principles, risks and causes of defects, and the differences between Quality Assurance (QA) and Quality Control (QC). It emphasizes the importance of early testing, defect detection, and the software development lifecycle, particularly the V-model. The training aims to equip participants with the knowledge and skills necessary to ensure high-quality software through effective testing practices.


Manual Testing

Sr no.  Topic                                       Duration  Session No  Session No
                                                              (2 Hours)   (4 Hours)
1       Introduction to software testing            2 hrs     1           1
2       Software development process                2 hrs     2           1
3       Levels and types of testing                 4 hrs     3, 4        2
4       Testing techniques                          3 hrs     5           3
5       Testing process and test case writing       4 hrs     6, 7        3, 4
6       Bug reporting, test metrics, RTM            2 hrs     8           4
        and test environment
7       Web testing, DB testing and                 3 hrs     9, 10      5
        cloud testing

TOPICS

1. Introduction to Software testing

Information

A. Introduction to software testing

• Software testing is the process of evaluating a
software application or system to ensure that it meets
the specified requirements and functions as expected.
• It is an essential part of the software development
lifecycle and helps identify defects, errors, or gaps in
the software.
• The primary goal of software testing is to ensure the
quality and reliability of the software.
• Testing can be done at various levels, including unit
testing, integration testing, system testing, and
acceptance testing.
• Each level of testing focuses on different aspects of
the software and helps in identifying different types
of issues.
• Software testing involves the execution of test cases,
which are specific scenarios or inputs designed to
validate the behaviour of the software.
• Test cases can be created based on functional
requirements, user stories, or specific use cases.
• The results of the test cases are compared with the
expected results to identify any discrepancies or
failures.
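The execution-and-comparison step described above can be sketched as a small
automated check. This is a minimal, hypothetical example: the apply_discount
function and its expected value are illustrations, not part of any real system.

```python
# Hypothetical test case: compare the actual result of the software
# under test against the expected result from the requirement.

def apply_discount(price, percent):
    """Function under test: reduce price by the given percentage."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    expected = 90.0                      # expected result per the requirement
    actual = apply_discount(100.0, 10)   # actual behaviour of the software
    assert actual == expected, f"expected {expected}, got {actual}"

test_apply_discount()
print("test passed")
```

A mismatch between actual and expected would raise an AssertionError, which is
exactly the "discrepancy" a tester would report as a defect.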

The benefits of software testing include:

1. Finding and fixing defects early in the development
process, reducing the cost and effort of fixing them
later.
2. Ensuring that the software meets the specified
requirements and functions as intended.
3. Enhancing the quality and reliability of the software,
leading to better user satisfaction.
4. Increasing confidence in the software and reducing
the risk of failures or errors in production.
5. Facilitating maintenance and future enhancements by
providing a stable and well-tested foundation.

• Overall, software testing plays a crucial role in
ensuring the success of a software project by
identifying and resolving issues before the software is
deployed to production.
• It helps in delivering high-quality software that meets
the needs and expectations of the users.

B. Testing Principles

• Testing principles are fundamental guidelines that
help guide the testing process and ensure effective
and efficient testing of software applications.
• These principles are based on industry best practices
and are followed to achieve reliable and high-quality
software.

Some key testing principles:

1. Testing shows the presence of defects:
a. The primary objective of testing is to identify
defects or discrepancies between the expected
and actual behaviour of the software.
b. Testing helps uncover defects and provides
valuable feedback for improving the software.

2. Exhaustive testing is impossible:
a. It is practically impossible to test every possible
input and scenario for a complex software
application.
b. Instead of aiming for exhaustive testing, testing
efforts should focus on areas with higher risks
and prioritize testing activities based on critical
functionalities and user requirements.

3. Early testing:
a. Testing should be started as early as possible in
the software development lifecycle.
b. By identifying and fixing defects early in the
process, the cost and effort of fixing them later
are reduced.
c. Early testing also helps in preventing defects
from propagating into subsequent stages of
development.
4. Defect clustering:
a. It is often observed that a small number of
modules or components are responsible for a
significant number of defects.
b. This phenomenon is known as defect clustering.
c. Testing efforts should be focused on these critical
areas to maximize defect detection and ensure
effective mitigation.

5. Pesticide paradox:
a. Repeated execution of the same set of test cases
may result in diminishing defect detection.
b. The pesticide paradox suggests that using the
same testing techniques and test cases over a
long period of time can lead to overlooking new
defects.
c. Test cases should be regularly reviewed and
updated to ensure test coverage and
effectiveness.

6. Testing is context-dependent:
a. Testing strategies and techniques should be
tailored to the specific context of the project.
b. The nature of the software, its complexity, the
target users, and other factors influence the
testing approach.
c. The testing process should be flexible and
adaptable to suit the project requirements.

7. Absence-of-errors fallacy:
a. The absence of errors in testing does not
guarantee the absence of defects in the software.
b. Testing can only provide information about the
presence of defects within the scope of the
executed test cases.
c. Testers should exercise caution and avoid making
assumptions based solely on the absence of
errors in testing.

• These testing principles provide valuable guidance for
testers and testing teams to plan, execute, and
improve their testing efforts.
• By adhering to these principles, organizations can
achieve better software quality, reduce risks, and
deliver software products that meet the expectations
of end-users.

C. Risks and causes of Defects

• Defects refer to flaws or issues in the software that
deviate from its expected behaviour or functionality.
• Defects can lead to failures, affect the user experience,
and impact the overall quality of the software.
• Understanding the risks and causes of defects is
essential for effective software testing and quality
assurance.

Risks of Defects:

1. Functional Risks:
a. Defects can cause functional issues, such as
incorrect calculations, missing features, or
improper data processing.
b. These risks can lead to software failures,
inaccurate results, or user dissatisfaction.

2. Performance Risks:
a. Defects related to performance, such as slow
response times, high resource utilization, or
scalability issues, can impact the software's
efficiency and user experience.
b. Performance risks can result in system crashes,
poor responsiveness, or inability to handle
concurrent users.

3. Security Risks:
a. Defects that introduce security vulnerabilities,
such as insufficient input validation, weak
authentication mechanisms, or improper access
controls, pose significant risks to the software.
b. Security risks can lead to data breaches,
unauthorized access, or compromised system
integrity.

4. Usability Risks:
a. Defects affecting the usability of the software,
such as confusing user interfaces, non-intuitive
workflows, or inconsistent behavior, can result in
user frustration and difficulty in performing
tasks.
b. Usability risks can impact user adoption,
satisfaction, and overall user experience.

5. Maintenance Risks:
a. Defects that make the software difficult to
maintain or enhance can increase the cost and
effort required for ongoing support and updates.
b. Maintenance risks include code complexity, poor
documentation, or dependencies on deprecated
technologies.

Causes of Defects:

1. Requirements Issues:
a. Defects can occur due to incomplete, ambiguous,
or inaccurate requirements.
b. Lack of clarity in requirements can lead to
misunderstandings, incorrect implementation, or
missing functionality.

2. Design Flaws:
a. Defects can arise from flaws or weaknesses in the
software design.
b. Inadequate design decisions, improper
architecture, or lack of adherence to best
practices can introduce defects that impact the
software's behaviour or performance.

3. Coding Errors:
a. Defects can result from mistakes made during
the coding phase, such as syntax errors, logic
errors, or incorrect data handling.
b. Coding errors can lead to unexpected behaviours,
system crashes, or incorrect outputs.
4. Integration Issues:
a. Defects can emerge when individual components
or modules of the software do not integrate
correctly.
b. Incompatible interfaces, data inconsistencies, or
communication failures between system
components can introduce defects.

5. Testing Limitations:
a. Defects can go undetected if the testing process is
inadequate or incomplete.
b. Insufficient test coverage, ineffective test cases,
or lack of testing in real-world scenarios can
leave defects unnoticed.

6. Environmental Factors:
a. Defects can be influenced by the underlying
environment, such as hardware variations,
operating system differences, or network
conditions.
b. Incompatibilities or dependencies on specific
environments can lead to defects in certain
configurations.

• Identifying and addressing these risks and causes of
defects is crucial in software testing.
• Through comprehensive testing strategies, adherence
to quality standards, and continuous improvement,
organizations can minimize the occurrence of defects
and deliver high-quality software products to their
users.
D. Meaning of term Error

• An error refers to a mistake or deviation from the
intended behaviour in a software system.
• It is a human action or oversight that leads to a fault
or defect in the software code or design.
• Errors can occur at various stages of the software
development lifecycle, including requirements
gathering, design, coding, or testing.

E. Meaning of term fault

• A fault refers to a defect or an imperfection in the
software code or design that can potentially cause a
failure or incorrect behavior of the system.
• It is a specific manifestation of an error: the error
introduces a fault into the code, and executing that
fault can lead to a failure of the software.

F. Meaning of term bugs

• A bug refers to a flaw or an error in the software that
causes it to behave in an unintended or incorrect
manner.
• Bugs can range from minor issues that have minimal
impact to critical defects that can lead to system
failures.

G. Meaning of term defects and failure


• Defects and failures are two related concepts that
describe issues or problems encountered in a
software application.

1. Defects:
a. A defect, also known as a software bug or issue,
refers to a flaw or error in the software code or
its design that causes the software to behave in
an unintended or incorrect way.
b. Defects can occur due to programming mistakes,
logic errors, incorrect implementation of
requirements, data handling issues, or other
factors.
c. Defects can manifest as functional issues,
performance problems, security vulnerabilities,
or usability concerns.
d. When defects are identified, they are reported to
the development team to be fixed.

2. Failures:
a. A failure occurs when the software does not
deliver the expected or desired results or does
not meet the specified requirements.
b. Failures are the visible or observable
consequences of defects.
c. For example, if a defect causes an application to
crash, freeze, or produce incorrect output, it
results in a failure from a user's perspective.
d. Failures can occur during testing or in the
production environment when users encounter
issues while using the software.
• In the software testing process, the goal is to detect
and report defects before they cause failures in the
production environment.
• Through various testing techniques such as functional
testing, performance testing, security testing, and
usability testing, testers aim to identify and document
defects, allowing developers to fix them and prevent
failures from occurring when the software is used by
end-users.
• It is important to note that not all defects lead to
failures, as some defects may exist in the code but
never manifest themselves in actual usage scenarios.
However, the presence of defects increases the risk of
failures and negatively impacts the quality and
reliability of the software.
• By detecting and addressing defects early in the
software development lifecycle, organizations can
minimize the occurrence of failures, enhance user
satisfaction, and ensure the software meets the
intended requirements and objectives.
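The distinction between a defect and a failure can be illustrated in code. The
average function below is a hypothetical example: the defect (an unhandled
empty list) is always present, but a failure is only observed when an input
actually triggers it.

```python
# Hypothetical example of the defect/failure distinction.

def average(values):
    # Defect: a coding error -- the empty-list case was never handled,
    # so len(values) can be zero.
    return sum(values) / len(values)

# The defect exists, but this usage never triggers it: no failure occurs.
print(average([2, 4, 6]))  # 4.0

# The same defect now produces an observable failure (a crash).
try:
    average([])
except ZeroDivisionError:
    print("failure observed: crash on empty input")
```

This mirrors the point above: a defect can sit in the code without ever
manifesting, yet its presence raises the risk of failure for some usage.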

H. QA and QC comparison

• QA (Quality Assurance) and QC (Quality Control) are
two distinct but closely related activities that focus on
ensuring the quality of the software.

1. QA (Quality Assurance):
a. Quality Assurance is a proactive and preventive
approach to ensure that the software
development process is carried out in a way that
leads to high-quality software.
b. It involves defining and implementing processes,
standards, and guidelines to prevent defects and
ensure that the software meets the desired
quality criteria. QA activities typically include:

I. Requirement analysis and validation: QA teams
collaborate with stakeholders to understand and
validate the requirements, ensuring they are clear,
complete, and testable.

II. Test planning and strategy: QA teams develop test
plans and strategies based on the project
requirements, identifying the scope of testing, test
objectives, and test deliverables.

III. Test case development: QA teams design and create
test cases that cover different functional and
non-functional aspects of the software. Test cases are
designed to verify that the software meets the
specified requirements.

IV. Test execution and defect management: QA teams
execute test cases, report and track defects, and
work with development teams to ensure timely
resolution of issues.

V. Continuous improvement: QA teams analyze testing
results, identify areas for improvement, and
implement process enhancements to prevent future
defects.

2. QC (Quality Control):
a. Quality Control focuses on the identification and
correction of defects in the software.
b. It is a reactive approach that involves inspecting,
reviewing, and testing the software to identify
and eliminate defects.
c. QC activities typically include:

I. Defect detection and diagnosis: QC teams perform
various testing activities, such as functional testing,
regression testing, performance testing, and usability
testing, to identify defects in the software.

II. Defect reporting and tracking: QC teams document
and report defects, including detailed steps to
reproduce them, so that development teams can
investigate and fix them.

III. Defect resolution and verification: QC teams work
closely with development teams to ensure that
identified defects are properly addressed and
resolved. They also perform verification testing to
ensure that the fixes are effective and do not
introduce new defects.

The key difference between QA and QC can be summarized
as follows:

• QA is focused on preventing defects by establishing
processes, standards, and guidelines, while QC is
focused on detecting and correcting defects through
testing and inspection.
• QA is a proactive approach that aims to ensure quality
throughout the software development lifecycle, while
QC is a reactive approach that focuses on identifying
and fixing defects after they have been introduced.
• QA is concerned with process improvement and
adherence to quality standards, while QC is concerned
with defect identification, reporting, and resolution.
• Both QA and QC are essential components of a
comprehensive software testing and quality
assurance strategy.
• By combining proactive QA practices with reactive QC
activities, organizations can improve the overall
quality of their software products and deliver reliable
and user-friendly solutions.

2. Software development process

Information

A. Overview of Software Development Life Cycle-v-model

• The V-model is a software development life cycle
(SDLC) model that emphasizes the importance of
testing throughout the development process.
• It is called the V-model because of its V-shaped
graphical representation, which illustrates the
relationship between different phases of development
and testing.

Overview of the V-model:

1. Requirements Gathering:
a. In the V-model, the software development
process starts with requirements gathering.
b. During this phase, project stakeholders and
business analysts collaborate to gather and
document the software requirements, including
functional and non-functional specifications.

2. System Design (HLD):
a. Once the requirements are finalized, the system
design phase begins.
b. System architects and designers create a high-
level design that defines the overall system
structure, modules, and interfaces.
c. This phase establishes the blueprint for the
software to be developed.

3. Component Design (LLD):
a. In this phase, the high-level design is further
refined into detailed component designs.
b. Software architects and designers define the
internal structure of each software component,
including class diagrams, data models, and
algorithms.

4. Implementation:
a. The implementation phase involves writing the
actual code based on the design specifications.
b. Developers translate the design documents into
executable code using programming languages
and development tools.
c. This phase focuses on coding, unit testing, and
integration of components.

5. Unit Testing:
a. Unit testing is performed on individual
components or units of code to ensure their
correctness and functionality.
b. Developers write test cases and conduct testing
to identify and fix bugs at the unit level.
c. Unit testing helps detect and resolve issues early
in the development process.

6. Integration Testing:
a. Once the units are tested, they are integrated and
tested together.
b. Integration testing verifies the interactions
between different components and ensures that
they work together as intended.
c. This phase identifies defects that may arise due
to the integration of different modules.

7. System Testing:
a. System testing involves testing the integrated
system as a whole to ensure that it meets the
specified requirements.
b. Testers perform functional and non-functional
testing to validate the system's behavior,
performance, and reliability.
c. System testing aims to identify any defects or
inconsistencies in the overall system.

8. Acceptance Testing:
a. Acceptance testing is performed to determine
whether the system meets the customer's
requirements and is ready for deployment.
b. It involves user acceptance testing (UAT), where
end-users or stakeholders validate the system's
functionality and usability.

9. Deployment:
a. After successful testing and approval, the
software is deployed to the production
environment.
b. This phase involves installation, configuration,
and release management activities to make the
software available to users.

10. Maintenance and Support:
a. Once the software is deployed, it enters the
maintenance and support phase.
b. This phase includes bug fixing, enhancements,
and ongoing support to ensure the software's
smooth operation and address any issues that
arise.

• The V-model emphasizes the importance of testing at
each stage of the development process.
• Testing activities are planned and executed in parallel
with the corresponding development activities,
ensuring that defects are identified and resolved early.
• This approach promotes higher quality and reduces
the risk of major issues arising during later stages of
the project.

B. Spiral

• The Spiral model is a software development life cycle
(SDLC) model that combines elements of both
waterfall and iterative approaches.
• It is called the Spiral model because it follows a spiral-
shaped progression, where each iteration of the spiral
represents a phase of the development process.

Overview of the Spiral model:

1. Planning:
a. The planning phase involves defining the project
goals, objectives, and constraints.
b. This phase includes activities such as identifying
the stakeholders, determining the project
requirements, and establishing the project scope.
c. Risk analysis is also performed during this phase
to identify potential risks and develop strategies
to mitigate them.

2. Risk Analysis:
a. In the Spiral model, risk analysis is a crucial
phase that occurs concurrently with the other
development activities.
b. The objective of risk analysis is to identify,
analyze, and prioritize potential risks associated
with the project.
c. This includes technical risks, schedule risks, and
budget risks.
d. Risk mitigation strategies are developed to
address these risks throughout the project
lifecycle.

3. Engineering:
a. The engineering phase focuses on the actual
development of the software.
b. It involves activities such as requirements
gathering, system design, coding, testing, and
integration.
c. Each iteration of the spiral represents a cycle of
these activities, allowing for incremental
development and refinement of the software.

4. Evaluation:
a. The evaluation phase is performed at the end of
each iteration.
b. It involves reviewing the progress, evaluating the
developed software, and gathering feedback from
stakeholders.
c. This phase helps in assessing the project's status,
identifying any deviations from the plan, and
making necessary adjustments for subsequent
iterations.

5. Planning the Next Iteration:
a. Based on the evaluation and feedback from the
previous iteration, the next iteration is planned.
b. This includes refining the requirements,
identifying new features or changes, and
updating the project plan.
c. The cycle of planning, risk analysis, engineering,
and evaluation continues until the software is
deemed complete.

• The Spiral model is particularly suited for projects
that have high levels of uncertainty and complexity.
• It allows for iterative development, frequent risk
assessment, and adaptation to changing
requirements.
• The model emphasizes risk management throughout
the development process, ensuring that potential
risks are identified and addressed early on.
• By following the Spiral model, development teams can
take an iterative and incremental approach to
software development, enabling them to gather
feedback, manage risks, and deliver a high-quality
product.
• It provides a flexible framework that accommodates
changes, promotes stakeholder involvement, and
ensures continuous improvement throughout the
project.

C. Agile methodologies-SCRUM methodology

• The Agile methodology, specifically the Scrum
framework, is a popular approach to software
development that emphasizes iterative and
incremental delivery.

Overview of the Scrum methodology within the Agile
software development life cycle (SDLC):

1. Product Backlog:
a. The product backlog is a prioritized list of
features, user stories, and tasks that define the
requirements for the project.
b. It represents the overall scope of the product and
is managed by the product owner.

2. Sprint Planning:
a. In the Sprint Planning phase, the development
team selects a set of items from the product
backlog to work on during the sprint.
b. The team determines the goals for the sprint and
breaks down the selected items into smaller,
manageable tasks.

3. Sprint:
a. A sprint is a time-boxed iteration typically lasting
2-4 weeks, during which the development team
works on the selected backlog items.
b. The team collaborates daily in short meetings
called daily stand-ups to discuss progress,
challenges, and plan the work for the day.

4. Sprint Review:
a. At the end of each sprint, a sprint review is
conducted to showcase the completed work to
stakeholders and gather feedback.
b. The product owner and stakeholders evaluate
the increment and provide input for future
iterations.

5. Sprint Retrospective:
a. The sprint retrospective is a meeting held after
the sprint review to reflect on the sprint process
and identify areas for improvement.
b. The team discusses what went well, what could
be improved, and action items for the next sprint.

6. Incremental Development:
a. The development process in Scrum is
incremental, with each sprint delivering a
potentially shippable product increment.
b. The product evolves through successive sprints,
with new features added, issues resolved, and
feedback incorporated.

7. Scrum Roles:
a. Scrum defines three primary roles: the product
owner, the development team, and the Scrum
master.
b. The product owner represents the stakeholders
and manages the product backlog.
c. The development team is responsible for
delivering the product increment.
d. The Scrum master facilitates the Scrum process,
removes impediments, and ensures adherence to
Scrum principles.

8. Continuous Planning and Adaptation:
a. Agile methodologies, including Scrum, emphasize
flexibility and adaptability.
b. The product backlog is continuously refined and
reprioritized based on changing requirements
and feedback.
c. The Scrum team adapts and refines the
development approach based on lessons learned
from each sprint.

• Scrum provides a collaborative and iterative approach
to software development, promoting transparency,
frequent communication, and rapid feedback.
• It enables teams to deliver high-quality software in
shorter cycles and allows for flexibility in responding
to changing customer needs.
• By breaking down work into manageable sprints and
involving stakeholders throughout the process, Scrum
fosters a collaborative and customer-centric
development environment.
D. TDD

• Test-Driven Development (TDD) is a software
development approach that follows a specific process
and cycle to ensure high-quality code.

Overview of the Software Development Life Cycle (SDLC)
in the context of Test-Driven Development (TDD):

1. Test Creation:
a. In TDD, the development cycle begins with
creating a test.
b. The test is written before any code is
implemented and serves as a clear specification
of the desired behavior or functionality.

2. Test Execution:
a. Once the test is written, it is executed against the
existing codebase.
b. Since the code has not been implemented yet, the
test will fail at this stage.

3. Code Implementation:
a. The next step is to write the actual code that will
make the test pass.
b. The code should be focused on fulfilling the
requirements specified by the test.

4. Test Execution and Verification:
a. After the code implementation, the test is
executed again.
b. This time, the test is expected to pass since the
code has been written to fulfill the specified
requirements.

5. Refactoring:
a. Once the test passes, the code can be refactored
to improve its structure, readability, and
performance.
b. Refactoring ensures that the code remains clean,
maintainable, and adheres to coding standards.

6. Repeat the Cycle:
a. The TDD cycle is repeated for each new feature
or functionality.
b. A new test is created, the code is implemented,
and the test is executed to verify its success.
c. This iterative process continues until all the
desired features have been implemented.
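The cycle above can be compressed into a minimal sketch using plain asserts.
The slugify function and its test are hypothetical examples; in real TDD each
step would be a separate run of a test framework such as pytest or unittest.

```python
# A compressed sketch of one TDD cycle (red -> green).

# Step 1 (red): write the test first. Running it before the code exists
# would fail (here, with a NameError) -- that failing run is expected.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): implement just enough code to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3: re-run the test; it now passes, so refactoring can begin
# safely with the test acting as a safety net.
test_slugify()
print("cycle complete: test passes")
```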

• By following the TDD approach, developers can
ensure that their code meets the specified
requirements and is thoroughly tested.
• TDD helps in improving the overall quality of the
software by catching potential issues early in the
development process.
• It also promotes a more robust and maintainable
codebase as developers are constantly refactoring and
improving their code.
• TDD is often integrated with continuous integration
and continuous delivery (CI/CD) pipelines to
automate the testing and deployment processes.
• This helps in achieving faster feedback loops and
more frequent releases, leading to faster development
cycles and increased productivity.

E. BDD

• Behavior-Driven Development (BDD) is a software
development approach that focuses on collaboration
between developers, testers, and stakeholders to
ensure that the software meets the desired business
outcomes.

Overview of the Software Development Life Cycle (SDLC)
in the context of Behavior-Driven Development (BDD):

1. Discovery and Requirements Gathering:
a. The BDD process begins with the discovery
phase, where stakeholders, business analysts,
developers, and testers collaborate to identify
and define the desired behavior and
requirements of the software.
b. This involves discussions, workshops, and
capturing user stories or scenarios.

2. Feature Specification:
a. Once the requirements are gathered, they are
translated into feature specifications using a
specific BDD syntax, typically written in a natural
language format such as Gherkin.
b. These feature specifications serve as executable
documentation that describes the behavior of the
software from a user's perspective.

3. Test Creation:
a. Based on the feature specifications, test
scenarios or acceptance criteria are defined.
b. These tests focus on describing the expected
behavior of the software in a specific situation or
context.
c. The tests are written using the BDD syntax and
are often expressed as Given-When-Then
statements.

4. Test Execution:
a. The tests created in the previous step are
executed against the software.
b. This involves automating the tests using BDD
testing frameworks or tools, which interpret the
BDD syntax and execute the tests.
c. The tests verify whether the software behaves as
expected based on the defined scenarios.

5. Collaboration and Feedback:
a. BDD promotes close collaboration between
developers, testers, and stakeholders throughout
the development process.
b. Test results and feedback are shared with the
team, allowing for discussions and refinements of
the software's behavior and requirements.

6. Iterative Development:
a. BDD follows an iterative and incremental
development approach.
b. After executing the tests and receiving feedback,
the development team works on implementing
the required features or changes to align the
software with the specified behavior.

7. Continuous Integration and Delivery:
a. BDD is often integrated with continuous
integration and continuous delivery (CI/CD)
pipelines.
b. This allows for frequent testing, automated
builds, and rapid deployment of new features or
updates.

• By following the BDD approach, teams can ensure
that the software is developed based on the desired
behavior and requirements.
• BDD helps in improving collaboration, reducing
misunderstandings, and aligning the development
process with business goals.
• It encourages a shared understanding of the
software's behavior and promotes transparency
throughout the SDLC.

3. Levels and types of testing




I. Levels of testing

A. Understand levels of unit testing

• Unit testing is a level of software testing that focuses


on verifying the functionality of individual units or
components of a software system.
• A unit can be a small piece of code, a function, a
method, or a module.
• The goal of unit testing is to ensure that each unit
functions correctly in isolation before integrating
them into the larger system.

1. Purpose:
a. Unit testing is primarily concerned with testing
the smallest testable parts of a software system
to ensure their correctness and reliability.
b. It aims to identify any defects or bugs in
individual units and fix them early in the
development process.

2. Scope:
a. Unit testing focuses on testing individual units in
isolation, independent of other units or external
dependencies.
b. It helps ensure that each unit performs its
intended functionality correctly and meets the
specified requirements.

3. Characteristics:
a. Unit tests are typically written by the developers
themselves using frameworks or tools specific to
the programming language.
b. They are designed to be fast, isolated, and
repeatable.
c. Unit tests should be independent of other units
and should not rely on external resources or
environments.

4. Testing Techniques:
a. Unit tests are designed to cover different aspects
of the unit's functionality, including boundary
conditions, error handling, and normal
operation.
b. Techniques such as stubs, mocks, and fakes are
often used to simulate dependencies and
external interactions.

5. Test Coverage:
a. The goal of unit testing is to achieve high test
coverage for individual units.
b. Test coverage measures the percentage of code
or functionality that is exercised by unit tests.
c. The higher the test coverage, the more
confidence there is in the correctness of the
units.

6. Automation:
a. Unit tests are typically automated and integrated
into the development workflow.
b. They are executed frequently, often after each
code change or build, to catch any regressions or
introduced defects.
c. Automation ensures that unit tests can be
executed reliably and efficiently.

7. Benefits:
a. Unit testing provides several benefits, including
early bug detection, improved code quality, faster
debugging, easier refactoring, and increased
confidence in the reliability of individual units.
b. It also helps in promoting modularity, reusability,
and maintainability of the codebase.
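As a concrete illustration of points 3 and 4 above, here is a minimal unit test written with Python's built-in unittest framework, using a Mock to simulate an external dependency. PaymentService and its gateway are hypothetical names, not part of this material.

```python
# A minimal sketch of a unit test that isolates one unit with a mock.
# PaymentService and its gateway are hypothetical names used only for
# illustration.

import unittest
from unittest.mock import Mock


class PaymentService:
    """Unit under test: delegates charging to an external gateway."""

    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)


class PaymentServiceTest(unittest.TestCase):
    def test_charge_delegates_to_gateway(self):
        gateway = Mock()                     # simulated dependency
        gateway.charge.return_value = "ok"
        service = PaymentService(gateway)
        self.assertEqual(service.charge(50), "ok")
        gateway.charge.assert_called_once_with(50)

    def test_charge_rejects_non_positive_amounts(self):
        service = PaymentService(Mock())
        with self.assertRaises(ValueError):  # error-handling path
            service.charge(0)
```

Because the gateway is mocked, the test stays fast, isolated, and repeatable, matching the characteristics listed above.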

• In summary, unit testing is an essential part of the


software development process that focuses on testing
individual units or components in isolation.
• It helps ensure that each unit functions correctly and
meets the specified requirements.
• By catching and fixing defects at an early stage, unit
testing contributes to building reliable and high-
quality software systems.

B. Understand levels of integration testing

• Integration testing is a level of software testing that


focuses on testing the interactions between different
components or modules of a software system.
• It aims to identify any defects or issues that may arise
when the integrated components work together as a
whole. Integration testing ensures that the individual
components are properly integrated and that the
system functions as intended.

1. Purpose:
a. The purpose of integration testing is to verify
that the integrated components or modules of a
software system work together correctly and
produce the expected results.
b. It helps detect any interface or communication
issues between components and ensures smooth
interoperability.

2. Scope:
a. Integration testing focuses on testing the
interactions between different components, such
as modules, services, or subsystems.
b. It can be performed at different levels, including
module-level integration, system-level
integration, and external system integration.

3. Testing Techniques:
a. Integration testing employs various techniques
to verify the interactions and interfaces between
components.
b. These techniques may include top-down testing,
bottom-up testing, sandwich testing, or a
combination of these approaches.
c. Integration testing may involve both functional
and non-functional testing aspects.

4. Test Environment:
a. Integration testing requires a suitable test
environment that closely resembles the
production environment.
b. It may involve setting up mock or stub
components to simulate the behavior of
dependent components that are not yet available
or stable.
c. The test environment should accurately
represent the expected integration scenarios.

5. Dependencies and Stubs:


a. During integration testing, stubs or mock objects
may be used to simulate the behavior of
components that are not fully developed or are
not easily accessible for testing.
b. These stubs help isolate and test the interactions
between the integrated components.

6. Validation and Verification:


a. Integration testing validates that the integrated
system meets the specified requirements and
verifies that the expected outputs are produced.
b. It focuses on identifying defects related to
component integration, such as interface
mismatches, data inconsistencies, or incorrect
dependencies.

7. Collaboration:
a. Integration testing often requires collaboration
between development teams responsible for
different components.
b. It involves coordinating the integration process,
sharing test cases, and resolving any issues or
discrepancies discovered during testing.
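A minimal sketch of module-level integration testing, under the approach described in point 5: two components are exercised together, with an in-memory repository standing in for a real database. All names here are illustrative assumptions.

```python
# A minimal sketch of module-level integration testing: a service and a
# repository are exercised together, with an in-memory repository standing
# in for a database-backed one. All names are illustrative assumptions.

class InMemoryUserRepository:
    """Test double for a database-backed repository."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)


class RegistrationService:
    """Component whose integration with the repository is under test."""

    def __init__(self, repository):
        self.repository = repository

    def register(self, user_id, name):
        if self.repository.find(user_id) is not None:
            raise ValueError("user already exists")
        self.repository.save(user_id, name)


def test_registration_integrates_with_repository():
    repo = InMemoryUserRepository()
    service = RegistrationService(repo)
    service.register(1, "Asha")
    # The two components cooperate: data written via the service is
    # visible through the repository interface.
    assert repo.find(1) == "Asha"
```

The test exercises the interface between the two components rather than either one in isolation, which is the defining trait of integration testing.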

• Integration testing plays a crucial role in ensuring that


the integrated software system functions as a
cohesive whole.
• By identifying and addressing integration issues early
in the development lifecycle, it helps prevent
problems from escalating and ensures the overall
stability and reliability of the software system.

C. Understand levels of system testing

• System testing is a level of software testing that


focuses on testing the entire software system as a
whole.
• It is performed after integration testing and aims to
verify the system's compliance with the specified
requirements and its overall functionality,
performance, and reliability.
• System testing involves testing the system's behavior
in different scenarios and environments to ensure its
readiness for deployment.

1. Purpose:
a. The purpose of system testing is to evaluate the
system's overall behavior and performance in
real-world conditions.
b. It aims to identify any defects or inconsistencies
in the system's functionality, performance,
security, and usability.
c. System testing ensures that the software system
meets the requirements and performs as
expected in its intended environment.

2. Scope:
a. System testing encompasses the entire software
system, including all integrated components,
modules, and subsystems.
b. It involves testing the system's features,
functions, interfaces, data flows, and interactions
with external systems or users.
c. System testing is typically black-box testing,
where the internal structure and implementation
details are not known to the testers.

3. Testing Techniques:
a. System testing employs various techniques to
validate the system's behavior and performance.
b. It includes functional testing to verify the
system's compliance with functional
requirements, performance testing to assess its
responsiveness and scalability, security testing to
ensure protection against vulnerabilities,
usability testing to assess user-friendliness, and
compatibility testing to ensure compatibility
with different platforms or browsers.

4. Test Environment:
a. System testing requires a test environment that
closely resembles the production environment in
which the software system will operate.
b. It should include representative hardware,
software, and network configurations.
c. The test environment should replicate the
intended usage scenarios and data conditions to
simulate real-world conditions.

5. Test Coverage:
a. System testing aims to achieve broad test
coverage by testing various aspects of the system,
including positive and negative test cases,
boundary cases, error handling, and stress or
load testing.
b. It ensures that the system behaves as expected in
different scenarios and under different
conditions.

6. Regression Testing:
a. System testing includes regression testing to
ensure that changes or fixes in one area of the
system do not introduce new issues or impact
the existing functionality.
b. Regression test cases are executed to verify that
previously tested features still work correctly
after modifications.

7. Validation and Verification:


a. System testing validates that the entire software
system meets the specified requirements and
verifies its functionality, performance, and other
quality attributes.
b. It involves comparing the actual system behavior
with the expected behavior to identify any
discrepancies or defects.

8. Documentation and Reporting:


a. System testing requires documentation of test
plans, test cases, and test results.
b. Testers create detailed reports summarizing the
test execution, identified defects, and any
recommendations or observations.
c. These reports help stakeholders make informed
decisions about the system’s readiness for
deployment.

• System testing is crucial for ensuring that the


software system functions as intended, meets user
expectations, and performs well in its operational
environment.
• By conducting thorough system testing, organizations
can mitigate risks, enhance system quality, and deliver
reliable and robust software to their users.

D. Understand levels of acceptance testing

• Acceptance testing is a level of software testing that


determines whether a system meets the specified
requirements and is acceptable for delivery to the
end-users or stakeholders.
• It focuses on validating the system's functionality,
usability, and overall fitness for purpose from the
perspective of the end-users. Acceptance testing is
typically performed after system testing and before
the system is deployed or released.

1. Purpose:
a. The purpose of acceptance testing is to ensure
that the software system meets the business
requirements, user expectations, and contractual
agreements.
b. It aims to verify that the system is complete,
accurate, and usable, and that it satisfies the
acceptance criteria defined by the stakeholders.
c. Acceptance testing validates whether the system
is ready for deployment and acceptance by the
intended users.

2. Types of Acceptance Testing:

There are two main types of acceptance testing:

a. User Acceptance Testing (UAT):


i. UAT involves testing the system by end-
users or representatives of the intended
user community.
ii. It focuses on validating the system's
functionality, usability, and suitability for the
users' needs. UAT often includes test
scenarios or test cases created by the users
themselves to simulate real-world usage
scenarios.

b. Business Acceptance Testing (BAT):


i. BAT is conducted by business stakeholders
or subject matter experts to verify that the
system meets the business requirements
and aligns with the organization's goals.
ii. BAT may involve testing specific business
processes, workflows, or integration points
to ensure that the system operates as
expected in the business context.

3. Test Environment:
a. Acceptance testing is usually performed in an
environment that closely resembles the
production environment.
b. It may involve using real or representative data,
simulating realistic usage scenarios, and
incorporating any necessary test data or test
environments.
c. The goal is to ensure that the acceptance testing
accurately reflects the system's behavior in the
intended operational environment.

4. Acceptance Criteria:
a. Acceptance testing is driven by a set of
predefined acceptance criteria that define the
minimum requirements or conditions for the
system to be considered acceptable.
b. These criteria are typically established in
collaboration between the development team
and the stakeholders.
c. Acceptance testing verifies that the system
satisfies these criteria and meets the defined
quality standards.

5. User Feedback and Validation:


a. Acceptance testing relies heavily on user
feedback and validation.
b. The end-users or business stakeholders actively
participate in the testing process, providing
input, reporting any issues or concerns, and
validating that the system meets their
expectations.
c. User feedback plays a crucial role in evaluating
the system's usability, user experience, and
overall satisfaction.

6. Documentation and Sign-Off:


a. Acceptance testing requires proper
documentation of test plans, test cases, and test
results.
b. Testers document their findings, identified
defects, and any recommendations or
observations.
c. Once the acceptance testing is successfully
completed and all acceptance criteria are met,
stakeholders typically provide formal sign-off or
approval, indicating their acceptance of the
system.

• Acceptance testing plays a vital role in ensuring that


the software system meets the intended business and
user requirements.
• It validates that the system is fit for its intended
purpose and provides confidence to the stakeholders
that the system is ready for deployment.
• Through acceptance testing, organizations can gain
valuable feedback from end-users and stakeholders,
refine the system based on their inputs, and deliver a
high-quality product that meets the expectations of
the users.

II. Types of testing

A. Static testing - Desk checking

• Desk checking is a static software testing technique


that involves reviewing and analyzing the code or
documentation manually without executing the
software.
• It is a form of peer review where one or more
individuals, typically developers or testers,
thoroughly examine the code or other artifacts to
identify errors, defects, or areas of improvement.
• During desk checking, the reviewers carefully review
the code or documentation line by line, looking for
syntax errors, logical flaws, design issues, or any
other potential problems.
• They may follow a set of predefined guidelines or
standards to ensure consistency and adherence to
best practices.
• The goal is to detect and rectify issues early in the
development process, reducing the likelihood of those
issues causing problems during execution or testing.

Desk checking can be performed in various forms:

1. Code Review:
a. Developers review each other's code to identify
coding errors, bugs, and potential performance
issues.
b. They also ensure compliance with coding
standards, design patterns, and best practices.

2. Document Review:
a. Technical documents, such as requirement
specifications, design documents, or test plans,
are thoroughly reviewed for completeness,
accuracy, and clarity.
b. The reviewers provide feedback and suggest
improvements.

3. Walkthrough:
a. The code or documentation is presented to a
group of reviewers who actively participate in
discussions, ask questions, and provide feedback.
b. It encourages collaboration and knowledge
sharing among team members.

4. Inspection:
a. A formal review process is followed, involving a
team of reviewers who systematically examine
the code or documentation using a checklist or
defined set of criteria.
b. The focus is on detecting defects and ensuring
high-quality deliverables.

The benefits of static testing include:

1. Early detection of errors and defects before executing the software.
2. Improved code quality and adherence to coding standards.
3. Knowledge sharing and learning opportunities among team members.
4. Identification of potential design flaws or performance bottlenecks.
5. Reduction in the cost and effort of fixing defects in later stages of development.

• Desk checking is a valuable static technique that complements other testing methods by catching issues early and promoting collaboration among team members.
• It is a cost-effective way to improve software quality and reliability.

B. Walkthroughs

• Walkthroughs are a type of static software testing


technique that involves a group of people collectively
reviewing and discussing the code or documentation
to identify defects, clarify requirements, and improve
the overall quality of the deliverables.
• It is a collaborative approach to ensure that the
software meets the desired objectives and follows the
specified standards.
• During a walkthrough, the team members, including
developers, testers, stakeholders, and subject matter
experts, come together to analyze the code or
documentation in a structured manner.
• The primary goal is to gather feedback, gain a shared
understanding, and address potential issues early in
the development process.

The walkthrough process typically includes the following steps:

1. Planning:
a. The session is scheduled and the relevant
artifacts, such as code files, design documents, or
test plans, are distributed to the participants in
advance.
b. The objectives, scope, and roles of the attendees
are defined.

2. Introduction:
a. The facilitator provides an overview of the
walkthrough objectives, sets the context, and
explains the ground rules for the session.
b. The walkthrough leader may also outline the
specific areas of focus or the questions to be
addressed.

3. Step-by-step Review:
a. The participants review the code or
documentation line by line, discussing each
component, functionality, or requirement.
b. They may raise questions, offer suggestions, and
provide feedback on potential improvements,
clarity, completeness, and adherence to
standards.

4. Discussion and Clarification:


a. The walkthrough leader encourages active
participation and open discussions among the
attendees.
b. They address the questions, concerns, and
suggestions raised by the participants, fostering a
collaborative environment for knowledge sharing
and problem-solving.

5. Issue Identification and Recording:


a. Any defects, inconsistencies, or areas of
improvement identified during the walkthrough
are documented.
b. These issues are logged for further analysis,
resolution, or inclusion in the project's issue
tracking system.

6. Follow-up Actions:
a. Once the walkthrough session is completed, the
identified issues are assigned to the respective
individuals or teams for resolution.
b. The actions are tracked and followed up to
ensure proper closure.

• Walkthroughs are an essential component of the


static testing process, promoting early defect
identification and fostering collaboration among team
members.
• By leveraging the knowledge and expertise of the
participants, walkthroughs contribute to the overall
success of software development projects.

C. Reviews and Inspection

• Reviews and inspections are static software testing


techniques that involve a systematic examination and
evaluation of software artifacts, such as code, design
documents, or requirements specifications, to identify
defects and improve the quality of the deliverables.
• These techniques rely on peer-based evaluations and
objective criteria to ensure that the software meets
the desired standards and requirements.

Reviews and inspections typically follow a structured process, involving the following steps:

1. Planning:
a. The review or inspection is planned, including
the selection of the appropriate artifacts,
participants, and review criteria.
b. The objectives, scope, and roles of the
attendees are defined.

2. Preparation:
a. The relevant artifacts are distributed to the
participants well in advance of the review or
inspection session.
b. The participants individually examine the
materials, identifying potential defects and
areas for improvement.

3. Review Meeting:
a. The participants come together for a meeting
to discuss and share their findings.
b. The focus is on identifying defects, clarifying
requirements, and improving the overall
quality of the software.
c. The meeting is facilitated by a moderator who
ensures that the review process remains on
track.

4. Defect Identification and Documentation:


a. Any defects, inconsistencies, or areas of
improvement identified during the review or
inspection are documented.
b. These issues are recorded for further analysis,
resolution, or inclusion in the project's issue
tracking system.
c. The defects are typically classified based on
severity or priority.

5. Follow-up Actions:
a. Once the review or inspection session is
completed, the identified issues are assigned to
the respective individuals or teams for
resolution.
b. The actions are tracked and followed up to
ensure proper closure.
• Reviews and inspections are important
components of the static testing approach,
promoting early defect identification, knowledge
sharing, and process improvement.
• By leveraging the collective knowledge and
expertise of the participants, these techniques
contribute to the overall success of software
development projects.

Reviews offer several benefits, including:

1. Defect detection:
a. By involving multiple reviewers, reviews and
inspections help uncover defects and issues early
in the development process, reducing the cost
and effort of fixing them later.

2. Quality improvement:
a. The systematic evaluation of software artifacts
ensures that the software meets the desired
standards and requirements, leading to improved
quality and reliability.

3. Knowledge sharing:
a. Reviews provide a platform for knowledge
exchange among team members, facilitating a
shared understanding of the code, design, or
requirements.

4. Process improvement:
a. Through the identification of common defects or
recurring issues, reviews and inspections
contribute to process improvement efforts,
helping to prevent similar problems in future
projects.

5. Collaboration and learning:


a. By involving team members in the evaluation
process, reviews foster collaboration and
learning, allowing individuals to gain insights
from their peers and enhance their own skills
and expertise.

IV. Functional and non-functional testing

C. Define functional and non-functional testing

1. Functional Testing:
a. Functional testing is a type of software testing
that focuses on verifying the functional
requirements and specifications of a system or
application.
b. It involves testing the features, functionality, and
behavior of the software to ensure that it meets
the intended functionality and works correctly.
c. The goal of functional testing is to validate that
the software performs as expected and meets the
user's requirements.

Key aspects of functional testing include:



1. Test Cases:
a. Functional testing involves creating test cases
based on the functional requirements and
specifications of the software.
b. These test cases are designed to cover different
scenarios and validate the expected functionality.

2. Input and Output Validation:


a. Functional testing verifies that the system
accepts the correct inputs, processes them
accurately, and produces the expected outputs.
b. It ensures that the software functions as
intended and performs the necessary
calculations, validations, and transformations.

3. Feature Testing:
a. Functional testing examines each feature of the
software to ensure that it behaves correctly and
produces the desired outcomes.
b. It tests various functionalities such as user
interactions, data manipulation, system
integration, and error handling.

4. Boundary Testing:
a. Functional testing includes boundary testing to
validate the system's behavior at the
boundaries of input ranges and limits.
b. It verifies how the software handles minimum
and maximum values, edge cases, and boundary
conditions.
5. Regression Testing:
a. Functional testing is often combined with
regression testing to ensure that new changes or
enhancements do not introduce any issues or
regressions in the existing functionality.
b. It validates that the software continues to
function correctly after modifications.
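As an illustration of boundary testing (point 4 above), the sketch below checks a hypothetical age validator at and just around both edges of its accepted range; the range of 18 to 65 inclusive is an assumed requirement invented for this example.

```python
# A minimal sketch of boundary testing for a hypothetical age validator
# whose accepted range (18 to 65 inclusive) is an assumed requirement.

def is_valid_age(age):
    return 18 <= age <= 65

# Values at and just around each boundary, with expected results.
BOUNDARY_CASES = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

def test_boundaries():
    for age, expected in BOUNDARY_CASES.items():
        assert is_valid_age(age) == expected, f"failed at age {age}"
```

Testing exactly at, just below, and just above each limit is where off-by-one defects are most often caught.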

Build Verification Test (Smoke and Sanity)
A. Understand smoke testing

• Smoke testing, also known as build verification


testing, is a type of testing that focuses on quickly and
superficially checking the basic functionality of a
software system.
• It is typically performed after a new build or release
to ensure that the critical features and functionalities
are working as expected before conducting more
comprehensive testing.

1. Purpose:
a. The purpose of smoke testing is to identify major
defects or issues that could prevent further
testing or hinder the basic functioning of the
software.
b. It is not an in-depth or exhaustive test but rather
a quick check to ensure that the critical
components of the system are functioning
properly.

2. Scope:
a. Smoke testing targets the essential
functionalities or core features of the software
system.
b. It does not cover all the detailed functionalities
or edge cases but focuses on the primary
functionality that should work consistently
across builds or releases.

3. Automation:
a. Smoke testing can be automated to save time and
effort in executing the test cases.
b. Automated smoke tests can be scheduled to run
automatically after each build or release,
providing immediate feedback on the stability of
the software.

4. Continuous Integration:
a. Smoke testing is often integrated into the
continuous integration (CI) or continuous
delivery (CD) pipelines to ensure that each build
or release meets the minimum quality standards
before progressing to more comprehensive
testing or deployment stages.

• Smoke testing is a valuable practice in software


development and testing as it helps identify critical
issues early on, allowing teams to address them
promptly and avoid wasting time and resources on
further testing if the basic functionality is
compromised.
• It provides an initial indication of the system's
stability and helps ensure a smoother testing and
deployment process.
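A minimal sketch of what an automated smoke suite might look like, using hypothetical stand-ins for a health endpoint and an order-creation feature; a single failing check is enough to reject the build before deeper testing begins.

```python
# A minimal sketch of an automated smoke suite: a few fast checks that the
# critical paths respond at all. health_check and create_order are
# hypothetical stand-ins for real application features.

def health_check():
    return "ok"

def create_order(item):
    return {"item": item, "status": "created"}

SMOKE_CHECKS = [
    ("health endpoint responds", lambda: health_check() == "ok"),
    ("order can be created", lambda: create_order("book")["status"] == "created"),
]

def run_smoke_tests():
    failures = [name for name, check in SMOKE_CHECKS if not check()]
    if failures:
        # A failing smoke check blocks the build before deeper testing.
        raise SystemExit(f"smoke tests failed: {failures}")
    return "build accepted"
```

In a CI pipeline, a suite like this would run immediately after each build, gating progression to the more comprehensive test stages.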

B. Understand sanity testing

• Sanity testing, also known as a sanity check, is a type of software testing that aims to quickly evaluate whether the system is stable and ready for further testing.
• It focuses on verifying the basic functionality of the
software after a minor change or a specific set of
changes have been made.

1. Purpose:
a. The purpose of sanity testing is to ensure that the
software is in a reasonable and stable condition
to proceed with more comprehensive testing.
b. It is performed to quickly check if the critical
functionalities are working as expected and to
identify any major issues that could hinder
further testing.

2. Scope:
a. Sanity testing typically covers the key areas or
critical features of the software that are affected
by recent changes.
b. It is not an exhaustive or comprehensive test but
rather a targeted assessment to determine if the
recent changes have not introduced any severe
defects.

3. Automation:
a. Sanity testing can be automated to streamline the
process and save time in executing the test cases.
b. Automated sanity tests can be incorporated into
the build or release pipeline to provide
immediate feedback on the stability of the
software after specific changes.

4. Continuous Integration:
a. Sanity testing is often integrated into the
continuous integration (CI) or continuous
delivery (CD) workflows to ensure that the
software remains in a stable state throughout the
development and deployment cycles.
b. It helps catch any critical issues early on and
prevents the propagation of defective changes to
subsequent stages.

• Sanity testing acts as a quick health check for the


software system and helps ensure that the recent
changes have not introduced any significant
regressions.
• It provides confidence to the testing team and
stakeholders that the system is stable and ready for
further testing or release.

2. Non-Functional Testing:
• Non-functional testing focuses on evaluating the
performance, reliability, usability, and other non-
functional aspects of a system or application.
• Unlike functional testing, which verifies the functional
requirements, non-functional testing checks the
quality attributes of the software.
• It ensures that the software performs well under
different conditions and meets the user's expectations
beyond the basic functionality.

Common types of non-functional testing include:

1. Performance Testing:
a. Performance testing evaluates how the system
performs in terms of response time, scalability,
reliability, and resource usage.
b. It helps identify any performance bottlenecks or
issues under normal and peak load conditions.

2. Security Testing:
a. Security testing ensures that the software is
secure from unauthorized access, data breaches,
and other security vulnerabilities.
b. It tests for potential security risks, checks access
controls, and validates the integrity and
confidentiality of data.

3. Usability Testing:
a. Usability testing assesses the user-friendliness
and ease of use of the software.
b. It focuses on factors such as navigation, layout,
responsiveness, and overall user experience to
ensure that the software is intuitive and efficient
for end-users.

4. Compatibility Testing:
a. Compatibility testing verifies that the software
works correctly across different platforms,
browsers, devices, and operating systems.
b. It ensures that the software is compatible and
functions consistently across various
environments.

• Both functional and non-functional testing are crucial


for delivering high-quality software.
• While functional testing focuses on the expected
functionality and user requirements, non-functional
testing addresses the performance, security, usability,
and other aspects that contribute to the overall
quality and user satisfaction of the software.

V. Understanding types of non-functional testing:

A. Performance testing-Load

a. Performance Testing - Load Testing:


• Load testing is a type of performance testing that
focuses on evaluating the performance of a system
under specific workload conditions.
• It tests the system's ability to handle a specific load or
user concurrency and measures its response time,
throughput, and resource utilization.
• The goal of load testing is to identify performance
bottlenecks, scalability issues, and determine if the
system can handle the expected workload without
degradation in performance.

Key aspects of load testing include:

1. Simulating User Load:


a. Load testing involves simulating the expected
user load on the system by generating concurrent
virtual users or requests.
b. This load can be generated using tools or scripts
that mimic real user behavior.

2. Measuring Response Time:


a. Load testing measures the response time of the
system for various transactions, requests, or user
interactions.
b. It helps identify any performance degradation or
delays as the load increases.

3. Identifying Performance Issues:


a. Load testing helps identify performance issues
such as slow response times, timeouts, database
bottlenecks, network congestion, or resource
exhaustion.
b. By pinpointing these issues, load testing helps
developers and performance engineers optimize
the system's performance and improve its
scalability.
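The ideas above can be sketched with the Python standard library alone: concurrent virtual users are simulated with a thread pool and a response time is recorded for each request. Here handle_request is a hypothetical stand-in for the system under test; real load tests would use dedicated tools such as JMeter or Locust.

```python
# A minimal sketch of load generation: a thread pool simulates concurrent
# users and per-request response times are recorded. handle_request is a
# hypothetical stand-in for the system under test.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    time.sleep(0.01)          # simulated processing time
    return "ok"

def run_load_test(concurrent_users=20, requests_per_user=5):
    timings = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_session)
        # leaving the with-block waits for all sessions to finish

    avg = sum(timings) / len(timings)
    return len(timings), avg  # total requests served, mean response time
```

Comparing the mean response time across increasing user counts reveals how response time degrades as the load grows.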
• Load testing is essential to ensure that a system can
handle the expected user load and deliver optimal
performance.
• By identifying and resolving performance bottlenecks,
load testing helps ensure that the system can meet the
performance requirements and provide a satisfactory
user experience even under high load conditions.
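The workload simulation and response-time measurement described above can be sketched in Python. This is a minimal illustration, not a real harness: `handle_request` is a hypothetical stand-in for the system under test (in practice a load-testing tool such as JMeter would drive the actual application), and the 10 ms "server work" is invented.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Hypothetical stand-in for the system under test."""
    time.sleep(0.01)  # pretend each request takes ~10 ms of server work
    return "ok"

def load_test(concurrent_users, requests_per_user):
    latencies = []

    def user(_):
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request(None)
            latencies.append(time.perf_counter() - start)

    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(user, range(concurrent_users)))  # one worker per virtual user
    elapsed = time.perf_counter() - started

    return {"requests": len(latencies),
            "avg_response_s": sum(latencies) / len(latencies),
            "throughput_rps": len(latencies) / elapsed}

stats = load_test(concurrent_users=20, requests_per_user=5)
```

Watching `avg_response_s` and `throughput_rps` as `concurrent_users` grows is exactly the bottleneck hunt the bullets above describe.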

B. Understanding types of non-functional testing: Stress

Stress Testing:
• Stress testing is a type of non-functional testing that
evaluates the behavior and performance of a system
under extreme conditions beyond its normal
operating capacity.
• It aims to determine the system's stability and
reliability by subjecting it to excessive load, limited
resources, or unfavorable environmental conditions.
• The goal of stress testing is to identify the system's
breaking point and observe how it recovers from
stress conditions.

Key aspects of stress testing include:

1. Overloading the System:


a. Stress testing involves pushing the system to its
limits by overloading it with excessive load, data
volume, or concurrent users.
b. This is done to evaluate how the system handles
and recovers from such extreme conditions.
2. Testing Resource Exhaustion:
a. Stress testing focuses on testing the system's
behaviour when critical resources like CPU,
memory, disk space, or network bandwidth are
severely constrained or exhausted.
b. It helps identify how the system responds to
resource scarcity and whether it gracefully
recovers from resource exhaustion.

3. Observing System Behaviour:


a. During stress testing, the system's behaviour is
closely monitored and analyzed.
b. It includes monitoring performance metrics,
analyzing logs, observing error messages, and
assessing the overall system stability and
responsiveness.

• Stress testing is crucial to ensure that a system can


withstand high levels of load, resource constraints, or
adverse conditions without failing or exhibiting
undesirable behavior.
• By identifying potential weaknesses and areas for
improvement, stress testing helps enhance the
system's resilience and ensure its robustness under
challenging scenarios.
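The "find the breaking point" idea above can be sketched as a ramp-up loop. Everything here is invented for illustration: `system_under_stress` and its 50-user capacity stand in for a real system that would be driven by an actual load generator.

```python
CAPACITY = 50  # hypothetical: the system falls over past 50 concurrent users

def system_under_stress(concurrent_users):
    """Stand-in for the real system; fails once capacity is exceeded."""
    if concurrent_users > CAPACITY:
        raise RuntimeError("overloaded: requests start timing out")
    return "ok"

def find_breaking_point(system, step=10, max_users=200):
    """Ramp the load up in steps and report where the system breaks."""
    for users in range(step, max_users + 1, step):
        try:
            system(users)
        except RuntimeError:
            return users  # first load level at which the system failed
    return None  # survived the whole ramp

breaking_point = find_breaking_point(system_under_stress)
```

A real stress test would also observe *how* the system fails and whether it recovers gracefully once the load drops back below the breaking point.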

C. Understanding types of non-functional testing: Soak

Soak Testing:
• Soak testing, also known as endurance testing or
longevity testing, is a type of non-functional testing
that evaluates the system's behaviour and
performance over an extended period under normal
operational conditions.
• The purpose of soak testing is to assess the system's
stability, reliability, and performance when subjected
to sustained usage and continuous operation.

Key aspects of soak testing include:

1. Continuous Operation:
a. Soak testing involves running the system
continuously for an extended period, typically for
several hours or even days.
b. It aims to simulate real-world scenarios where
the system is expected to operate without
interruption.

2. Identifying Performance Degradation:


a. The primary goal of soak testing is to identify
any performance degradation or issues that may
occur over time.
b. It helps detect memory leaks, resource leaks,
slow memory allocation, database connection
issues, or other problems that may arise during
prolonged system usage.

3. Assessing Stability:
a. Soak testing aims to evaluate the system's
stability under sustained usage.
b. It helps identify any potential issues related to
memory leaks, resource exhaustion, or other
stability-related problems that may affect the
system's ability to function reliably over time.

4. Checking for Data Corruption:


a. Soak testing may involve validating the integrity
of data stored or processed by the system during
the extended testing period.
b. This ensures that the system can handle large
volumes of data without data corruption or data
loss.

• The objective of soak testing is to ensure that the


system can handle prolonged usage without any
degradation in performance or stability.
• By subjecting the system to continuous operation,
soak testing helps identify and address any issues that
may arise due to prolonged usage, ensuring that the
system remains reliable and performs optimally over
time.
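The memory-leak detection mentioned in the soak-testing points can be sketched with Python's standard-library `tracemalloc`. The never-evicted `cache` and `process_order` are hypothetical, and a loop of 50,000 calls stands in for hours of continuous operation.

```python
import tracemalloc

cache = []  # hypothetical session cache with no eviction policy (a leak)

def process_order(order_id):
    """Stand-in for a unit of work executed continuously during the soak."""
    cache.append(f"order-{order_id}:processed")  # leaked: grows forever
    return order_id * 2

tracemalloc.start()
for i in range(50_000):  # stands in for hours of sustained usage
    process_order(i)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
# Memory that keeps climbing under a constant workload, rather than
# plateauing, is the classic soak-test signal of a resource leak.
```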

D. Understanding types of non-functional testing: Spike testing

Spike Testing:
• Spike testing is a type of non-functional testing that
aims to assess the system's performance and stability
when subjected to sudden and extreme changes in
workload or user traffic.
• It involves simulating a sudden spike or surge in user
activity to evaluate how the system handles the
increased load and whether it can recover gracefully.
Key aspects of spike testing include:

1. Simulating High Load:


a. Spike testing involves generating a sudden and
significant increase in user activity or workload
to test the system's response.
b. This can be achieved by rapidly increasing the
number of concurrent users, requests, or
transactions sent to the system.

2. Evaluating Performance under Stress:


a. The main objective of spike testing is to evaluate
how the system performs under stress and
whether it can handle the sudden surge in load.
b. It helps identify any performance bottlenecks,
scalability issues, or resource limitations that
may affect the system's ability to handle
increased workload.

3. Monitoring Key Metrics:


a. During spike testing, various performance
metrics are monitored, such as response time,
throughput, CPU and memory usage, and
network latency.
b. By analyzing these metrics, testers can identify
any performance issues or abnormalities that
may occur during the spike in load.

4. System Stability and Resilience:


a. Spike testing also evaluates the system's stability
and resilience under sudden load changes.
b. It helps uncover any issues related to resource
exhaustion, memory leaks, database connection
limits, or other factors that may impact the
system's stability during high-load situations.

• The goal of spike testing is to ensure that the system


can handle sudden and extreme variations in
workload without significant performance
degradation or system failure.
• By subjecting the system to spikes in user activity,
testers can identify any performance bottlenecks,
scalability issues, or other limitations that need to be
addressed to ensure the system's stability and
reliability in real-world scenarios.
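The baseline/spike/recovery pattern above can be illustrated with a toy latency model. The capacity of 100 users and the 5 ms-per-excess-user penalty are invented numbers; a real spike test would measure these phases against the live system.

```python
def response_time_ms(concurrent_users, capacity=100, base_ms=50):
    """Toy model: latency stays flat up to capacity, then degrades."""
    overload = max(0, concurrent_users - capacity)
    return base_ms + 5 * overload

# Drive the three phases of a spike test: normal load, sudden surge,
# then back to normal to check the system recovers gracefully.
phases = [("baseline", 80), ("spike", 400), ("recovery", 80)]
results = {name: response_time_ms(users) for name, users in phases}
```

Comparing `results["recovery"]` against `results["baseline"]` is the recovery check: after the surge passes, response times should return to normal.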

E. Understanding types of non-functional testing: Usability testing

Usability Testing:
• Usability testing is a type of non-functional testing
that focuses on evaluating the user-friendliness and
effectiveness of a system or application.
• It involves testing the system with real users to gather
feedback and assess how easily users can accomplish
their intended tasks.
• The primary goal of usability testing is to identify any
usability issues, improve user experience, and ensure
that the system meets the needs and expectations of
its target audience.
Key aspects of usability testing include:

1. User-Centric Approach:
a. Usability testing puts the user at the center of the
testing process.
b. It involves observing and collecting feedback
from real users as they interact with the system.
c. The focus is on understanding how users
perceive and interact with the system, identifying
any usability issues, and gathering insights to
improve the user experience.

2. User Feedback and Observation:


a. Usability testing involves direct interaction with
users through interviews, surveys, and
observation.
b. Testers may use various methods such as
thinking aloud, task completion, and
retrospective feedback to gather user insights.
c. This feedback helps identify pain points, areas of
confusion, and usability issues that can be
addressed to enhance the overall user
experience.

3. Improving User Experience:


a. The insights gained from usability testing are
used to enhance the user experience by making
design improvements, simplifying user
interfaces, improving navigation, and addressing
usability issues.
b. Usability testing helps ensure that the system is
intuitive, easy to use, and meets the needs of its
target users.

• By conducting usability testing, organizations can


gain valuable insights into how users interact with
their system and make informed decisions to improve
the user experience.
• Usability testing helps identify and address usability
issues early in the development process, resulting in a
more user-friendly and successful product.

F. Understanding types of non-functional testing: Security testing

Security Testing:
• Security testing is a type of non-functional testing that
focuses on evaluating the security aspects of a system
or application.
• It involves assessing the system's ability to protect
sensitive data, prevent unauthorized access, and
withstand potential security threats or attacks.
• The primary goal of security testing is to identify
vulnerabilities, weaknesses, and potential risks to the
system's security and ensure that appropriate
measures are in place to mitigate them.

Key aspects of security testing include:

1. Identification of Security Risks:


a. Security testing aims to identify potential
security risks and vulnerabilities in the system.
b. Testers analyze the system architecture, design,
and implementation to identify any potential
weaknesses or loopholes that can be exploited by
attackers.
c. This may involve conducting penetration testing,
vulnerability scanning, and code review to
uncover security issues.

2. Authentication and Authorization:


a. Security testing assesses the system's
authentication and authorization mechanisms.
b. It verifies that only authorized users can access
the system and that appropriate access controls
are in place.
c. Testers may simulate different scenarios to
ensure that user authentication and
authorization are functioning correctly and that
access privileges are enforced accurately.

3. Data Protection:
a. Security testing evaluates the system's ability to
protect sensitive data.
b. This includes testing encryption mechanisms,
secure transmission of data, storage and retrieval
of data securely, and proper handling of
personally identifiable information (PII) or
sensitive customer data.
c. Testers verify that data is adequately protected
throughout its lifecycle within the system.
4. Vulnerability Assessment:
a. Security testing involves conducting vulnerability
assessments to identify potential security
weaknesses.
b. This may include analyzing the system's network
infrastructure, configuration settings, and
application vulnerabilities.
c. Testers may use automated tools or manual
techniques to identify common vulnerabilities,
such as SQL injection, cross-site scripting (XSS),
or insecure direct object references (IDOR).
• By conducting security testing, organizations can
identify and address potential security risks,
vulnerabilities, and weaknesses in their systems.
• It helps protect sensitive data, maintain user trust,
and ensure compliance with security standards.
Security testing plays a crucial role in ensuring that
systems are robust, secure, and resilient against
potential threats and attacks.
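One narrow security check, guarding a login query against the SQL-injection payloads mentioned above, can be sketched with the standard-library `sqlite3`. The table, the user `alice`, and the `login` function are hypothetical; real security testing covers far more than this single case.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, password):
    # Parameterized query: user input is bound as data,
    # so it can never be interpreted as SQL syntax.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchone()
    return row is not None

# The security test: a classic injection payload must NOT authenticate.
assert login("alice", "s3cret") is True
assert login("alice", "' OR '1'='1") is False
```

Had the query been built by string concatenation, the `' OR '1'='1` payload would rewrite the SQL and bypass the password check, which is exactly what such a test is designed to catch.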

G. Understanding types of non-functional testing: Compatibility testing

Compatibility Testing:
• Compatibility testing is a type of non-functional
testing that focuses on ensuring that a software
application or system is compatible with various
hardware, operating systems, browsers, devices, and
network environments.
• The goal of compatibility testing is to verify that the
application functions correctly and consistently
across different configurations, ensuring a seamless
user experience for all users.

Key aspects of compatibility testing include:

1. Hardware Compatibility:
a. Compatibility testing assesses the application's
compatibility with different hardware
configurations, such as different processors,
memory capacities, graphics cards, and
peripherals.
b. It ensures that the application functions properly
and efficiently on various hardware setups
without any hardware-specific issues or
limitations.

2. Operating System Compatibility:


a. Compatibility testing verifies that the application
works seamlessly across different operating
systems (e.g., Windows, macOS, Linux, Android,
iOS) and different versions of each operating
system.
b. It ensures that the application's features and
functionality are consistent across different
platforms, and there are no compatibility issues
specific to a particular operating system.

3. Browser Compatibility:
a. Compatibility testing focuses on evaluating the
application's behavior and performance across
different web browsers (e.g., Chrome, Firefox,
Safari, Edge, Internet Explorer).
b. It ensures that the application's layout, design,
and functionality are consistent across browsers
and that there are no rendering or scripting
issues that could affect user experience.

4. Device Compatibility:
a. With the proliferation of mobile devices and
tablets, compatibility testing also considers the
application's compatibility with various devices,
screen sizes, and resolutions.
b. It ensures that the application is responsive and
adapts well to different devices, providing an
optimal user experience regardless of the device
being used.

5. Network Compatibility:
a. Compatibility testing verifies that the application
functions correctly under different network
conditions, such as various bandwidths, network
speeds, and network types (wired, wireless,
cellular).
b. It ensures that the application can handle
network-related scenarios gracefully and
performs well under different network
constraints.

6. Database Compatibility:
a. In cases where the application interacts with a
database, compatibility testing ensures that the
application is compatible with different database
management systems (e.g., MySQL, Oracle, SQL
Server) and versions.
b. It verifies that the application can establish
connections, retrieve and manipulate data, and
handle database-specific functionalities correctly.

• By conducting compatibility testing, organizations can


ensure that their software applications are
compatible with a wide range of environments and
configurations.
• This helps in reaching a broader audience, delivering
a consistent user experience, and avoiding
compatibility-related issues that may arise when the
application is used in different settings.
• Compatibility testing plays a crucial role in
maximizing the application's reach and usability
across various platforms and configurations.

C. Understand re-testing

• Re-testing is a type of testing that focuses on verifying that a specific defect or issue reported in the software has been fixed correctly.
• It involves re-executing the test cases that failed
previously due to the identified issue to ensure that
the fix has resolved the problem and has not
introduced any new defects.

1. Purpose:
a. The purpose of re-testing is to validate that the
specific defect or issue reported earlier has been
resolved successfully.
b. It aims to ensure that the fix has effectively
addressed the problem and that the functionality
related to the issue is now working as expected.
2. Scope:
a. Re-testing is typically limited to the areas or
features of the software that were affected by the
identified defect.
b. It focuses on validating the changes made to fix
the issue rather than retesting the entire
application.

3. Test Execution:
a. During re-testing, the test cases that failed
previously due to the reported defect are
executed again.
b. The primary goal is to verify that the test cases
now pass, indicating that the fix has rectified the
issue and the impacted functionality is
functioning correctly.

• Re-testing is an essential part of the defect resolution process.
• It validates that the reported issues have been fixed
correctly and that the affected functionality is now
functioning as intended.
• By performing re-testing, organizations can ensure
the reliability and quality of their software products.

D. Understand regression testing


• Regression testing is a type of software testing that is
performed to verify that changes or enhancements
made to an application do not unintentionally
introduce new defects or regressions in previously
tested functionality.
• It involves re-executing select test cases to ensure that
the existing features of the software are still
functioning correctly after modifications have been
made.
1. Purpose:
a. The main purpose of regression testing is to
ensure that the existing functionality of the
software remains intact and unaffected by recent
changes.
b. It aims to identify any unexpected issues or
regressions that may have been introduced as a
result of modifications to the code, configuration,
or environment.

2. Test Coverage:
a. Regression testing typically focuses on testing
the critical and high-risk areas of the software
that are likely to be impacted by the changes.
b. It may involve a combination of manual and
automated tests, depending on the complexity
and nature of the application.

3. Test Execution:
a. The selected test cases are executed to ensure
that the modified functionality is working as
expected and that no unintended side effects
have occurred in other parts of the software.
b. Both positive and negative scenarios are
considered to cover a wide range of possible
interactions.

4. Test Automation:
a. Regression testing can be time-consuming and
resource-intensive, especially for large and
complex applications.
b. Therefore, test automation is commonly
employed to streamline the process and improve
efficiency.
c. Automated regression tests can be executed
repeatedly and consistently, allowing for faster
identification of any potential regressions.
d. Regression testing should be performed in an
environment that closely resembles the
production environment to ensure accurate
results.

• By conducting regular regression testing, organizations can minimize the risk of introducing
new defects and ensure the stability and quality of
their software over time.
• It provides confidence that existing features continue
to function as expected, even in the presence of
changes and updates.
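A regression suite at its simplest is a fixed set of input/expected pairs re-run after every change, as a guard against unintended side effects. This is a minimal sketch: `apply_discount` and its cases are hypothetical, and real suites are usually managed by a framework such as pytest.

```python
def apply_discount(price, percent):
    """Hypothetical function that recent changes might have broken."""
    return round(price * (1 - percent / 100), 2)

# Previously verified behaviour, frozen as (inputs, expected) pairs.
REGRESSION_SUITE = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((200.0, 50), 100.0),
]

def run_regression():
    """Re-run every case; return the ones that no longer pass."""
    return [(args, expected, apply_discount(*args))
            for args, expected in REGRESSION_SUITE
            if apply_discount(*args) != expected]

failures = run_regression()  # an empty list means no regression detected
```

Any modification to `apply_discount` that changes an already-verified result shows up immediately in `failures`, which is the whole point of keeping the suite and re-running it on every change.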

E. Understand Adhoc testing - buddy testing

• Adhoc testing, also known as buddy testing or exploratory testing, is an informal and unplanned
approach to testing where testers rely on their
experience, domain knowledge, and intuition to
perform testing without predefined test cases or
scripts.
• It involves the tester freely exploring the application,
trying different scenarios, and reporting any issues or
observations they come across.

1. Purpose:
a. Adhoc testing is typically performed to
complement formal testing approaches and to
discover defects or issues that may not be easily
identified through structured test cases.
b. It allows testers to think creatively and exercise
their critical thinking skills to uncover hidden or
unusual defects.

2. Test Coverage:
a. Adhoc testing aims to cover areas that may not
have been thoroughly tested in the formal test
scenarios.
b. Testers may focus on specific features, user
workflows, or functional areas that they consider
important or likely to have issues.

3. Testing Approach:
a. Unlike traditional testing approaches that follow
predefined test cases, adhoc testing is more
flexible and open-ended.
b. Testers have the freedom to explore the
application in a non-linear manner, trying
different inputs, configurations, and interactions
based on their own judgment.

4. Time Constraints:
a. Adhoc testing is usually performed within a
limited timeframe or as an informal part of the
testing process.
b. It may not cover the entire application, but
instead focus on specific areas or aspects that the
testers deem important or relevant.

5. Experience and Expertise:


a. Adhoc testing heavily relies on the experience
and expertise of the testers.
b. Testers with deep domain knowledge and
familiarity with the application are more likely to
uncover potential defects or areas of concern.

• Adhoc testing or buddy testing can be an effective way to supplement structured testing approaches by
providing a fresh perspective and uncovering
unforeseen defects.
• However, it should not replace formal testing
methodologies and should be used as a
complementary technique to ensure comprehensive
test coverage.

F. Understand pairwise testing

• Pairwise testing, also known as all-pairs testing, is a combinatorial testing technique that helps identify
defects or issues in software systems by testing all
possible combinations of input parameters or
variables in pairs.
• It aims to maximize test coverage while minimizing
the number of test cases needed.

1. Purpose:
a. Pairwise testing is used to efficiently test
different combinations of input parameters or
variables in a software system.
b. It is based on the observation that most defects
or issues are caused by the interactions or
combinations of a few parameters rather than
the individual parameters themselves.

2. Test Coverage:
a. Pairwise testing ensures that all possible
combinations of input parameters are tested at
least once, covering a significant portion of the
input space.
b. It helps identify defects that may arise due to
specific combinations of parameters.

3. Combinatorial Approach:
a. Pairwise testing uses a combinatorial algorithm
to generate the minimum set of test cases that
covers all possible pairs of input parameters.
b. It selects a representative value for each
parameter and systematically combines them to
create the test cases.
• Pairwise testing is particularly useful when there are
a large number of input parameters or when the
interaction between parameters is critical.
• It helps ensure a high level of test coverage while
keeping the number of test cases manageable.
• However, it should be used in conjunction with other
testing techniques to achieve comprehensive
coverage.
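The combinatorial step can be sketched with a simple greedy generator: keep picking the candidate test case that covers the most not-yet-covered value pairs. The browser/OS/network parameter space is invented for illustration, and real projects typically use dedicated tools (e.g. Microsoft's PICT) rather than a hand-rolled generator.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs generator over a dict of parameter -> values."""
    names = list(params)
    # Every (param_i, value, param_j, value) pair that must be covered.
    uncovered = set()
    for (i, a), (j, b) in combinations(list(enumerate(names)), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    candidates = list(product(*(params[n] for n in names)))
    suite = []
    while uncovered:
        # Pick the full combination that knocks out the most pairs.
        best = max(candidates, key=lambda case: sum(
            (i, case[i], j, case[j]) in uncovered
            for i, j in combinations(range(len(names)), 2)))
        suite.append(best)
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

# Hypothetical configuration space: 2 x 3 x 2 = 12 exhaustive combinations.
params = {"browser": ["Chrome", "Firefox"],
          "os": ["Windows", "macOS", "Linux"],
          "network": ["wifi", "cellular"]}
suite = pairwise_suite(params)
```

For this space the suite covers every value pair in well under the 12 exhaustive combinations, which is the test-case saving the section describes.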

G. Understand exploratory testing

• Exploratory testing is a testing approach that focuses on simultaneous learning, design, and execution of test cases.
• It is an unscripted and ad-hoc testing technique
where testers explore the software system
dynamically, without predefined test cases or detailed
test scripts.
• The primary goal of exploratory testing is to uncover
defects and gain insights into the behavior, usability,
and overall quality of the system.

1. Approach:
a. Exploratory testing is a hands-on and iterative
approach where testers actively explore the
software system, interact with it, and observe its
behavior.
b. Testers use their domain knowledge, intuition,
and experience to design and execute test cases
on the fly.

2. Learning and Adaptation:


a. During exploratory testing, testers learn about
the system by exploring different features,
functionalities, and user workflows.
b. They adapt their testing approach based on their
findings, insights, and evolving understanding of
the system.

3. Test Design:
a. Exploratory testing does not rely on pre-scripted
or predefined test cases.
b. Testers design test cases on the go based on their
exploration, observations, and the information
they gather during testing.

• Exploratory testing is valuable in situations where there is limited documentation, evolving
requirements, or complex and unfamiliar systems.
• It complements scripted testing approaches by
providing a fresh perspective and uncovering defects
that may not be caught through traditional test cases.
• Exploratory testing encourages critical thinking,
adaptability, and creativity in testers, making it an
effective technique for finding defects and enhancing
overall software quality.

H. Understand Mutation testing

• Mutation testing is a type of software testing technique that focuses on evaluating the effectiveness
of a test suite by intentionally introducing small
changes, known as mutations, into the source code.
• The objective of mutation testing is to determine if
the test suite can detect these artificial defects or
mutations, thus assessing the thoroughness and
quality of the tests.

1. Mutation Operators:
a. Mutation testing involves the use of mutation
operators, which are specific rules or algorithms
that define how mutations are introduced into
the code.
b. These operators modify the code by making
small changes such as changing an operator,
removing a statement, or altering a condition.

2. Mutants:
a. The mutated versions of the code, known as
mutants, are created by applying the mutation
operators.
b. Each mutant represents a potential defect or
fault in the code.
c. The mutations are typically introduced in a
systematic and controlled manner, targeting
specific areas of the code.

• Mutation testing is a challenging and resource-intensive technique, as it requires the generation of a
large number of mutants and the execution of the test
suite against each mutant.
• It is often used as an advanced technique in software
testing to assess the adequacy of the test suite and
identify areas for improvement.
• Mutation testing helps ensure that the test suite is
capable of identifying different types of defects,
thereby enhancing the overall reliability and quality
of the software.
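The mutation-operator and mutant ideas above can be shown in miniature. Here the mutants are written by hand for clarity (real tools such as mutmut for Python or PIT for Java generate them automatically), and `max2` plus its test cases are hypothetical.

```python
def max2(a, b):
    """Function under test."""
    return a if a >= b else b

# Existing test suite: (inputs, expected) pairs.
tests = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

# Hand-written mutants, each applying one mutation operator:
# swap the comparison operator in the condition.
mutants = [
    lambda a, b: a if a > b else b,   # ">=" -> ">"
    lambda a, b: a if a <= b else b,  # ">=" -> "<="
    lambda a, b: a if a < b else b,   # ">=" -> "<"
]

def killed(mutant):
    """A mutant is 'killed' when at least one test case fails on it."""
    return any(mutant(*args) != expected for args, expected in tests)

mutation_score = sum(killed(m) for m in mutants) / len(mutants)
```

The score here is 2/3: the ">" mutant survives because it returns the same value as `max2` for every possible input (when `a == b`, both branches yield the maximum). Such "equivalent mutants" can never be killed by any test suite, which is one of the practical difficulties of mutation testing.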

I. Understand monkey testing

• Monkey testing, also known as random testing or monkey test, is a type of software testing technique
where the system or application is subjected to
random and unpredictable inputs to uncover
potential defects or unexpected behaviour.
• It is a form of exploratory testing that aims to test the
robustness and stability of the software by simulating
real-world scenarios.

1. Random Inputs:
a. In monkey testing, the tester or a tool generates
random inputs and feeds them into the system.
b. These inputs can include random keystrokes,
mouse clicks, gestures, or any other form of user
interactions that the system is expected to
handle.

2. Unpredictable Behavior:
a. The purpose of monkey testing is to observe how
the system responds to unexpected or
unpredictable inputs.
b. The tester does not follow any predefined test
cases or scenarios but rather explores the system
in an unstructured and ad-hoc manner.

3. Stress and Stability Testing:


a. Monkey testing is particularly useful for stress
testing and evaluating the stability of the system.
b. By subjecting the system to a barrage of random
inputs, it helps uncover potential crashes,
freezes, or other unexpected behaviors that may
arise under heavy usage or unusual
circumstances.

4. Automation:
a. Monkey testing can be performed manually by
human testers who randomly interact with the
system, or it can be automated using specialized
tools that simulate random inputs.
b. Automation allows for more extensive and
repetitive testing, making it easier to discover
potential issues.

5. Risk of Data Loss:


a. Due to the nature of random inputs, monkey
testing may pose a risk of data loss or unwanted
modifications, especially when performed on live
systems.
b. Therefore, it is essential to perform monkey
testing on isolated test environments or with
appropriate safeguards in place.

• Monkey testing is not intended to replace formal testing methods but rather to complement them by
uncovering issues that may not be found through
traditional test scenarios.
• It is a useful technique for stress testing, identifying
corner cases, and evaluating the overall stability and
robustness of the system.
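Feeding random, unpredictable input to a system and counting how often it falls over, as described above, can be sketched in a few lines. `fragile_parse` is a hypothetical function under test; the fixed seed makes the monkey run reproducible, which real monkey-testing tools also do so that crashes can be replayed.

```python
import random
import string

def fragile_parse(text):
    """Hypothetical function under test: expects input like 'key=42'."""
    key, value = text.split("=")
    return {key: int(value)}

random.seed(42)  # fixed seed: the random run can be replayed exactly
alphabet = string.ascii_letters + string.digits + "=,;"
crashes = 0
for _ in range(1000):
    # Generate a random string of 0-12 characters as "monkey" input.
    junk = "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, 12)))
    try:
        fragile_parse(junk)
    except ValueError:
        crashes += 1  # unpacking or int() failed on unexpected input
```

A high crash count tells the team the parser needs defensive handling of malformed input before it meets real users.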

4. Testing techniques

Information

A. Black box techniques: Definition
• Black box techniques, also known as behavioural
testing or functional testing, are software testing
techniques where the internal structure,
implementation details, and code logic of the system
under test are not known or considered.
• Instead, the focus is solely on the inputs and outputs
of the system, treating it as a "black box."
• The goal is to validate the functionality and behaviour
of the system based on its specified requirements and
expected outputs.
• In black box testing, the tester is not concerned with
how the system achieves the desired outputs.
• They do not have access to the source code or
knowledge of the system's internal workings.
• The testing is based on the system's external
interfaces, inputs, and expected outcomes.
• It is primarily focused on validating the system's
functionality, usability, and compliance with the
requirements.

Black box techniques include various testing methods such as:

1. Equivalence Partitioning:
a. This technique divides the input domain into
groups or partitions and selects representative
test cases from each partition to ensure that
different input conditions are covered.

2. Boundary Value Analysis:


a. It focuses on testing the system's behaviour at
the boundaries of input values, as these are often
where errors are more likely to occur.

3. Decision Table Testing:


a. It involves creating a table that maps different
combinations of inputs and their corresponding
expected outputs, making it easier to identify and
test different scenarios.

4. State Transition Testing:


a. It is used to test systems that have different
states and transitions between those states.
b. Test cases are designed to cover various state
transitions and verify the system's behaviour.

5. Error Guessing:
a. This technique relies on the tester's experience
and intuition to identify potential areas of failure
and design test cases based on likely error-prone
scenarios.

6. Exploratory Testing (use-case-based testing):


a. It involves dynamically exploring the system,
executing tests, and simultaneously learning
about the system's behaviour, finding defects,
and identifying potential areas of improvement.

• These black box techniques help ensure that the system functions correctly, meets the specified
requirements, and behaves as expected from an end-
user perspective.
• By focusing on the system's inputs and outputs
without considering the internal implementation
details, black box testing provides an objective
evaluation of the system's functionality and helps
uncover defects or discrepancies that might occur
during actual usage.

A. Black box techniques: Equivalence partitioning

• Equivalence partitioning is a black box testing technique used to divide the input domain of a system
into groups or partitions, where each partition
represents a set of equivalent inputs that should
exhibit similar behaviour from the system.
• The goal of equivalence partitioning is to reduce the
number of test cases while still ensuring adequate
test coverage.
• In equivalence partitioning, the input values are
classified into different equivalence classes based on
their expected behaviour.
• Test cases are then designed to represent each
equivalence class, rather than testing every possible
input value individually.
• By testing a representative value from each
equivalence class, it is assumed that the behaviour of
other values within the same class will be similar.

The process of equivalence partitioning typically involves the following steps:

1. Identify the input variables:


a. Determine the inputs that are relevant to the
system or feature being tested.
b. These could be user inputs, data inputs, or any
other input that affects the system's behaviour.

2. Define equivalence classes:


a. Divide the range of input values for each variable
into groups that exhibit similar behaviour.
b. The goal is to identify classes that are likely to
result in the same output or trigger the same set
of actions from the system.
c. For example, if the input variable is age, the
equivalence classes could be "underage" (0-17),
"adult" (18-64), and "senior" (65 and above).

3. Select representative test cases:


a. From each equivalence class, choose a
representative test case that will cover the
expected behavior of that class.
b. The test cases should be designed to validate
both valid and invalid inputs within the class.
c. For example, for the "adult" equivalence class, a
representative test case could be an age of 25.

4. Execute the test cases:
a. Run the selected test cases and observe the
behavior of the system.
b. The focus is on verifying that the system
responds consistently within each equivalence
class and handles different inputs correctly.

• The advantages of equivalence partitioning include
reducing the number of test cases needed while still
providing reasonable coverage, as well as identifying
potential defects or errors that might occur within a
particular equivalence class.
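
The partitioning above can be sketched in code. This is a minimal illustration, assuming a hypothetical `classify_age` function as the system under test; the class names and ranges come from the age example in the steps.

```python
# Hypothetical system under test: classifies an age into the
# equivalence classes from the example (underage, adult, senior),
# plus an invalid class for negative input.
def classify_age(age):
    if age < 0:
        return "invalid"
    if age <= 17:
        return "underage"
    if age <= 64:
        return "adult"
    return "senior"

# One representative value per partition stands in for the class.
representatives = {
    "invalid": -5,
    "underage": 10,
    "adult": 25,   # the representative suggested in the text
    "senior": 70,
}

for expected, value in representatives.items():
    assert classify_age(value) == expected
```

Testing one representative per class (for example 25 for "adult") stands in for every other value in that class, which is what keeps the test case count small.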

B. Black box techniques: BVA

• Boundary Value Analysis (BVA) is a black box testing
technique that focuses on testing the boundaries or
limits of input values.
• It is based on the assumption that errors are more
likely to occur at the boundaries of input ranges
rather than within the range itself.
• By testing inputs at the lower and upper boundaries,
as well as just inside and outside those boundaries,
BVA aims to uncover defects related to boundary
conditions.

The process of Boundary Value Analysis typically involves
the following steps:

1. Identify the input variables:
a. Determine the input variables that have defined
ranges or limits.
b. These could be numeric values, strings, dates, or
any other input that has specific boundaries.

2. Determine the boundary values:
a. For each input variable, identify the lower and
upper boundaries.
b. These boundaries represent the minimum and
maximum valid values for the input.
c. Additionally, determine the values just above and
below these boundaries, known as the invalid
values.

3. Design test cases:
a. Create test cases that cover each boundary value
and a few values just inside and outside the
boundaries.
b. The test cases should include both valid and
invalid inputs to verify the system's behavior at
the edges of the input range.

4. Execute the test cases:
a. Run the test cases and observe the system's
response.
b. Pay close attention to how the system handles
inputs at or near the boundaries.
c. Verify that the system behaves as expected and
handles the different input ranges correctly.

• The main objective of BVA is to ensure that the system
handles boundary conditions properly.
• This is because errors are more likely to occur at the
edges of input ranges due to boundary-related
calculations, comparisons, or validations.
• By testing boundary values and their surrounding
values, BVA helps identify potential defects or errors
in the system.
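
The steps above can be sketched as a small helper that generates the standard boundary set for an inclusive numeric range: the values just below, at, and just above each boundary. The 18-64 range is an assumed example (the "adult" class from earlier), and the function name is hypothetical.

```python
# Generate the classic BVA inputs for an inclusive range [low, high]:
# just below, at, and just above each boundary.
def boundary_values(low, high, step=1):
    return [low - step, low, low + step,
            high - step, high, high + step]

print(boundary_values(18, 64))  # [17, 18, 19, 63, 64, 65]
```

Here 17 and 65 are the invalid values just outside the range, while 18, 19, 63, and 64 exercise the valid edges.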

C. Black box techniques: Decision Tables

• Decision Tables is a black box testing technique used
to capture complex business logic and the
corresponding inputs and outputs.
• It helps testers systematically identify and test
various combinations of inputs and conditions to
ensure that the software behaves as expected in
different scenarios.
• In Decision Tables, the logic of a system is represented
in a tabular format, where each row corresponds to a
specific combination of inputs and conditions, and
each column represents a specific action or output.
• The inputs and conditions are typically represented
as variables or factors, while the actions or outputs
are represented as possible outcomes or decisions.
The process of creating and using Decision Tables involves
the following steps:

1. Identify the inputs and conditions:
a. Determine the various inputs and conditions that
influence the behaviour of the system.
b. These can include user inputs, system states,
external factors, or any other factors that impact
the logic of the software.

2. Define the possible values:
a. For each input and condition, identify the
possible values or states that they can take.
b. This helps in creating a comprehensive Decision
Table that covers all possible scenarios.

3. Create the Decision Table:
a. Construct the Decision Table by mapping the
inputs and conditions to the corresponding
actions or outputs.
b. Each row in the table represents a unique
combination of inputs and conditions, while each
column represents a specific action or output.

4. Define the rules:
a. For each combination of inputs and conditions,
define the expected action or output based on the
business rules and requirements.
b. These rules are typically represented as logical
expressions or statements.

5. Test the Decision Table:
a. Use the Decision Table to guide the testing
process by selecting specific combinations of
inputs and conditions to verify the expected
actions or outputs.
b. Test cases can be derived from the Decision Table
to ensure that all possible scenarios are covered.

• The advantages of using Decision Tables in testing
include their ability to capture complex logic in a
structured and organized manner, their visual
representation that makes it easier to understand and
review the test scenarios, and their ability to identify
missing or redundant test cases.
• Decision Tables help ensure comprehensive coverage
of different combinations of inputs and conditions,
leading to more effective and efficient testing.
• However, it's important to note that Decision Tables
should be used alongside other testing techniques
and not as a standalone method.
• They are particularly useful when dealing with
complex business rules and multiple inputs and
conditions that interact with each other.
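
A decision table can be represented directly in code. The login example below is hypothetical (not from the text); each dictionary entry is one rule of the table, mapping a combination of condition values to the expected action.

```python
# Each entry is one rule: a combination of condition values
# (valid_username, valid_password) mapped to the expected action.
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show username error",
    (False, False): "show login error",
}

def decide(valid_username, valid_password):
    return decision_table[(valid_username, valid_password)]

# Deriving one test case per rule guarantees that every
# combination of conditions is covered.
for conditions, expected in decision_table.items():
    assert decide(*conditions) == expected
```

With two boolean conditions there are exactly 2 x 2 = 4 rules, so the table also makes missing or redundant combinations easy to spot.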

D. Black box techniques: State Transition Diagrams

• State Transition Diagrams can be used to analyse and
test the behaviour of a system by identifying and
validating the expected state transitions based on
different inputs and events.
• Test cases can be derived from the diagram to ensure
that all possible scenarios and transitions are covered.

Example:
Let's consider an example of a simple traffic light system
with three states: Green, Yellow, and Red.

Initial State: Green

Transitions:
Green -> Yellow (Triggered by a timer)
Yellow -> Red (Triggered by a timer)
Red -> Green (Triggered by a timer)

• In this example, the system starts in the initial state of
Green. After a certain time, it transitions to the Yellow
state.
• Then, after another time interval, it transitions to the
Red state. Finally, after a specific duration, it
transitions back to the Green state.
• Test scenarios can be designed to validate the correct
behaviour of the traffic light system, such as ensuring
that the transitions occur at the expected time
intervals and that the system remains in each state for
the appropriate duration.
• State Transition Diagrams are beneficial for
understanding and testing systems with complex
behaviour and multiple states.
• They provide a visual representation that aids in
identifying potential issues or missing transitions and
can help ensure comprehensive testing coverage.
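
The traffic light example above can be sketched as a minimal state machine, with one timer-triggered transition per step:

```python
# Transition map mirroring the diagram: Green -> Yellow -> Red -> Green,
# each transition triggered by a timer event.
TRANSITIONS = {"Green": "Yellow", "Yellow": "Red", "Red": "Green"}

def next_state(state):
    return TRANSITIONS[state]

# A test walks every transition and checks that the cycle closes.
state = "Green"
visited = [state]
for _ in range(3):
    state = next_state(state)
    visited.append(state)

assert visited == ["Green", "Yellow", "Red", "Green"]
```

A test case derived this way exercises every transition in the diagram exactly once; an attempted transition missing from the map would surface immediately as a `KeyError`.
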
B. Dynamic techniques - White box
techniques: Definition

• White box testing is a dynamic software testing
technique that focuses on the internal structure, code,
and logic of a software application.
• It involves examining the internal components, data
flow, and control flow of the software to ensure that it
functions correctly according to the specified
requirements and design.
• In white box testing, the tester has knowledge of the
internal workings of the software, including the
source code, algorithms, and system architecture.
• This allows them to design test cases that target
specific paths, conditions, and functions within the
code.
• The goal is to achieve thorough coverage of the code
to uncover any potential defects or vulnerabilities.

Some common white box testing techniques include:

1. Statement Coverage:
a. This technique aims to test every individual
statement in the code, ensuring that each line of
code is executed at least once during testing.

2. Branch Coverage:
a. Branch coverage focuses on testing all possible
branches or decision points within the code.
b. It ensures that both true and false conditions of
if-else statements or switch cases are tested.
3. Path Coverage:
a. Path coverage involves testing all possible
execution paths through the code, ensuring that
every possible combination of statements and
branches is exercised.

• White box testing is typically performed by
developers or testers who have access to the source
code.
• It complements other testing techniques, such as
black box testing, by providing insights into the
internal workings of the software and verifying its
correctness at a detailed level.

A. Statement coverage

• Statement coverage is a white box testing technique
that aims to ensure that each statement in a program
is executed at least once during testing.
• It is a metric used to measure the degree to which the
source code has been exercised by the test cases.
• The goal of statement coverage is to verify that every
line of code has been executed and to identify any
dead code that may not be reachable or executed.
• By achieving high statement coverage, developers and
testers can gain confidence that the code has been
thoroughly tested and that potential errors or bugs
have been identified.
• By systematically going through the code and
executing each statement, testers can ensure that all
parts of the code are exercised and potential issues
are uncovered.
• Statement coverage is typically expressed as a
percentage, indicating the proportion of statements
that have been executed during testing.
• For example, if a program has 100 statements and 80
of them have been executed, the statement coverage
would be 80%.
• While statement coverage is a useful metric, it does
not guarantee that all possible scenarios and
conditions have been tested. It only ensures that each
statement has been executed at least once.
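
As a rough illustration of how statement coverage is measured, the sketch below uses Python's `sys.settrace` hook (the same mechanism real Python coverage tools build on) to record which lines of a small hypothetical function execute during a test run:

```python
import sys

executed = set()  # line numbers of the function that actually ran

def tracer(frame, event, arg):
    # Record only 'line' events inside the function under test.
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno)
    return tracer

def absolute(x):      # hypothetical function under test
    if x < 0:
        x = -x        # only executes for negative input
    return x

sys.settrace(tracer)
absolute(5)                   # positive input: `x = -x` never runs
covered_before = len(executed)
absolute(-5)                  # negative input reaches the remaining line
covered_after = len(executed)
sys.settrace(None)

assert covered_after > covered_before  # the added test raised coverage
```

With only the first call, the statement `x = -x` is never executed; adding the negative-input test is exactly the kind of gap-filling step described in the sample problem later in this section.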

B. Decision coverage

• Decision coverage, also known as branch coverage,
is a white box testing technique that focuses on
testing the logical decisions or branches within a
program.
• It aims to ensure that each decision point in the
code has been exercised by the test cases.
• In decision coverage, the goal is to test all possible
outcomes of a decision or branch, including both
true and false conditions.
• The purpose is to verify that all possible decision
paths have been executed and that potential errors
or bugs in the decision logic are identified.
• By testing all possible decision outcomes, decision
coverage provides a higher level of confidence in
the correctness of the code.
• It ensures that all decision branches have been
taken and that potential issues, such as missing or
incorrect conditions, are identified.
• Decision coverage is typically measured as a
percentage, indicating the proportion of decision
outcomes that have been executed during testing.
• For example, if a program contains five decisions
with two possible outcomes each (ten outcomes in
total), and eight of those outcomes have been
executed, the decision coverage would be 80%.
• While decision coverage is a valuable metric, it does
not guarantee that all possible combinations of
decisions have been tested.
• It only ensures that each decision outcome has
been executed at least once.
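
A minimal sketch of decision coverage, assuming a hypothetical `is_adult` function with a single decision point; each outcome that a test exercises is recorded, and coverage is the fraction of outcomes hit:

```python
outcomes_hit = set()  # which outcomes of the decision have been tested

def is_adult(age):    # hypothetical function with one decision point
    if age >= 18:
        outcomes_hit.add("true")
        return True
    outcomes_hit.add("false")
    return False

is_adult(25)                          # exercises the true outcome only
print(len(outcomes_hit) / 2 * 100)    # 50.0 -- half the outcomes hit

is_adult(10)                          # now the false outcome runs too
print(len(outcomes_hit) / 2 * 100)    # 100.0
```

Note that a single test (`is_adult(25)`) already gives 100% statement coverage of the true branch's lines, but only 50% decision coverage, which is why decision coverage is the stronger criterion.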

C. Path Coverage:
• Path coverage is a white box testing technique that
aims to test all possible paths or sequences of
statements within a program.
• It ensures that every possible execution path,
including all loops, branches, and conditions, is
exercised during testing.
• In path coverage, the goal is to create test cases that
traverse each unique path through the program.
• A path refers to a specific sequence of statements that
are executed during the program's execution.
• It includes both the main control flow and any
alternative or exceptional paths that may occur.
• The purpose of path coverage is to uncover errors or
bugs that may occur due to specific combinations of
statements or control flow paths.
• By testing all possible paths, it helps ensure that the
program behaves as expected under different
scenarios and conditions.
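
A short illustration: with two independent if-statements there are 2 x 2 = 4 execution paths, so full path coverage needs four tests even though two well-chosen tests can already reach 100% statement and branch coverage. The `shipping_cost` function is hypothetical.

```python
def shipping_cost(weight, express):   # hypothetical function under test
    cost = 5
    if weight > 10:                   # decision 1
        cost += 3
    if express:                       # decision 2
        cost *= 2
    return cost

# One test per path (decision 1 outcome, decision 2 outcome):
assert shipping_cost(5, False) == 5     # false, false
assert shipping_cost(15, False) == 8    # true,  false
assert shipping_cost(5, True) == 10     # false, true
assert shipping_cost(15, True) == 16    # true,  true
```

The number of paths grows multiplicatively with each independent decision, which is why full path coverage is often impractical for large programs and is usually applied selectively.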

The advantages of white box testing include:

1. Thorough coverage:
a. White box testing allows for comprehensive
coverage of the code, ensuring that all paths,
conditions, and statements are tested.

2. Early defect detection:
a. By focusing on the internal structure and logic,
white box testing can uncover defects early in the
development cycle, reducing the cost and effort
of fixing them later.

3. Increased code quality:
a. White box testing helps improve the quality of
the code by identifying areas for improvement,
such as code optimization, error handling, or
boundary value testing.

4. Validation of internal logic:
a. White box testing verifies that the internal logic
and algorithms of the software are implemented
correctly, ensuring that the application functions
as intended.
5. Efficient debugging:
a. When defects are found during white box testing,
the tester has access to the source code, making
it easier to identify the root cause of the issue
and debug it effectively.

• It's important to note that white box testing requires a
deep understanding of programming languages,
software architecture, and coding practices.
• Testers or developers proficient in programming are
typically involved in conducting white box testing to
ensure its effectiveness and accuracy.

A. Coverage tools and criteria for selection of coverage
methods and sample problem solving

• Coverage tools are software tools that help measure
the coverage achieved by the test cases in terms of the
code or program elements executed.
• They provide insights into which parts of the code
have been exercised during testing and which parts
have not.
• The selection of coverage methods and tools depends
on various factors, such as the programming
language, the complexity of the code, and the specific
requirements of the project.

When choosing coverage methods and tools, consider the
following criteria:

1. Coverage Metrics:
a. Different coverage metrics measure different
aspects of code coverage, such as statement
coverage, branch coverage, condition coverage,
and path coverage.
b. Evaluate which metrics are most relevant for
your project and choose tools that support those
metrics.

2. Integration with Testing Frameworks:
a. Ensure that the coverage tools can seamlessly
integrate with your testing framework.
b. This allows for automatic tracking of coverage
during test execution and simplifies the reporting
process.

3. Code Instrumentation:
a. Some coverage tools require the code to be
instrumented with additional statements or
annotations to track coverage.
b. Consider whether you are comfortable with
modifying the code and if the instrumentation
process is straightforward.

4. Reporting and Visualization:
a. Evaluate the reporting capabilities of the
coverage tools.
b. Look for tools that provide clear and concise
reports, visualizations, and metrics to help you
understand the coverage results easily.

5. Compatibility and Support:
a. Ensure that the coverage tools are compatible
with your development environment and
programming language.
b. Also, consider the support and documentation
provided by the tool vendors to assist you in case
of any issues or questions.

6. Performance Impact:
a. Coverage tools can sometimes introduce
overhead and impact the performance of the
tested application.
b. Consider the potential impact on the execution
time and resource usage of your tests.

As for sample problem-solving using coverage methods,
let's consider a simple scenario:

• Suppose you are developing a calculator application
and want to test the addition functionality.
• You have written a test suite with multiple test cases,
each covering different scenarios.
• To ensure adequate coverage, you decide to use
statement coverage as a coverage criterion.
• You run your test suite with a coverage tool that
supports statement coverage.
• The tool instruments your code and tracks which
statements are executed during test execution.
• After running the tests, the coverage tool generates a
report indicating the statement coverage achieved.
• Upon analysing the report, you find that one of the if-
else statements in your addition function was not
covered by any of the test cases.
• This indicates that the particular condition was not
tested, and there is a potential gap in your test
coverage.
• To improve coverage, you modify your test suite to
include additional test cases that cover that specific
condition.
• By rerunning the tests and analyzing the coverage
report again, you can verify that the newly added test
cases have increased the statement coverage,
ensuring that all code paths are tested.
• This process of using coverage tools and criteria helps
you identify areas of your code that need additional
testing and ensures that your test suite provides
thorough coverage.
• It helps uncover potential bugs or issues that might
otherwise go unnoticed, improving the overall quality
of your software.

2. McCabe's cyclomatic complexity

• McCabe's Cyclomatic Complexity is a metric used in
software testing to measure the complexity of a
program's control flow.
• It provides a quantitative measure of the number of
linearly independent paths through the code.
• The higher the cyclomatic complexity, the more
complex the code and the greater the likelihood of
defects.
• Cyclomatic complexity is based on the control flow
graph of a program, which represents the possible
paths that can be taken during its execution.
• It is calculated using the following formula:

V(G) = E - N + 2P

Where:

i. V(G) represents the cyclomatic complexity of the
code.
ii. E is the number of edges in the control flow graph.
iii. N is the number of nodes in the control flow graph.
iv. P is the number of connected components (entry
points) in the control flow graph.

• The cyclomatic complexity value provides insight into
the number of independent paths and the level of
testing required to achieve full coverage.
• It helps identify areas of the code that may be more
prone to errors or difficult to maintain.
• A higher cyclomatic complexity suggests that the code
has more decision points, loops, and branching,
making it more challenging to test and potentially
increasing the likelihood of defects.
• On the other hand, a lower cyclomatic complexity
indicates simpler code with fewer paths, which may
be easier to understand, test, and maintain.
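
The formula can be applied directly. For an assumed control flow graph of a single if-else statement (4 nodes: decision, true branch, false branch, join; 4 edges; 1 connected component), V(G) = 4 - 4 + 2*1 = 2, i.e. two independent paths:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# A single if-else: 4 nodes, 4 edges, one connected component.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```

The result matches intuition: an if-else has exactly two independent paths (the true branch and the false branch), so two test cases are needed for full path coverage.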

By analyzing the cyclomatic complexity of a program,
developers and testers can:

Identify complex areas:
a. High cyclomatic complexity values indicate areas
of the code that may require additional attention
and thorough testing.
b. These areas are likely to have more conditional
statements, loops, and nested structures.

Assess testing coverage:
a. The cyclomatic complexity metric helps assess
the adequacy of testing.
b. Aim for test coverage that exercises all
independent paths in the code to ensure
comprehensive testing.

Improve code quality:
a. High cyclomatic complexity values may indicate
the need for refactoring or simplification of the
code.
b. Reducing complexity can improve readability,
maintainability, and overall code quality.

Estimate testing effort:
a. Cyclomatic complexity can assist in estimating
the effort required for testing by considering the
number of independent paths that need to be
covered.

• To calculate the cyclomatic complexity manually, you
can create a control flow graph for your code and
count the number of edges, nodes, and connected
components.
• Alternatively, there are static analysis tools and
plugins available that can automatically calculate
cyclomatic complexity for your codebase.
• By monitoring and managing the cyclomatic
complexity of your code, you can improve its quality,
maintainability, and testability, leading to more
reliable software systems.

5. Testing process and test case writing

Information

A. Testing as a process: STLC - Test Strategy

• The Test Strategy is a high-level document that
outlines the approach and guidelines for testing a
software system within the Software Testing Life
Cycle (STLC).
• It provides an overview of the testing objectives,
scope, test levels, test types, and test environments.
• The Test Strategy sets the direction for the entire
testing process and ensures that the testing activities
align with the project goals.

The Test Strategy document typically includes the
following components:

1. Objective:
a. It defines the main goal or purpose of the testing
activities.
b. This can include ensuring the software meets the
specified requirements, identifying defects,
validating functionality, or achieving specific
quality goals.

2. Scope:
a. It outlines the boundaries or extent of the testing.
b. This includes the features or modules to be
tested and any specific areas that will be
excluded from testing.

3. Test Levels:
a. It specifies the different levels of testing that will
be performed, such as unit testing, integration
testing, system testing, and acceptance testing.
b. Each level has its own objectives and focus areas.

4. Test Types:
a. It identifies the types of testing to be conducted,
such as functional testing, performance testing,
security testing, usability testing, etc.
b. Each test type focuses on specific aspects of the
software system.

5. Test Techniques:
a. It outlines the approaches or methodologies to
be used for test design and execution.
b. This can include black-box testing, white-box
testing, or a combination of both.
c. The choice of techniques depends on the project
requirements and the nature of the application.
6. Test Environment:
a. It describes the hardware, software, and network
setup required for testing.
b. This includes the configuration of test machines,
databases, servers, and any specific tools or
technologies needed.
c. The test environment should closely resemble
the production environment.

7. Test Data:
a. It defines the data sets and scenarios to be used
during testing.
b. This includes both positive and negative test
cases that cover various use cases and edge
cases.
c. The test data should be realistic and
representative of real-world scenarios.

8. Test Schedule:
a. It provides a timeline or schedule for the
different testing activities.
b. This includes milestones, deadlines, and any
dependencies on other project activities.
c. The test schedule should be aligned with the
overall project timeline.

9. Roles and Responsibilities:
a. It outlines the roles and responsibilities of the
testing team members, stakeholders, and other
related parties involved in the testing process.
b. This ensures clear communication and
accountability.

10. Risks and Mitigation Strategies:
a. It identifies potential risks and challenges that
may impact the testing process and outlines
mitigation strategies to address them.
b. This helps in proactive risk management.

Example:

• Let's consider an example of a web application
testing.
• The Test Strategy document for this application might
include the following information:

1. Objective:
a. To ensure the web application meets the
functional requirements, performs well under
different load conditions, and provides a user-
friendly experience.

2. Scope:
a. The testing will cover all the modules and
features of the web application, including user
registration, login, product browsing, shopping
cart functionality, and payment processing.
b. The scope does not include testing third-party
integrations.

3. Test Levels:
a. The testing will be performed at the unit,
integration, system, and acceptance levels.

4. Test Types:
a. The testing will include functional testing,
usability testing, performance testing, and
security testing.

5. Test Techniques:
a. The testing will primarily focus on black-box
testing techniques, using test cases derived from
requirements and user stories.
b. Some white-box testing techniques may be used
for unit testing.

6. Test Environment:
a. The testing will be conducted in a dedicated test
environment consisting of multiple machines
with various browsers and operating systems.
b. The application will be hosted on a test server
with a database backend.

7. Test Data:
a. The test data will include sample user accounts,
product data, and test scenarios covering
different use cases such as successful login,
invalid inputs, and edge cases.

8. Test Schedule:
a. The testing activities will be aligned with the
development sprints, with each sprint having a
specific testing phase.
b. The overall testing effort is expected to be
completed within a timeframe of four weeks.

9. Roles and Responsibilities:
a. The testing team will include testers, a test lead,
and a test manager.
b. The development team will collaborate closely
with the testers to address any identified issues.

10. Risks and Mitigation Strategies:
a. Potential risks such as tight timelines, limited
resources, and changes in requirements will be
identified. Mitigation strategies may include
prioritizing testing activities, conducting risk-
based testing, and maintaining open
communication with stakeholders.

• The Test Strategy document serves as a guiding
document for the testing team, ensuring that testing
activities are planned and executed effectively to
achieve the desired quality objectives.

B. Testing as a process: STLC - Test Plan

• The Test Plan is a detailed document that outlines the
approach, scope, objectives, and schedule of testing
activities within the Software Testing Life Cycle
(STLC).
• It provides a comprehensive overview of the testing
strategy, test objectives, test deliverables, test
environments, and test resources.
• The Test Plan serves as a roadmap for the testing
process, ensuring that all necessary activities are
planned and executed systematically.

The Test Plan typically includes the following components:

1. Test Objectives:
a. It defines the specific goals and objectives of the
testing effort.
b. This includes ensuring the software meets the
specified requirements, validating functionality,
identifying defects, and achieving specific quality
goals.

2. Test Scope:
a. It outlines the boundaries or extent of the testing.
b. This includes the features, modules, or
components of the software system that will be
tested.
c. It also specifies any areas that will be excluded
from testing.

3. Test Approach:
a. It describes the overall approach or strategy that
will be followed for testing.
b. This includes the selection of test techniques, test
levels, and test types. It also defines the sequence
of testing activities and the criteria for test
completion.
4. Test Deliverables:
a. It lists the various documents, artifacts, and
outputs that will be produced during the testing
process.
b. This can include test cases, test scripts, test data,
test reports, and defect reports.

5. Test Environment:
a. It describes the hardware, software, and network
setup required for testing.
b. This includes the configuration of test machines,
databases, servers, and any specific tools or
technologies needed.
c. The test environment should closely resemble
the production environment.

6. Test Schedule:
a. It provides a timeline or schedule for the
different testing activities.
b. This includes milestones, deadlines, and any
dependencies on other project activities.
c. The test schedule should be aligned with the
overall project timeline.

7. Test Resources:
a. It identifies the resources needed for testing,
including human resources (testers, test leads,
etc.) and technical resources (test machines,
software licenses, etc.).
b. It also outlines any training or skill requirements
for the testing team.

8. Test Risks and Mitigation Strategies:
a. It identifies potential risks and challenges that
may impact the testing process and outlines
mitigation strategies to address them.
b. This helps in proactive risk management and
ensures smooth execution of the testing
activities.

9. Test Execution and Reporting:
a. It describes the process of executing the tests,
capturing test results, and generating test
reports.
b. It includes the criteria for passing or failing tests
and defines the steps to be taken in case of test
failures.

Example:
• Let's consider an example of a Test Plan for a web
application.
• The Test Plan document for this application might
include the following information:

1. Test Objectives: To verify that the web application
meets the functional requirements, performs well
under different load conditions, and provides a user-
friendly experience.

2. Test Scope: The testing will cover all the modules and
features of the web application, including user
registration, login, product browsing, shopping cart
functionality, and payment processing.

3. Test Approach: The testing will follow a combination
of manual and automated testing approaches. It will
include functional testing, usability testing,
performance testing, and security testing.

4. Test Deliverables: The test deliverables will include
test cases, test scripts, test data, test reports, and
defect reports.

5. Test Environment: The testing will be conducted in a
dedicated test environment that closely resembles the
production environment. It will include multiple test
machines, various browsers and operating systems,
and a database backend.

6. Test Schedule: The testing activities will be aligned
with the development sprints. Each sprint will have a
specific testing phase, and the overall testing effort is
expected to be completed within a timeframe of four
weeks.

7. Test Resources: The testing team will include testers,
a test lead, and a test manager. The team will have
access to the necessary hardware, software, and
testing tools. Training will be provided to the team as
required.

8. Test Risks and Mitigation Strategies: Potential risks
such as tight timelines, limited resources, and changes
in requirements have been identified. Mitigation
strategies include prioritizing testing activities,
conducting risk-based testing, and maintaining open
communication with stakeholders.

9. Test Execution and Reporting: The tests will be
executed following the defined test cases and test
scripts. Test results will be captured and documented
in test reports. Any defects or issues found during
testing will be reported and tracked until resolution.

• The Test Plan document provides a clear roadmap for
the testing process, ensuring that all necessary
activities are planned and executed systematically.
• It helps in managing resources effectively, mitigating
risks, and achieving the desired quality objectives for
the web application.

C. Testing as a process: STLC - Test Design

• Test Design is a crucial phase in the Software Testing
Life Cycle (STLC) where test cases are designed and
test data is prepared based on the test requirements
and objectives.
• It involves identifying the test conditions, determining
the test coverage, and creating detailed test cases that
will be used to verify the functionality and behaviour
of the software system.
• During the Test Design phase, testers analyze the
software requirements, system specifications, and
other relevant documents to gain a thorough
understanding of the application under test.
• They identify the different test scenarios and
conditions that need to be validated.
• Test cases are then created to cover these scenarios
and conditions, ensuring maximum test coverage.

The Test Design phase typically includes the following
steps:

1. Test Scenario Identification:
a. Testers identify and document the different test
scenarios based on the software requirements
and functional specifications.
b. A test scenario represents a specific functionality,
use case, or business process that needs to be
tested.

2. Test Case Creation:


a. Testers create detailed test cases for each
identified test scenario.
b. Test cases outline the steps to be performed, the
expected results, and any preconditions or
dependencies.
c. They may include positive test cases to validate
expected behaviour and negative test cases to
verify error handling and boundary conditions.

3. Test Data Preparation:


a. Test data is prepared to support the execution of
test cases.
b. Test data includes input values, expected outputs,
and any specific data conditions required for
testing.
c. Testers ensure that the test data covers a wide
range of scenarios and conditions, including valid
and invalid inputs.

4. Test Coverage Analysis:


a. Testers analyze the test coverage to ensure that
all important aspects of the software system are
adequately covered.
b. This involves identifying any gaps in the test
coverage and making necessary adjustments to
the test cases to achieve comprehensive testing.

5. Test Case Review:


a. The created test cases are reviewed by peers or
senior testers to ensure their accuracy,
completeness, and effectiveness.
b. This helps in identifying any potential issues or
improvements in the test cases before they are
executed.

Example:
• Let’s consider an example of Test Design for a banking
application.
• In this case, one of the identified test scenarios is
"User Registration."
• The test case for this scenario may look like the
following:
Test Scenario: User Registration
Test Case ID: TC001
Preconditions:

• The application is accessible.


• The user registration page is displayed.

Steps:

1. Enter valid user details in the registration form (name, email, password).
2. Click on the "Submit" button.
3. Verify that the user is successfully registered and
redirected to the login page.
4. Verify that the user's information is stored in the
database.

Expected Results:

Step 1: User details are entered successfully.


Step 2: Registration form is submitted without any errors.
Step 3: User is redirected to the login page.
Step 4: User's information is stored in the database.

• In this example, the test case covers the steps to perform user registration and the expected results at each step.
• The test data for this test case would include valid
user details for registration.
• The Test Design phase ensures that the testing activities are planned and executed systematically, providing a solid foundation for effective test execution.
• It helps in ensuring maximum test coverage,
identifying potential defects, and validating the
functionality of the software system.
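A test case like TC001 above can be captured as a small structured record, which makes review and tooling easier. The sketch below uses a Python dataclass; the field names are my own choice for illustration, not a prescribed format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    case_id: str
    scenario: str
    preconditions: List[str]
    steps: List[str]
    expected_results: List[str]

# TC001 from the User Registration example above.
tc001 = TestCase(
    case_id="TC001",
    scenario="User Registration",
    preconditions=[
        "The application is accessible.",
        "The user registration page is displayed.",
    ],
    steps=[
        "Enter valid user details in the registration form (name, email, password).",
        'Click on the "Submit" button.',
        "Verify that the user is registered and redirected to the login page.",
        "Verify that the user's information is stored in the database.",
    ],
    expected_results=[
        "User details are entered successfully.",
        "Registration form is submitted without any errors.",
        "User is redirected to the login page.",
        "User's information is stored in the database.",
    ],
)
```

Keeping one expected result per step, as here, makes the later pass/fail comparison during execution mechanical.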

D. Testing as a process: STLC - Test Execution

• Test Execution is a critical phase in the Software Testing Life Cycle (STLC) where the designed test cases are executed to validate the functionality and behaviour of the software system.
• It involves running the test cases, recording the actual results, and comparing them with the expected results to identify any discrepancies or defects.

During the Test Execution phase, testers perform the following activities:

1. Test Environment Setup:


a. Testers ensure that the required test
environment is set up and ready for executing the
test cases.
b. This includes configuring the hardware, software,
network, and any other components necessary
for testing.

2. Test Case Execution:


a. Testers execute the test cases according to the
test plan and test schedule.
b. They follow the predefined steps in each test
case, input the test data, and record the actual
results of the test.
c. They may also capture screenshots or video
recordings to document the execution process.

3. Defect Reporting:
a. If any discrepancies or defects are identified
during the test execution, testers report them in a
defect tracking system.
b. They provide detailed information about the
defect, including steps to reproduce it, expected
and actual results, and any relevant attachments
or supporting documentation.

4. Test Result Documentation:


a. Testers document the test results, which include
the status (pass/fail) of each test case and any
observed issues or defects.
b. They may also provide additional notes or
comments related to the test execution process.

5. Test Logs and Artifacts:


a. Testers maintain logs of the test execution
activities, including the test case execution status,
executed test scripts, test data used, and any logs
generated by the software system during testing.
b. These artifacts serve as a reference for future
analysis and troubleshooting.

Example:
• Let’s continue with the example of the banking
application.
• In the Test Execution phase, the previously designed
test cases for user registration are executed.
• Here is an example of the execution status for one of
the test cases:

Test Case ID: TC001


Test Scenario: User Registration
Execution Status: Passed

Execution Steps and Results:

1. Enter valid user details in the registration form (name, email, password).
a. Result: User details entered successfully.
2. Click on the "Submit" button.
a. Result: Registration form submitted without any
errors.
3. Verify that the user is successfully registered and
redirected to the login page.
a. Result: User successfully redirected to the login
page.
4. Verify that the user's information is stored in the
database.
a. Result: User's information found in the database.

• In this example, the test case execution status is "Passed," indicating that all the steps were executed successfully, and the expected results matched the actual results.
• The Test Execution phase is crucial as it determines
whether the software system meets the specified
requirements and behaves as expected.
• It helps in identifying defects, validating the
functionality, and ensuring the quality of the software
system.
• The execution results and the identified defects serve
as valuable inputs for further analysis and
improvement in the testing process.
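The pass/fail decision described above, comparing each recorded actual result against the expected result, can be sketched as a small helper. The matching rule used here (exact, case-insensitive text comparison) is an illustrative simplification of what a tester does by inspection.

```python
def evaluate_step(expected: str, actual: str) -> str:
    # A step passes only when the recorded actual result matches the expected one.
    # Exact, case-insensitive text comparison is an illustrative simplification.
    return "Pass" if expected.strip().lower() == actual.strip().lower() else "Fail"

def evaluate_case(expected_results, actual_results):
    # Per-step verdicts, plus an overall verdict: a case passes only if every step does.
    step_status = [evaluate_step(e, a) for e, a in zip(expected_results, actual_results)]
    overall = "Passed" if step_status and all(s == "Pass" for s in step_status) else "Failed"
    return step_status, overall
```

Running TC001's four expected results against matching actual results would yield four "Pass" verdicts and an overall status of "Passed", mirroring the execution record shown above.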

E. Testing as a process: STLC - Test Closure Activity

• Test Closure is the final phase of the Software Testing Life Cycle (STLC) where all testing activities are completed, and the testing process is formally closed.
• It involves reviewing the test results, generating test closure reports, and conducting a final evaluation of the testing process.

During the Test Closure Activity, the following activities are performed:

1. Test Result Analysis:


a. The test results and metrics are analyzed to
evaluate the overall quality of the software
system.
b. The test cases executed, defects found, and other
relevant data are reviewed to identify any
patterns or trends.
c. This analysis helps in understanding the
effectiveness of the testing effort and provides
insights for future testing improvements.

2. Defect Analysis and Closure:


a. The open defects are reviewed, and their status is
updated based on their resolution.
b. Defects that have been fixed are retested to verify
their closure, and their status is changed to
"Closed" in the defect tracking system.
c. Any remaining open defects are prioritized and
documented for further action or future releases.

3. Test Closure Reports:


a. Test closure reports are generated to summarize
the testing activities and provide a
comprehensive overview of the testing process.
b. These reports may include details such as the
number of test cases executed, pass/fail status,
defect statistics, test coverage achieved, and
lessons learned during the testing process.

4. Documentation and Archiving:


a. All the test artifacts, including test plans, test
cases, test scripts, test data, and other relevant
documents, are properly documented and
archived for future reference.
b. This ensures that the testing documentation is
available for audits, compliance, or future
maintenance purposes.

5. Stakeholder Communication:
a. The test closure activities and outcomes are
communicated to the stakeholders, such as the
project manager, development team, and other
relevant parties.
b. This includes sharing the test closure reports,
discussing the overall test results, and
highlighting any important findings or
recommendations.

Example:
• In the banking application example, the Test Closure
Activity involves analyzing the test results and
generating the test closure reports.
• Here are some key points from the test closure report:

Test Cases Executed: 100


Passed: 95
Failed: 5
Defects Identified: 10
Defects Closed: 8
Defects Pending: 2
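The headline figures above can be turned into the summary rates a closure report usually quotes. A minimal sketch (the function name and return shape are my own):

```python
def closure_metrics(executed, passed, failed, defects_found, defects_closed):
    # Derive summary rates from raw test-closure counts.
    return {
        "pass_rate": passed / executed * 100,                         # 95 / 100 -> 95.0%
        "fail_rate": failed / executed * 100,                         # 5 / 100 -> 5.0%
        "defect_closure_rate": defects_closed / defects_found * 100,  # 8 / 10 -> 80.0%
        "defects_pending": defects_found - defects_closed,            # 10 - 8 -> 2
    }

# Figures from the closure report above.
report = closure_metrics(executed=100, passed=95, failed=5,
                         defects_found=10, defects_closed=8)
```

For this report the pass rate is 95% and the defect closure rate is 80%, with 2 defects carried forward.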

• The report also includes an analysis of the test coverage, defect trends, and recommendations for future testing improvements.
• The test closure report is shared with the project
manager and other stakeholders to provide insights
into the testing process and the quality of the
software system.
• The Test Closure Activity ensures that all necessary testing activities have been completed, defects have been addressed, and the testing process is formally concluded.
• It serves as a final evaluation of the testing effort and
provides valuable insights for process improvement
in future projects.

A. Identification of Test Scenarios from Requirements and Test Plan: Identifying Test Conditions and Designing Test Cases

• Identifying test scenarios, test conditions, and designing test cases is an important aspect of the software testing process.
• It involves analyzing the requirements and test plan to determine the specific scenarios that need to be tested, identifying the relevant test conditions, and then designing test cases to validate those conditions.

Here's an explanation of the steps involved in identifying test scenarios, test conditions, and designing test cases:

1. Requirements Analysis:
a. The first step is to thoroughly analyze the
software requirements.
b. This involves understanding the functionality,
features, and user interactions specified in the
requirements document.
c. By understanding the requirements, you can
identify the different scenarios that need to be
tested.

2. Test Plan Review:


a. Review the test plan, which outlines the testing
objectives, scope, and approach.
b. The test plan provides guidance on the areas to
be tested and any specific test conditions or
criteria to consider.
c. It helps in determining the focus of the testing
effort and the key aspects to be validated.

3. Identify Test Scenarios:


a. Based on the requirements and test plan, identify
the different test scenarios.
b. A test scenario represents a specific condition or
situation that needs to be tested.
c. It may involve multiple test cases that cover
different aspects of the scenario.
d. For example, in an e-commerce application, a test
scenario could be "User registration and login
process."

4. Define Test Conditions:


a. Once the test scenarios are identified, break them
down into specific test conditions.
b. Test conditions are the individual aspects or
variables within a test scenario that need to be
validated.
c. For example, in the test scenario "User
registration and login process," the test
conditions could include valid username and
password, incorrect password, or empty fields.
5. Design Test Cases:
a. With the test conditions defined, design test
cases to validate each condition.
b. A test case includes the necessary steps, inputs,
and expected results to test a specific condition.
Each test case should have a clear objective and
cover a single test condition.
c. Consider different combinations of inputs and
scenarios to ensure comprehensive test coverage.

Example:
• Let's consider a requirement for a calculator
application that specifies the addition and subtraction
operations.
• Based on this requirement, we can identify the
following test scenarios:

1. Addition Test Scenario:

• Test Condition 1: Positive addition with two positive numbers
• Test Condition 2: Positive addition with a positive and
zero
• Test Condition 3: Addition with a negative and
positive number

2. Subtraction Test Scenario:

• Test Condition 1: Positive subtraction with two positive numbers
• Test Condition 2: Positive subtraction with a positive
and zero
• Test Condition 3: Subtraction with a negative and
positive number

• For each test condition, we can design specific test cases that include the necessary steps, inputs, and expected results.

• For example, for the test condition "Positive addition with two positive numbers," a test case could be:

Test Case:
Objective: Validate the addition operation with two
positive numbers
Steps:

1. Enter the number 5 into the calculator.


2. Press the addition button.
3. Enter the number 7 into the calculator.
4. Press the equals button.

Expected Result: The calculator should display the result as 12.

• By identifying test scenarios, test conditions, and designing test cases, you ensure that the software is thoroughly tested, and all relevant aspects are validated.
• It helps in achieving comprehensive test coverage and
ensures the reliability and accuracy of the software
system.
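The calculator test case above maps naturally onto an automated unit test. In the sketch below, the `add` function is a hypothetical stand-in for the calculator under test; only the test structure is the point.

```python
import unittest

def add(a, b):
    # Hypothetical stand-in for the calculator's addition operation.
    return a + b

class TestAddition(unittest.TestCase):
    def test_positive_addition_with_two_positive_numbers(self):
        # Steps 1-4: enter 5, press the addition button, enter 7, press equals.
        self.assertEqual(add(5, 7), 12)  # Expected Result: display shows 12

# Run the suite explicitly so the result can be inspected in a larger script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddition)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The one-condition-per-test-method convention mirrors the rule above that each test case should cover a single test condition.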

B. Test case writing process

• The test case writing process is an essential part of software testing.
• It involves systematically documenting the steps, inputs, and expected results for each test scenario to ensure comprehensive test coverage.
1. Identify Test Scenarios:
a. Begin by identifying the different test scenarios
based on the requirements and specifications of
the software.
b. Test scenarios represent specific situations or
events that need to be tested.
c. Each test scenario should focus on a particular
aspect or functionality of the software.

2. Define Test Objectives:


a. For each test scenario, clearly define the
objectives of the test.
b. What specific aspect or functionality are you
testing with this scenario? What outcome or
behavior are you expecting from the software?

3. Break Down Test Scenarios:


a. Break down each test scenario into individual
test conditions.
b. Test conditions represent the specific inputs,
actions, or preconditions required to execute a
test.
c. Each test condition should focus on a single
aspect or condition to be validated.

4. Design Test Cases:


a. Based on the test conditions, design test cases to
validate each condition.
b. A test case should include the necessary steps,
inputs, and expected results to test a specific
condition.
c. Ensure that each test case is independent, clear,
and easy to understand.

5. Write Test Case Steps:


a. In each test case, document the sequential steps
to be followed to execute the test.
b. The steps should be clear, concise, and
unambiguous.
c. Include any necessary setup or preconditions
before executing the test steps.

6. Specify Test Data:


a. Identify the specific inputs or data required for
each test case.
b. Document the test data in a way that is easy to
understand and reproduce.
c. Include both valid and invalid test data to cover
different scenarios.

7. Define Expected Results:


a. For each test case, specify the expected results or
outcomes.
b. Clearly describe the expected behavior or
response of the software when the test case is
executed.
c. This helps in determining whether the software
is functioning as expected.

8. Review and Validate:


a. Once the test cases are written, review them for
accuracy, completeness, and clarity.
b. Validate that each test case covers the intended
test scenario and that the steps and expected
results are logical and feasible.

Example Test Case:

Test Scenario: User Registration Process


Objective: Validate the user registration functionality of a
web application.

Test Case: Successful User Registration


Steps:

1. Launch the web application.


2. Click on the "Sign Up" button.
3. Fill in the registration form with valid user details
(name, email, password).
4. Click on the "Submit" button.
5. Expected Result: The user should be successfully
registered and redirected to the login page with a
success message displayed.

Test Data:

Name: John Doe


Email: johndoe@[Link]
Password: Password123
• By following the test case writing process, you can
ensure that your testing efforts are systematic, well-
documented, and effective.
• Clear and well-written test cases help in efficient test
execution, bug identification, and facilitating
communication within the testing team.

C. Test data generation: positive, negative test cases, BVT (boundary values)

• Test data generation is the process of creating input data to be used during software testing.
• It involves designing test cases that cover both positive and negative scenarios, as well as boundary values.

1. Positive Test Cases:


a. Positive test cases aim to validate the expected
behaviour of the software when provided with
valid and expected inputs.
b. These test cases focus on the correct flow of the
application and ensure that it functions as
intended.

Example:
Test Scenario: User Login
Test Case: Successful Login
Steps:

1. Launch the application.


2. Enter a valid username and password.
3. Click on the "Login" button.
a. Expected Result: The user should be successfully
logged in and redirected to the dashboard page.

2. Negative Test Cases:
a. Negative test cases are designed to test the application's behaviour when provided with invalid or unexpected inputs.
b. These test cases check if the application handles errors, exceptions, and edge cases gracefully.

Example:
Test Scenario: User Registration
Test Case: Invalid Email Format
Steps:

1. Launch the application.


2. Fill in the registration form with an invalid email
address format (e.g., "invalid_email").
3. Click on the "Register" button.
a. Expected Result: The application should display
an error message indicating that the email
format is invalid.

3. Boundary Value Testing (BVT):
a. Boundary value testing is a technique where test cases are designed to test the boundaries of valid and invalid input values.
b. It helps identify any issues or errors that may occur at the limits of the application's data range.

Example:
Test Scenario: Age Verification
Test Case: Minimum Age Limit
Steps:

1. Launch the application.


2. Enter the minimum age value allowed (e.g., 18).
3. Click on the "Verify" button.
a. Expected Result: The application should accept
the minimum age value and display a success
message.

Test Case: Maximum Age Limit


Steps:

1. Launch the application.


2. Enter the maximum age value allowed (e.g., 99).
3. Click on the "Verify" button.
a. Expected Result: The application should accept
the maximum age value and display a success
message.

• By incorporating positive and negative test cases, as well as boundary value testing, you can thoroughly validate the software under different scenarios.
• This helps uncover bugs, errors, and exceptional situations, ensuring the reliability and robustness of the application.
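The age-verification boundaries above (a minimum of 18 and a maximum of 99) follow the usual three-point pattern: just below, on, and just above each limit. A minimal generator sketch, with the function name my own:

```python
def boundary_values(min_value, max_value):
    # Three-point boundary value analysis: test just below, on,
    # and just above each boundary of the valid range.
    return [min_value - 1, min_value, min_value + 1,
            max_value - 1, max_value, max_value + 1]

# For the age range 18..99: 17 and 100 should be rejected,
# the other four values should be accepted.
age_boundaries = boundary_values(18, 99)
```

Values outside the range (here 17 and 100) become negative test cases; the on-boundary and just-inside values become positive ones.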

D. Test sheet generation

• Test sheet generation is the process of creating a structured document or spreadsheet that contains the details of the test cases to be executed during software testing.
• It serves as a comprehensive reference for the testing
team and helps ensure that all planned tests are
executed and documented properly.

1. Test Sheet Format:


a. The test sheet typically includes columns for test
case ID, test case description, test steps, expected
results, actual results, pass/fail status, and any
additional notes or comments.
b. The format may vary depending on the
organization's testing standards and
requirements.

2. Test Case Identification:


a. Each test case is assigned a unique identifier or
test case ID, which helps in tracking and
referencing the test case throughout the testing
process.

3. Test Case Description:


a. The test case description provides a brief
explanation of what the test case aims to achieve.
b. It should clearly define the objective of the test
case and the specific scenario it covers.

4. Test Steps:
a. The test steps outline the sequence of actions to
be performed during the test case execution.
Each step should be detailed and easy to follow,
ensuring that the tester understands what needs
to be done.
5. Expected Results:
a. The expected results specify the outcome or
behaviour that is expected from the software
under test when the test case is executed
successfully.
b. This helps in determining whether the actual
results match the expected results.

6. Actual Results:
a. The actual results column is filled in by the tester
during the test execution phase.
b. It captures the actual outcome observed during
the test case execution.

7. Pass/Fail Status:
a. The pass/fail status indicates whether the test
case passed or failed based on a comparison
between the actual and expected results.
b. The tester marks the appropriate status for each
test case.

8. Notes/Comments:
9. The notes or comments column provides a space to
document any additional information, observations,
or issues related to the test case execution.

Example test sheet:

Columns: Test Case ID | Test Case Description | Test Steps | Expected Results | Actual Results | Pass/Fail | Notes

Test Case ID: TC001
Test Case Description: User Registration
Test Steps and Expected Results:
1. Launch the application -> Application is successfully launched
2. Fill in the registration form -> Registration form is filled successfully
3. Click on the "Submit" button -> User is successfully registered
Actual Results / Pass/Fail / Notes: (filled in during execution)

Test Case ID: TC002
Test Case Description: Login with Invalid Email
Test Steps and Expected Results:
1. Launch the application -> Application is successfully launched
2. Enter an invalid email -> Error message is displayed for invalid email
3. Click on the "Login" button -> User is not logged in
Actual Results / Pass/Fail / Notes: (filled in during execution)
• The test sheet provides a structured overview of the test cases, making it easier to track the execution progress, identify any failures or issues, and maintain documentation for future reference.
• It helps in organizing and managing the testing
process effectively.
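A test sheet with the columns listed above can also be generated programmatically. This sketch writes a CSV file with Python's csv module; the file name and row content are illustrative.

```python
import csv

# Column layout from the test sheet format described above.
COLUMNS = ["Test Case ID", "Test Case Description", "Test Steps",
           "Expected Results", "Actual Results", "Pass/Fail", "Notes"]

rows = [
    {"Test Case ID": "TC001",
     "Test Case Description": "User Registration",
     "Test Steps": '1. Launch the application; 2. Fill in the registration form; 3. Click "Submit"',
     "Expected Results": "User is successfully registered",
     "Actual Results": "", "Pass/Fail": "", "Notes": ""},
    {"Test Case ID": "TC002",
     "Test Case Description": "Login with Invalid Email",
     "Test Steps": '1. Launch the application; 2. Enter an invalid email; 3. Click "Login"',
     "Expected Results": "Error message is displayed; user is not logged in",
     "Actual Results": "", "Pass/Fail": "", "Notes": ""},
]

# Actual Results and Pass/Fail are left blank for the tester to fill in.
with open("test_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

The resulting file opens directly in any spreadsheet tool, so the same sheet can be shared with testers who fill in the execution columns by hand.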

E. Test case management

• Test case management refers to the process of organizing, tracking, and managing test cases throughout the software testing lifecycle.
• It involves creating, documenting, executing, and
monitoring test cases to ensure comprehensive test
coverage and efficient test execution.
• Test case management tools and platforms are often
used to facilitate this process.

1. Test Case Repository:


a. A test case management tool provides a
centralized repository where test cases are
stored.
b. The repository allows for easy access, version
control, and collaboration among team members.

2. Test Case Creation:


a. Test cases are created based on the requirements
and specifications of the software being tested.
b. Each test case should have a unique identifier, a
clear description, and detailed steps to be
executed.

3. Test Case Organization:


a. Test cases can be organized into folders or
categories based on different criteria such as
functionality, modules, or test types.
b. This helps in efficient test case management and
allows for easier identification and retrieval of
specific test cases.

4. Test Case Prioritization:


a. Test cases can be prioritized based on factors like
criticality, risk, or business impact.
b. Prioritization helps in focusing on high-priority
test cases and ensures that the most important
functionality is thoroughly tested.

5. Test Case Execution:


a. Test cases are executed as per the defined test
plan and schedule.
b. Testers follow the steps outlined in each test
case, record the actual results, and compare them
with the expected results.

6. Test Case Status and Reporting:


a. The status of each test case (pass, fail, blocked,
etc.) is recorded during execution.
b. Test case management tools provide reporting
features to generate test execution reports, track
the overall progress, and identify any issues or
bottlenecks.

Example Test Case Management:

• In a test case management tool, the test cases can be organized into a hierarchical structure, as shown below:

1. Project
a. Module 1
i. Feature 1
1. Test Case 1.1
2. Test Case 1.2
ii. Feature 2
1. Test Case 2.1
2. Test Case 2.2
b. Module 2
i. Feature 1
1. Test Case 1.1
2. Test Case 1.2
ii. Feature 2
1. Test Case 2.1
2. Test Case 2.2

Each test case includes:

• Test case ID: A unique identifier for the test case.


• Description: A clear and concise description of the
test case.
• Steps: Detailed steps to be followed during test
execution.
• Expected Results: The expected outcome or behavior
of the software when the test case is executed
successfully.
• Actual Results: The actual outcome observed during
test execution.
• Status: The current status of the test case (pass, fail,
blocked, etc.).
• Comments/Notes: Additional information or
comments related to the test case.
• The test case management tool allows for easy
navigation, search, and filtering of test cases based on
various criteria.
• It provides visibility into the test coverage, execution
progress, and overall test quality.
• Test case management plays a crucial role in ensuring
comprehensive testing and effective communication
among team members.
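The Project / Module / Feature / Test Case hierarchy shown above can be modelled as nested dictionaries with lists of test cases at the leaves; counting cases is then a short recursion.

```python
# The hierarchical structure from the example above: dicts for branches,
# lists of test case names at the leaves.
repository = {
    "Project": {
        "Module 1": {
            "Feature 1": ["Test Case 1.1", "Test Case 1.2"],
            "Feature 2": ["Test Case 2.1", "Test Case 2.2"],
        },
        "Module 2": {
            "Feature 1": ["Test Case 1.1", "Test Case 1.2"],
            "Feature 2": ["Test Case 2.1", "Test Case 2.2"],
        },
    }
}

def count_cases(node):
    # Leaves are lists of test case names; branches are nested dicts.
    if isinstance(node, list):
        return len(node)
    return sum(count_cases(child) for child in node.values())
```

Totals like this feed directly into coverage and progress reporting: the example repository holds 8 test cases across 2 modules.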

Practical: TestLink and TestRail

6. Bug reporting, test metrics, RTM and test environment information

A. Bug life cycle

• The bug life cycle in software testing refers to the different stages that a bug goes through from the time it is identified until it is resolved and closed.
• It helps track and manage the progress of bug fixing
and ensures that all identified issues are properly
addressed.

The bug life cycle typically consists of the following stages:

1. New:
a. This is the initial stage when a bug is reported or
identified. It is assigned a unique identifier and
entered into the bug tracking system.

2. Open:
a. Once the bug is reported, it is reviewed by the
development or testing team.
b. If the bug is valid and reproducible, it is marked
as "open" and assigned to the appropriate
developer or tester for further investigation.

3. In Progress:
a. In this stage, the assigned developer or tester
starts working on fixing the bug or investigating
it further.
b. They analyze the root cause of the bug and
develop a solution.

4. Fixed:
a. When the developer or tester has successfully
resolved the bug, they mark it as "fixed."
b. The fix is implemented in the codebase, and the
bug is ready for retesting.

5. Retest:
a. In this stage, the fixed bug is retested by the
testing team to ensure that the issue has been
resolved and does not introduce any new defects.
b. The bug is marked as "retest" while awaiting
verification.

6. Verified:
a. Once the bug passes the retest, it is marked as
"verified."
b. This means that the fix has been verified and the
bug no longer exists in the software.

7. Closed:
a. After the bug is verified, it is marked as "closed."
b. This indicates that the bug has been successfully
fixed and validated, and it can be considered
resolved.

8. Reopened:
a. If the bug reappears or the fix is found to be
ineffective during retesting, it is marked as
"reopened."
b. The bug goes back to the “open” or “in progress”
stage for further investigation and resolution.

• The bug life cycle may vary slightly depending on the organization's specific processes and bug tracking system.
• It is essential to have a well-defined bug life cycle to
ensure that bugs are properly tracked, addressed, and
resolved in a systematic manner.
• Effective bug tracking and management are crucial for
delivering high-quality software products.

B. Bug severity and priority

• Bug severity and priority are two important aspects of bug management in software testing.
• They help prioritize and allocate resources for bug fixing based on the impact and urgency of the bugs.

Bug Severity:

• Bug severity refers to the impact or seriousness of a bug on the functionality or usability of the software.
• It represents how severe the bug is and the extent to which it affects the normal functioning of the software.
• Severity is typically categorized into several levels,
such as:

1. Critical:
a. Bugs that cause system crashes, data corruption,
or loss of essential functionality.
b. The software cannot be used until the bug is
fixed.

2. High:
a. Bugs that significantly impact the usability or
functionality of the software, but it is still usable
with workarounds or alternative paths.

3. Medium:
a. Bugs that have a moderate impact on the
software's functionality or usability, but the
software can still be used without major
disruptions.

4. Low:
a. Minor bugs that have a minimal impact on the
software's functionality or usability.
b. They do not significantly affect the user
experience.

Bug Priority:

• Bug priority, on the other hand, determines the order in which bugs should be fixed based on their importance and urgency.
• It reflects the business impact and the need for timely
resolution.
• Priority is usually assigned based on factors such as:

1. High:
a. Bugs that have a significant business impact and
need immediate attention.
b. They may affect critical functionalities or pose a
significant risk to the project or users.

2. Medium:
a. Bugs that have a moderate impact on the
business or project and require attention in the
near future, but not as urgent as high priority
bugs.

3. Low:
a. Bugs that have a minimal impact on the business
or project and can be addressed later.
b. They are typically cosmetic or minor issues that
do not affect critical functionalities.

• Assigning severity and priority to bugs is important for efficient bug management.
• It helps development and testing teams prioritize
their efforts, allocate resources effectively, and ensure
that critical issues are addressed promptly.
• The specific criteria for determining severity and
priority may vary depending on the project and the
organization's guidelines.
• It's worth noting that severity and priority are
subjective and can be influenced by the specific
context of the project, user expectations, and the
project's goals.
• Regular communication and collaboration among
team members are crucial to properly assess and
assign severity and priority to bugs.
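A triage queue is typically ordered by priority first, then severity, using the levels defined above. The numeric ranks below are my own encoding of those levels for sorting purposes.

```python
# Lower rank = handle first. Levels taken from the definitions above.
PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def triage_order(bugs):
    # Fix high-priority bugs first; break ties with the more severe bug.
    return sorted(bugs, key=lambda b: (PRIORITY_RANK[b["priority"]],
                                       SEVERITY_RANK[b["severity"]]))

bugs = [
    {"id": 1, "priority": "Low",  "severity": "Critical"},  # e.g. crash in a rarely used path
    {"id": 2, "priority": "High", "severity": "Medium"},
    {"id": 3, "priority": "High", "severity": "Critical"},
]
queue = triage_order(bugs)
```

Note how bug 1 (Critical severity but Low priority) sorts last: priority, the business urgency, drives the queue, while severity only breaks ties.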

C. Bug reporting using Jira

• Bug reporting using Jira is a widely used practice in software testing to efficiently track and manage bugs throughout the software development lifecycle.
• Jira is a popular issue tracking tool that provides a
comprehensive platform for bug tracking, task
management, and collaboration among team
members.

1. Issue Creation:
a. In Jira, bugs are reported as issues.
b. Testers or anyone who identifies a bug can create
a new issue in Jira to report it.
c. They provide relevant information such as bug
description, steps to reproduce, expected and
actual results, environment details, and any
supporting attachments.

2. Issue Types and Fields:


a. Jira allows you to define different issue types
based on your project needs.
b. For bug reporting, the common issue type used is
"Bug" or "Defect."
c. Jira also provides various fields to capture
specific information related to the bug, such as
severity, priority, assignee, reporter, due date,
and more.
d. These fields can be customized to match your
project requirements.

3. Bug Assignment:
a. Once the bug is reported, it needs to be assigned
to the appropriate developer or team for
investigation and resolution.
b. This assignment can be done manually by
selecting the assignee from the available options
in Jira.

4. Bug Tracking and Workflow:


a. Jira provides a customizable workflow that
represents the different stages of the bug's
lifecycle, such as "Open," "In Progress,"
"Resolved," and "Closed."
b. As the bug progresses through these stages, the
status is updated in Jira, allowing team members
to track its progress.

5. Comments and Collaboration:


a. Jira allows team members to collaborate by
adding comments to the bug issue.
b. Testers, developers, and other stakeholders can
provide additional information, ask questions, or
discuss potential solutions within the issue's
comment section.
c. This facilitates effective communication and
collaboration.
6. Attachments and Screenshots:
a. Jira allows you to attach files, screenshots, or
other supporting documents to the bug issue.
b. This helps provide additional context and
evidence to aid in bug reproduction and
resolution.

7. Notifications and Updates:


a. Jira provides notifications to keep stakeholders
informed about updates on the bug issue.
b. Notifications can be configured to send emails or
notifications within Jira whenever there are
changes, comments, or updates to the bug.

8. Bug Resolution and Closure:


a. Once the bug is fixed, the developer can mark it
as "Resolved" in Jira.
b. The bug then goes through the necessary testing
and verification steps.
c. If the bug passes the verification, it can be
marked as "Closed" to indicate its resolution.

• Bug reporting using Jira offers a centralized and structured approach to track and manage bugs effectively.
• It provides a transparent and collaborative
environment for testers, developers, and stakeholders
to work together towards bug resolution.
• The flexibility and customizable nature of Jira allow
teams to adapt the bug reporting process to their
specific project needs.
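As an illustration of the issue-creation step described above, the sketch below assembles the JSON body that Jira's REST API (the v2 issue-creation endpoint) expects for a new Bug. The base URL, project key, credentials, and bug details are placeholders, not values from any real project.

```python
# Sketch: creating a bug issue via Jira's REST API.
# The base URL, project key, and credentials are placeholders.
import json
import urllib.request

def build_bug_payload(project_key, summary, description):
    """Assemble the JSON body Jira expects for a new Bug issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }

def report_bug(base_url, auth_header, payload):
    """POST the payload to Jira's issue-creation endpoint (needs network access)."""
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/issue",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
        method="POST",
    )
    return urllib.request.urlopen(req)

payload = build_bug_payload(
    "QA", "Login button unresponsive on Safari",
    "Steps: 1. Open /login 2. Click 'Sign in' -> nothing happens.")
print(payload["fields"]["issuetype"]["name"])  # Bug
```

In practice the same fields (summary, description, issue type) are what a tester fills in through the Jira web form; the API shape simply mirrors them.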
C. What are Test Metrics?

• Test metrics in software testing refer to the quantitative measures or indicators used to assess the effectiveness, efficiency, and quality of the testing process.
• These metrics provide valuable insights into the
progress of testing, identify areas for improvement,
and help stakeholders make data-driven decisions.

1. Test Coverage:
a. Test coverage metrics measure the extent to
which the system or application has been tested.
b. It includes metrics such as code coverage,
requirements coverage, and functional coverage.
c. These metrics help assess the completeness and
thoroughness of the testing efforts.

2. Defect Metrics:
a. Defect metrics provide information about the
number, severity, and status of defects found
during testing.
b. It includes metrics like defect density (defects
per size or complexity), defect aging (time taken
to fix defects), defect distribution by severity, and
defect closure rate.
c. These metrics help track the quality and stability
of the software under test.

3. Test Execution Metrics:


a. Test execution metrics focus on the progress and
efficiency of test execution.
b. It includes metrics like test case execution status
(pass/fail), test execution time, test effort
(person-hours spent on testing), and test cycle
time (time taken to complete a test cycle).
c. These metrics help assess the productivity and
efficiency of the testing process.

4. Test Effectiveness Metrics:


a. Test effectiveness metrics measure the ability of
the testing process to identify defects.
b. It includes metrics like defect detection
percentage (defects found in relation to total
defects present), false positive rate (percentage
of reported defects that are not actual defects),
and defect leakage (defects missed during testing
and found later).
c. These metrics provide insights into the
effectiveness of the test cases and test
environment.

5. Test Schedule and Progress Metrics:


a. Test schedule and progress metrics track the
progress of testing activities against the planned
schedule.
b. It includes metrics like test plan adherence, test
execution progress, and test cycle time.
c. These metrics help identify any delays or
deviations from the planned testing timeline.

6. Test Environment Metrics:


a. Test environment metrics measure the availability and stability of the test environment.
b. It includes metrics like environment downtime, environment setup time, and environment utilization.
c. These metrics help ensure that the test environment is properly maintained and available for testing activities.
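A few of the metrics named above reduce to simple arithmetic. The sketch below shows illustrative formulas for defect density, defect detection percentage, and test-case pass rate; the numbers are made up for the example.

```python
# Illustrative calculations for some common test metrics.

def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_detection_percentage(found_in_testing, found_after_release):
    """Share of all known defects that testing caught (DDP)."""
    total = found_in_testing + found_after_release
    return 100.0 * found_in_testing / total

def pass_rate(passed, executed):
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / executed

print(defect_density(30, 15))               # 2.0 defects per KLOC
print(defect_detection_percentage(90, 10))  # 90.0
print(pass_rate(45, 50))                    # 90.0
```

Exact definitions vary between organizations (e.g., size may be measured in function points rather than KLOC), so the formulas should be agreed upon before metrics are compared across projects.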

• Test metrics provide objective data and insights into the testing process, enabling stakeholders to assess the quality of the software, identify bottlenecks, and make informed decisions for process improvement.
• However, it's important to select and interpret the
metrics carefully, considering the specific context and
goals of the testing project.
• These metrics provide valuable insights into the
quality, effectiveness, and efficiency of the testing
process.
• They help in monitoring progress, identifying areas of
improvement, and making data-driven decisions to
enhance the overall software quality.
• It is important to select the appropriate metrics based
on project goals, context, and specific requirements.

D. What is RTM (Requirement Traceability Matrix)?

• RTM stands for Requirement Traceability Matrix in software testing.
• It is a document that establishes a traceable link
between the requirements and the corresponding test
cases.
• The RTM ensures that all the requirements specified
for a software system are properly tested and
validated.
• The purpose of the RTM is to provide a clear and
structured overview of the requirements and their
coverage by test cases.
• It helps in ensuring that each requirement has been
addressed in the testing process and provides
visibility into the testing progress.

The Requirement Traceability Matrix typically includes the following information:

1. Requirement ID: A unique identifier assigned to each requirement.
2. Requirement Description: A brief description of the
requirement.
3. Test Case ID: The identifier of the test case associated
with the requirement.
4. Test Case Description: A brief description of the test
case.
5. Test Result: The result of the test case execution
(pass/fail).
6. Remarks/Comments: Any additional notes or
comments related to the requirement or test case.
• By maintaining an RTM, testers and stakeholders can
easily track the coverage of requirements during the
testing process.
• It helps in identifying any gaps or missing test cases
and ensures comprehensive testing.
• It also facilitates requirements management and
enables better communication between the
development and testing teams.
• The RTM is a dynamic document that needs to be
updated as the project progresses.
• It should be reviewed and maintained throughout the
software development lifecycle to reflect any changes
or updates to the requirements or test cases.
• Overall, the Requirement Traceability Matrix is a
valuable tool for ensuring that the software meets the
specified requirements and helps in maintaining the
quality and reliability of the software system.

E. Forward and backward traceability

• Forward and backward traceability are two aspects of requirement traceability in software testing.

1. Forward Traceability:
• Forward traceability refers to the ability to trace from
the requirements to the corresponding test cases.
• It ensures that each requirement has been addressed
in the testing process and helps in determining the
coverage of requirements by test cases.
• By establishing forward traceability, testers can
ensure that all the necessary test cases have been
developed and executed to validate the requirements.

Benefits of Forward Traceability:

• Ensures comprehensive testing by verifying that all requirements are covered by test cases.
• Provides visibility into the testing progress and helps
in tracking the status of requirement validation.
• Facilitates requirements management and helps in
identifying any gaps or missing test cases.

2. Backward Traceability:

• Backward traceability refers to the ability to trace from the test cases back to the corresponding requirements.
• It ensures that each test case has a clear link to the
specific requirement it is intended to validate.
• Backward traceability helps in understanding the
purpose and significance of each test case and
ensures that all requirements are being adequately
tested.

Benefits of Backward Traceability:

• Provides a clear understanding of the purpose and intent of each test case.
• Helps in identifying redundant or unnecessary test
cases.
• Supports impact analysis by enabling the
identification of affected requirements when changes
are made to the software.
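The two directions of traceability described above come from the same set of links, viewed from opposite ends. The sketch below builds a forward map (requirement to test cases) with illustrative IDs and then inverts it to obtain the backward map (test case to requirements).

```python
# Forward traceability: requirement -> test cases (illustrative IDs).
forward = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Backward traceability: invert the forward map,
# so each test case points at the requirement(s) it validates.
backward = {}
for req, test_cases in forward.items():
    for tc in test_cases:
        backward.setdefault(tc, []).append(req)

print(backward["TC-102"])  # ['REQ-001']
```

The inversion is also what supports impact analysis: given a changed requirement, the forward map lists the test cases to re-run; given a failing test case, the backward map names the requirement at risk.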

G. Use of RTM

• RTM, which stands for Requirement Traceability Matrix, is a document used in software testing to establish and maintain traceability between requirements and test cases.
• The RTM serves as a tool to ensure that all
requirements are properly validated through testing
and helps in tracking the progress of requirement
coverage.

• In summary, the RTM is a valuable tool in software testing for ensuring requirement coverage, guiding test planning, analyzing test coverage, managing requirements, and assessing the impact of requirement changes.
• It enhances the effectiveness and efficiency of the
testing process and contributes to the overall quality
of the software being developed.

F. Overview of different test environments

• In software testing, a test environment refers to the setup or configuration in which software testing activities are performed.
• It includes the hardware, software, network, and
other necessary components needed to execute test
cases and evaluate the behaviour of the software
under test.
• Test environments can vary based on the type of
testing being conducted, such as unit testing,
integration testing, system testing, or acceptance
testing.

1. Development Environment:
a. This environment is used by developers during
the software development process.
b. It typically includes development tools, IDEs
(Integrated Development Environments), version
control systems, and other resources required for
coding and building the software.

2. Unit Testing Environment:


a. This environment is used for unit testing, which
focuses on testing individual components or
units of the software in isolation.
b. It may involve setting up a testing framework,
stubs, or mocks to simulate dependencies, and
running tests using a unit testing framework like
JUnit or NUnit.

3. Integration Testing Environment:


a. Integration testing verifies the interaction
between different modules or components of the
software.
b. The integration testing environment is set up to
simulate the integrated system, including
multiple modules or subsystems.
c. It may involve configuring test data, coordinating
communication between components, and
validating the integration points.

4. System Testing Environment:


a. System testing is performed to evaluate the
behaviour and functionality of the entire
software system.
b. The system testing environment replicates the
target production environment as closely as
possible, including the operating system,
hardware, network setup, and other
infrastructure components.
c. It aims to test the system's compatibility,
performance, security, and overall functionality.

5. User Acceptance Testing (UAT) Environment:


a. UAT is conducted by end-users or clients to
ensure that the software meets their
requirements and is ready for production use.
b. The UAT environment is typically set up to mimic
the production environment and closely
resembles the actual usage conditions.
c. It involves creating test scenarios that reflect
real-world usage and evaluating the software
against user expectations.

6. Performance Testing Environment:


a. Performance testing focuses on evaluating the
software's performance, scalability, and
responsiveness under different load conditions.
b. The performance testing environment involves
simulating the expected workload, generating
synthetic users, and monitoring system
resources.
c. It may include tools like load generators,
monitoring tools, and performance testing
frameworks.

7. Security Testing Environment:


a. Security testing is performed to identify
vulnerabilities and weaknesses in the software's
security measures.
b. The security testing environment includes tools
and configurations to simulate different types of
attacks, perform penetration testing, and analyze
the software's resistance to potential threats.

8. Production-like Staging Environment:


a. This environment is used to validate the software
in an environment that closely resembles the
production environment.
b. It allows testing the software with real data and
configurations before deploying it to the live
production environment.

• The choice of the test environment depends on the specific testing objectives, the stage of the software development lifecycle, and the available resources.
• Each environment serves a different purpose and
helps ensure that the software is thoroughly tested
across various aspects before being released to users.

7. Web testing, DB testing and cloud testing

G. Why test environments are important

Test environments are important in software testing for several reasons:

1. Isolation:
a. Test environments provide a controlled and
isolated environment for testing software.
b. By separating the testing environment from the
production environment, you can perform testing
activities without affecting the live system.
c. This ensures that any bugs or issues discovered
during testing do not impact the users or disrupt
the production environment.

2. Replication of Production Environment:


a. Test environments aim to replicate the
production environment as closely as possible.
b. This includes hardware, software, network
configurations, and other infrastructure
components.
c. By simulating the production environment, you
can accurately assess how the software will
perform and behave in real-world conditions.

3. Validation of System Integration:


a. Test environments allow for the testing of system
integration and interaction between various
components or modules of the software.
b. It ensures that all the different parts of the
system work together seamlessly and correctly.
c. Testing in an integrated environment helps
identify any compatibility issues, data flow
problems, or communication errors between
different components.

4. Performance and Scalability Testing:


a. Test environments are crucial for conducting
performance testing and evaluating the
software's ability to handle various loads and
stress conditions.
b. By simulating realistic user traffic and workload,
you can assess the system's performance,
scalability, and responsiveness.
c. This helps identify performance bottlenecks,
optimize resource utilization, and ensure the
software meets performance requirements.

5. User Acceptance Testing:


a. Test environments play a significant role in user
acceptance testing (UAT).
b. UAT involves validating the software against user
requirements and expectations.
c. A dedicated UAT environment provides end-
users or clients with an environment to test the
software, provide feedback, and ensure it meets
their specific needs.

6. Security Testing:
a. Test environments are essential for security
testing, where vulnerabilities and potential
threats are identified and assessed.
b. By setting up a controlled environment for
security testing, you can simulate various attack
scenarios, test security measures, and identify
weaknesses in the software's defenses.

7. Controlled Test Data:


a. Test environments allow for the provisioning of test data, including both valid and invalid data, to test the software's functionality, data validation, and error handling capabilities.
b. Having control over the test data ensures consistent and repeatable testing.

• Overall, test environments provide a controlled and realistic setting to assess the software's performance, functionality, integration, security, and other critical aspects.
• They help identify and resolve issues before the
software is deployed to production, ensuring a higher
quality and more reliable end product.

II. Web Testing: Functionality Testing of a Website - Functional-UI Testing of Links (HTML Elements)

• Functional-UI testing involves verifying the functionality of a website's UI components, including links to other HTML elements or pages.
• In web testing, links are a fundamental part of
navigation and user interaction.
• Testing links ensures that they are working correctly,
directing users to the intended destinations and
providing a seamless user experience.

1. Link Verification:
a. Verify that the links are correctly implemented
using the <a> (anchor) tag and have the
appropriate href attribute.
b. Ensure that the href value is pointing to the
correct URL or target location.

2. Clickability:
a. Test the clickability of links by simulating user interactions.
b. Click on each link and verify that it behaves as
expected.
c. Ensure that the link click triggers the appropriate
action, such as navigating to a new page or
scrolling to a specific section on the same page.

3. Target Window/Tab:
a. If the link has a target attribute, test that it opens
in the correct window or tab.
b. For example, links with the target="_blank"
attribute should open in a new tab or window,
while links without the target attribute should
open in the same window.

4. Internal and External Links:


a. Differentiate between internal and external links.
Internal links navigate within the website, while
external links redirect to external websites.
b. Verify that internal links navigate to the correct
pages within the website, and external links open
the intended external websites.

5. Broken Links:
a. Check for broken or dead links that lead to non-
existent or error pages.
b. Use automated tools or manual testing to identify
any broken links and ensure they are updated or
removed.

6. Accessibility:
a. Ensure that links are accessible to all users,
including those with disabilities.
b. Test links using screen readers or assistive
technologies to verify that they are properly
announced and navigable.

• By thoroughly testing the functionality of links in HTML elements, you can ensure a smooth and seamless user experience, proper navigation, and correct interaction within a website.
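The link-verification step above starts with collecting every anchor on a page. The sketch below does this with Python's standard-library HTML parser on a tiny illustrative page; in a real broken-link check, each collected URL would then be requested and its HTTP status verified.

```python
# Sketch: collect every anchor href from a page so each link can then be
# verified (e.g., by requesting it and checking its HTTP status).
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Record the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Illustrative page fragment with one internal and one external link.
page = ('<a href="/home">Home</a> '
        '<a href="https://example.com" target="_blank">External</a>')

collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', 'https://example.com']
```

Hrefs starting with `/` or a relative path are internal links, while absolute URLs to other hosts are external, matching the internal/external distinction described above.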

III. Web Testing: Functionality Testing of a Website - Functional-UI Testing of Forms

• Functional-UI testing of forms in a website involves verifying the functionality and user experience of form elements such as input fields, checkboxes, radio buttons, dropdowns, and submit buttons.

1. Form Submission:
a. Test the form submission process by entering
valid input values and submitting the form.
b. Verify that the form is submitted successfully,
and the data is processed or stored as intended.

2. Input Validation:
a. Validate the input fields by entering invalid or
incomplete data and verifying that appropriate
error messages are displayed.
b. Test for required fields, data format validation
(e.g., email, phone number), length restrictions,
and any custom validation rules specific to the
form.

3. Field Interactions:
a. Test the interactions between different form
fields.
b. For example, when selecting a particular option
in a dropdown or checkbox, ensure that it
dynamically affects the visibility or behaviour of
other fields.

4. Error Handling:
a. Verify how the form handles errors during
submission or validation.
b. Test scenarios such as server-side errors,
network connectivity issues, or timeouts and
ensure that appropriate error messages or
feedback are displayed to the user.

5. Accessibility:
a. Test the form's accessibility to ensure that users
with disabilities can interact with the form using
assistive technologies.
b. Verify that form elements are properly labelled,
associated with their respective input fields, and
compatible with screen readers.

6. Autocomplete and Suggestions:


a. If the form supports autocomplete or
suggestions, test that the feature functions
correctly.
b. Enter partial input and verify that relevant
suggestions are provided or auto-filled based on
user input or previous data.

7. Form Reset:
a. Test the form reset functionality to ensure that
all input fields are cleared and reset to their
default state when the user triggers the reset
action.

8. Cross-browser and Cross-device Compatibility:


a. Perform testing on different web browsers and
devices to ensure consistent functionality and
appearance of the form across platforms.

9. Data Security:
a. If the form involves sensitive information or
transactions, test the security measures in place,
such as SSL encryption, data masking, and
protection against common vulnerabilities like
SQL injection or cross-site scripting (XSS).

10. Usability and User Experience:


a. Evaluate the overall usability and user
experience of the form.
b. Test factors such as field placement, label clarity,
intuitive user interface, and responsiveness to
different screen sizes.

• By thoroughly testing the functionality and user experience of forms, you can ensure that users can successfully interact with the website's forms, submit data accurately, and receive appropriate feedback or validation messages.
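The input-validation checks described above (required fields, data-format rules, error messages) can be sketched as a small server-side-style validator. The field names, the email pattern, and the error wording are illustrative, not a standard.

```python
# Sketch of input validation for a simple form: a required field
# plus an email-format check. Rules and messages are illustrative.
import re

# Deliberately simple pattern: something@something.tld
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_form(data):
    """Return a list of error messages; an empty list means the form is valid."""
    errors = []
    if not data.get("name"):
        errors.append("Name is required.")
    if not EMAIL_RE.match(data.get("email", "")):
        errors.append("Email address is not in a valid format.")
    return errors

print(validate_form({"name": "Asha", "email": "asha@example.com"}))  # []
print(validate_form({"name": "", "email": "not-an-email"}))
# ['Name is required.', 'Email address is not in a valid format.']
```

A tester exercises exactly these paths from the UI: submit valid data and expect success, then submit empty or malformed data and expect each rule's error message to appear.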

IV. Web Testing: Functionality Testing of a Website - Business Cycles

• In web testing, functionality testing of a website involves verifying that all business cycles or critical workflows on the website are working correctly.
• Business cycles refer to the end-to-end processes that
a user goes through while interacting with the
website to accomplish specific tasks or achieve
desired outcomes.

Identify Business Cycles:


a. Understand the different business cycles or
workflows on the website, such as user
registration, product purchase, form submission,
search functionality, payment processing, or
account management.
b. Each business cycle represents a series of steps
that users follow to achieve a specific goal.

• By conducting thorough functionality testing of business cycles, you can ensure that the website's core workflows and user interactions are functioning correctly, providing a seamless and satisfying user experience.

VI. Web Testing: Compliance to Standards (e.g., W3C)

• When it comes to web testing, ensuring compliance to standards set by the World Wide Web Consortium (W3C) is crucial.
• The W3C establishes guidelines and specifications
for web technologies to ensure interoperability,
accessibility, and best practices.

1. HTML Validation:
a. Use W3C's HTML Validator or other validation
tools to check if the HTML code of the web pages
adheres to the standards defined by the W3C.
b. This involves validating the structure, syntax, and usage of HTML elements, attributes, and values.
c. Fix any HTML validation errors or warnings to ensure compliance.
2. CSS Validation:
a. Employ W3C's CSS Validator or other tools to
validate the CSS code used in the web pages.
b. This involves checking for any syntax errors,
incorrect selectors, unsupported properties, or
conflicting styles.
c. Rectify any CSS validation issues to ensure
adherence to W3C standards.

3. Accessibility Testing:
a. Conduct accessibility testing to verify if the
website conforms to W3C's Web Content
Accessibility Guidelines (WCAG).
b. This includes ensuring that web pages are
perceivable, operable, understandable, and
robust for users with disabilities.
c. Test for proper heading structure, alternative text
for images, keyboard navigation, color contrast,
and other accessibility requirements.

4. JavaScript Compliance:
a. Validate the JavaScript code used in the web
pages against the ECMAScript standards defined
by the W3C.
b. Ensure that JavaScript functions, syntax, and APIs
are used correctly and consistently.
c. Use tools like ESLint or JSHint to check for any
code quality issues or violations of W3C
guidelines.

• By testing for compliance to W3C standards, you ensure that your website follows best practices, promotes accessibility, and provides a consistent and reliable experience across different browsers and devices.
• It helps maintain interoperability, enhances usability,
and contributes to the overall quality of your web
application.
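HTML validation as described above can also be scripted against the public W3C Nu HTML Checker, which accepts a page's source via POST and returns a JSON report. The sketch below only constructs the request and parses a sample report; actually submitting requires network access and should respect the service's usage policy.

```python
# Sketch: submitting a page to the W3C Nu HTML Checker and reading its
# JSON report. The request is built but not sent here.
import urllib.request

CHECKER_URL = "https://validator.w3.org/nu/?out=json"

def build_request(html_source):
    """Prepare the POST request the Nu checker expects (raw HTML body)."""
    return urllib.request.Request(
        CHECKER_URL,
        data=html_source.encode("utf-8"),
        headers={"Content-Type": "text/html; charset=utf-8"},
        method="POST",
    )

def count_errors(report_json):
    """Count messages of type 'error' in the checker's JSON report."""
    return sum(1 for m in report_json.get("messages", [])
               if m.get("type") == "error")

req = build_request("<!DOCTYPE html><title>t</title><p>ok</p>")
print(req.full_url)  # https://validator.w3.org/nu/?out=json
# With network access: report = json.load(urllib.request.urlopen(req))

# Counting against a sample report shape:
print(count_errors({"messages": [{"type": "error"}, {"type": "info"}]}))  # 1
```

A common CI convention is to fail the build when `count_errors` is non-zero, turning standards compliance into an automated gate rather than a manual review step.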

VII. Web Testing: API testing

• API testing is a type of web testing that focuses on testing the functionality and reliability of APIs (Application Programming Interfaces).
• APIs allow different software systems to
communicate and interact with each other, enabling
the exchange of data and functionality.

1. Understanding the API:


a. Begin by understanding the API's
documentation, including its endpoints,
request/response formats, authentication
mechanisms, and any specific requirements or
constraints.
b. Gain a clear understanding of the API's intended
functionality and behaviour.

2. Test Environment Setup:


a. Set up the necessary test environment, which
may include tools or frameworks for sending API
requests and capturing responses.
b. Use a suitable tool, such as Postman, to make API
calls and inspect the responses.
3. Functional Testing:
a. Perform functional testing to validate the API's
expected behaviour.
b. This involves sending different types of requests
(GET, POST, PUT, DELETE) to various API
endpoints and verifying that the responses are
correct.
c. Test different scenarios, such as valid requests,
invalid requests, edge cases, and error handling.

• API testing plays a crucial role in ensuring the reliability, functionality, and performance of web applications that rely on APIs.
• By thoroughly testing APIs, you can identify and
address any issues early in the development process,
leading to more robust and reliable software systems.
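The functional-testing step above boils down to asserting invariants on each response. The sketch below checks a simulated response for a hypothetical `GET /users/{id}` endpoint; the endpoint name and expected fields are illustrative, and a real run would obtain the response with a client such as Postman, requests, or urllib.

```python
# Sketch of the functional checks an API test applies to a response.
# Endpoint and field names are illustrative.

def check_get_user_response(status_code, body):
    """Assert the invariants expected from a hypothetical GET /users/{id}."""
    assert status_code == 200, f"expected 200, got {status_code}"
    assert "id" in body and "email" in body, "missing required fields"
    assert "@" in body["email"], "email field is malformed"
    return True

# Simulated response, as if parsed from the API's JSON:
ok = check_get_user_response(200, {"id": 7, "email": "a@b.com"})
print(ok)  # True
```

The same function, pointed at an invalid-request scenario (e.g., a 404 for a missing user), should fail loudly, which is exactly how error-handling cases get covered.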

VIII. Web Testing: Usability testing explanation

• Usability testing is a type of web testing that focuses on evaluating the user-friendliness and ease of use of a website or web application.
• The goal of usability testing is to assess how well
users can navigate, interact with, and accomplish
tasks on the website.
• Usability testing helps identify usability flaws and
provides insights into how real users interact with the
website.
• By conducting usability testing, website owners and
developers can make informed design decisions,
improve user satisfaction, and increase the chances of
users successfully accomplishing their goals on the
website.

IX. Web Testing: Interface testing

• Interface testing, also known as API testing, is a type of web testing that focuses on testing the interfaces between different software components or systems.
• It involves testing the interaction and data exchange
between various components to ensure they
communicate correctly and function as intended.
• Interface testing is crucial for ensuring that the
different components of a web application can
communicate effectively and exchange data
accurately.
• By thoroughly testing the interfaces, potential
integration issues, data inconsistencies, and
compatibility problems can be identified and
resolved, leading to a more robust and reliable web
application.

X. Web Testing: Database testing

• Database testing is a type of web testing that focuses on verifying the integrity, accuracy, and performance of the underlying database used by a web application.
• It involves testing the interactions between the web
application and the database to ensure data is stored
and retrieved correctly.

1. Data Validation:
a. Test the accuracy and integrity of the data stored
in the database.
b. This includes verifying that data is correctly
inserted, updated, and deleted according to the
defined business rules and constraints.
c. Validate data types, field lengths, constraints, and
relationships between different tables.

2. Data Manipulation:
a. Test the ability of the web application to retrieve
and display data from the database correctly.
b. This includes testing various data retrieval
scenarios such as searching, sorting, filtering, and
pagination.
c. Verify that the displayed data matches the data
stored in the database.

3. Data Integrity:
a. Test the integrity of the database by ensuring
that referential integrity is maintained.
b. This involves testing the relationships between
different tables, such as foreign key constraints,
and verifying that data modifications do not
violate the integrity rules.

4. Performance:
a. Test the performance of database operations,
such as data retrieval and updates.
b. This includes measuring response times for
different types of queries and ensuring that the
database can handle the expected load without
degradation in performance.

5. Security:
a. Test the security measures implemented in the
database.
b. This includes testing authentication and
authorization mechanisms, ensuring sensitive
data is appropriately encrypted, and validating
access controls to prevent unauthorized access
or manipulation of data.

6. Data Consistency:
a. Test the consistency of the data across different
tables and database components.
b. This includes verifying that data modifications or
updates are propagated correctly to related
tables and that data synchronization processes,
such as data replication or data migration, are
functioning properly.

7. Error Handling:
a. Test the error handling capabilities of the
database.
b. This includes testing error conditions such as
database connection failures, constraint
violations, and handling of unexpected
exceptions.
c. Verify that appropriate error messages are
displayed, and error logs or notifications are
generated.

8. Database Backup and Recovery:


a. Test the backup and recovery mechanisms of the
database.
b. This includes performing tests to ensure that
database backups are created and stored
correctly, and that data can be successfully
restored in the event of a failure or data loss.

9. Data Volume and Scalability:


a. Test the performance and scalability of the
database with large data volumes.
b. This involves simulating scenarios with a
significant amount of data to assess the
performance impact and ensure that the
database can handle increased data volume
without performance degradation.

10. Data Migration and Upgrades:


a. Test the migration or upgrade process of the
database when transitioning to a new version or
making schema changes.
b. This includes testing data migration scripts,
verifying data integrity after the migration, and
ensuring that the upgraded database functions
correctly.

• Database testing is essential to ensure the reliability, accuracy, and performance of the data storage and retrieval mechanisms within a web application.
• By thoroughly testing the interactions between the
web application and the database, potential issues
such as data corruption, performance bottlenecks,
and security vulnerabilities can be identified and
resolved, leading to a robust and efficient web
application.
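The referential-integrity check described above (item 3) can be demonstrated with an in-memory SQLite database: an insert that violates a foreign key must be rejected. The schema below is a minimal illustration, not a real application's model.

```python
# Sketch: checking referential integrity with in-memory SQLite.
# An orders row must reference an existing users row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id))""")

conn.execute("INSERT INTO users VALUES (1, 'Asha')")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid: user 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # no such user
    violated = False
except sqlite3.IntegrityError:
    violated = True

print(violated)  # True -- the constraint rejected the orphan row
```

A database test suite runs many variants of this pattern: valid inserts must succeed, and every constraint (foreign keys, NOT NULL, uniqueness) must demonstrably reject the corresponding bad data.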

XI. Web Testing: Non-Functional - Performance Testing

• Performance testing is a type of non-functional testing that focuses on evaluating the performance characteristics of a web application under specific conditions.
• It aims to assess how well the application performs in
terms of response time, scalability, reliability, and
resource usage.
• Performance testing helps ensure that a web
application meets the performance requirements and
provides a satisfactory user experience.
• By identifying performance issues early in the
development cycle, such as slow response times or
resource bottlenecks, appropriate optimizations can
be implemented to enhance the application's
performance, scalability, and reliability.
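At its core, measuring response time means timing an operation and comparing the result against a budget. The sketch below uses `time.sleep` as a stand-in for a real request; dedicated tools (load generators, profilers) build on this same measurement under concurrent load.

```python
# Sketch: measuring the response time of an operation, the basic
# building block of a performance check. time.sleep stands in for
# a real request to the application under test.
import time

def timed(operation):
    """Run operation() and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = operation()
    return result, time.perf_counter() - start

result, elapsed = timed(lambda: time.sleep(0.01) or "done")
print(result, elapsed < 1.0)  # the call should finish well under a 1-second budget
```

A performance test repeats such measurements many times and reports percentiles (e.g., 95th-percentile latency) rather than a single sample, since response times vary between runs.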

XII. Web Testing: Non-Functional - Security Testing

• Security testing is a critical component of web testing that focuses on identifying vulnerabilities and weaknesses in a web application's security measures.
• The primary objective of security testing is to ensure
that the application is protected against potential
threats and unauthorized access.

1. Authentication Testing:
a. Test the effectiveness of the web application's
authentication mechanisms.
b. This involves verifying if the authentication
process correctly validates user credentials,
enforces password policies, handles session
management securely, and prevents
unauthorized access to sensitive areas of the
application.

2. Authorization Testing:
a. Test the authorization controls implemented in
the web application.
b. This involves verifying if users are granted
appropriate access privileges based on their
roles and permissions.
c. It includes testing scenarios such as role-based
access control, access to specific resources, and
restrictions on privileged operations.

3. Input Validation Testing:


a. Test the web application's ability to handle
different types of inputs securely.
b. This involves checking if the application properly
validates and sanitizes user inputs to prevent
common security vulnerabilities such as SQL
injection, cross-site scripting (XSS), and
command injection attacks.
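The SQL injection risk mentioned above can be demonstrated in a few lines with an in-memory SQLite database (the `users` table and its data are hypothetical): a parameterized query treats hostile input as data, while naive string concatenation executes it as SQL.

```python
# Hedged sketch: why input validation testing targets SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

malicious = "' OR '1'='1"

# Parameterized query: the input is bound as a value, not parsed as SQL,
# so the injection attempt matches no rows.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
assert rows == []  # injection neutralized

# Naive concatenation (exactly what this testing should catch):
unsafe_sql = "SELECT name FROM users WHERE name = '" + malicious + "'"
leaked = conn.execute(unsafe_sql).fetchall()
assert len(leaked) == 2  # the injected OR clause returned every user
```

A security test suite would probe every input field with payloads like `malicious` and verify the application responds as the parameterized case does.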

4. Security Configuration Testing:


a. Test the security configuration of the web
application and its underlying infrastructure.
b. This includes checking for secure configurations
of web servers, databases, firewalls, and other
components to ensure that default or weak
configurations are not exposing potential
vulnerabilities.

5. Session Management Testing:


a. Test the security of session management
mechanisms in the web application.
b. This involves verifying if sessions are properly
initiated, maintained, and terminated to prevent
session hijacking or fixation attacks.
c. It also includes testing session timeout, secure
cookie usage, and session data storage.
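The session-timeout check above can be illustrated against a toy session store; the `SessionStore` class here is hypothetical, standing in for the application's real session mechanism.

```python
# Hedged sketch of a session-timeout test.
import time
import uuid

class SessionStore:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self._sessions = {}  # session_id -> creation timestamp

    def create(self):
        sid = uuid.uuid4().hex  # unguessable session identifier
        self._sessions[sid] = time.monotonic()
        return sid

    def is_active(self, sid):
        started = self._sessions.get(sid)
        if started is None:
            return False
        return (time.monotonic() - started) < self.timeout

store = SessionStore(timeout_seconds=0.05)
sid = store.create()
assert store.is_active(sid)              # freshly created session is valid
time.sleep(0.06)
assert not store.is_active(sid)          # expired after the timeout window
assert not store.is_active("forged-id")  # unknown ids are rejected
```

Against a real application, the same three assertions would be made through login requests and cookie inspection rather than direct object calls.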

• Security testing helps ensure that a web application is
resilient to security threats and provides a secure
environment for users' data and interactions.
• By identifying and addressing security vulnerabilities
during the testing phase, potential risks can be
mitigated, and the application's overall security
posture can be enhanced.

XIV. Web Testing: Non-Functional - Challenges and Best Practices

Challenges:

1. Test Environment Setup:


a. Setting up a realistic test environment that
accurately simulates user behaviour, network
conditions, and device configurations can be
challenging.

2. Test Data Management:


a. Managing a large volume of test data and
ensuring its accuracy, completeness, and
confidentiality can be a challenge, especially for
web applications that rely on dynamic data.

3. Scalability and Performance Testing:


a. Testing the performance and scalability of a web
application under different load conditions,
concurrent users, and traffic patterns can be
complex and resource-intensive.

4. Security Vulnerability Assessment:


a. Identifying and addressing security
vulnerabilities in a web application requires
expertise in various security testing techniques,
including penetration testing and vulnerability
scanning.

5. Compatibility Testing:
a. Ensuring compatibility across multiple browsers,
operating systems, and devices adds complexity
to web testing, as each platform may have its
unique behaviour and limitations.

Best Practices:

1. Comprehensive Test Planning:


a. Create a detailed test plan that covers all non-
functional aspects, including performance,
security, compatibility, and usability testing.
b. Define clear objectives, test scenarios, and
success criteria.

2. Realistic Test Environment:


a. Set up a test environment that closely resembles
the production environment, including network
configurations, server specifications, and
simulated user behavior.

3. Test Data Management:


a. Develop a strategy for generating and managing
test data to cover different scenarios and edge
cases. Ensure data integrity, privacy, and
compliance with relevant regulations.

4. Performance Testing:
a. Use appropriate tools and techniques to conduct
performance testing, including load testing,
stress testing, and scalability testing.
b. Monitor system resources, response times, and
user experience under different load conditions.

5. Security Testing:
a. Engage experts in security testing to perform
thorough vulnerability assessments, penetration
testing, and code review to identify and address
potential security vulnerabilities.

6. Compatibility Testing:
a. Test the application on a wide range of browsers,
operating systems, and devices to ensure
consistent behavior and user experience across
different platforms.
b. Use responsive design principles and consider
mobile responsiveness.

7. Usability Testing:
a. Involve real users or representative personas in
usability testing to gather feedback on the
application's user interface, navigation, and
overall user experience.
b. Use appropriate usability testing techniques such
as interviews, surveys, and user observations.

8. Continuous Testing:
a. Implement a continuous testing approach where
non-functional testing is performed throughout
the development lifecycle, enabling early
detection of issues and faster resolution.

9. Test Automation:
a. Utilize test automation tools and frameworks to
streamline the execution of non-functional tests
and improve efficiency.
b. Automate repetitive and time-consuming tasks to
focus on more complex testing scenarios.

10. Collaboration and Communication:


a. Foster effective collaboration between
development, testing, and other stakeholders to
ensure clear communication, shared
understanding of non-functional requirements,
and timely resolution of issues.

• By following these best practices, organizations can
overcome challenges and ensure the effective testing
of non-functional aspects of web applications,
resulting in high-quality, secure, and user-friendly
products.

XV. DB Testing: Structural - Objects

• In database testing, structural testing focuses on
verifying the correctness and integrity of the database
objects, such as tables, views, indexes, triggers, stored
procedures, and functions.
• The goal is to ensure that the database objects are
defined correctly and operate as expected.
• Object testing involves testing individual database
objects to validate their structure, behavior, and
functionality.

1. Tables:
a. Verify the correctness of table structures,
including column names, data types, constraints
(e.g., primary key, foreign key, unique key), and
default values. Test table relationships and
ensure proper indexing.
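Table-structure checks like these can be automated by comparing the live schema against an expected definition. The sketch below uses SQLite's `PRAGMA table_info`; the `orders` table and its expected columns are hypothetical.

```python
# Hedged sketch of a table-structure verification.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total REAL DEFAULT 0.0
    )
""")

# PRAGMA table_info returns (cid, name, type, notnull, default, pk) rows.
actual = {
    row[1]: {"type": row[2], "notnull": bool(row[3]), "pk": bool(row[5])}
    for row in conn.execute("PRAGMA table_info(orders)")
}

expected = {
    "id": {"type": "INTEGER", "notnull": False, "pk": True},
    "customer": {"type": "TEXT", "notnull": True, "pk": False},
    "total": {"type": "REAL", "notnull": False, "pk": False},
}
assert actual == expected, f"schema drift detected: {actual}"
```

Other databases expose the same information through their catalog views (e.g. `INFORMATION_SCHEMA.COLUMNS`), so the comparison pattern carries over.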

2. Views:
a. Validate the accuracy of data retrieved from
views by comparing the results with the
underlying tables.
b. Verify the view definition and any related
permissions or security settings.

3. Indexes:
a. Test the efficiency and effectiveness of indexes in
improving query performance.
b. Check if indexes are created on appropriate
columns and optimize them if necessary.
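Whether a query actually benefits from an index can be checked with the query planner. A sketch using SQLite's `EXPLAIN QUERY PLAN` (table, index, and column names are hypothetical):

```python
# Hedged sketch: confirm a lookup uses the index rather than a full scan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchall()
plan_text = " ".join(str(row) for row in plan)

# An indexed lookup mentions the index by name; a full table scan
# would read "SCAN orders" instead.
assert "idx_orders_customer" in plan_text, f"index not used: {plan_text}"
```

Equivalent checks use `EXPLAIN` in MySQL or `EXPLAIN ANALYZE` in PostgreSQL.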

4. Triggers:
a. Ensure triggers are properly defined and execute
as expected when specific events occur, such as
data modifications (insert, update, delete) on
related tables.
b. Test trigger conditions, actions, and error
handling.

5. Stored Procedures:
a. Validate the correctness and functionality of
stored procedures by executing them with
different input parameters and verifying the
expected output.
b. Check for proper exception handling and error
reporting.

6. Functions:
a. Test user-defined functions to ensure they return
the expected results based on the input
parameters.
b. Verify the correctness of the function logic and
any data manipulation performed.
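Function testing follows the same call-and-compare pattern. SQLite lets a test register a Python function as a UDF via `create_function`; in other databases the function would be written in SQL or PL/SQL, but the verification idea is identical. The `discounted` business rule here is hypothetical.

```python
# Hedged sketch of a user-defined-function test.
import sqlite3

def discounted(price, percent):
    """Business rule under test: apply a percentage discount."""
    return round(price * (1 - percent / 100.0), 2)

conn = sqlite3.connect(":memory:")
conn.create_function("discounted", 2, discounted)

# Verify expected results for normal and boundary inputs.
assert conn.execute("SELECT discounted(100.0, 25)").fetchone()[0] == 75.0
assert conn.execute("SELECT discounted(100.0, 0)").fetchone()[0] == 100.0
assert conn.execute("SELECT discounted(0.0, 50)").fetchone()[0] == 0.0
```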

• During structural testing of database objects, test
cases are designed to cover different scenarios,
boundary values, and edge cases.
• Testers may use SQL queries, scripts, or database
testing tools to perform object testing.
• The focus is on verifying the accuracy, integrity, and
performance of the database objects.
• By conducting structural testing on database objects,
organizations can ensure the reliability and
consistency of the database, which is crucial for the
proper functioning of the applications relying on the
data stored in the database.

XVI. DB Testing: Structural - Data Integrity

• In database testing, structural data integrity testing is
focused on ensuring the integrity and consistency of
the data stored in the database.
• It involves validating that the data conforms to
predefined rules, constraints, and relationships
defined in the database schema.

1. Primary Key Constraints:


a. Verify that primary key constraints are properly
defined and enforced.
b. Test scenarios where duplicate or null values are
inserted into primary key columns and ensure
that appropriate error handling occurs.
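The duplicate-key scenario above looks like this in practice: the second insert must raise an integrity error and leave the table unchanged (the `customers` table is hypothetical; SQLite stands in for the real database).

```python
# Hedged sketch of a primary-key constraint test.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice')")

try:
    conn.execute("INSERT INTO customers VALUES (1, 'bob')")  # duplicate PK
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

assert duplicate_rejected
# Exactly one row survives; the duplicate never made it in.
assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 1
```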

2. Foreign Key Constraints:


a. Test the integrity of foreign key relationships
between tables.
b. Validate that referential integrity is maintained,
meaning that values in the foreign key columns
must exist in the referenced primary key
columns of related tables.
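A referential-integrity check can be sketched the same way: inserting an orphan row must fail. Note that SQLite enforces foreign keys only after `PRAGMA foreign_keys = ON`; the table names are hypothetical.

```python
# Hedged sketch of a foreign-key constraint test.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id)
    );
""")
conn.execute("INSERT INTO customers VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid reference

try:
    conn.execute("INSERT INTO orders VALUES (11, 999)")  # no such customer
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

assert orphan_rejected  # referential integrity enforced
```

Deletes and updates on the parent table deserve the same treatment, covering any `ON DELETE`/`ON UPDATE` rules the schema declares.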

3. Unique Constraints:
a. Ensure that unique constraints are enforced,
meaning that duplicate values are not allowed in
columns marked as unique.
b. Test scenarios where duplicate or null values are
inserted into unique columns and verify that the
appropriate error messages are generated.

4. Check Constraints:
a. Validate the correctness of check constraints,
which define specific conditions that data in a
column must satisfy.
b. Test different scenarios to ensure that data meets
the defined conditions and that any violations are
properly handled.
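A check-constraint test exercises both sides of the rule. In this sketch the hypothetical rule is that an order quantity must be positive:

```python
# Hedged sketch of a check-constraint test.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE order_items (
        id INTEGER PRIMARY KEY,
        quantity INTEGER CHECK (quantity > 0)
    )
""")

conn.execute("INSERT INTO order_items VALUES (1, 5)")  # satisfies the rule

try:
    conn.execute("INSERT INTO order_items VALUES (2, -3)")  # violates CHECK
    violation_rejected = False
except sqlite3.IntegrityError:
    violation_rejected = True

assert violation_rejected
```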

5. Data Types:
a. Verify that the data types of columns are defined
correctly and that they can accommodate the
expected range of values.
b. Test scenarios where incorrect data types or out-
of-range values are inserted and validate that the
appropriate error handling occurs.

6. Data Validation:
a. Check the correctness of data validation rules or
business rules implemented through triggers,
stored procedures, or application logic.
b. Test scenarios where data validation rules are
violated and verify that the expected actions or
error messages are triggered.

• Structural data integrity testing involves designing
test cases to cover different scenarios and edge cases
that could potentially violate the defined constraints
and rules.
• It requires a deep understanding of the database
schema and the relationships between tables.
• Testers can use SQL queries, data manipulation
statements, and database testing tools to perform
structural data integrity testing.
• The objective is to identify any data inconsistencies,
violations of constraints, or other integrity issues that
could impact the reliability and accuracy of the data
stored in the database.

XVII. DB Testing: Structural - Data Mapping

• In database testing, structural data mapping refers to
the process of verifying that the data stored in the
database is correctly mapped and aligned with the
data model or schema defined for the application or
system.
• It involves validating that the tables, columns,
relationships, and data types in the database align
with the defined data model.
• Structural data mapping in DB testing requires a
thorough understanding of the data model and the
database schema.
• Testers can use SQL queries, data comparison tools,
and database testing frameworks to perform mapping
checks and validate the alignment between the data
model and the database structure.

• Table Mapping:
Verify that the tables in the database align with the
tables defined in the data model.
• Column Mapping:
Validate that the columns in the tables are mapped
correctly to the corresponding attributes or fields in
the data model.
• Relationship Mapping:
Test the relationships between tables to ensure they
are correctly defined and maintained. Validate that
foreign key relationships are properly established and
that referential integrity is enforced.

• The goal of structural data mapping is to ensure that
the database accurately reflects the defined data
model, enabling data integrity, consistency, and
efficient retrieval of information.
• By validating the mapping, testers can identify any
discrepancies or inconsistencies that could impact the
proper functioning of the application or system.
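The table- and column-mapping checks described above can be automated by diffing the live schema against the data model. A sketch using SQLite's catalog (the tables and the `data_model` dictionary are hypothetical):

```python
# Hedged sketch of a structural data-mapping check.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
""")

# Expected mapping derived from the data model.
data_model = {
    "customers": {"id", "email"},
    "orders": {"id", "customer_id"},
}

actual_tables = {
    row[0]
    for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )
}
assert actual_tables == set(data_model)  # table mapping holds

for table, expected_cols in data_model.items():
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    assert cols == expected_cols, f"column mismatch in {table}: {cols}"
```

Extending the `data_model` entries with types and foreign keys turns the same loop into a relationship-mapping check.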

XX. What Is Cloud Testing?

• Cloud testing refers to the practice of testing software
applications, systems, or services in a cloud
computing environment.
• It involves using cloud infrastructure, platforms, and
services to conduct various testing activities such as
functional testing, performance testing, security
testing, and more.
• Cloud testing offers several advantages over
traditional testing approaches, including scalability,
flexibility, cost-effectiveness, and accessibility.

1. Infrastructure-as-a-Service (IaaS):
a. Cloud testing leverages IaaS providers like
Amazon Web Services (AWS), Microsoft Azure,
or Google Cloud Platform (GCP) to provision
and manage the necessary infrastructure for
testing.
b. This eliminates the need for organizations to
invest in and maintain their own physical
hardware.

2. On-Demand Resource Allocation:


a. With cloud testing, resources such as virtual
machines, storage, and networking can be
provisioned on-demand.
b. This allows testers to quickly set up and tear
down test environments as needed, optimizing
resource utilization and reducing costs.

3. Scalability and Elasticity:


a. Cloud environments provide the ability to scale
resources up or down based on testing
requirements.
b. This scalability allows for testing applications
under different loads and user volumes,
ensuring optimal performance and
responsiveness.

4. Collaboration and Accessibility:


a. Cloud testing facilitates collaboration among
team members by providing a centralized
testing environment accessible from anywhere
with an internet connection.
b. Testers can work concurrently on the same test
environment, enhancing efficiency and
collaboration.

5. Cost Optimization:
a. Cloud testing offers cost optimization benefits
by allowing organizations to pay only for the
resources and services they use.
b. Testing teams can scale resources as needed
and avoid upfront infrastructure costs, making
it a cost-effective approach for testing projects.

6. Service Virtualization:
a. Cloud testing often utilizes service
virtualization techniques to simulate
dependencies and external services that may
not be available during testing.
b. This enables comprehensive testing of the
application's behavior in different scenarios.

7. Security and Compliance:


a. Cloud service providers typically offer robust
security measures and compliance
certifications, ensuring the confidentiality,
integrity, and availability of testing
environments and data.
b. However, it is essential to address any specific
security concerns and comply with regulatory
requirements when performing cloud testing.

• Cloud testing provides organizations with the ability
to test applications and systems in a flexible, scalable,
and cost-effective manner.
• It enables faster time-to-market, improved quality,
and enhanced collaboration among testing teams.
• By leveraging cloud resources and services,
organizations can optimize their testing efforts and
achieve reliable and efficient software delivery.

XXI. Limitations of On-Premise Testing

• On-premise testing refers to the traditional approach
of conducting software testing within an
organization's own infrastructure, where the testing
environment is set up and managed locally.
• While on-premise testing has its advantages, it also
has some limitations.

1. Limited Scalability:
a. On-premise testing environments are often
limited in terms of scalability.
b. Organizations need to invest in physical
infrastructure such as servers, storage, and
networking equipment, which may have
limitations in terms of capacity.
c. Scaling up resources to accommodate large-scale
testing can be challenging and time-consuming.

2. Higher Infrastructure and Maintenance Costs:


a. Setting up and maintaining an on-premise testing
infrastructure can be costly.
b. Organizations need to invest in hardware,
software licenses, and ongoing maintenance and
upgrades.
c. These costs can be significant, especially for
small or medium-sized organizations with
limited budgets.

3. Limited Accessibility:
a. On-premise testing is typically limited to the
physical location where the infrastructure is set
up.
b. This can restrict access to testing environments
and hinder collaboration among distributed
testing teams.
c. It may also limit the ability to perform remote or
distributed testing.

4. Longer Setup Time:


a. Building an on-premise testing environment
requires time and effort.
b. It involves procuring hardware, installing
software, configuring networks, and ensuring
compatibility with various testing tools and
technologies.
c. This setup time can delay the start of testing
activities and project timelines.

5. Resource Allocation and Utilization:


a. On-premise testing environments often face
challenges in resource allocation and utilization.
b. Organizations may have dedicated testing
environments that remain idle when not in use,
leading to underutilization of resources.
c. Conversely, during peak testing periods, resource
constraints may occur, impacting testing
efficiency.

6. Limited Disaster Recovery Options:


a. On-premise testing environments may have
limited disaster recovery options.
b. In the event of hardware failures, power outages,
or other infrastructure issues, the testing
environment may become unavailable or require
significant downtime for recovery.

7. Lack of Testing Tools and Infrastructure Updates:


a. Maintaining up-to-date testing tools,
frameworks, and infrastructure can be
challenging with on-premise testing.
b. Organizations need to invest in regular updates
and upgrades to ensure compatibility with the
latest technologies and best practices.

8. Difficulty in Testing Diverse Environments:


a. On-premise testing may face challenges when
testing across different operating systems,
browsers, devices, or network configurations.
b. It may require additional investments in
hardware and software to replicate diverse user
environments accurately.

• Despite these limitations, on-premise testing still
offers control, privacy, and customization advantages
for certain organizations with specific requirements.
• However, many organizations are now leveraging
cloud-based testing solutions to overcome these
limitations and benefit from the scalability, flexibility,
and cost-effectiveness offered by cloud testing
platforms.

XXII. Types of Cloud Testing - Functional and Non-Functional

• Functional testing in the context of cloud testing
refers to the verification of the functional
requirements and behaviour of a cloud-based
application or system.
• It focuses on ensuring that the application functions
correctly and meets the specified functional
requirements.

1. Unit Testing:
a. Unit testing involves testing individual
components or units of code in isolation to
ensure that they function correctly.
b. In cloud testing, unit testing can be performed on
specific cloud services or modules to validate
their functionality.

2. Integration Testing:
a. Integration testing verifies the interaction and
integration between different components,
services, or modules within a cloud application.
b. It ensures that the components work together
seamlessly and communicate effectively.

3. System Testing:
a. System testing involves testing the entire cloud
application as a whole to validate its behaviour
and functionality.
b. It focuses on testing the end-to-end flow and
interactions between different components and
services in the cloud environment.

4. User Acceptance Testing (UAT):


a. UAT is performed to ensure that the cloud
application meets the requirements and
expectations of the end users.
b. It involves conducting tests based on real-world
scenarios and user workflows to validate the
usability and functionality of the application.

5. Regression Testing:
a. Regression testing is performed to verify that
changes or updates to the cloud application have
not introduced any unintended side effects or
regression issues.
b. It involves retesting previously tested
functionalities to ensure their continued proper
functioning.

6. Performance Testing:
a. Performance testing is done to assess the
performance and scalability of the cloud
application under various load conditions.
b. It measures response times, throughput,
resource utilization, and other performance
metrics to identify bottlenecks and optimize the
application's performance.

7. Security Testing:
a. Security testing focuses on identifying
vulnerabilities, weaknesses, and potential
security threats in the cloud application.
b. It includes testing authentication, access
controls, data encryption, and other security
measures to ensure the confidentiality, integrity,
and availability of the application.

8. Compatibility Testing:
a. Compatibility testing ensures that the cloud
application functions correctly across different
devices, operating systems, browsers, and
network configurations.
b. It validates the application's compatibility with
various platforms to provide a consistent user
experience.

• These are some of the common types of testing performed on cloud-based applications; of the list above, performance, security, and compatibility testing fall on the non-functional side.
• The specific types and approaches may vary depending on the nature of the cloud application and the testing objectives.

Interview Questions

1. What is the difference between verification and
validation in software testing?

A: Verification refers to the process of evaluating a system
or component to ensure that it meets specified
requirements. Validation, on the other hand, involves
evaluating a system during or at the end of the
development process to determine whether it satisfies the
specified business requirements.

2. What is the difference between functional testing and
non-functional testing?
A: Functional testing focuses on testing the functionality of
the software application, ensuring that it meets the
intended requirements. Non-functional testing, on the
other hand, is concerned with testing aspects such as
performance, usability, security, and reliability.

3. What is the importance of test case prioritization?

A: Test case prioritization is important to ensure that
testing efforts are focused on areas that are more critical
or likely to have defects. Prioritizing test cases helps in
maximizing the testing coverage and identifying critical
issues early in the testing process.

4. What is the difference between positive testing and
negative testing?

A: Positive testing involves testing the system by providing
valid inputs and expecting the system to produce the
expected output. Negative testing, on the other hand,
involves testing the system by providing invalid inputs and
expecting the system to handle them gracefully without
any errors or unexpected behaviour.

5. What is the role of a test plan in software testing?

A: A test plan is a document that outlines the objectives,
scope, approach, and schedule of testing activities. It
provides a roadmap for the testing process, including the
test strategy, test environments, test deliverables, and
resource allocation. It helps in ensuring that the testing
process is well-organized and meets the project
requirements.

6. How do you ensure the completeness of testing?

A: To ensure the completeness of testing, a combination of
techniques such as requirements traceability, test coverage
analysis, risk-based testing, and adequate test case design
should be employed. These techniques help in ensuring
that all requirements are tested, and the critical areas are
thoroughly covered.

7. What is the purpose of test documentation in
software testing?

A: Test documentation serves as a reference and
communication tool for the testing team. It includes
documents such as test plans, test cases, test scripts, defect
reports, and test summary reports. Test documentation
helps in maintaining a standardized and consistent
approach to testing, aids in knowledge transfer, and
provides a historical record of testing activities.

8. How do you handle a situation where the
requirements are not clear or incomplete?

A: When faced with unclear or incomplete requirements, it
is important to collaborate with stakeholders, such as
business analysts and product owners, to gain clarity. This
may involve conducting meetings, seeking additional
information, and documenting assumptions. Clear
communication and documentation of any ambiguities or
assumptions are crucial to ensure that testing aligns with
the intended requirements.

9. What is the difference between system testing and
integration testing?

A: System testing involves testing the entire system as a
whole to ensure that all components work together
correctly and meet the specified requirements. Integration
testing, on the other hand, focuses on testing the
interaction and integration between individual
components or modules to ensure that they function
correctly when integrated.

10. How do you handle regression testing in an Agile
environment with frequent changes?

A: In an Agile environment, regression testing is typically
performed continuously throughout the development
cycle. Test automation plays a crucial role in ensuring
efficient and effective regression testing. Automated test
scripts are created to cover the critical functionality, and
they are executed with each iteration to quickly identify
any regression issues.
