Testing Methods: Functional & Regression

Software testing and tools
Chapter 1

Introduction To Test Case Design

A. Short answer questions:

1. Define software testing.

Ans :
Software testing is the process of evaluating a software application or system to ensure that it
meets specified requirements and functions correctly. It involves executing the software with
the intent of finding defects or errors, verifying that it behaves as expected, and assessing its
quality. Testing encompasses various techniques and methods, including functional testing,
performance testing, security testing, and usability testing, among others, to identify issues
and ensure that the software meets user expectations and business objectives.

2. Give the difference between bug and error.

Ans :

1. Bug: A bug refers to a flaw or defect in the software that causes it to behave unexpectedly or not as intended.
   Error: An error is a mistake made by a human during the development process that leads to incorrect behaviour in the software.

2. Bug: It is typically a deviation from the specified requirements or desired functionality.
   Error: It is the cause of a bug or defect in the software.

3. Bug: Bugs are identified during the testing phase of software development.
   Error: Errors can occur due to various reasons such as coding mistakes, misunderstandings of requirements, or faulty logic.



3. What is 'test case'? Explain with example.

Ans :
Test Case:
A test case is a detailed set of instructions or conditions that are designed to verify the
correctness of a specific aspect of a software application or system. It outlines the steps to be
taken, the input data to be used, and the expected results to be observed during testing.

Example:
Let's consider a simple example of a login page for a website. A test case for the login
functionality could include the following components:

Test Case ID: TC001


Test Case Description: Verify user login functionality.
Preconditions: The user must have valid credentials.
Test Steps:
Step 1: Open the website's login page.
Step 2: Enter valid username and password.
Step 3: Click on the "Login" button.
Expected Results:
The system should redirect the user to the homepage.
The user's name should be displayed indicating a successful login.
Post-conditions: The user should have access to the features available after login.
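
For illustration, TC001 can also be sketched as an automated check. This is only a minimal sketch in Python (pytest style), assuming a hypothetical login() stub and made-up test data in place of the real application:

# Hypothetical stand-in for the application under test.
VALID_USERS = {"alice": "s3cret"}  # assumed test data, not from the original document

def login(username, password):
    # Stub: returns the page the user lands on after a login attempt.
    if VALID_USERS.get(username) == password:
        return {"page": "homepage", "display_name": username}
    return {"page": "login", "error": "Invalid credentials"}

def test_tc001_valid_login_redirects_to_homepage():
    # Steps 1-3: open the login page, enter valid credentials, click "Login".
    result = login("alice", "s3cret")
    # Expected results: redirect to the homepage and the user's name displayed.
    assert result["page"] == "homepage"
    assert result["display_name"] == "alice"

Running pytest on this file would execute the check and fail it if either expected result is not met.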

4. How can bugs be found in an application?

Ans :
Finding Bugs in an Application:
Bugs in an application can be discovered through various methods and techniques during the
software development lifecycle. These methods involve systematic testing and analysis to
identify and address issues before they impact the end-users. Some common approaches
include:

i. Manual Testing:
Human testers manually execute the software, exploring different features and
scenarios to identify defects.
Testers follow predefined test cases or conduct ad-hoc testing to simulate real-world
usage and uncover bugs.
ii. Automated Testing:
Automated testing involves using specialized tools to execute test cases
automatically.
Test scripts are created to simulate user interactions, validate functionality, and detect
bugs efficiently.
iii. Code Reviews:



Developers review each other's code to identify potential bugs, coding errors, or logic
flaws.
Peer reviews help in ensuring code quality and reducing the likelihood of introducing
bugs into the application.
iv. Unit Testing:
Developers write and execute unit tests to validate individual components or modules
of the application.
Unit tests help in isolating and fixing bugs at an early stage of development.
v. Integration Testing:
Integration tests verify the interactions between different modules or components of
the application.
They ensure that the integrated system functions correctly and that bugs arising from
interactions are identified and resolved.
vi. User Feedback:
Gathering feedback from end-users through beta testing, user surveys, or customer
support interactions can reveal bugs that may not have been identified during testing.
User feedback provides valuable insights into real-world usage scenarios and helps in
prioritizing bug fixes.
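
As a rough illustration of points ii and iv, the sketch below shows a small unit test that would catch a coding mistake early. apply_discount() is a hypothetical function written only for this example:

import unittest

def apply_discount(price, percent):
    # Hypothetical function under test: subtracts a percentage discount.
    return round(price - price * percent / 100, 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()

If a developer accidentally wrote price * percent instead of price * percent / 100, the first test would fail immediately, isolating the bug before integration.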

5. What kinds of testing should be considered?

Ans :

i. Functional Testing:
Functional testing verifies that each function of the software application operates in
accordance with the requirements.
It focuses on what the system does, ensuring that it meets user expectations and
performs as intended.

ii. Non-Functional Testing:


Non-functional testing evaluates aspects of the software other than its functionality,
such as performance, usability, security, and reliability.
It assesses how well the system performs under various conditions and constraints.

iii. Regression Testing:


Regression testing ensures that recent changes to the codebase do not adversely affect
existing functionalities.
It involves re-running previously executed test cases to verify that no new bugs have
been introduced.

iv. Integration Testing:


Integration testing tests the interactions between different modules or components of
the software.
It validates that integrated units function as expected when combined, ensuring
seamless communication and data exchange.



v. User Acceptance Testing (UAT):
User Acceptance Testing involves end-users testing the software to determine if it
meets their requirements and expectations.
It validates that the software satisfies business needs and is ready for deployment.

vi. Performance Testing:


Performance testing evaluates the responsiveness, stability, and scalability of the
software under various workloads.
It ensures that the application performs efficiently and reliably under expected and
peak conditions.

vii. Security Testing:


Security testing assesses the robustness of the software against potential security
threats and vulnerabilities.
It identifies and mitigates risks related to unauthorized access, data breaches, and
other security concerns.

viii. Usability Testing:


Usability testing evaluates the user-friendliness and intuitiveness of the software
interface.
It assesses how easily users can navigate the application, complete tasks, and achieve
their goals.

6. What should be done after a bug is found?

Ans :

After a bug is found, it should be promptly addressed and managed through a systematic
process to ensure effective resolution. Here's a simple yet professional description of what
should be done:

i. Documentation:
Record detailed information about the bug, including its description, steps to
reproduce, and any relevant screenshots or logs.
Use a standardized bug tracking system to log and track the bug throughout its
lifecycle.

ii. Prioritization:
Evaluate the severity and impact of the bug on the software and prioritize it
accordingly.
Classify the bug based on its severity levels (e.g., critical, major, minor) to determine
its urgency for resolution.



iii. Assignment:
Assign the bug to the appropriate developer or team responsible for fixing it.
Ensure clear communication regarding bug ownership and responsibilities.

iv. Analysis:
Investigate the root cause of the bug to understand why it occurred.
Analyze the impact of the bug on other parts of the software and identify any related
issues.

v. Fixing:
Develop a solution or fix for the bug based on the analysis findings.
Implement the fix following coding best practices and standards.

vi. Testing:
Validate the bug fix through rigorous testing to ensure that it resolves the issue
without introducing new defects.
Execute both the original test case that uncovered the bug and any additional test
cases related to the fix.

vii. Verification:
Verify that the bug is indeed fixed and that the software behaves as expected after the
fix.
Conduct thorough regression testing to ensure that the bug fix did not impact other
areas of the application.

viii. Closure:
Once the bug is confirmed to be fixed and verified, update its status in the bug
tracking system to indicate closure.
Provide feedback to stakeholders about the resolution of the bug and any relevant
updates.
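
As a small, hypothetical sketch of this lifecycle, the record below shows the kind of information a bug tracking entry carries from documentation to closure; the field names and status values are assumptions, not those of any particular tool:

from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: str
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    severity: str = "minor"   # e.g. critical, major, minor (prioritization)
    assignee: str = ""        # set during the assignment step
    status: str = "open"      # open -> in_progress -> fixed -> closed

bug = BugReport(
    bug_id="BUG-101",
    description="Login button unresponsive on second click",
    steps_to_reproduce=["Open login page", "Click Login twice"],
    severity="major",
)
bug.assignee = "dev_team_a"   # assignment
bug.status = "closed"         # closure after fixing, testing and verification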

7. What are entry and exit criteria?

Ans :

Entry Criteria:

Entry criteria are the conditions that must be satisfied before testing can commence. They
ensure that the testing process begins under the appropriate circumstances and with the
necessary resources in place. Entry criteria typically include factors such as the
availability of test environments, test data, and software builds. They serve as
prerequisites for initiating testing activities and help in ensuring the efficiency and
effectiveness of the testing process.



Exit Criteria:

Exit criteria are the conditions that must be met before testing can be concluded. They
define when testing activities should cease and the software can be considered ready for
release or further stages of development. Exit criteria are based on predefined metrics and
objectives, such as test coverage, defect density, and stability thresholds. They provide
clear guidelines for evaluating the readiness of the software for the next phase and help in
making informed decisions about its quality and readiness for deployment.
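
As a brief illustration, exit criteria can be written down as measurable thresholds and checked automatically. The metric names and numbers below are assumed examples only, not standard values:

# Assumed exit criteria: minimum coverage and no open critical defects.
EXIT_CRITERIA = {"min_test_coverage_pct": 90, "max_open_critical_defects": 0}

def exit_criteria_met(metrics):
    # metrics is a dict of the current measurements for the test cycle.
    return (metrics["test_coverage_pct"] >= EXIT_CRITERIA["min_test_coverage_pct"]
            and metrics["open_critical_defects"] <= EXIT_CRITERIA["max_open_critical_defects"])

current = {"test_coverage_pct": 93, "open_critical_defects": 0}
print(exit_criteria_met(current))  # True: testing can be concluded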

8. Explain STLC with its phases.

Ans :
The Software Testing Life Cycle (STLC) is a systematic process followed by software
development teams to ensure the quality and reliability of a software product. It consists
of several phases, each with specific objectives and activities aimed at identifying and
resolving defects throughout the software development lifecycle.

Phases of STLC:

i. Requirement Analysis:
In this phase, testers analyze the project requirements to gain a thorough
understanding of the software's functionality, features, and objectives.
Testers identify testable requirements, assess potential risks, and define testing
objectives and strategies.

ii. Test Planning:


Test planning involves creating a comprehensive test plan that outlines the approach,
scope, resources, and schedule for testing activities.
Testers define test objectives, test scenarios, test cases, and entry/exit criteria to guide
the testing process.

iii. Test Case Development:


In this phase, testers develop detailed test cases based on the test scenarios defined in
the test plan.
Test cases specify the steps to be executed, the expected results, and any necessary
test data or preconditions.

iv. Test Environment Setup:


Test environment setup involves configuring the necessary hardware, software, and
test tools required to execute test cases effectively.
Testers ensure that the test environment mirrors the production environment as
closely as possible to simulate real-world conditions.

v. Test Execution:



Test execution is the phase where testers execute the test cases created during the test
case development phase.
Testers run the tests, record the results, and report any defects found during testing.

vi. Defect Tracking and Management:


Defect tracking and management involve logging, prioritizing, and tracking defects
identified during testing.
Testers report defects using a bug tracking system, assign them to appropriate
stakeholders, and monitor their resolution status.

vii. Test Reporting:


Test reporting involves summarizing the testing activities, results, and findings in a
formal test report.
Test reports provide stakeholders with insights into the quality of the software, the
effectiveness of testing efforts, and any remaining risks.

viii. Test Closure:


Test closure marks the end of the testing process and involves formalizing the
completion of testing activities.
Testers review the testing objectives, deliverables, and outcomes to ensure that all
testing requirements have been met.

9. Explain any three kinds of errors with their possible conditions.

Ans :

i. Syntax Errors:
a. Description: Syntax errors occur when code violates the rules of the
programming language's syntax, making it invalid and unable to be executed.
b. Possible Conditions:
Missing semicolons at the end of statements.
Incorrect capitalization or spelling of keywords.
Mismatched parentheses, brackets, or braces.
Using reserved keywords as variable names.

ii. Logic Errors:


a. Description: Logic errors occur when the code's logic or algorithm is flawed,
resulting in incorrect behavior or unexpected outcomes.
b. Possible Conditions:
Using the wrong mathematical operation (e.g., addition instead of
subtraction).
Incorrect conditional statements leading to unintended branching.
Improper handling of boundary conditions or edge cases.
Misinterpreting requirements, leading to incorrect implementation.



iii. Runtime Errors:
a. Description: Runtime errors occur while the program is running, typically due
to issues that cannot be detected until execution.
b. Possible Conditions:
Division by zero.
Attempting to access a non-existent or null object.
Memory allocation failures (e.g., out of memory).
File or resource not found during input/output operations.
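
The short sketch below illustrates a runtime error from point iii: the code is syntactically valid and its logic looks reasonable, yet it fails only when executed with a zero divisor. The average() function is a made-up example:

def average(total, count):
    return total / count   # raises ZeroDivisionError when count == 0

try:
    print(average(100, 0))
except ZeroDivisionError:
    print("Runtime error: division by zero (no items to average)")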

10. Differentiate between black box and white box testing methods.

Ans :

1. Black Box Testing: Black box testing is a testing technique where the internal structure, design, or implementation details of the software are not known to the tester. Instead, testing is performed based on the specifications and functionality of the software.
   White Box Testing: White box testing, also known as structural or glass box testing, is a testing technique where the tester has access to the internal structure, design, and code of the software being tested.

2. Black Box Testing: Testers focus on the inputs and outputs of the software, treating it as a black box where only the externally visible behavior is considered.
   White Box Testing: Testers examine the internal logic, paths, and control flows within the software to design test cases that exercise specific code segments or branches.

3. Black Box Testing: Test cases are derived from requirements, specifications, and user expectations. Testers evaluate the software's functionality, usability, and performance without knowledge of its internal workings.
   White Box Testing: Test cases are derived from an understanding of the code structure, algorithms, and implementation details. Testers verify the correctness of individual code units, paths, and system integrations.

4. Black Box Testing: Black box testing focuses on validating the software's external behavior and functionality from a user's perspective, without considering its internal implementation details.
   White Box Testing: White box testing, on the other hand, involves inspecting the internal structure and logic of the software to design test cases that target specific code segments and ensure thorough coverage.



11. Explain various test case design techniques.

Ans :
Various Test Case Design Techniques:

1. Equivalence Partitioning:
Description: Equivalence partitioning divides input data into partitions or classes to
reduce the number of test cases required.
Approach: Test cases are designed to cover each partition, treating all data within the
same partition as equivalent.
Example: If a system accepts numeric inputs from 1 to 100, one representative value is chosen from each partition, for example 50 from the valid partition (1 to 100) and 0 or 150 from the invalid partitions outside that range.

2. Boundary Value Analysis (BVA):


Description: Boundary value analysis focuses on testing the boundaries or limits of
input data.
Approach: Test cases are designed to cover values at the lower and upper boundaries,
as well as just above and just below these boundaries.
Example: If a system accepts inputs from 1 to 100, test cases would include values
like 0, 1, 2, 99, 100, and 101 to test boundary conditions.

3. Decision Table Testing:


Description: Decision table testing is used to test combinations of inputs or conditions
that result in different outcomes.
Approach: Test cases are derived from a decision table that maps inputs to
corresponding actions or outcomes.
Example: A decision table for a login system might include inputs such as valid
username, valid password, invalid username, and invalid password, with
corresponding actions like successful login or error message displayed.

4. State Transition Testing:


Description: State transition testing focuses on testing the behavior of a system as it
transitions between different states.
Approach: Test cases are designed to cover transitions between states, including valid
and invalid transitions.
Example: For a traffic light system, test cases would cover transitions between states
like red, yellow, and green lights, including scenarios like skipping a light or
changing out of sequence.

5. Pairwise Testing (All-Pairs Testing):


Description: Pairwise testing aims to reduce the number of test cases by testing all
possible combinations of input parameters pairwise.
Approach: Test cases are created to cover every possible pair of input values,
ensuring thorough coverage with fewer test cases.



Example: In a software application with multiple input fields, pairwise testing would
ensure that every pair of input values is tested at least once.
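
As a brief sketch of boundary value analysis (technique 2 above), the code below checks the 1-100 range example. accepts_input() is an assumed stand-in for the system under test, not a real API:

def accepts_input(value):
    # Stand-in for the validation performed by the system under test.
    return 1 <= value <= 100

# Boundary values: just below, on, and just above each boundary.
bva_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in bva_cases.items():
    assert accepts_input(value) == expected, f"Unexpected result for {value}"
print("All boundary value cases passed")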

12. Write test cases in Excel for a student admission system.

Ans :
Test Cases for Student Admission System

1. Test Case ID: TC001


Description: Verify that the student admission form opens successfully.
Test Steps:
Open the student admission system application.
Navigate to the "Admission Form" section.
Expected Result: The student admission form should open without any errors.

2. Test Case ID: TC002


Description: Verify that all mandatory fields are marked and required for submission.
Test Steps:
Open the student admission form.
Check for mandatory fields marked with asterisks (*) on the form.
Expected Result: All mandatory fields should be clearly marked and required for
submission.

3. Test Case ID: TC003


Description: Verify that the system accepts valid student information.
Test Steps:
Enter valid student information into all fields of the admission form.
Submit the form.
Expected Result: The system should accept valid student information and display a
confirmation message.

4. Test Case ID: TC004


Description: Verify that the system displays appropriate error messages for invalid
input.
Test Steps:
Enter invalid data (e.g., alphabetic characters in numeric fields) into the admission
form.
Submit the form.
Expected Result: The system should display relevant error messages for each invalid
input field.

5. Test Case ID: TC005


Description: Verify that the system generates a unique admission ID for each
successful submission.
Test Steps:
Submit the admission form with valid student information.
Retrieve the admission ID from the confirmation message or database.



Expected Result: The system should generate a unique admission ID for each
successful submission.
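
One simple way to produce such a sheet is to write the test cases to a CSV file, which opens directly in Excel. The sketch below is illustrative only; the column names and file name are assumptions:

import csv

test_cases = [
    ("TC001", "Admission form opens successfully", "Form opens without errors"),
    ("TC002", "Mandatory fields are marked and required", "All mandatory fields marked with *"),
    ("TC003", "Valid student information is accepted", "Confirmation message displayed"),
    ("TC004", "Invalid input shows error messages", "Relevant error message for each invalid field"),
    ("TC005", "Unique admission ID is generated", "New admission ID for each submission"),
]

with open("student_admission_test_cases.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Test Case ID", "Description", "Expected Result"])
    writer.writerows(test_cases)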

13. Describe integration, unit and acceptance testing.

Ans :

1. Integration Testing:

i. Definition: Integration testing verifies that individual software modules or components work together as expected when integrated into a larger system.
ii. Focus: It concentrates on testing interactions and interfaces between integrated
components to ensure seamless integration.
iii. Objective: The aim is to detect inconsistencies, data flow issues, and errors that
may arise during integration.
iv. Scope: Integration testing validates the integrated system's functionality and
behavior.
v. Method: Testing is performed on combined modules or components to assess
their interaction and compatibility.
vi. Outcome: It ensures that the integrated system functions correctly and aligns with
specified requirements.

2. Unit Testing:

i. Definition: Unit testing involves testing individual units or components of a software application in isolation.
ii. Focus: It aims to verify the correctness and functionality of each unit
independently.
iii. Objective: The goal is to identify defects early in the development process and
ensure code reliability.
iv. Scope: Unit testing focuses on testing functions, methods, or classes within the
codebase.
v. Method: Test cases are written and executed to validate the behavior of individual
units for various inputs.
vi. Outcome: It facilitates code refactoring, enhances maintainability, and ensures the
stability of the codebase.

3. Acceptance Testing:

i. Definition: Acceptance testing evaluates whether the software meets specified requirements and is acceptable for deployment.
ii. Focus: It validates the software against business requirements and user
expectations.



iii. Objective: The aim is to ensure that the software fulfills its intended purpose and
aligns with organizational goals.
iv. Scope: Acceptance testing assesses the software's suitability for delivery to end-users or stakeholders.
v. Method: Various techniques such as user acceptance testing (UAT), alpha testing,
and beta testing may be employed.
vi. Outcome: It confirms that the software meets user needs, addresses business
objectives, and is ready for deployment.
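
To contrast with the earlier unit-test sketch, the example below exercises two hypothetical components together, a validator and an in-memory repository, in the spirit of integration testing. All names and rules here are illustrative assumptions:

def validate_student(record):
    # Assumed admission rule: a name is required and the minimum age is 16.
    return bool(record.get("name")) and record.get("age", 0) >= 16

class StudentRepository:
    # In-memory stand-in for a real data store.
    def __init__(self):
        self._rows = []
    def save(self, record):
        self._rows.append(record)
        return len(self._rows)   # acts as the admission id

def admit(record, repo):
    if not validate_student(record):
        raise ValueError("invalid student record")
    return repo.save(record)

repo = StudentRepository()
assert admit({"name": "Asha", "age": 18}, repo) == 1   # the two components work together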

14. What are functional and nonfunctional testing?

Ans :

1. Functional Testing:

i. Objective: Verifies that the software functions correctly and meets specified
functional requirements.
ii. Focus: Tests what the system does from the end-user's perspective.
iii. Scope: Validates individual features and functionalities, such as user interfaces
and data manipulation.
iv. Test Cases: Designed to assess functional behavior against predefined criteria.
v. Purpose: Ensures that the software performs intended tasks accurately and
efficiently.

2. Nonfunctional Testing:

i. Objective: Evaluates aspects other than functionality, such as performance and security.
ii. Focus: Assesses how well the system performs under various conditions and
constraints.
iii. Scope: Covers characteristics like performance, reliability, usability, and
scalability.
iv. Test Cases: Designed to validate nonfunctional requirements beyond core
functionality.
v. Purpose: Ensures the software delivers a satisfactory user experience and meets
quality attributes and performance benchmarks.

