
Manual Testing Tutorial

Index

1. Introduction to Manual Testing

• Definition and Importance

• Difference Between Manual and Automated Testing

• Types of Software Testing

• Advantages and Disadvantages of Manual Testing

• Real-time Example: Online Shopping Platform Testing

2. Software Development Life Cycle (SDLC)

• Phases of SDLC

• Role of Testing in SDLC

• Diagrams: SDLC Process Models (Waterfall, Agile, V-Model)

• Real-time Example: SDLC for a Banking Application

3. Software Testing Life Cycle (STLC)

• Phases of STLC

• Entry and Exit Criteria

• Deliverables of Each Phase

• Diagrams: STLC Workflow

• Real-time Example: Test Case Creation for a Social Media App

4. Types of Testing

• Functional Testing

• Non-functional Testing

• Exploratory Testing

• Regression Testing

• Smoke and Sanity Testing

• Real-time Examples: Functional Testing of a Login Page, Load Testing for an E-commerce Website

5. Test Case Development

• What is a Test Case?

• Writing Test Cases

• Characteristics of Good Test Cases

• Real-time Example: Test Cases for an ATM

• Diagrams: Template for Test Case Writing

6. Test Plan
• What is a Test Plan?

• Components of a Test Plan

• Test Planning Tools

• Real-time Example: Test Plan for a Food Delivery App

• Diagrams: Sample Test Plan Structure

7. Defect Lifecycle and Management

• What is a Defect?

• Phases of Defect Lifecycle

• Tools for Defect Management (e.g., JIRA, Bugzilla)

• Diagrams: Defect Lifecycle Workflow

• Real-time Example: Reporting Bugs in a Payment Gateway

8. Test Execution

• Preparing for Test Execution

• Test Environment Setup

• Test Data Preparation

• Real-time Example: Executing Tests for an IoT Device

9. Reporting and Metrics

• Importance of Reporting

• Test Metrics and KPIs

• Reporting Tools (e.g., TestRail, Zephyr)

• Real-time Example: Generating Test Reports for an Insurance Software

10. Agile and Manual Testing

• Manual Testing in Agile

• Role of a Tester in Agile Teams

• Sprint Planning and Manual Testing

• Real-time Example: Manual Testing for a Ride-Sharing Application

11. Challenges in Manual Testing

• Common Challenges

• Strategies to Overcome Challenges

• Real-time Examples of Challenges

12. Manual Testing Tools

• Overview of Tools (e.g., TestLink, Bugzilla)

• Installation and Usage

• Real-time Example: Managing Manual Tests with TestLink


13. Career in Manual Testing

• Skills Required

• Roles and Responsibilities

• Certifications and Resources

• Real-time Scenario: Transition from Manual Testing to Automation Testing


Chapter 1: Introduction to Manual Testing

1.1 What is Manual Testing?

Manual Testing is the process of testing software by hand to identify bugs, defects, or issues. Testers simulate end-user scenarios and verify whether the application behaves as expected. Unlike automated testing, manual testing does not rely on scripts or tools; instead, it depends on human effort and judgment.

1.2 Why is Manual Testing Important?

• Ensures User Satisfaction: It identifies bugs from an end-user perspective.

• Adapts to Change: Manual testing is flexible and can handle dynamic changes during the testing process.

• Catches Usability Issues: Human testers can evaluate user experience in ways automated tools cannot easily replicate.

1.3 Differences Between Manual and Automated Testing

Aspect          | Manual Testing                 | Automated Testing
----------------|--------------------------------|----------------------------
Execution       | Performed by humans            | Performed by scripts/tools
Speed           | Slower                         | Faster
Cost            | Low initial cost               | High initial setup cost
Best Suited For | Exploratory, ad-hoc, usability | Regression, repeated tests
Accuracy        | Prone to human error           | High accuracy

1.4 Types of Software Testing

Manual testing encompasses several types:

1. Functional Testing: Verifying features and functionalities.

2. Regression Testing: Ensuring new changes don’t break existing functionality.

3. Exploratory Testing: Testing without predefined cases.

4. Usability Testing: Evaluating user experience.

1.5 Advantages of Manual Testing

• Human Insight: Detects issues related to user behavior.

• Cost-Effective: Low initial cost, especially for small projects.

• Dynamic Testing: Adapts to changes in real-time scenarios.


1.6 Disadvantages of Manual Testing

• Time-Consuming: Slower than automation.

• Repetitive Effort: Tedious for repeated test cases.

• Prone to Errors: Subject to human mistakes.

1.7 Real-Time Example

Let’s test a login page of an online shopping platform manually.

1. Scenario: User logs into the shopping portal.

2. Test Steps:

o Open the login page.

o Enter valid username and password.

o Click on the "Login" button.

o Verify that the user lands on the homepage.

3. Expected Result: The user should be directed to the homepage without errors.

Possible Bugs:

• Error message appears even with correct credentials.

• "Login" button unresponsive.

1.8 Diagram: Manual Testing Process

1. Requirement Analysis

2. Test Planning

3. Test Case Design

4. Test Execution

5. Defect Logging & Retesting

6. Test Closure
Chapter 2: Software Development Life Cycle (SDLC)

2.1 What is SDLC?

The Software Development Life Cycle (SDLC) is a structured process used to design, develop, test, and maintain
software. It ensures that the software meets user expectations and quality standards.

2.2 Phases of SDLC

The SDLC consists of the following phases:

1. Requirement Gathering and Analysis

o Understanding client requirements.

o Deliverables: Software Requirements Specification (SRS).

o Example: Gathering requirements for an e-commerce website.

2. System Design

o High-level design (HLD) and low-level design (LLD) are created.

o Deliverables: System Design Document.

o Example: Designing a database schema for user profiles.

3. Implementation (Development)

o Developers write code based on design documents.

o Example: Writing code for a shopping cart feature.

4. Testing

o Verifying that the software works as intended.

o Deliverables: Test Plans, Test Cases, and Defect Reports.

o Example: Testing the payment gateway integration.

5. Deployment

o Deploying the software to production.

o Deliverables: Deployment Guide.

o Example: Deploying the website to a live server.

6. Maintenance

o Fixing bugs and updating the software post-deployment.

o Example: Adding new product categories to the e-commerce platform.

2.3 Role of Testing in SDLC

Testing plays a pivotal role in ensuring software quality. It:

• Verifies that requirements are met.

• Identifies and resolves bugs.


• Ensures compatibility across devices and platforms.

2.4 SDLC Models

Various models define how SDLC phases are executed. Popular models include:

1. Waterfall Model

o Sequential execution of phases.

o Suitable for small projects with well-defined requirements.

o Diagram:

Requirement → Design → Development → Testing → Deployment → Maintenance

2. V-Model

o Emphasizes verification and validation.

o Each development phase is linked with a corresponding testing phase.

o Diagram:

Requirements ↔ Acceptance Testing

Design ↔ System Testing

Coding ↔ Unit Testing

3. Agile Model

o Iterative and incremental development.

o Focuses on collaboration and adaptability.

o Example: Testing features in sprints during the Agile process.

2.5 Diagrams: SDLC Models

Waterfall Model

Requirements → Design → Implementation → Testing → Deployment → Maintenance

V-Model

Requirements ↔ Acceptance Testing

Design ↔ System Testing

Coding ↔ Unit Testing

Agile Model

Sprint 1: Plan → Develop → Test → Deliver

Sprint 2: Plan → Develop → Test → Deliver

Repeat...
2.6 Real-Time Example: SDLC for a Banking Application

1. Requirement Gathering: Understand features like balance check, funds transfer, and transaction history.

2. Design: Create flow diagrams for each feature.

3. Development: Write code for fund transfers.

4. Testing: Test the accuracy of fund transfers and error handling.

5. Deployment: Launch the application for customer use.

6. Maintenance: Add new features like credit score checks.

Chapter 3: Software Testing Life Cycle (STLC)

3.1 What is STLC?

The Software Testing Life Cycle (STLC) is a systematic process that defines testing activities to be performed during
each stage of software development. It ensures a thorough evaluation of the product's functionality, performance, and
reliability.

3.2 Phases of STLC

1. Requirement Analysis

o Purpose: Understand what needs to be tested.

o Activities:

▪ Analyze requirements.

▪ Identify testable features.

▪ Check for testability of requirements.

o Deliverables: Requirements Traceability Matrix (RTM).

2. Test Planning

o Purpose: Plan the testing strategy and resources.

o Activities:

▪ Create a test plan.

▪ Estimate testing effort.

▪ Identify tools, resources, and timelines.

o Deliverables: Test Plan Document.

3. Test Case Design

o Purpose: Create detailed test cases.

o Activities:
▪ Write test cases based on requirements.

▪ Review and optimize test cases.

o Deliverables: Test Cases and Test Data.

4. Environment Setup

o Purpose: Prepare the test environment.

o Activities:

▪ Set up hardware, software, and network configurations.

▪ Verify the test environment with a smoke test.

o Deliverables: Test Environment Setup Checklist.

5. Test Execution

o Purpose: Execute the test cases.

o Activities:

▪ Execute test cases manually or using tools.

▪ Log defects and retest after fixes.

o Deliverables: Test Execution Reports and Defect Logs.

6. Test Closure

o Purpose: Conclude testing activities.

o Activities:

▪ Document lessons learned.

▪ Archive test artifacts.

▪ Prepare a test summary report.

o Deliverables: Test Closure Report.

3.3 Entry and Exit Criteria

• Entry Criteria:

o Test plan is approved.

o Test environment is ready.

o Test cases are prepared.

• Exit Criteria:

o All planned tests are executed.

o Defects are fixed and retested.

o Test summary report is created.

3.4 Diagrams: STLC Workflow



Requirement Analysis → Test Planning → Test Case Design → Environment Setup → Test Execution → Test Closure

3.5 Real-Time Example: Testing a Social Media Application

1. Requirement Analysis:

o Features: Post creation, liking posts, and messaging.

o Testable Requirements: Character limit for posts, notification on likes, etc.

2. Test Planning:

o Define scope: Testing the messaging feature in this release.

o Plan tools: Use JIRA for defect tracking.

3. Test Case Design:

o Test Case 1: Validate successful message delivery.

o Test Case 2: Verify error on exceeding the message length.

4. Test Execution:

o Test Case 1: Validate successful message delivery.

• Steps:

▪ Log into the app with valid credentials.

▪ Navigate to the chat feature and select a contact.

▪ Send a text message "Hello."

▪ Verify the message appears in the conversation.

• Expected Result: The message "Hello" is displayed in the chat and marked as delivered.

o Test Case 2: Verify error on exceeding the message length.

• Steps:
▪ Log into the app.

▪ Navigate to the chat feature and select a contact.

▪ Attempt to send a message exceeding 200 characters.

▪ Verify an error message appears.

• Expected Result: The system displays "Message length exceeded" and prevents the message
from being sent.

5. Test Closure:

• Activities:

o Document all executed test cases.

o Log any defects, such as:

▪ Notification delay in message delivery.

▪ Error message for long messages appearing in an unexpected format.

o Prepare a Test Summary Report, including:

▪ Total test cases executed: 20.

▪ Passed test cases: 18.

▪ Failed test cases: 2.

▪ Defects logged: 2.

o Deliverables:

▪ Test Summary Report

▪ Lessons Learned Document:

▪ The chat feature needs optimization for long messages.

▪ Notifications should sync faster with server responses.
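Test Case 2 above needs a message that crosses the 200-character limit. Rather than typing such input by hand, the test data can be generated; a minimal sketch (the 200-character limit is taken from the example above):

```python
# Generate boundary test data for Test Case 2 (200-character message limit).
MAX_LEN = 200  # limit taken from the example above

messages = {
    "at_limit": "a" * MAX_LEN,         # should send successfully
    "just_over": "a" * (MAX_LEN + 1),  # should trigger "Message length exceeded"
}

for name, msg in messages.items():
    print(f"{name}: {len(msg)} characters")
```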

3.6 Key Deliverables of STLC

1. Requirements Traceability Matrix (RTM): Maps test cases to requirements.

2. Test Plan Document: Outlines testing strategy, scope, and schedules.

3. Test Cases: Detailed steps and expected results for test scenarios.

4. Defect Logs: Records of identified defects, severity, and status.

5. Test Summary Report: Overview of testing activities, results, and key metrics.
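As a rough illustration of deliverable 1, an RTM can be as simple as a mapping from requirement IDs to the test cases that cover them. A sketch with hypothetical IDs:

```python
# A minimal Requirements Traceability Matrix (RTM); all IDs are hypothetical.
rtm = {
    "REQ_001 (message delivery)": ["TC_001"],
    "REQ_002 (message length limit)": ["TC_002"],
    "REQ_003 (like notifications)": [],  # not yet covered by any test case
}

# Flag requirements with no tracing test case, i.e., coverage gaps.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Uncovered requirements:", uncovered)
```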
Chapter 4: Types of Testing

4.1 What Are the Types of Testing?

In software testing, various types of testing are used to validate different aspects of a software product. These can be
broadly categorized into Functional and Non-Functional testing, but there are also specialized types based on specific
goals, such as exploratory testing and regression testing.

4.2 Functional Testing

Functional testing validates the functionality of a software application by checking whether it behaves as expected.
These tests are usually based on requirements and specifications.

• Types of Functional Testing:

o Unit Testing: Tests individual components or functions of the software.

▪ Example: Testing a login function to verify the correct username and password handling.

o Integration Testing: Ensures that different modules or components of the software work together.

▪ Example: Testing the interaction between a payment gateway and the shopping cart in an e-commerce application.

o System Testing: Tests the entire system as a whole to ensure it works as expected.

▪ Example: Testing a fully developed e-commerce website, including login, shopping, and
checkout functionality.

o Acceptance Testing: Verifies if the software meets the business requirements and is ready for
deployment.

▪ Example: Testing an online banking system to ensure it meets regulatory requirements and
user expectations.

4.3 Non-Functional Testing

Non-functional testing focuses on non-functional aspects of the software, such as performance, usability, and security.

• Types of Non-Functional Testing:

o Performance Testing: Assesses how well the software performs under load and stress.

▪ Example: Load testing an e-commerce website to see how many users it can handle
simultaneously.

o Usability Testing: Evaluates how user-friendly and intuitive the software is.

▪ Example: Testing the navigation and layout of a mobile app to ensure a seamless user
experience.

o Security Testing: Ensures that the software is secure from vulnerabilities and cyber-attacks.

▪ Example: Penetration testing an online banking application to check for security flaws.

o Compatibility Testing: Verifies that the software works across different environments, devices, and
browsers.
▪ Example: Testing a website on various browsers like Chrome, Firefox, and Safari to ensure
cross-browser compatibility.

4.4 Specialized Testing Types

These are testing types performed in specific scenarios or to uncover particular issues.

1. Exploratory Testing:

o The tester explores the application without predefined test cases to find potential issues.

o Example: Manually testing an e-commerce website's checkout process by trying different combinations of payment methods and discounts.

2. Regression Testing:

o Ensures that new changes or updates do not break or negatively impact existing features.

o Example: After adding a new product page, testing the existing checkout and search functionalities to
make sure they still work properly.

3. Smoke Testing:

o A preliminary test to check whether the basic functions of the application work.

o Example: After a new build, testing if users can log in and access the main page of the application.

4. Sanity Testing:

o Focuses on verifying whether a specific bug has been fixed or if a small change works as expected.

o Example: After fixing a bug where a user couldn't add items to their cart, testing this specific
functionality.

5. Ad-hoc Testing:

o Unscripted testing performed to find unexpected defects without following any formal testing process.

o Example: Randomly clicking on different features of an app to see if any unexpected crashes occur.

4.5 Diagram: Types of Testing

Types of Testing
├── Functional Testing
│   ├── Unit Testing
│   ├── Integration Testing
│   ├── System Testing
│   └── Acceptance Testing
└── Non-Functional Testing
    ├── Performance Testing
    ├── Usability Testing
    ├── Security Testing
    └── Compatibility Testing

4.6 Real-Time Examples of Functional Testing

1. Functional Testing of a Login Page:

o Scenario: A user attempts to log in using a valid username and password.

o Steps:

1. Enter valid credentials.


2. Click on "Login".

o Expected Result: The user should be directed to the homepage.

o Bug Example: Incorrect password handling could result in a login failure.

2. Functional Testing of a Shopping Cart:

o Scenario: A user adds items to the shopping cart.

o Steps:

1. Select a product and add it to the cart.

2. Go to the cart page and verify the product is listed.

o Expected Result: The correct product, quantity, and price should be displayed in the cart.

o Bug Example: The cart shows the wrong price or an empty cart after adding products.

4.7 Real-Time Examples of Non-Functional Testing

1. Performance Testing for a Web Application:

o Scenario: Test how the website performs under load.

o Steps:

1. Simulate multiple users accessing the website simultaneously.

o Expected Result: The website should respond to all users without crashing.

o Bug Example: The website becomes slow or crashes under heavy traffic.

2. Usability Testing for a Mobile App:

o Scenario: Test the ease of use and user-friendliness of an app.

o Steps:

1. Have users navigate through the app, completing common tasks.

o Expected Result: The app should be intuitive, with easy navigation and minimal friction.

o Bug Example: Users have difficulty finding the settings menu or navigating between screens.
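A very rough sketch of the load scenario in example 1: firing a batch of concurrent requests and timing the responses. The URL is a placeholder, and real load testing is done with dedicated tools (e.g., JMeter) rather than a script like this; the sketch only shows the idea.

```python
# Rough load sketch for the performance scenario above; the URL is a placeholder.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://shop.example.com/"  # hypothetical site under test

def fetch(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

# Simulate 20 users hitting the site simultaneously.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(fetch, range(20)))

print(f"average: {sum(timings) / len(timings):.2f}s, slowest: {max(timings):.2f}s")
```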

4.8 Key Considerations in Testing Types

• Functional Testing is typically prioritized during the initial stages of development, while Non-Functional Testing
is done later to ensure overall performance.

• Specialized testing types like Exploratory and Ad-hoc are used for finding defects that are not easily identified
through standard testing procedures.
Chapter 5: Test Case Development

5.1 What is a Test Case?

A Test Case is a detailed document that outlines a specific set of actions to verify whether a particular functionality or
feature of the software behaves as expected. It includes inputs, execution steps, expected results, and any necessary
configurations to perform the test.

5.2 Components of a Test Case

A well-structured test case includes several key components:

1. Test Case ID: A unique identifier for the test case (e.g., TC_001).

2. Test Case Title: A brief description of the test case.

3. Preconditions: Any setup or conditions that must be met before executing the test (e.g., user must be logged
in).

4. Test Steps: The detailed steps to perform the test.

5. Test Data: The input values used in the test (e.g., username, password).

6. Expected Result: The anticipated result or behavior after executing the test steps.

7. Actual Result: The actual outcome observed after executing the test case.

8. Pass/Fail: Indicates whether the test passed or failed based on the expected and actual results.

9. Priority: The importance level of the test (e.g., High, Medium, Low).

10. Remarks: Additional comments or observations related to the test.
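If a team tracks test cases in scripts or spreadsheets rather than a dedicated tool, the same components can be captured in a lightweight data structure. A sketch assuming a Python workflow; the field names simply mirror the list above:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    """A test case record mirroring the components listed above."""
    test_case_id: str              # e.g., "TC_001"
    title: str
    preconditions: str
    test_steps: List[str]
    test_data: dict
    expected_result: str
    actual_result: str = ""        # filled in after execution
    passed: Optional[bool] = None  # None until the test is executed
    priority: str = "Medium"       # High, Medium, or Low
    remarks: str = ""

tc = TestCase(
    test_case_id="TC_001",
    title="Login with valid credentials",
    preconditions="User is on the login page",
    test_steps=["Enter valid username", "Enter valid password", "Click 'Login'"],
    test_data={"username": "user1", "password": "password123"},
    expected_result="User is redirected to the homepage",
    priority="High",
)
```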

5.3 Writing Effective Test Cases

Writing effective test cases is a critical skill in manual testing. Here are some guidelines for creating clear and efficient
test cases:

• Be Clear and Concise: Test cases should be easy to understand. Avoid ambiguous or unclear language.

• Use Simple and Relevant Data: Use realistic data in test cases that simulate real-world usage.

• Cover Different Scenarios: Include positive and negative scenarios to ensure thorough testing (e.g., valid and
invalid inputs).

• Maintain Consistency: Follow a consistent format for all test cases to ensure clarity and ease of execution.

5.4 Example Test Case Template

Test Case ID : TC_001

Test Case Title : Login with valid credentials

Preconditions : User is on the login page

Test Steps :

1. Enter a valid username in the username field


2. Enter a valid password in the password field

3. Click the "Login" button

Test Data : Username: user1, Password: password123

Expected Result : The user should be redirected to the homepage.

Actual Result : (To be filled after execution)

Pass/Fail : (To be filled after execution)

Priority : High

Remarks : None

5.5 Example Test Cases

1. Test Case 1: Valid Login

o Test Case ID: TC_002

o Test Case Title: Login with valid credentials

o Preconditions: User is on the login page

o Test Steps:

1. Enter a valid username (e.g., "user1") in the username field.

2. Enter a valid password (e.g., "password123") in the password field.

3. Click the "Login" button.

o Test Data: Username: user1, Password: password123

o Expected Result: User should be redirected to the homepage.

o Actual Result: (To be filled after execution)

o Pass/Fail: (To be filled after execution)

o Priority: High

o Remarks: None

2. Test Case 2: Invalid Login

o Test Case ID: TC_003

o Test Case Title: Login with invalid credentials

o Preconditions: User is on the login page

o Test Steps:

1. Enter an invalid username (e.g., "wronguser") in the username field.

2. Enter an incorrect password (e.g., "wrongpassword") in the password field.

3. Click the "Login" button.

o Test Data: Username: wronguser, Password: wrongpassword

o Expected Result: An error message should appear stating "Invalid username or password."
o Actual Result: (To be filled after execution)

o Pass/Fail: (To be filled after execution)

o Priority: High

o Remarks: None

5.6 Types of Test Cases

1. Positive Test Cases:

o Test scenarios where the application is expected to behave correctly.

o Example: Logging in with valid credentials.

2. Negative Test Cases:

o Test scenarios where the application should handle invalid input or errors gracefully.

o Example: Logging in with an incorrect password.

3. Boundary Test Cases:

o Focus on the boundaries or edge cases of input values.

o Example: Testing a password field with the maximum allowed characters.

4. Integration Test Cases:

o Test cases that validate the interaction between multiple components.

o Example: Verifying that a user can add an item to the cart and proceed to checkout.

5.7 Real-Time Example: Testing an E-commerce Application

Test Case: Add Item to Cart

• Test Case ID: TC_004

• Test Case Title: Add item to shopping cart

• Preconditions: User is logged in and on the product listing page

• Test Steps:

1. Browse through the product listing and select a product.

2. Click on "Add to Cart" button.

3. Navigate to the cart page.

4. Verify that the selected product appears in the cart.

• Test Data: Product: Laptop (Price: $999)

• Expected Result: The selected product should be added to the cart with the correct price and quantity.

• Actual Result: (To be filled after execution)

• Pass/Fail: (To be filled after execution)

• Priority: High
• Remarks: None

5.8 Key Considerations for Test Case Development

• Clarity: The steps and expected results must be easy to understand.

• Reusability: Test cases should be reusable for future testing.

• Traceability: Test cases should map back to requirements or user stories to ensure full coverage.

• Maintainability: Test cases should be easy to maintain and update as the software evolves.

Chapter 6: Test Plan

6.1 What is a Test Plan?

A Test Plan is a comprehensive document that outlines the strategy, scope, approach, resources, and schedule for
testing activities. It defines the testing objectives, deliverables, and the criteria for testing success, ensuring that all
aspects of the software are tested effectively.

6.2 Importance of a Test Plan

A Test Plan is essential for:

• Guiding the Testing Process: It provides a clear roadmap for the testing process.

• Ensuring Consistency: Ensures all team members follow the same approach and understand the scope of
testing.

• Resource Allocation: Helps in planning the required resources, tools, and time for testing.

• Risk Management: Identifies potential risks and outlines mitigation strategies.

• Communication Tool: Serves as a reference for the team, stakeholders, and clients.

6.3 Components of a Test Plan

A well-structured Test Plan includes the following key components:

1. Test Plan ID: A unique identifier for the test plan.

2. Introduction: A brief overview of the testing objectives and goals.

3. Test Items: The software components or features that will be tested.

4. Scope of Testing: Defines what is included and excluded from testing.

5. Test Strategy: The overall approach to testing, including methodologies and levels of testing.

6. Test Deliverables: The documents and artifacts that will be produced during the testing process.

7. Testing Resources: A list of tools, environments, and team members required for testing.

8. Test Schedule: A timeline outlining the milestones and deadlines for testing activities.
9. Entry and Exit Criteria: Defines the conditions that must be met to begin and conclude testing.

10. Risk and Mitigation Plan: Identifies potential risks in the testing process and strategies for mitigating them.

11. Approval and Sign-off: The process for obtaining approval of the test plan and its components.

6.4 Test Plan Example Structure

1. Test Plan ID : TP_001

2. Introduction : Overview of the test plan for testing the e-commerce platform

3. Test Items : Login functionality, Shopping cart, Checkout process

4. Scope of Testing :

- In-scope: Test login, product search, cart, and checkout features

- Out-of-scope: Payment gateway, mobile app testing

5. Test Strategy :

- Functional Testing: Valid and invalid login scenarios

- Regression Testing: Ensure new code does not break existing features

6. Test Deliverables : Test cases, Test execution reports, Defect logs, Test summary report

7. Testing Resources : Selenium for automation, JIRA for defect tracking

8. Test Schedule :

- Test Execution: 5th Jan - 15th Jan

- Test Closure: 16th Jan

9. Entry and Exit Criteria:

- Entry: Test environment set up, test cases prepared

- Exit: All test cases executed, all defects closed or deferred

10. Risk and Mitigation :

- Risk: Limited test environment availability

- Mitigation: Coordinate with DevOps team for timely setup

11. Approval and Sign-off: Product Manager, QA Lead

6.5 Key Considerations for Test Plan Development

1. Clarity and Detail: A test plan should be clear and detailed enough for anyone to understand the testing
approach.

2. Scope: The scope must be carefully defined to avoid scope creep and to ensure the testing process is focused.

3. Realistic Scheduling: The timeline should be feasible, considering resource availability and the complexity of
the testing.

4. Resource Allocation: Properly allocate tools, environments, and team members based on expertise and
availability.
5. Risk Management: Identify potential risks (e.g., resource constraints, environment issues) and plan mitigation
strategies.

6. Approval Process: Ensure proper sign-offs and approvals from stakeholders to proceed with the testing phase.

6.6 Real-Time Example: Test Plan for an E-commerce Platform

Test Plan ID: TP_001

Introduction:
This test plan defines the approach and activities for testing the e-commerce platform’s key features, including user
login, product search, shopping cart functionality, and checkout process.

Test Items:

• Login: Verify login functionality with valid and invalid credentials.

• Shopping Cart: Test adding/removing items, cart persistence, and price calculation.

• Checkout: Ensure smooth checkout process, including shipping options and order confirmation.

Scope of Testing:

• In-scope: Functional testing of login, cart, and checkout; Regression testing for any changes in the cart
functionality.

• Out-of-scope: Payment gateway integration, mobile app testing.

Test Strategy:

• Functional Testing: Test cases for login, search, cart, and checkout.

• Regression Testing: Ensure that the checkout process and cart functionality work after new updates.

Test Deliverables:

• Test cases for each scenario.

• Test execution report with results for each test case.

• Defect logs and test summary report at the end of the testing phase.

Testing Resources:

• Tools: Selenium for automation (if applicable), JIRA for defect tracking.

• Team: 2 QA testers, 1 test lead, 1 automation engineer.

Test Schedule:

• Test Execution: January 5th to January 15th.

• Test Closure: January 16th.

Entry and Exit Criteria:

• Entry Criteria: Test environment setup, test cases prepared, and test data in place.

• Exit Criteria: All planned tests executed, critical defects resolved, test summary prepared.

Risk and Mitigation:

• Risk: Limited availability of test data.

• Mitigation: Coordinate with the business team to prepare realistic test data in advance.
Approval and Sign-off:

• Approvers: Product Manager, QA Lead

6.7 Key Benefits of a Test Plan

• Efficiency: Streamlines the testing process by providing a clear roadmap.

• Quality Assurance: Helps ensure that all critical aspects of the software are tested thoroughly.

• Stakeholder Alignment: Ensures all stakeholders are aligned on the testing approach, timeline, and
deliverables.

• Risk Mitigation: Identifies potential risks early on and prepares strategies to minimize their impact.

6.8 Conclusion

The Test Plan is a crucial document in manual testing. It ensures that testing is structured, organized, and aligned with
project goals. By defining the scope, resources, timelines, and risk management strategies, it provides a clear
framework for conducting effective testing.

Chapter 7: Test Execution and Defect Reporting

7.1 What is Test Execution?

Test Execution is the process of executing test cases as defined in the test plan and observing the actual outcomes.
During test execution, testers run the tests, record the results, compare them with the expected outcomes, and
determine whether the system is functioning as expected.

7.2 Test Execution Process

The test execution process typically involves the following steps:

1. Preparation:

o Ensure that the testing environment is set up.

o Confirm that all required tools, applications, and test data are ready for execution.

2. Executing Test Cases:

o Begin executing the test cases according to the test plan.

o Follow the test steps defined in the test case documentation.

o Capture screenshots, logs, or other evidence as necessary to support results.

3. Comparing Actual and Expected Results:

o After executing each test, compare the actual results with the expected results.

o Identify any discrepancies between the two.


4. Logging Defects (if any):

o If the test fails, log a defect or bug, detailing the issue, steps to reproduce, and severity.

o Provide detailed information, such as error messages, screenshots, or logs, to help developers fix the
issue.

5. Reporting Results:

o Update the test case status (Pass/Fail) and provide feedback to stakeholders about the testing
progress.

7.3 Test Execution Phases

Test execution can be broken down into different phases based on the project lifecycle:

• Alpha Testing: Testing conducted in-house by internal teams (developers and QA) before the product is released to external users.

• Beta Testing: Testing performed by a limited group of real end-users in their own environment before the product is released to the public.

• Production Testing: The testing phase after the product has been deployed to the live environment to ensure stability.

7.4 Real-Time Example: Executing Test Cases for an E-commerce Site

Test Case: Add Item to Cart

1. Preconditions:

o User is logged into the application.

o User is on the product listing page.

2. Test Steps:

o Select a product (e.g., "Laptop").

o Click "Add to Cart."

o Navigate to the shopping cart page.

o Verify that the product appears in the cart with the correct details (product name, price, and quantity).

3. Expected Result:

o The product is successfully added to the cart, and the cart reflects the correct product name, price,
and quantity.

4. Actual Result:

o The product appeared in the cart with the correct name, price, and quantity. Pass/Fail: Pass

If the test had failed (e.g., the item was not added to the cart), a defect would be logged.

7.5 Defect Reporting

A defect (or bug) is any deviation from the expected result during test execution. Defects are reported so that
developers can fix them. Here are the key components of a defect report:

1. Defect ID: A unique identifier for the defect.


2. Summary: A brief description of the defect.

3. Description: A detailed explanation of the defect, including how it was discovered.

4. Steps to Reproduce: Clear, concise steps to reproduce the defect.

5. Expected Result: What was supposed to happen.

6. Actual Result: What actually happened.

7. Severity/Priority: Severity reflects the impact of the defect on functionality (e.g., Critical, Major, Minor); priority indicates how urgently it should be fixed.

8. Environment: Details about the environment in which the defect was found (e.g., OS, browser version).

9. Attachments: Screenshots, logs, or other files that provide more details about the defect.

7.6 Real-Time Example of Defect Reporting

Let’s assume during test execution, we encountered an issue while adding an item to the shopping cart.

• Defect ID: D_001

• Summary: Product not added to the cart after clicking "Add to Cart."

• Description: When attempting to add a product to the cart, the cart does not update with the product.

• Steps to Reproduce:

1. Log in to the application.

2. Navigate to the product listing page.

3. Select the product "Laptop."

4. Click the "Add to Cart" button.

5. Navigate to the cart page.

• Expected Result: The product "Laptop" should appear in the cart with the correct price and quantity.

• Actual Result: The product does not appear in the cart.

• Severity/Priority: High (since the shopping cart functionality is a critical feature).

• Environment: Windows 10, Chrome 90.0.

• Attachments: Screenshot showing the cart page with no product.
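Captured in a structured form, the same report could be stored or exported programmatically, e.g., as JSON for import into a tracking tool. A sketch; the field names are ours, not any specific tool's schema:

```python
import json

# The defect report above, captured as a structured record.
defect = {
    "defect_id": "D_001",
    "summary": "Product not added to the cart after clicking 'Add to Cart'",
    "steps_to_reproduce": [
        "Log in to the application",
        "Navigate to the product listing page",
        "Select the product 'Laptop'",
        "Click the 'Add to Cart' button",
        "Navigate to the cart page",
    ],
    "expected_result": "The product 'Laptop' appears in the cart",
    "actual_result": "The product does not appear in the cart",
    "severity": "High",
    "environment": "Windows 10, Chrome 90.0",
    "attachments": ["cart_page_no_product.png"],
}

print(json.dumps(defect, indent=2))
```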

7.7 Defect Lifecycle

Once a defect is reported, it goes through the following stages in its lifecycle:

1. New: The defect has been identified and reported but not yet assigned for fixing.

2. Assigned: The defect is assigned to a developer or team for investigation and resolution.

3. In Progress: The developer is working on fixing the defect.

4. Fixed: The defect has been fixed and the developer has verified the solution.

5. Retesting: The QA team tests the fix to ensure the defect is resolved and no new issues have been introduced.

6. Closed: If the defect is successfully fixed, it is closed. If the defect is not reproducible or not valid, it may be
closed as "Not a Bug."
7. Rejected: If the reported behavior is determined to be working as expected or not a genuine defect, it may be rejected.

7.8 Key Defect Reporting Tools

There are several tools available for defect tracking and reporting, including:

1. JIRA: One of the most popular bug tracking tools, used for agile project management and issue tracking.

2. Bugzilla: An open-source defect tracking tool, often used in open-source projects.

3. Trello: A simple board tool that can be used for tracking bugs in smaller projects or teams.

4. Redmine: A project management tool with issue tracking capabilities.

5. Mantis: A web-based open-source issue tracking tool.

7.9 Key Considerations in Test Execution and Defect Reporting

1. Documentation: Ensure all steps, results, and defects are well-documented to provide clear insights for
developers and stakeholders.

2. Reproducibility: Ensure that defects are reproducible by providing clear, actionable steps.

3. Timely Reporting: Report defects as soon as they are found to prevent delays in the development process.

4. Severity vs. Priority: Understand the difference between defect severity (the impact on functionality) and
priority (how soon it should be fixed).

5. Communication: Effective communication between QA and development teams is essential for resolving
defects efficiently.

7.10 Conclusion

Test execution and defect reporting are critical stages in the software testing process. By following a structured
approach to executing tests and logging defects, teams ensure that the software meets its quality standards. Effective
defect reporting and management contribute to a smoother development cycle and higher-quality software.

Chapter 8: Test Reporting and Closure

8.1 What is Test Reporting?

Test Reporting is the process of documenting and communicating the results of the testing phase. It involves
summarizing the outcomes of executed tests, tracking defects, and providing stakeholders with a clear overview of the
quality of the product.

The primary goal of test reporting is to offer transparency about the status of testing and to provide stakeholders with
the necessary information to make informed decisions about the product's readiness.
8.2 Importance of Test Reporting

Test reports are essential for:

• Tracking Progress: They provide a snapshot of test execution and defect statuses, which helps in understanding
the progress of testing activities.

• Informed Decision Making: They assist stakeholders (e.g., product owners, developers) in making decisions on
release readiness or further work needed.

• Quality Assurance: Test reports document whether the software meets the defined acceptance criteria and
quality standards.

• Documentation and Compliance: They serve as official records for audits and quality control.

8.3 Components of a Test Report

A well-structured Test Report includes the following key components:

1. Test Summary:

o A brief overview of the testing activities, including objectives, scope, and the testing environment.

2. Test Execution Results:

o A summary of test cases executed, passed, failed, or blocked.

o Provides an overall status of test execution.

3. Defect Summary:

o A summary of defects identified during testing, including their severity and status (open, in-progress,
fixed, closed).

4. Test Coverage:

o Indicates the percentage of the total application or functionality tested against the test plan.

5. Test Metrics:

o Metrics such as test case execution time, defect density, and pass/fail ratio, helping in assessing the
efficiency and effectiveness of testing.

6. Risk and Issues:

o Identifies any risks, blockers, or challenges encountered during testing.

7. Conclusion:

o A summary of the testing status and whether the product is ready for release, along with any open
issues that need resolution.

8.4 Test Execution Report Example

Test Report Summary:

- Project: E-commerce Website Testing

- Test Execution Period: January 5, 2024 – January 15, 2024

- Test Manager: John Doe

- Test Environment: Windows 10, Chrome v90, Production Server


Test Execution Results:

- Total Test Cases: 50

- Test Cases Passed: 40 (80%)

- Test Cases Failed: 8 (16%)

- Test Cases Blocked: 2 (4%)

Defect Summary:

- Total Defects Logged: 10

- Critical Defects: 2

- Major Defects: 5

- Minor Defects: 3

- Defects Closed: 5

- Defects Pending: 5

Test Coverage:

- Login Functionality: 100% tested

- Shopping Cart Functionality: 90% tested

- Checkout Process: 80% tested

- Payment Gateway: Not tested (Out of Scope)

Test Metrics:

- Test Execution Time: 30 hours

- Pass/Fail Ratio: 80% Pass

- Defect Density: 2 defects per 10 test cases

Risk and Issues:

- Limited test data for edge cases in the cart functionality.

- Performance issues observed on the payment page during high traffic.

Conclusion:

- Testing is 90% complete.

- The product is not yet ready for release due to critical defects in the checkout process and
performance issues.

- Pending defects need to be addressed before the release.
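The metrics in the report above follow directly from the raw counts; a small sketch showing the arithmetic, with the numbers taken from the example:

```python
# Recomputing the test metrics from the raw counts in the report above.
total_cases = 50
passed, failed, blocked = 40, 8, 2
defects_logged = 10

pass_ratio = passed / total_cases * 100             # 80.0 -> "80% Pass"
defect_density = defects_logged / total_cases * 10  # 2.0 defects per 10 test cases

print(f"Pass/Fail Ratio: {pass_ratio:.0f}% Pass")
print(f"Defect Density: {defect_density:.0f} defects per 10 test cases")
```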


8.5 Test Closure

Test Closure is the final phase of the testing process, where the testing activities are formally concluded, and the
testing team prepares for project completion. This phase involves evaluating the entire testing process, ensuring that
all necessary documentation is completed, and providing final reports to stakeholders.

8.6 Key Activities in Test Closure

1. Test Summary Report:

o Prepare a comprehensive report summarizing all test activities, results, and defect status, as shown in
the previous example.

2. Defect Report Finalization:

o Ensure that all defects are properly logged, tracked, and resolved. Any open defects that may impact
the release should be flagged.

3. Test Artifacts Finalization:

o All test cases, test scripts, defect logs, and other related artifacts should be archived and finalized for
future reference.

4. Lessons Learned:

o A retrospective session to review what went well and what could have been improved during the
testing process. This helps in improving future testing cycles.

5. Stakeholder Sign-off:

o Obtain final approval from stakeholders (e.g., product owners, project managers) to confirm that
testing is complete and the software is ready for release.

6. Release Test Artifacts:

o Provide stakeholders with access to final test deliverables, such as test cases, defect logs, and test
execution reports, for record-keeping or auditing purposes.

8.7 Real-Time Example: Test Closure for E-commerce Website

Test Summary Report:

• Project: E-commerce Website Testing

• Test Execution Period: January 5, 2024 – January 15, 2024

• Test Manager: Jane Smith

• Test Environment: Windows 10, Chrome 90.0, Production Server

Defect Summary:

• Total Defects Logged: 12

• Critical Defects: 3

• Major Defects: 5

• Minor Defects: 4

• Defects Closed: 6
• Defects Pending: 6 (2 critical defects are pending resolution)

Test Completion Status:


Testing has been completed for all planned features except the Payment Gateway, which was out of scope. Most
features have passed, but critical defects need to be addressed before release. The overall pass rate is 80%.

Lessons Learned:

• The testing team faced challenges with the payment integration, as the test environment was not stable,
impacting the ability to perform certain tests.

• Test automation can be improved by adding more automated tests for critical user flows (like login and
checkout).

Test Artifacts Finalization:

• All test cases and defect logs have been reviewed and archived.

• Test execution reports, defect logs, and test summary have been delivered to the stakeholders.

Stakeholder Sign-off:

• Product Manager: Approved (with the understanding that critical defects will be fixed before release).

• QA Lead: Approved.

8.8 Key Considerations for Test Reporting and Closure

1. Accuracy: Test reports should accurately reflect the status of testing, defects, and overall quality.

2. Clarity: Reports should be easy to understand, even for non-technical stakeholders.

3. Comprehensiveness: Ensure that all necessary information (e.g., defect summary, test coverage) is included in
the final report.

4. Timeliness: Test reports and closure activities should be completed promptly at the end of the testing phase to
allow for timely decision-making.

5. Feedback: Use lessons learned to improve future testing cycles and processes.

8.9 Conclusion

Test Reporting and Closure are essential for finalizing the testing phase and ensuring that all testing activities are
properly documented and communicated. A clear test report and formal test closure process help stakeholders assess
the product’s quality and make informed decisions about its release.
Chapter 9: Agile and Manual Testing

9.1 Introduction to Agile and Manual Testing

Agile methodology emphasizes flexibility, collaboration, and frequent delivery of small increments of software. In Agile,
manual testing is still essential, even in environments where automation is used, due to its ability to handle exploratory
testing, usability testing, and testing new features or functionalities in real-time.

While Agile practices focus on continuous iteration and improvement, manual testers ensure that user requirements
are met through comprehensive testing of each feature as it’s developed, ensuring high-quality deliverables at the end
of each sprint.

9.2 Manual Testing in Agile

Manual testing in Agile involves testing the software manually after each sprint. Unlike traditional Waterfall
methodologies, where testing is done at the end of the project, Agile testing is integrated into the iterative process and
begins as soon as the first set of features is ready for testing.

Key Points:

• Frequent Releases: Testers perform manual testing on new features delivered in each sprint.

• Continuous Feedback: Feedback from testers is used to refine and improve the product, ensuring early
detection of issues.

• Collaboration: Testers work closely with developers and business stakeholders to ensure that the product
meets the acceptance criteria.

Common Manual Testing Tasks in Agile:

• Test Case Design: Writing test cases to validate features developed in each sprint.

• Exploratory Testing: Testing the product with an exploratory approach to find unexpected issues.

• Regression Testing: Ensuring that new code does not break existing functionality.

• UAT (User Acceptance Testing): Verifying that the product meets business requirements.

9.3 Role of a Tester in Agile Teams

In an Agile environment, the role of a manual tester extends beyond just writing and executing test cases. Testers in
Agile teams are expected to contribute to the development process, provide feedback during sprints, and collaborate
across various stages of development.

Responsibilities of a Tester in Agile:

• Active Participation in Sprint Planning: Testers participate in sprint planning sessions to understand the scope
and requirements for the upcoming sprint.

• Test Case Design and Execution: Writing, reviewing, and executing test cases during the sprint to validate new
features.

• Collaborating with Developers: Providing feedback on new features and working with developers to reproduce
and fix defects.

• Continuous Integration: Participating in continuous integration (CI) and continuous testing (CT) processes.
• Test Automation (Optional but Often Involved): In some Agile environments, testers may help create
automated tests or use automation tools for repetitive tasks while continuing to conduct manual testing for
complex scenarios.

Agile Tester's Key Skills:

• Strong communication skills for collaboration.

• Flexibility to adjust testing efforts based on changing requirements.

• A deep understanding of both the business and technical aspects of the product.

• Quick adaptability to new tools and techniques.

9.4 Sprint Planning and Manual Testing

In Agile, sprint planning is a collaborative session where the team discusses the features to be developed in the
upcoming sprint, sets priorities, and estimates the time required to complete the tasks.

Manual Testing During Sprint Planning:

• Understanding Requirements: Testers need to understand the user stories or requirements associated with
the sprint. This ensures that testing efforts are aligned with business goals.

• Test Case Planning: Testers can identify potential test scenarios, create test data, and design test cases based
on the acceptance criteria of the user stories.

• Effort Estimation: Testers collaborate with developers to estimate the testing effort required, ensuring that the
test cases can be executed within the sprint's timeframe.

Key Considerations:

• Test-First Approach: In some Agile methodologies like Test-Driven Development (TDD), test cases are written
before the development of features, and manual testers are often involved in reviewing these test cases.

• Test Cases and Acceptance Criteria: Testers ensure that all acceptance criteria are covered by the test cases
and that the software meets business requirements.

9.5 Real-Time Example: Manual Testing for a Ride-Sharing Application

Context: Suppose we are testing a ride-sharing application in an Agile environment. The development team is working
on a new feature: “Ride Cancellation.”

• User Story: "As a user, I want to cancel a ride request before the driver accepts it."

• Acceptance Criteria:

o The user can cancel the ride request from the app.

o The driver gets a notification of the cancellation.

o The user should see a message confirming the cancellation.

Sprint Planning:

• Testers review the user story, understand the business requirements, and plan for testing the ride cancellation
feature.

• Test cases are created based on the acceptance criteria:

o Test Case 1: Verify that the user can cancel the ride request.
o Test Case 2: Verify that the driver gets a cancellation notification.

o Test Case 3: Verify that the cancellation message is displayed to the user.

Manual Testing:

• Test Execution: During the sprint, testers execute these test cases on the newly developed feature. Any issues
are logged and communicated back to the development team for resolution.

• Exploratory Testing: Testers also perform exploratory testing to identify any edge cases related to ride
cancellation, such as attempting to cancel a ride after a driver has accepted it.

• Regression Testing: Manual testers run regression tests to ensure that the new feature doesn't break any
existing functionality (like ride booking or payment).

Collaboration:

• Testers work closely with developers, providing feedback on the functionality during the sprint.

• The team holds daily stand-ups to track testing progress and any roadblocks that may arise.

End of Sprint:

• At the end of the sprint, the feature is considered "done" if all tests have passed, defects are resolved, and it
meets the acceptance criteria.

Chapter 10: Challenges in Manual Testing

10.1 Common Challenges in Manual Testing

Manual testing, while effective in many scenarios, comes with its own set of challenges. Some of the most common
challenges faced by manual testers include:

1. Time Constraints:

o Testing can be time-consuming, especially when the product has complex features or large volumes of
functionality to verify. Manual testing may not always be able to keep up with tight release schedules.

2. Repetitive Nature of Testing:

o Many test cases, particularly for regression testing, involve repetitive tasks that testers need to execute
multiple times. This can lead to tester fatigue and reduce efficiency.

3. Limited Test Coverage:

o Due to time or resource constraints, testers may not be able to cover every single scenario. This can
lead to missing critical defects, especially in large applications.

4. Human Error:

o Manual testing is prone to human error. Testers may overlook test cases or fail to execute tests
accurately, leading to missed defects.

5. Difficulty in Handling Large Data Sets:


o Testing with large volumes of data can be challenging when done manually. Testers may struggle to
handle extensive data or complex data manipulations.

6. Lack of Objectivity:

o Testers can sometimes become biased due to familiarity with the application, which might result in
missing issues that a fresh perspective would catch.

7. Inability to Reproduce Defects Consistently:

o Manual testing often struggles with reproducing defects consistently, especially intermittent issues or
those occurring in specific environments.

10.2 Strategies to Overcome Challenges

Here are several strategies to help manual testers overcome common challenges:

1. Effective Test Planning and Prioritization:

o Proper planning ensures that the most critical test cases are prioritized. Testers should focus on high-
risk areas first and consider automating lower-priority tests.

2. Test Case Design Improvements:

o Use techniques like Boundary Value Analysis and Equivalence Partitioning to design test cases that cover a wide range of scenarios while reducing redundant tests (a short sketch follows this list).

3. Test Automation (for Repetitive Tests):

o For repetitive tests, especially regression tests, automation can save time and reduce errors. Testers
should focus on automating test cases that need to be executed frequently.

4. Collaboration and Communication:

o Close collaboration with developers and other team members helps ensure that requirements are
understood and defects are addressed promptly.

o Regular feedback loops in Agile teams can help catch issues early in the sprint.

5. Use of Testing Tools:

o Tools like bug trackers, test management systems, and performance testing tools can streamline the
process and help testers manage their efforts more effectively.

6. Exploratory Testing:

o Testers should regularly engage in exploratory testing to uncover issues that are difficult to capture in
predefined test cases. This also helps ensure coverage of edge cases.

7. Defect Tracking and Resolution:

o Proper defect tracking, with clear steps for developers to reproduce the issue, ensures that bugs are
resolved efficiently and testers can verify fixes.
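To make strategy 2 concrete, here is a minimal Boundary Value Analysis sketch: given a valid input range, it derives the standard boundary test points. The rule tested (a password length of 8 to 16 characters) is an assumed example, not taken from a real application.

```python
# Boundary Value Analysis for an assumed rule: password length must be 8-16.
def boundary_values(min_valid: int, max_valid: int) -> list:
    """Return the standard BVA test points around a valid [min, max] range."""
    return [min_valid - 1, min_valid, min_valid + 1,
            max_valid - 1, max_valid, max_valid + 1]

for length in boundary_values(8, 16):
    expected = "accept" if 8 <= length <= 16 else "reject"
    print(f"password of length {length}: expect {expected}")
```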

10.3 Real-Time Examples of Challenges

Let's look at a few real-time examples to illustrate common challenges faced in manual testing:

1. Challenge 1: Difficulty in Reproducing Intermittent Defects


o Example: A defect where a user is intermittently unable to add items to the shopping cart in an e-commerce app.

o Challenge: The defect doesn't occur consistently, making it hard to reproduce manually.

o Solution: Testers used logging and debugging tools to track the defect. They also started tracking the
user’s actions before the issue occurred, eventually identifying a race condition in the cart logic.

2. Challenge 2: Limited Test Coverage

o Example: A large banking application with many features, but limited time to test.

o Challenge: Testers can't test every function due to time constraints.

o Solution: Testers prioritized testing of the most critical functions (like fund transfers, balance checking)
and automated lower-priority tests. They also performed exploratory testing to uncover other defects.

3. Challenge 3: Time Constraints and Tight Deadlines

o Example: An online booking platform releases new features every week.

o Challenge: Testers have limited time for testing each feature due to tight release schedules.

o Solution: Testers focused on functional testing and regression for the most critical workflows. They
also started leveraging test automation for routine checks.

10.4 Conclusion

Manual testing is a vital part of the software development lifecycle, especially in Agile environments. While manual
testing comes with challenges, such as time constraints and human error, these challenges can be mitigated through
strategic planning, collaboration, automation, and effective communication.

By understanding and addressing these challenges, testers can ensure that they deliver high-quality, reliable software
while adapting to the fast-paced nature of Agile development.
