Manual Testing Tutorial
Index
1. Introduction to Manual Testing
2. Software Development Life Cycle (SDLC)
• Phases of SDLC
3. Software Testing Life Cycle (STLC)
• Phases of STLC
4. Types of Testing
• Functional Testing
• Non-functional Testing
• Exploratory Testing
• Regression Testing
• Real-time Examples: Functional Testing of a Login Page, Load Testing for an E-commerce Website
5. Test Case Development
6. Test Plan
• What is a Test Plan?
7. Test Execution and Defect Reporting
• What is a Defect?
8. Test Reporting and Closure
• Importance of Reporting
9. Agile and Manual Testing
• Skills Required
10. Challenges in Manual Testing
• Common Challenges
Chapter 1: Introduction to Manual Testing
Manual Testing is the process of testing software manually to identify bugs, defects, or issues. Testers simulate end-user scenarios and verify whether the application behaves as expected. Unlike automated testing, manual testing does not rely on scripts or tools; instead, it depends on human effort.
• Adapts to Change: Manual testing is flexible and can handle dynamic changes during the testing process.
• Catches Usability Issues: Human testers can evaluate user experience, which automated tools cannot.
Example: Functional Test of a Login Page
1. Test Scenario: Verify that a user can log in with valid credentials.
2. Test Steps: Enter a valid username and password, then click "Login."
3. Expected Result: The user should be directed to the homepage without errors.
Possible Bugs: Login fails with valid credentials; error messages are unclear or missing; the page hangs after clicking "Login."
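The login scenario above can also be expressed as a runnable check. This is an illustrative sketch only: `login`, the `VALID_USERS` demo account, and the returned page names are hypothetical stand-ins for a real application under test.

```python
# Illustrative stub: login() and the demo account stand in for the real
# application under test; page names are hypothetical.
VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    """Mimic the login flow: valid credentials land on the homepage."""
    if VALID_USERS.get(username) == password:
        return "homepage"
    return "error: Invalid username or password"

# Positive scenario: valid credentials reach the homepage without errors.
assert login("alice", "s3cret") == "homepage"
# Negative scenario: wrong credentials produce an error message instead.
assert login("alice", "wrong").startswith("error")
```

A manual tester performs these steps by hand; the sketch simply shows how the same pass/fail decision is reached.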
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Test Execution
5. Defect Reporting
6. Test Closure
Chapter 2: Software Development Life Cycle (SDLC)
The Software Development Life Cycle (SDLC) is a structured process used to design, develop, test, and maintain
software. It ensures that the software meets user expectations and quality standards.
1. Requirement Gathering and Analysis
2. System Design
3. Implementation (Development)
4. Testing
5. Deployment
6. Maintenance
Various models define how SDLC phases are executed. Popular models include:
1. Waterfall Model
2. V-Model
3. Agile Model
[Diagrams omitted: the Waterfall, V-Model, and Agile models; the Agile cycle repeats each iteration.]
2.6 Real-Time Example: SDLC for a Banking Application
1. Requirement Gathering: Understand features like balance check, funds transfer, and transaction history.
Chapter 3: Software Testing Life Cycle (STLC)
The Software Testing Life Cycle (STLC) is a systematic process that defines testing activities to be performed during each stage of software development. It ensures a thorough evaluation of the product's functionality, performance, and reliability.
1. Requirement Analysis
o Activities:
▪ Analyze requirements and identify testable items.
2. Test Planning
o Activities:
▪ Define the test strategy, scope, resources, and schedule.
3. Test Case Development
o Activities:
▪ Write test cases based on requirements.
4. Environment Setup
o Activities:
▪ Prepare the test environment and test data.
5. Test Execution
o Activities:
▪ Execute test cases, log results, and report defects.
6. Test Closure
o Activities:
▪ Summarize results, archive artifacts, and capture lessons learned.
• Entry Criteria: The test environment is ready, test cases are prepared, and test data is in place.
• Exit Criteria: All planned tests are executed and critical defects are resolved.
Requirement Analysis → Test Planning → Environment Setup → Test Execution → Test Closure
Real-Time Example: STLC for a Messaging Application
1. Requirement Analysis:
2. Test Planning:
3. Test Case Development:
4. Test Execution:
o Test Case 1: Send a message.
• Steps:
▪ Log into the app, open a chat, type "Hello," and tap Send.
• Expected Result: The message "Hello" is displayed in the chat and marked as delivered.
o Test Case 2: Exceed the message length limit.
• Steps:
▪ Log into the app.
▪ Type a message longer than the allowed limit and tap Send.
• Expected Result: The system displays "Message length exceeded" and prevents the message from being sent.
5. Test Closure:
• Activities:
▪ Defects logged: 2.
o Deliverables:
1. Test Plan: The document defining the scope, strategy, and schedule of testing.
2. Test Scenarios: High-level descriptions of the functionality to be tested.
3. Test Cases: Detailed steps and expected results for test scenarios.
4. Defect Reports: Records of the issues found during testing.
5. Test Summary Report: Overview of testing activities, results, and key metrics.
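The messaging example's negative test (message length) can be sketched as code. The `send_message` function and the 500-character limit are assumptions for illustration; a real app defines its own limit and behavior.

```python
MAX_MESSAGE_LENGTH = 500  # assumed limit; the real app defines its own

def send_message(text):
    """Mimic the chat app's send flow from the test cases above."""
    if len(text) > MAX_MESSAGE_LENGTH:
        return "Message length exceeded"
    return "delivered"

# Test Case 1: a normal message is delivered.
assert send_message("Hello") == "delivered"
# Test Case 2: an over-long message is rejected with the expected message.
assert send_message("x" * (MAX_MESSAGE_LENGTH + 1)) == "Message length exceeded"
```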
Chapter 4: Types of Testing
In software testing, various types of testing are used to validate different aspects of a software product. These can be
broadly categorized into Functional and Non-Functional testing, but there are also specialized types based on specific
goals, such as exploratory testing and regression testing.
Functional testing validates the functionality of a software application by checking whether it behaves as expected.
These tests are usually based on requirements and specifications.
o Unit Testing: Tests individual components or functions in isolation.
▪ Example: Testing a login function to verify the correct username and password handling.
o Integration Testing: Ensures that different modules or components of the software work together.
▪ Example: Testing the interaction between a payment gateway and the shopping cart in an e-
commerce application.
o System Testing: Tests the entire system as a whole to ensure it works as expected.
▪ Example: Testing a fully developed e-commerce website, including login, shopping, and
checkout functionality.
o Acceptance Testing: Verifies if the software meets the business requirements and is ready for
deployment.
▪ Example: Testing an online banking system to ensure it meets regulatory requirements and
user expectations.
Non-functional testing focuses on non-functional aspects of the software, such as performance, usability, and security.
o Performance Testing: Assesses how well the software performs under load and stress.
▪ Example: Load testing an e-commerce website to see how many users it can handle
simultaneously.
o Usability Testing: Evaluates how user-friendly and intuitive the software is.
▪ Example: Testing the navigation and layout of a mobile app to ensure a seamless user
experience.
o Security Testing: Ensures that the software is secure from vulnerabilities and cyber-attacks.
▪ Example: Penetration testing an online banking application to check for security flaws.
o Compatibility Testing: Verifies that the software works across different environments, devices, and
browsers.
▪ Example: Testing a website on various browsers like Chrome, Firefox, and Safari to ensure
cross-browser compatibility.
These are testing types performed in specific scenarios or to uncover particular issues.
1. Exploratory Testing:
o The tester explores the application without predefined test cases to find potential issues.
2. Regression Testing:
o Ensures that new changes or updates do not break or negatively impact existing features.
o Example: After adding a new product page, testing the existing checkout and search functionalities to
make sure they still work properly.
3. Smoke Testing:
o A preliminary test to check whether the basic functions of the application work.
o Example: After a new build, testing if users can log in and access the main page of the application.
4. Sanity Testing:
o Focuses on verifying whether a specific bug has been fixed or if a small change works as expected.
o Example: After fixing a bug where a user couldn't add items to their cart, testing this specific
functionality.
5. Ad-hoc Testing:
o Unscripted testing performed to find unexpected defects without following any formal testing process.
o Example: Randomly clicking on different features of an app to see if any unexpected crashes occur.
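To make the suite distinctions above concrete, here is a minimal sketch of grouping checks into smoke and regression suites. The check functions are placeholders; in practice each would exercise the real application.

```python
# Each check is a plain function returning pass/fail; suites group them
# by testing type. The checks are placeholders for real application tests.
def user_can_log_in():
    return True

def main_page_loads():
    return True

def checkout_still_works():
    return True

def search_still_works():
    return True

SUITES = {
    "smoke": [user_can_log_in, main_page_loads],               # quick build check
    "regression": [checkout_still_works, search_still_works],  # protect existing features
}

def run_suite(name):
    """Run every check in the named suite and report pass/fail per check."""
    return {check.__name__: check() for check in SUITES[name]}

assert all(run_suite("smoke").values())
```

After a new build, a team would run the small smoke suite first; the regression suite runs whenever existing features might be affected by a change.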
[Diagram omitted: "Types of Testing," branching into Functional, Non-Functional, and Specialized testing types.]
1. Functional Testing: Adding a Product to the Cart
o Steps: Select a product, choose a quantity, click "Add to Cart," and open the cart.
o Expected Result: The correct product, quantity, and price should be displayed in the cart.
o Bug Example: The cart shows the wrong price or an empty cart after adding products.
2. Load Testing: E-commerce Website Under Heavy Traffic
o Steps: Simulate a large number of users browsing and placing orders simultaneously.
o Expected Result: The website should respond to all users without crashing.
o Bug Example: The website becomes slow or crashes under heavy traffic.
3. Usability Testing: Mobile App Navigation
o Steps: Ask users to complete common tasks, such as changing a setting or moving between screens.
o Expected Result: The app should be intuitive, with easy navigation and minimal friction.
o Bug Example: Users have difficulty finding the settings menu or navigating between screens.
• Functional Testing is typically prioritized during the initial stages of development, while Non-Functional Testing
is done later to ensure overall performance.
• Specialized testing types like Exploratory and Ad-hoc are used for finding defects that are not easily identified
through standard testing procedures.
Chapter 5: Test Case Development
A Test Case is a detailed document that outlines a specific set of actions to verify whether a particular functionality or
feature of the software behaves as expected. It includes inputs, execution steps, expected results, and any necessary
configurations to perform the test.
1. Test Case ID: A unique identifier for the test case (e.g., TC_001).
2. Test Title/Description: A short statement of what the test verifies.
3. Preconditions: Any setup or conditions that must be met before executing the test (e.g., user must be logged in).
4. Test Steps: The step-by-step actions required to perform the test.
5. Test Data: The input values used in the test (e.g., username, password).
6. Expected Result: The anticipated result or behavior after executing the test steps.
7. Actual Result: The actual outcome observed after executing the test case.
8. Pass/Fail: Indicates whether the test passed or failed based on the expected and actual results.
9. Priority: The importance level of the test (e.g., High, Medium, Low).
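The components listed above map naturally onto a structured record. Here is a minimal sketch in Python; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One manual test case, mirroring the components listed above."""
    test_case_id: str
    preconditions: str
    steps: list
    test_data: dict
    expected_result: str
    priority: str = "Medium"
    actual_result: str = ""  # filled in after execution

    def status(self):
        """Pass/Fail is derived by comparing expected and actual results."""
        if not self.actual_result:
            return "Not Run"
        return "Pass" if self.actual_result == self.expected_result else "Fail"

tc = TestCase(
    test_case_id="TC_001",
    preconditions="User account exists",
    steps=["Enter username", "Enter password", "Click Login"],
    test_data={"username": "alice", "password": "s3cret"},
    expected_result="User is directed to the homepage",
    priority="High",
)
tc.actual_result = "User is directed to the homepage"
assert tc.status() == "Pass"
```

Keeping test cases in a consistent structure like this makes them easy to review, execute, and maintain, which is exactly what the guidelines above ask for.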
Writing effective test cases is a critical skill in manual testing. Here are some guidelines for creating clear and efficient
test cases:
• Be Clear and Concise: Test cases should be easy to understand. Avoid ambiguous or unclear language.
• Use Simple and Relevant Data: Use realistic data in test cases that simulate real-world usage.
• Cover Different Scenarios: Include positive and negative scenarios to ensure thorough testing (e.g., valid and
invalid inputs).
• Maintain Consistency: Follow a consistent format for all test cases to ensure clarity and ease of execution.
Test Case 1: Login with valid credentials
o Test Steps: Enter a valid username and password, then click "Login."
o Expected Result: The user is directed to the homepage.
o Actual Result: (To be filled after execution)
o Priority: High
o Remarks: None
Test Case 2: Login with invalid credentials
o Test Steps: Enter an invalid username or password, then click "Login."
o Expected Result: An error message should appear stating "Invalid username or password."
o Actual Result: (To be filled after execution)
o Priority: High
o Remarks: None
• Positive Test Cases: Test scenarios that verify expected behavior with valid input.
o Example: Verifying that a user can add an item to the cart and proceed to checkout.
▪ Test Steps: Select a product, click "Add to Cart," and open the cart.
▪ Expected Result: The selected product should be added to the cart with the correct price and quantity.
▪ Priority: High
▪ Remarks: None
• Negative Test Cases: Test scenarios where the application should handle invalid input or errors gracefully.
• Traceability: Test cases should map back to requirements or user stories to ensure full coverage.
• Maintainability: Test cases should be easy to maintain and update as the software evolves.
Chapter 6: Test Plan
A Test Plan is a comprehensive document that outlines the strategy, scope, approach, resources, and schedule for testing activities. It defines the testing objectives, deliverables, and the criteria for testing success, ensuring that all aspects of the software are tested effectively.
• Guiding the Testing Process: It provides a clear roadmap for the testing process.
• Ensuring Consistency: Ensures all team members follow the same approach and understand the scope of
testing.
• Resource Allocation: Helps in planning the required resources, tools, and time for testing.
• Communication Tool: Serves as a reference for the team, stakeholders, and clients.
1. Test Plan ID: A unique identifier for the test plan.
2. Introduction: An overview of the project and the purpose of the test plan.
3. Test Items: The features and components to be tested.
4. Scope of Testing: What is in scope and out of scope for testing.
5. Test Strategy: The overall approach to testing, including methodologies and levels of testing.
6. Test Deliverables: The documents and artifacts that will be produced during the testing process.
7. Testing Resources: A list of tools, environments, and team members required for testing.
8. Test Schedule: A timeline outlining the milestones and deadlines for testing activities.
9. Entry and Exit Criteria: Defines the conditions that must be met to begin and conclude testing.
10. Risk and Mitigation Plan: Identifies potential risks in the testing process and strategies for mitigating them.
11. Approval and Sign-off: The process for obtaining approval of the test plan and its components.
2. Introduction: Overview of the test plan for testing the e-commerce platform.
4. Scope of Testing: Functional and regression testing of login, cart, and checkout.
5. Test Strategy:
- Functional Testing: Validate login, search, cart, and checkout features.
- Regression Testing: Ensure new code does not break existing features.
6. Test Deliverables: Test cases, test execution reports, defect logs, test summary report.
8. Test Schedule:
1. Clarity and Detail: A test plan should be clear and detailed enough for anyone to understand the testing
approach.
2. Scope: The scope must be carefully defined to avoid scope creep and to ensure the testing process is focused.
3. Realistic Scheduling: The timeline should be feasible, considering resource availability and the complexity of
the testing.
4. Resource Allocation: Properly allocate tools, environments, and team members based on expertise and
availability.
5. Risk Management: Identify potential risks (e.g., resource constraints, environment issues) and plan mitigation
strategies.
6. Approval Process: Ensure proper sign-offs and approvals from stakeholders to proceed with the testing phase.
Introduction:
This test plan defines the approach and activities for testing the e-commerce platform’s key features, including user
login, product search, shopping cart functionality, and checkout process.
Test Items:
• Shopping Cart: Test adding/removing items, cart persistence, and price calculation.
• Checkout: Ensure smooth checkout process, including shipping options and order confirmation.
Scope of Testing:
• In-scope: Functional testing of login, cart, and checkout; Regression testing for any changes in the cart
functionality.
Test Strategy:
• Functional Testing: Test cases for login, search, cart, and checkout.
• Regression Testing: Ensure that the checkout process and cart functionality work after new updates.
Test Deliverables:
• Defect logs and test summary report at the end of the testing phase.
Testing Resources:
• Tools: Selenium for automation (if applicable), JIRA for defect tracking.
Test Schedule:
• Entry Criteria: Test environment setup, test cases prepared, and test data in place.
• Exit Criteria: All planned tests executed, critical defects resolved, test summary prepared.
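The entry criteria above amount to a checklist that must be fully satisfied before execution starts. A minimal sketch (the criterion names are illustrative):

```python
# Entry criteria from the plan, expressed as a checklist (names illustrative).
entry_criteria = {
    "test_environment_set_up": True,
    "test_cases_prepared": True,
    "test_data_in_place": True,
}

def can_start_testing(criteria):
    """Execution may begin only when every entry criterion is met."""
    return all(criteria.values())

assert can_start_testing(entry_criteria)
# If even one criterion is unmet, testing should not begin.
assert not can_start_testing({**entry_criteria, "test_data_in_place": False})
```

Exit criteria work the same way in reverse: all must hold before testing is concluded.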
Risk and Mitigation Plan:
• Risk: Realistic test data may not be available when testing begins.
• Mitigation: Coordinate with the business team to prepare realistic test data in advance.
Approval and Sign-off:
• The test plan is reviewed and approved by stakeholders (e.g., QA lead, product manager) before testing begins.

6.7 Benefits of a Test Plan
• Quality Assurance: Helps ensure that all critical aspects of the software are tested thoroughly.
• Stakeholder Alignment: Ensures all stakeholders are aligned on the testing approach, timeline, and deliverables.
• Risk Mitigation: Identifies potential risks early on and prepares strategies to minimize their impact.
6.8 Conclusion
The Test Plan is a crucial document in manual testing. It ensures that testing is structured, organized, and aligned with
project goals. By defining the scope, resources, timelines, and risk management strategies, it provides a clear
framework for conducting effective testing.
Chapter 7: Test Execution and Defect Reporting
Test Execution is the process of executing test cases as defined in the test plan and observing the actual outcomes. During test execution, testers run the tests, record the results, compare them with the expected outcomes, and determine whether the system is functioning as expected.
1. Preparation:
o Confirm that all required tools, applications, and test data are ready for execution.
2. Executing Test Cases:
o Run each test case, following the documented steps exactly.
3. Comparing Results:
o After executing each test, compare the actual results with the expected results.
4. Logging Defects:
o If the test fails, log a defect or bug, detailing the issue, steps to reproduce, and severity.
o Provide detailed information, such as error messages, screenshots, or logs, to help developers fix the issue.
5. Reporting Results:
o Update the test case status (Pass/Fail) and provide feedback to stakeholders about the testing progress.
Test execution can be broken down into different phases based on the project lifecycle:
• Alpha Testing: Testing conducted in-house, typically by the development and QA teams, before the product is shared with external users.
• Beta Testing: Testing performed by a limited group of real end-users before the product is released to the public.
• Production Testing: The testing phase after the product has been deployed to the live environment to ensure
stability.
1. Preconditions:
o The user is logged in, and the product catalog is available.
2. Test Steps:
o Select a product and click "Add to Cart."
o Verify that the product appears in the cart with the correct details (product name, price, and quantity).
3. Expected Result:
o The product is successfully added to the cart, and the cart reflects the correct product name, price, and quantity.
4. Actual Result:
o The product appeared in the cart with the correct name, price, and quantity.
o Pass/Fail: Pass
If the test had failed (e.g., the item was not added to the cart), a defect would be logged.
A defect (or bug) is any deviation from the expected result during test execution. Defects are reported so that
developers can fix them. Here are the key components of a defect report:
1. Defect ID: A unique identifier for the defect.
2. Summary: A one-line description of the issue.
3. Description: A detailed explanation of the defect and its impact.
4. Steps to Reproduce: The exact steps needed to trigger the defect.
5. Expected Result: What should have happened.
6. Actual Result: What actually happened.
7. Severity and Priority: The impact of the defect and how urgently it should be fixed.
8. Environment: Details about the environment in which the defect was found (e.g., OS, browser version).
9. Attachments: Screenshots, logs, or other files that provide more details about the defect.
Let’s assume during test execution, we encountered an issue while adding an item to the shopping cart.
• Summary: Product not added to the cart after clicking "Add to Cart."
• Description: When attempting to add a product to the cart, the cart does not update with the product.
• Steps to Reproduce:
1. Log into the application.
2. Navigate to the product page for "Laptop."
3. Click "Add to Cart."
4. Open the cart.
• Expected Result: The product "Laptop" should appear in the cart with the correct price and quantity.
Once a defect is reported, it goes through the following stages in its lifecycle:
1. New: The defect has been identified and reported but not yet assigned for fixing.
2. Assigned: The defect is assigned to a developer or team for investigation and resolution.
3. Open: The developer has confirmed the defect and is working on a fix.
4. Fixed: The defect has been fixed and the developer has verified the solution.
5. Retesting: The QA team tests the fix to ensure the defect is resolved and no new issues have been introduced.
6. Closed: If the defect is successfully fixed, it is closed. If the defect is not reproducible or not valid, it may be
closed as "Not a Bug."
7. Rejected: If the defect is not deemed critical or is determined to be working as expected, it may be rejected.
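The lifecycle stages above can be modeled as a small state machine. This is a simplified sketch covering the main transitions; real trackers support more paths, such as deferring or duplicating defects.

```python
# Allowed transitions between the main defect states described above
# (a simplified sketch; real trackers support more paths).
TRANSITIONS = {
    "New": {"Assigned", "Rejected"},
    "Assigned": {"Fixed", "Rejected"},
    "Fixed": {"Retesting"},
    "Retesting": {"Closed", "Assigned"},  # reassigned if the fix fails retesting
    "Closed": set(),
    "Rejected": set(),
}

def advance(current, nxt):
    """Move a defect to its next state, rejecting invalid jumps."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move a defect from {current} to {nxt}")
    return nxt

# Walk one defect through a normal fix-and-verify cycle.
state = "New"
for step in ("Assigned", "Fixed", "Retesting", "Closed"):
    state = advance(state, step)
assert state == "Closed"
```

Encoding the lifecycle this way makes invalid moves (e.g., closing a defect that was never retested) impossible by construction.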
There are several tools available for defect tracking and reporting, including:
1. JIRA: One of the most popular bug tracking tools, used for agile project management and issue tracking.
3. Trello: A simple board tool that can be used for tracking bugs in smaller projects or teams.
1. Documentation: Ensure all steps, results, and defects are well-documented to provide clear insights for
developers and stakeholders.
2. Reproducibility: Ensure that defects are reproducible by providing clear, actionable steps.
3. Timely Reporting: Report defects as soon as they are found to prevent delays in the development process.
4. Severity vs. Priority: Understand the difference between defect severity (the impact on functionality) and
priority (how soon it should be fixed).
5. Communication: Effective communication between QA and development teams is essential for resolving
defects efficiently.
7.10 Conclusion
Test execution and defect reporting are critical stages in the software testing process. By following a structured
approach to executing tests and logging defects, teams ensure that the software meets its quality standards. Effective
defect reporting and management contribute to a smoother development cycle and higher-quality software.
Chapter 8: Test Reporting and Test Closure
Test Reporting is the process of documenting and communicating the results of the testing phase. It involves summarizing the outcomes of executed tests, tracking defects, and providing stakeholders with a clear overview of the quality of the product.
The primary goal of test reporting is to offer transparency about the status of testing and to provide stakeholders with
the necessary information to make informed decisions about the product's readiness.
8.2 Importance of Test Reporting
• Tracking Progress: They provide a snapshot of test execution and defect statuses, which helps in understanding
the progress of testing activities.
• Informed Decision Making: They assist stakeholders (e.g., product owners, developers) in making decisions on
release readiness or further work needed.
• Quality Assurance: Test reports document whether the software meets the defined acceptance criteria and
quality standards.
• Documentation and Compliance: They serve as official records for audits and quality control.
1. Test Summary:
o A brief overview of the testing activities, including objectives, scope, and the testing environment.
2. Test Execution Status:
o The number of test cases planned, executed, passed, failed, and blocked.
3. Defect Summary:
o A summary of defects identified during testing, including their severity and status (open, in-progress,
fixed, closed).
4. Test Coverage:
o Indicates the percentage of the total application or functionality tested against the test plan.
5. Test Metrics:
o Metrics such as test case execution time, defect density, and pass/fail ratio, helping in assessing the
efficiency and effectiveness of testing.
6. Recommendations:
o Suggested next steps, such as fixes required before release or areas that need retesting.
7. Conclusion:
o A summary of the testing status and whether the product is ready for release, along with any open
issues that need resolution.
Defect Summary:
- Critical Defects: 2
- Major Defects: 5
- Minor Defects: 3
- Defects Closed: 5
- Defects Pending: 5
Test Coverage:
Test Metrics:
Conclusion:
- The product is not yet ready for release due to critical defects in the checkout process and
performance issues.
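The defect counts in the sample report above can be tallied, and the pass/fail ratio and coverage from the metrics section computed, with a short script. The execution numbers (60 planned, 50 executed, 42 passed) are hypothetical, added only to illustrate the formulas.

```python
# Defect counts taken from the sample report above.
defects = {"Critical": 2, "Major": 5, "Minor": 3}
closed, pending = 5, 5

total = sum(defects.values())        # 2 + 5 + 3 = 10
assert total == closed + pending     # every defect is either closed or pending

# Hypothetical execution numbers, used only to illustrate the metrics:
planned, executed, passed = 60, 50, 42
pass_rate = passed / executed * 100  # pass/fail ratio as a percentage
coverage = executed / planned * 100  # test coverage against the plan
print(f"Defects: {total}, pass rate: {pass_rate:.1f}%, coverage: {coverage:.1f}%")
```

Simple cross-checks like these catch reporting errors (e.g., closed plus pending not matching the total) before the report reaches stakeholders.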
Test Closure is the final phase of the testing process, where the testing activities are formally concluded, and the
testing team prepares for project completion. This phase involves evaluating the entire testing process, ensuring that
all necessary documentation is completed, and providing final reports to stakeholders.
1. Test Summary Report:
o Prepare a comprehensive report summarizing all test activities, results, and defect status, as shown in the previous example.
2. Defect Status Review:
o Ensure that all defects are properly logged, tracked, and resolved. Any open defects that may impact the release should be flagged.
3. Test Artifact Archiving:
o All test cases, test scripts, defect logs, and other related artifacts should be archived and finalized for future reference.
4. Lessons Learned:
o A retrospective session to review what went well and what could have been improved during the
testing process. This helps in improving future testing cycles.
5. Stakeholder Sign-off:
o Obtain final approval from stakeholders (e.g., product owners, project managers) to confirm that
testing is complete and the software is ready for release.
o Provide stakeholders with access to final test deliverables, such as test cases, defect logs, and test
execution reports, for record-keeping or auditing purposes.
Defect Summary:
• Critical Defects: 3
• Major Defects: 5
• Minor Defects: 4
• Defects Closed: 6
• Defects Pending: 6 (2 critical defects are pending resolution)
Lessons Learned:
• The testing team faced challenges with the payment integration, as the test environment was not stable,
impacting the ability to perform certain tests.
• Test automation can be improved by adding more automated tests for critical user flows (like login and
checkout).
Final Deliverables:
• All test cases and defect logs have been reviewed and archived.
• Test execution reports, defect logs, and test summary have been delivered to the stakeholders.
Stakeholder Sign-off:
• Product Manager: Approved (with the understanding that critical defects will be fixed before release).
• QA Lead: Approved.
1. Accuracy: Test reports should accurately reflect the status of testing, defects, and overall quality.
2. Clarity: Present results so that both technical and non-technical stakeholders can understand them.
3. Comprehensiveness: Ensure that all necessary information (e.g., defect summary, test coverage) is included in
the final report.
4. Timeliness: Test reports and closure activities should be completed promptly at the end of the testing phase to
allow for timely decision-making.
5. Feedback: Use lessons learned to improve future testing cycles and processes.
8.9 Conclusion
Test Reporting and Closure are essential for finalizing the testing phase and ensuring that all testing activities are
properly documented and communicated. A clear test report and formal test closure process help stakeholders assess
the product’s quality and make informed decisions about its release.
Chapter 9: Agile and Manual Testing
Agile methodology emphasizes flexibility, collaboration, and frequent delivery of small increments of software. In Agile,
manual testing is still essential, even in environments where automation is used, due to its ability to handle exploratory
testing, usability testing, and testing new features or functionalities in real-time.
While Agile practices focus on continuous iteration and improvement, manual testers ensure that user requirements
are met through comprehensive testing of each feature as it’s developed, ensuring high-quality deliverables at the end
of each sprint.
Manual testing in Agile involves testing the software manually after each sprint. Unlike traditional Waterfall
methodologies, where testing is done at the end of the project, Agile testing is integrated into the iterative process and
begins as soon as the first set of features is ready for testing.
Key Points:
• Frequent Releases: Testers perform manual testing on new features delivered in each sprint.
• Continuous Feedback: Feedback from testers is used to refine and improve the product, ensuring early
detection of issues.
• Collaboration: Testers work closely with developers and business stakeholders to ensure that the product
meets the acceptance criteria.
• Test Case Design: Writing test cases to validate features developed in each sprint.
• Exploratory Testing: Testing the product with an exploratory approach to find unexpected issues.
• Regression Testing: Ensuring that new code does not break existing functionality.
• UAT (User Acceptance Testing): Verifying that the product meets business requirements.
In an Agile environment, the role of a manual tester extends beyond just writing and executing test cases. Testers in
Agile teams are expected to contribute to the development process, provide feedback during sprints, and collaborate
across various stages of development.
• Active Participation in Sprint Planning: Testers participate in sprint planning sessions to understand the scope
and requirements for the upcoming sprint.
• Test Case Design and Execution: Writing, reviewing, and executing test cases during the sprint to validate new
features.
• Collaborating with Developers: Providing feedback on new features and working with developers to reproduce
and fix defects.
• Continuous Integration: Participating in continuous integration (CI) and continuous testing (CT) processes.
• Test Automation (Optional but Often Involved): In some Agile environments, testers may help create
automated tests or use automation tools for repetitive tasks while continuing to conduct manual testing for
complex scenarios.
Skills Required:
• A deep understanding of both the business and technical aspects of the product.
• Strong collaboration and communication with developers and stakeholders.
In Agile, sprint planning is a collaborative session where the team discusses the features to be developed in the
upcoming sprint, sets priorities, and estimates the time required to complete the tasks.
• Understanding Requirements: Testers need to understand the user stories or requirements associated with
the sprint. This ensures that testing efforts are aligned with business goals.
• Test Case Planning: Testers can identify potential test scenarios, create test data, and design test cases based
on the acceptance criteria of the user stories.
• Effort Estimation: Testers collaborate with developers to estimate the testing effort required, ensuring that the
test cases can be executed within the sprint's timeframe.
Key Considerations:
• Test-First Approach: In some Agile methodologies like Test-Driven Development (TDD), test cases are written
before the development of features, and manual testers are often involved in reviewing these test cases.
• Test Cases and Acceptance Criteria: Testers ensure that all acceptance criteria are covered by the test cases
and that the software meets business requirements.
Context: Suppose we are testing a ride-sharing application in an Agile environment. The development team is working
on a new feature: “Ride Cancellation.”
• User Story: "As a user, I want to cancel a ride request before the driver accepts it."
• Acceptance Criteria:
o The user can cancel the ride request from the app.
o The driver receives a cancellation notification.
o A cancellation message is displayed to the user.
Sprint Planning:
• Testers review the user story, understand the business requirements, and plan for testing the ride cancellation
feature.
o Test Case 1: Verify that the user can cancel the ride request.
o Test Case 2: Verify that the driver gets a cancellation notification.
o Test Case 3: Verify that the cancellation message is displayed to the user.
Manual Testing:
• Test Execution: During the sprint, testers execute these test cases on the newly developed feature. Any issues
are logged and communicated back to the development team for resolution.
• Exploratory Testing: Testers also perform exploratory testing to identify any edge cases related to ride
cancellation, such as attempting to cancel a ride after a driver has accepted it.
• Regression Testing: Manual testers run regression tests to ensure that the new feature doesn't break any
existing functionality (like ride booking or payment).
Collaboration:
• Testers work closely with developers, providing feedback on the functionality during the sprint.
• The team holds daily stand-ups to track testing progress and any roadblocks that may arise.
End of Sprint:
• At the end of the sprint, the feature is considered "done" if all tests have passed, defects are resolved, and it
meets the acceptance criteria.
Chapter 10: Challenges in Manual Testing
Manual testing, while effective in many scenarios, comes with its own set of challenges. Some of the most common challenges faced by manual testers include:
1. Time Constraints:
o Testing can be time-consuming, especially when the product has complex features or large volumes of
functionality to verify. Manual testing may not always be able to keep up with tight release schedules.
2. Repetitive Tasks:
o Many test cases, particularly for regression testing, involve repetitive tasks that testers need to execute multiple times. This can lead to tester fatigue and reduce efficiency.
3. Limited Test Coverage:
o Due to time or resource constraints, testers may not be able to cover every single scenario. This can lead to missing critical defects, especially in large applications.
4. Human Error:
o Manual testing is prone to human error. Testers may overlook test cases or fail to execute tests
accurately, leading to missed defects.
5. Lack of Objectivity:
o Testers can sometimes become biased due to familiarity with the application, which might result in missing issues that a fresh perspective would catch.
6. Difficulty Reproducing Defects:
o Manual testing often struggles with reproducing defects consistently, especially intermittent issues or those occurring in specific environments.
Here are several strategies to help manual testers overcome common challenges:
1. Prioritization and Planning:
o Proper planning ensures that the most critical test cases are prioritized. Testers should focus on high-risk areas first and consider automating lower-priority tests.
2. Effective Test Design:
o Use techniques like Boundary Value Analysis and Equivalence Partitioning to design test cases that cover a wide range of scenarios while reducing redundant tests.
3. Automating Repetitive Tests:
o For repetitive tests, especially regression tests, automation can save time and reduce errors. Testers should focus on automating test cases that need to be executed frequently.
4. Collaboration and Feedback:
o Close collaboration with developers and other team members helps ensure that requirements are understood and defects are addressed promptly.
o Regular feedback loops in Agile teams can help catch issues early in the sprint.
5. Using the Right Tools:
o Tools like bug trackers, test management systems, and performance testing tools can streamline the process and help testers manage their efforts more effectively.
6. Exploratory Testing:
o Testers should regularly engage in exploratory testing to uncover issues that are difficult to capture in
predefined test cases. This also helps ensure coverage of edge cases.
7. Disciplined Defect Tracking:
o Proper defect tracking, with clear steps for developers to reproduce the issue, ensures that bugs are resolved efficiently and testers can verify fixes.
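The Boundary Value Analysis and Equivalence Partitioning techniques mentioned above can be sketched for a hypothetical quantity field that accepts 1-10 items:

```python
def boundary_values(minimum, maximum):
    """Boundary Value Analysis: test just below, at, and just above each edge."""
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

# Example: a quantity field accepting 1-10 items yields six boundary inputs.
assert boundary_values(1, 10) == [0, 1, 2, 9, 10, 11]

def equivalence_class(value):
    """Equivalence Partitioning: one representative input per class is enough."""
    if value < 1:
        return "invalid-low"
    if value <= 10:
        return "valid"
    return "invalid-high"

# Three representative inputs cover all three partitions.
assert [equivalence_class(v) for v in (0, 5, 11)] == ["invalid-low", "valid", "invalid-high"]
```

Together the two techniques shrink a huge input space to a handful of high-value test inputs, which is exactly the redundancy reduction the strategy above describes.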
Let's look at a few real-time examples to illustrate common challenges faced in manual testing:
1. Intermittent Defect:
o Example: An e-commerce cart occasionally fails to update when items are added.
o Challenge: The defect doesn't occur consistently, making it hard to reproduce manually.
o Solution: Testers used logging and debugging tools to track the defect. They also started tracking the user's actions before the issue occurred, eventually identifying a race condition in the cart logic.
2. Limited Time, Large Application:
o Example: A large banking application with many features, but limited time to test.
o Solution: Testers prioritized testing of the most critical functions (like fund transfers, balance checking) and automated lower-priority tests. They also performed exploratory testing to uncover other defects.
3. Tight Release Schedules:
o Challenge: Testers have limited time for testing each feature due to tight release schedules.
o Solution: Testers focused on functional testing and regression for the most critical workflows. They also started leveraging test automation for routine checks.
10.4 Conclusion
Manual testing is a vital part of the software development lifecycle, especially in Agile environments. While manual
testing comes with challenges, such as time constraints and human error, these challenges can be mitigated through
strategic planning, collaboration, automation, and effective communication.
By understanding and addressing these challenges, testers can ensure that they deliver high-quality, reliable software
while adapting to the fast-paced nature of Agile development.