Software Testing
• Software testing is the process of finding errors in the developed product.
• It also checks whether actual outcomes match expected results, and helps
identify defects, missing requirements, and gaps.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building
the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. It
means “Are we building the right product?”
Manual testing
• Manual testing is a type of software testing where testers manually execute
test cases without using any automated tools.
• In manual testing, testers simulate end-user scenarios to ensure that the
software behaves as expected and meets the specified requirements.
Automation testing
• Automation testing is the process of using software tools and scripts to
automate the execution of test cases and compare the actual outcomes with
expected outcomes.
• Automation testing involves the use of automation frameworks, scripts,
and tools to perform testing tasks more efficiently and accurately.
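The core idea above, running test cases automatically and comparing actual outcomes with expected outcomes, can be sketched in a few lines of Python. The `add` function and the test suite here are hypothetical stand-ins for a real unit under test:

```python
def add(a, b):
    """Hypothetical unit under test."""
    return a + b

def run_test_case(func, args, expected):
    """Execute one test case and compare actual vs expected output."""
    actual = func(*args)
    return actual == expected, actual

# A small automated suite: (inputs, expected outcome) pairs.
suite = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]
results = [run_test_case(add, args, expected) for args, expected in suite]
all_passed = all(passed for passed, _ in results)
```

In practice a framework such as pytest or unittest plays the role of `run_test_case`, but the comparison of actual against expected output is the same.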
Common automation testing tools:
• Selenium WebDriver: For automating web browser interactions.
• Appium: For automating mobile application testing (iOS and Android).
• TestComplete: For functional UI testing of web, desktop, and mobile apps.
• Katalon Studio: A comprehensive tool for web, API, mobile, and desktop
automation.
• Robot Framework: An open-source automation framework for various
types of testing.
• Cypress: Specifically designed for modern web application testing.
• Postman: For automating API testing and web services.
• Jenkins: For automating CI/CD pipelines and integrating testing into
development workflows.
Software Testing Life Cycle (STLC)
Requirement Analysis:
• In this phase, testers analyse the requirements provided by the client or
stakeholders.
• They understand what the software is supposed to do and how it's supposed
to behave.
Test Planning:
• Test planning involves creating a detailed plan outlining the testing
approach, scope, resources, timelines, and deliverables.
• Testers define what needs to be tested, how it will be tested, and who will
do the testing.
Test Case Development:
• Test cases are designed based on the requirements and specifications.
• Testers create detailed steps to verify that the software functions correctly
under various conditions.
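A test case designed in this phase typically records an identifier, the detailed steps, and the expected result. A minimal sketch of such a record, using a hypothetical login scenario:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    description: str
    steps: list           # ordered steps the tester follows
    expected_result: str  # what the software should do if it is correct

# Hypothetical example test case for a login feature.
login_tc = TestCase(
    case_id="TC-001",
    description="Valid user can log in",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="User is redirected to the dashboard",
)
```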
Test Environment Setup:
• Testers set up the testing environment, which includes hardware, software,
tools, and other resources needed to execute the tests effectively.
Test Execution:
• In this phase, testers execute the test cases created earlier.
• They run the software with different inputs and configurations to verify its
behaviour and functionality.
• Any defects or issues found during testing are reported.
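Executing the test cases with different inputs and recording any mismatches can be sketched as follows. The `is_even` function is a hypothetical unit under test, with a deliberate bug so the run reports a defect:

```python
def is_even(n):
    """Hypothetical unit under test (deliberate bug for n < 0)."""
    return n % 2 == 0 if n >= 0 else False  # bug: -2 should be even

# Execute each test case and report any defects found.
cases = [(4, True), (7, False), (-2, True)]
defects = []
for n, expected in cases:
    actual = is_even(n)
    if actual != expected:
        defects.append({"input": n, "expected": expected, "actual": actual})
```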
Defect Tracking and Management:
• Document defects with severity, priority, and steps to reproduce.
• Communicate issues to the development team for resolution.
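A defect report carrying the fields mentioned above (severity, priority, steps to reproduce) might look like this. All field names and values are illustrative, not tied to any particular tracking tool:

```python
# Hypothetical defect record, as it might appear in a tracking system.
defect = {
    "id": "BUG-101",
    "summary": "Login fails for emails containing '+'",
    "severity": "High",   # impact of the defect on the system
    "priority": "P1",     # urgency with which it should be fixed
    "steps_to_reproduce": [
        "Go to the login page",
        "Enter 'user+test@example.com' and a valid password",
        "Click Login",
    ],
    "status": "Open",
}
```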
Test Reporting:
• Summarize testing activities in reports.
• Include test coverage, execution results, and defect metrics.
• Provide stakeholders with insights into software quality.
Test Closure:
• Once testing is complete and all identified defects are fixed, the testing
team conducts a final assessment to ensure that all requirements have been
met and the software is ready for release.
• Test closure involves documenting lessons learned and archiving testing
artifacts.
Fundamental approaches to apply test cases
Requirement-Based Testing:
• Create test cases based on what the software is supposed to do according
to its requirements.
• Verify that the software behaves as specified by each requirement.
Equivalence Partitioning:
• Group different kinds of input into sets.
• Test just one input from each set to cover various possibilities without
testing everything.
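The technique can be sketched for a hypothetical age validator that accepts ages 18 to 65. The inputs fall into three equivalence classes, and testing one representative from each covers all three without testing every value:

```python
def is_valid_age(age):
    """Hypothetical validator: accepts ages 18..65 inclusive."""
    return 18 <= age <= 65

# Three equivalence classes: below range, in range, above range.
# One representative value per class covers each partition.
representatives = {"below": 10, "valid": 40, "above": 80}
outcomes = {name: is_valid_age(v) for name, v in representatives.items()}
```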
Boundary Value Analysis:
• Test the edges and nearby values of input ranges.
• Problems are more likely to happen at these edges, so test them carefully.
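For the same hypothetical age validator (valid range 18 to 65), boundary value analysis tests the values at and immediately around each edge of the range:

```python
def is_valid_age(age):
    """Hypothetical validator: accepts ages 18..65 inclusive."""
    return 18 <= age <= 65

# Values at and around each boundary, with the expected outcome.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
failures = [a for a, exp in boundary_cases.items() if is_valid_age(a) != exp]
```

Off-by-one mistakes (e.g. writing `18 < age` instead of `18 <= age`) would surface immediately as entries in `failures`.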
Error Guessing:
• Use your experience to guess where things might go wrong.
• Test those areas of the software to see if your guesses were right.
Exploratory Testing:
• Explore the software while testing it.
• Try different things to see what happens and if anything breaks, without a
strict plan.
Model-Based Testing:
• Use diagrams or models to plan your tests.
• These help you think about all the different ways the software might work.
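As a minimal sketch, a state-transition model can drive the tests: each transition in the model becomes one check against the system under test. The `Door` class and its two-state model are hypothetical:

```python
class Door:
    """Hypothetical system under test."""
    def __init__(self):
        self.state = "closed"
    def open(self):
        if self.state == "closed":
            self.state = "open"
    def close(self):
        if self.state == "open":
            self.state = "closed"

# Model: (start state, event) -> expected resulting state.
MODEL = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("open", "open"): "open",      # opening an open door changes nothing
    ("closed", "close"): "closed", # closing a closed door changes nothing
}

# Derive one test per modelled transition.
violations = []
for (start, event), expected in MODEL.items():
    door = Door()
    door.state = start
    getattr(door, event)()  # fire the event on the system under test
    if door.state != expected:
        violations.append((start, event))
```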
Risk-Based Testing:
• Focus on testing the parts of the software that could cause the most
problems if they don't work right.
• Prioritize testing based on potential risks to the project or system.
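One common way to prioritize is to score each area by likelihood of failure and impact, then test the highest-risk areas first. The areas and scores below are hypothetical:

```python
# Hypothetical test areas scored 1-5 for likelihood of failure and impact.
areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "user profile page",  "likelihood": 2, "impact": 2},
    {"name": "login",              "likelihood": 3, "impact": 5},
]

# Risk score (likelihood x impact) drives the test order, highest first.
ordered = sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)
priority_names = [a["name"] for a in ordered]
```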
Combinatorial Testing:
• Test different combinations of things together.
• Check if they work well together or if they cause problems when combined.
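Generating the combinations to test can be sketched with `itertools.product`, here over a hypothetical set of browsers, operating systems, and locales:

```python
import itertools

browsers = ["Chrome", "Firefox"]
operating_systems = ["Windows", "macOS", "Linux"]
locales = ["en", "de"]

# Full cartesian product: every configuration combination to test together.
combinations = list(itertools.product(browsers, operating_systems, locales))
```

Because the full product grows quickly (2 x 3 x 2 = 12 here), real projects often reduce it with pairwise (all-pairs) selection rather than testing every combination.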
Regression Testing:
• Make sure that new changes to the software don't accidentally break things
that used to work.
• Re-run tests to ensure previous functionalities are still intact after updates.
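The idea can be sketched with a hypothetical `discount` function: a new code path (`SAVE20`) was added, and the regression suite re-runs the old tests to confirm the existing behaviour still holds:

```python
def discount(price, code):
    """Hypothetical unit: 10% off with 'SAVE10'; 'SAVE20' newly added."""
    if code == "SAVE10":
        return round(price * 0.9, 2)
    if code == "SAVE20":  # new feature added in the latest change
        return round(price * 0.8, 2)
    return price

# Regression suite: old behaviour must survive the new code path,
# and the new feature's own test joins the suite for future runs.
regression_suite = [
    ((100, "SAVE10"), 90.0),
    ((100, ""), 100),
    ((100, "SAVE20"), 80.0),
]
failures = [args for args, exp in regression_suite if discount(*args) != exp]
```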
Ad Hoc Testing:
• Test the software randomly, without a plan.
• See if you can find any problems by just playing around with it.
Refinements of boundary value analysis:
• Systematic Approach: Developing systematic guidelines for conducting
boundary value analysis across various domains and applications.
• Risk-Based Selection: Prioritizing boundary value tests based on the
perceived risk associated with different boundary conditions, focusing
effort on critical areas.