Unit-5-Testing
Introduction
• Pushing the quality concept down to the lowest level of the organization
Software Quality
• Kitchenham and Pfleeger’s article [60] on software quality gives a succinct exposition
of software quality.
Five views of quality
2. User View: It perceives quality as fitness for purpose. According to this view, while
evaluating the quality of a product, one must ask the key question: “Does the product satisfy
user needs and expectations?”
3. Manufacturing View: Here, quality is seen as conformance to specifications; the
quality level of a product is determined by the extent to which the product meets its
specifications.
4. Product View: In this case, quality is viewed as tied to the inherent characteristics of
the product. A product’s inherent characteristics, that is, internal qualities, determine
its external qualities.
Role of Testing
• Testing plays an important role in achieving and assessing the quality of a software
product.
• On the one hand, we improve the quality of the products as we repeat a test–find
defects–fix cycle during development.
• On the other hand, we assess how good our system is when we perform system-level
tests before releasing a product.
• The activities for software quality assessment can be divided into two broad
categories, namely, static analysis and dynamic analysis.
Static Analysis
• Static analysis examines the program and its associated work products without
actually executing the code, using techniques such as inspection, walkthrough, and
review.
Dynamic Analysis
• Dynamic analysis of a software system involves actually executing the program in
order to expose possible failures.
• The behavioral and performance properties of the program are also observed.
• Programs are executed with both typical and carefully chosen input values.
Verification and Validation
• Verification: This kind of activity helps us in evaluating a software system by determining whether the
product of a given development phase satisfies the requirements established before the start of that
phase.
• One may note that a product can be an intermediate product, such as requirement specification, design
specification, code, user manual, or even the final product.
• Activities that check the correctness of a development phase are called verification activities.
• Verification activities review interim work products, such as requirements specification, design, code,
and user manual, during a project life cycle to ensure their quality.
• Verification activities are performed on interim products by applying mostly static analysis techniques,
such as inspection, walkthrough, and reviews, and using standards and checklists.
• Validation: Activities of this kind help us in confirming that a product meets its
intended use.
• In other words, validation activities focus on the final product, which is extensively
tested from the customer point of view.
• Validation establishes whether the product meets overall expectations of the users.
Failure, Error, Fault, and Defect
• Failure: A failure is said to occur whenever the external behavior of a system does not
conform to that prescribed in the system specification.
• Error: An error is a state of the system. In the absence of any corrective action by the
system, an error state could lead to a failure which would not be attributed to any
event subsequent to the error.
• Fault: A fault is the adjudged cause of an error. A fault may remain undetected for a
long time, until some event activates it. When an event activates a fault, it first brings
the program into an intermediate error state.
• Defect: The term defect is commonly used as a synonym of fault; it denotes an
imperfection in a work product that can cause the system to deviate from its expected
behavior.
Objectives of Testing
• The stakeholders in a test process are the programmers, the test engineers, the project
managers, and the customers.
• Different stakeholders view a test process from different perspectives as explained below:
• It does work: While implementing a program unit, the programmer may want to test whether
or not the unit works in normal circumstances. The programmer gets much confidence if the
unit works to his or her satisfaction.
• It does not work: Once the programmer (or the development team) is satisfied that a unit (or
the system) works to a certain degree, more tests are conducted with the objective of finding
faults in the unit (or the system). Here, the idea is to try to make the unit (or the system) fail.
• Reduce the risk of failure: Most of the complex software systems contain faults, which cause
the system to fail from time to time. This concept of “failing from time to time” gives rise to the
notion of failure rate.
• Reduce the cost of testing: The different kinds of costs associated with a test process
include the cost of designing, maintaining, and executing test cases, and the cost of
analyzing the results of test execution.
What is a Test case?
• In its most basic form, a test case is a simple pair of <input, expected outcome>.
• If a program under test is expected to compute the square root of nonnegative numbers,
then four examples of test cases are as shown in Figure.
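As a sketch, the square-root test cases described above can be written directly as <input, expected outcome> pairs. The function name `compute_square_root` and the specific pairs are illustrative assumptions, not taken from the figure:

```python
import math

# Hypothetical program under test: computes the square root
# of a nonnegative number.
def compute_square_root(x):
    return math.sqrt(x)

# Four example test cases as <input, expected outcome> pairs,
# covering a boundary value and typical values.
test_cases = [
    (0, 0.0),                 # boundary: smallest nonnegative input
    (1, 1.0),                 # identity case
    (25, 5.0),                # typical perfect square
    (2, 1.4142135623730951),  # irrational result
]

for test_input, expected in test_cases:
    actual = compute_square_root(test_input)
    # Compare with a tolerance, since the outcome is floating point.
    assert math.isclose(actual, expected), (test_input, actual, expected)
```

Each pair makes the expected outcome explicit, so a test run can mechanically compare it against the actual outcome.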
Testing Activities
• Identify an objective to be tested: The first activity is to identify an objective to be tested. The
objective defines the intention, or purpose, of designing one or more test cases to ensure that
the program supports the objective. A clear purpose must be associated with every test case.
• Select inputs: The second activity is to select test inputs. Selection of test inputs can be based
on the requirements specification, the source code, or our expectations. Test inputs are
selected by keeping the test objective in mind.
• Compute the expected outcome: The third activity is to compute the expected outcome of the
program with the selected inputs. In most cases, this can be done from an overall, high-level
understanding of the test objective and the specification of the program under test.
• Set up the execution environment of the program: The fourth step is to prepare the right
execution environment of the program. In this step all the assumptions external to the program
must be satisfied.
• Execute the program: In the fifth step, the test engineer executes the program with the selected inputs
and observes the actual outcome of the program. To execute a test case, inputs may be provided to the
program at different physical locations at different times. The concept of test coordination is used in
synchronizing different components of a test case.
• Analyze the test result: The final test activity is to analyze the result of test execution. Here, the main
task is to compare the actual outcome of program execution with the expected outcome.
• There are three major kinds of test verdicts, namely, pass, fail, and inconclusive. If the program produces
the expected outcome and the purpose of the test case is satisfied, then a pass verdict is assigned.
• If the program does not produce the expected outcome, then a fail verdict is assigned.
• If it cannot be determined whether the purpose of the test case has been satisfied, an inconclusive
verdict is assigned.
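The result-analysis step above can be sketched as a small verdict function. The name `assign_verdict` and the `purpose_satisfied` flag are illustrative assumptions:

```python
# Minimal sketch of the final testing activity: compare the actual
# outcome against the expected outcome and assign one of the three
# verdicts: "pass", "fail", or "inconclusive".
def assign_verdict(actual, expected, purpose_satisfied=True):
    if actual == expected and purpose_satisfied:
        return "pass"
    if actual != expected:
        return "fail"
    # Expected outcome was produced, but it could not be established
    # that the purpose of the test case was satisfied.
    return "inconclusive"

print(assign_verdict(5.0, 5.0))         # pass
print(assign_verdict(4.9, 5.0))         # fail
print(assign_verdict(5.0, 5.0, False))  # inconclusive
```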
Test Levels
• Testing is performed at different levels involving the complete system or parts of it throughout
the life cycle of a software product.
• A software system goes through four stages of testing before it is actually deployed.
• These four stages are known as unit, integration, system, and acceptance level testing.
• The first three levels of testing are performed by a number of different stakeholders in the
development organization, whereas acceptance testing is performed by the customers.
• In unit testing, programmers test individual program units, such as procedures, functions,
methods, or classes, in isolation. After ensuring that individual units work to a satisfactory
extent, modules are assembled to construct larger subsystems by following integration testing
techniques.
• Integration testing is jointly performed by software developers and integration test engineers.
• The objective of integration testing is to construct a reasonably stable system that can
withstand the rigor of system-level testing.
• System-level testing includes a wide spectrum of testing, such as functionality testing, security
testing, robustness testing, load testing, stability testing, stress testing, performance testing,
and reliability testing.
• System testing is a critical phase in a software development process because of the need to
meet a tight schedule close to delivery date, to discover most of the faults, and to verify that
fixes are working and have not resulted in new faults.
• System testing comprises a number of distinct activities: creating a test plan, designing a test
suite, preparing test environments, executing the tests by following a clear strategy, and
monitoring the process of test execution.
• Regression testing is another level of testing that is performed throughout the life cycle of a
system.
• The key idea in regression testing is to ascertain that the modification has not introduced any
new faults in the portion that was not subject to modification.
• After the completion of system-level testing, the product is delivered to the customer.
• The customer performs their own series of tests, commonly known as acceptance testing.
• The objective of acceptance testing is to measure the quality of the product, rather than
searching for defects, which is the objective of system testing.
• A key notion in acceptance testing is the customer’s expectations from the system.
• The purpose of system test planning, or simply test planning, is to get ready and organized for
test execution.
• A test plan provides a framework, scope, details of resource needed, effort required, schedule of
activities, and a budget.
• A framework is a set of ideas, facts, or circumstances within which the tests will be conducted.
• During the test design phase, the system requirements are critically studied, system features to
be tested are thoroughly identified, and the objectives of test cases and the detailed behavior of
test cases are defined.
Sources of Information for Test Case Selection
• In order to generate effective tests at a lower cost, test designers analyze the following sources
of information:
• Requirements and functional specifications
• Source code
• Input and output domains
• Operational profile – Quantitative characterization of how a system will be used.
• Fault model – Previously encountered faults are an excellent source of information in designing new test cases.
The known faults are classified into different classes, such as initialization faults, logic faults, and interface
faults, and stored in a repository.
Test Scenarios
• Test scenarios are designed to cover the critical functionalities of the software and
guide the creation of detailed test cases.
4. Expected Result: What the expected behavior or outcome of the test should be.
5. Test Coverage: The extent to which different functional areas of the application are covered.
2. Foundation for Test Cases: Test scenarios provide a basis for creating detailed test cases.
3. Clarity: They help testers and stakeholders understand the scope of testing and expectations.
White-box testing
• White-box testing, also known as clear box, glass box, or structural testing, is a
software testing technique in which the internal structure, design, and
implementation of the application are tested.
• In white box testing, the tester has knowledge of the internal workings of the system,
including its code, architecture, and algorithms.
Key Characteristics:
• Internal Testing: The tester has access to the source code and internal logic of the
application.
• Focus: The focus is on testing individual functions, logic, paths, and code structures
(e.g., loops, conditions, branches).
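As a sketch of white-box test design, the function below (a hypothetical unit, not from the slides) has three branches, and one input is chosen so that each branch is exercised, i.e., full branch coverage:

```python
# Hypothetical unit under test. A white-box tester reads its code and
# picks inputs so that every branch is executed at least once.
def classify_triangle(a, b, c):
    if a == b == c:
        return "equilateral"   # branch 1: all sides equal
    if a == b or b == c or a == c:
        return "isosceles"     # branch 2: exactly two sides equal
    return "scalene"           # branch 3: no sides equal

# One test input per branch gives full branch coverage.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
```

A black-box tester, by contrast, could not know that three inputs suffice here, because the branch structure is visible only in the source code.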
White-box testing
Advantages:
Thorough Testing: Since it involves testing the internal workings of the application, it helps in identifying
hidden errors and potential vulnerabilities.
Early Detection of Bugs: White box testing can help detect issues at an early stage of development by
focusing on the internal code and logic.
Optimization: It helps identify inefficient or redundant code that can be optimized.
Disadvantages:
Requires Expertise: Testers need a deep understanding of the code and programming languages used,
which can make the process more complex.
Time-Consuming: Since it involves testing each internal component and path, it can be time-intensive.
Limited Scope: White box testing typically doesn’t cover the system’s user interface (UI) or behavior under
real-world usage, so it may miss user-centric issues.
Black-box testing
• Black-box testing, also known as functional testing, is a software testing technique in
which the system is evaluated solely through its inputs and outputs, without any
knowledge of its internal code or structure.
• Test cases are derived from the requirements and functional specifications, focusing
on what the system does rather than how it does it.
Black-box testing
Advantages:
No Need for Technical Knowledge: Testers do not require any knowledge of programming or the
system’s code.
Real-World Scenarios: The testing focuses on how the system will be used by actual users,
simulating real-world interactions.
Helps Find User-Centric Issues: Black box testing can uncover issues related to usability, user
interfaces, and other behavior-related bugs.
Disadvantages:
Limited Coverage: It doesn’t provide insight into the internal workings of the system, so certain
types of defects (e.g., performance or security issues) may be missed.
Redundancy: Without access to the code, it may be difficult to know if all possible scenarios have
been tested, leading to potential gaps.
Not Effective for Complex Logic: Testing complex algorithms or business logic can be challenging
without knowing the implementation details.
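As a sketch, black-box test inputs are drawn from the specification alone. The function `absolute_value` and its specification ("return x if x >= 0, otherwise -x") are illustrative assumptions; the tester treats the body as opaque:

```python
# The implementation body is opaque to a black-box tester; only the
# specification "absolute_value(x) returns x if x >= 0, else -x" is known.
def absolute_value(x):
    return x if x >= 0 else -x

# Inputs are selected from the input domain using the specification:
# one value per equivalence class, plus the boundary between them.
specification_cases = [
    (-7, 7),   # negative equivalence class
    (0, 0),    # boundary value
    (12, 12),  # positive equivalence class
]

for test_input, expected in specification_cases:
    assert absolute_value(test_input) == expected
```

Note that the tester never inspects the `if`/`else` inside the function; the equivalence classes come from the stated behavior, not the code.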
Unit Testing
• Unit testing is a type of software testing where individual components or units of code
are tested in isolation to ensure they function correctly.
• The primary focus of unit testing is to validate the behavior of a small, specific piece of
code (such as a function, method, or class) in isolation from the rest of the application.
Key Characteristics of Unit Testing
• Focus on Small Units: Unit tests focus on testing the smallest testable parts of the
application (e.g., functions, methods, or classes).
• Isolation: The unit being tested is isolated from other parts of the system to ensure the
test is focused solely on the specific functionality.
• Automated: Unit tests are typically automated, allowing for frequent and consistent
testing during the development cycle.
• Test Input and Output: The tests check if the unit works as expected with various
inputs and produces the correct outputs.
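The characteristics above can be sketched with Python's standard `unittest` framework. The unit under test (`word_count`) and its test cases are illustrative assumptions:

```python
import unittest

# Hypothetical unit under test: a single, small function.
def word_count(text):
    return len(text.split())

# Automated unit tests: each method checks the unit in isolation
# with a specific input and the output it is expected to produce.
class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the suite programmatically and report the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the tests are automated, they can be rerun after every change, which is what makes unit tests useful as a safety net for refactoring.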
Purpose of Unit Testing
• Ensure Correct Functionality: To verify that the individual units of code perform their
intended tasks correctly.
• Early Detection of Bugs: Unit tests help catch bugs early in the development process,
making it easier to fix them before they escalate.
• Code Refactoring Support: Unit tests allow developers to refactor code with confidence,
ensuring that changes do not introduce new issues.
• Documentation: Unit tests can serve as documentation for how individual units of the
code are supposed to behave.
Integration Testing
• The purpose of integration testing is to verify that different modules or services work
together as expected after being integrated into a larger system.
• It helps detect issues that might not have been identified during unit testing, such as
problems with data flow, control flow, or interaction between components.
Key Concepts of Integration Testing
• Modules Integration: Integration testing occurs after unit testing and before system
testing. It focuses on the interaction between integrated units or modules, ensuring
that they work correctly together.
• Integration Approaches: Common strategies include top-down, where testing starts from the
higher-level modules and proceeds downwards using stubs in place of lower-level modules that
are not yet integrated, and bottom-up, where testing starts from the lower-level modules and
proceeds upwards using drivers to invoke the modules under test.
• Test Environment: Integration testing typically requires a more complex test environment than unit
testing. It may include external systems, databases, or APIs to simulate the real-world behavior of the
integrated system.
• Focus Areas:
• Interface Compatibility: Ensuring that data passed between modules is correctly processed and
formatted.
• Data Integrity: Verifying that data exchanged between modules remains consistent.
• Performance: Ensuring that the system performs well when integrated components work together.
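The interface-compatibility and data-integrity focus areas above can be sketched with two hypothetical modules (`parse_record` and `compute_total` are illustrative names, not from the slides): each has presumably passed its unit tests, and the integration test checks that data flows correctly across their interface:

```python
# Module A: parses a "name,price" record into a dictionary.
def parse_record(line):
    name, price = line.split(",")
    return {"name": name.strip(), "price": float(price)}

# Module B: consumes the records produced by module A.
def compute_total(records):
    return sum(r["price"] for r in records)

# Integration test: exercise the two modules together and verify
# that the data passed between them is correctly formatted and
# remains consistent end to end.
records = [parse_record("pen, 1.50"), parse_record("book, 12.00")]
assert records[0] == {"name": "pen", "price": 1.5}   # interface compatibility
assert compute_total(records) == 13.50               # data integrity
```

A defect such as module A emitting prices as strings would pass A's unit tests for parsing but be caught here, when module B tries to sum them.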
Acceptance testing
• Acceptance testing is the final level of testing, performed by the customer or end users
to determine whether the system satisfies their requirements and is ready for delivery.
Key Aspects of Acceptance Testing
1. Purpose:The primary goal of acceptance testing is to ensure the software functions as expected in real-world
scenarios and that it meets the business needs of the stakeholders. It focuses on validating the usability, functionality,
and business requirements of the system rather than its technical correctness (which is the focus of earlier tests like
system testing).
2. Types of Acceptance Testing:
• Alpha Testing: Performed at the developer’s site by internal staff or a small group of
representative users before the software is released to external users.
• Beta Testing: Performed by a limited group of end-users or customers to uncover issues that
weren’t identified during alpha testing. Feedback gathered is used for final improvements
before full release.
Process of Acceptance Testing
• Requirement Review: Review the business and functional requirements to ensure they are clear and well-
defined.
• Test Plan Creation: Create an acceptance test plan outlining the scope, test cases, resources, and criteria for
success.
• Test Case Design: Design test cases based on real-world scenarios and end-user expectations.
• Test Execution: Perform the test cases by simulating actual user behavior and checking the system’s
functionality, performance, and usability.
• Issue Reporting: If any issues or defects are identified, they are reported to the development team for
resolution.
• Acceptance Decision: Based on the results, the client or business stakeholder decides whether the product is
accepted, rejected, or needs further development.
System Testing
• System testing is a type of software testing that focuses on verifying the complete and
integrated software system to ensure it meets the specified requirements.
• It is a high-level test performed after unit testing, integration testing, and before
acceptance testing.
• System testing checks the system's behavior as a whole in an environment that mimics
the real world and evaluates its functionality, performance, security, usability, and
compatibility.
Types of System Testing
• Functional Testing: Ensures the system works according to its specifications and requirements.
• This includes testing all user-facing features and functionalities.
• Non-functional Testing: Evaluates aspects like performance, scalability, security, usability, and
compatibility.
• Performance Testing: Tests how the system performs under different levels of load and stress.
• Security Testing: Checks for vulnerabilities and ensures that the system is secure.
• Usability Testing: Evaluates how easy and user-friendly the system is.
• Compatibility Testing: Assesses the system's compatibility with different operating systems, browsers,
and devices.
• Regression Testing: Ensures that recent changes (such as code updates or bug fixes) haven’t negatively
affected existing functionality.
System Testing Process
• Test Planning: Create a detailed test plan that outlines the scope, objectives, resources,
schedule, and deliverables of system testing.
• Test Design: Design test cases that cover all aspects of the system (both functional and non-
functional). These should align with the system requirements.
• Test Execution: Execute the test cases in a test environment that mirrors production as closely
as possible.
• Defect Reporting: Document any defects found during testing and communicate them to the
development team for resolution.
• Test Closure: After testing is complete, evaluate the results, prepare test reports, and close the
testing phase.
Goals of System Testing
• Verification: Ensure the system works as expected based on the defined requirements.
• Validation: Verify that the system meets business needs and is ready for production
deployment.
• Identifying Defects: Discover defects and bugs before the system is released to users.
• Assessing Quality: Ensure the software meets high standards of quality in terms of
functionality, performance, security, and usability.
Challenges in System Testing
• Complexity: System testing can be complex due to the interactions between multiple
components and subsystems.
• Environment Differences: The test environment may not always replicate the real-
world environment exactly, leading to discrepancies.
• Time Constraints: System testing can be time-consuming, especially for large, complex
systems, and there may be pressure to complete it quickly.