
BCA530 – Software Engineering

Dr. Siddesha S MCA, M.Sc. Tech(by Research), Ph.D.


Associate Professor
Department of Computer Applications
JSS Science and Technology University, Mysuru
1
Unit 5- Testing

2
Introduction

▪ Developing quality products on tighter schedules is critical for a company to be successful in the new global economy.
▪ Traditionally, efforts to improve quality have centered around the end of the product
development cycle by emphasizing the detection and correction of defects.
▪ On the contrary, the new approach to enhancing quality encompasses all phases of a
product development process—from a requirements analysis to the final delivery of
the product to the customer.
▪ Every step in the development process must be performed to the highest possible
standard.
3
Introduction

An effective quality process must focus on:

▪ Paying close attention to customer requirements

▪ Making efforts to continuously improve quality

▪ Integrating measurement processes with product design and development

▪ Pushing the quality concept down to the lowest level of the organization

▪ Developing a system-level perspective with an emphasis on methodology and process

▪ Eliminating waste through continuous improvement

4
Software Quality

• The question “What is software quality?” evokes many different answers.

• Quality is a complex concept—it means different things to different people, and it is highly context dependent.

• Kitchenham and Pfleeger’s article [60] gives a succinct exposition of software quality.

• They discuss five views of quality in a comprehensive manner as follows:

5
Five views of quality

1. Transcendental View: It envisages quality as something that can be recognized but is difficult to define. The transcendental view is not specific to software quality alone but has been applied in other complex areas of everyday life.

2. User View: It perceives quality as fitness for purpose. According to this view, while
evaluating the quality of a product, one must ask the key question: “Does the product satisfy
user needs and expectations?”

3. Manufacturing View: Here quality is understood as conformance to the specification. The quality level of a product is determined by the extent to which the product meets its specifications.
6
Five views of quality

4. Product View: In this case, quality is viewed as tied to the inherent characteristics of the product. A product’s inherent characteristics, that is, internal qualities, determine its external qualities.

5. Value-Based View: Quality, in this perspective, depends on the amount a customer is willing to pay for it.

7
Role of Testing

• Testing plays an important role in achieving and assessing the quality of a software
product.

• On the one hand, we improve the quality of the products as we repeat a test–find
defects–fix cycle during development.

• On the other hand, we assess how good our system is when we perform system-level
tests before releasing a product.

• The activities for software quality assessment can be divided into two broad
categories, namely, static analysis and dynamic analysis.
8
Static Analysis

• As the term “static” suggests, it is based on the examination of a number of documents, namely requirements documents, software models, design documents, and source code.
• Traditional static analysis includes code review, inspection, walk-through, algorithm
analysis, and proof of correctness.
• It does not involve actual execution of the code under development. Instead, it
examines code and reasons over all possible behaviors that might arise during run
time.
• Compiler optimizations are a standard example of static analysis.
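As a toy illustration of the idea, the snippet below reasons about source code without ever executing it, flagging a literal division by zero from the syntax tree alone (the example program and warning format are illustrative, not from the slides):

```python
import ast

# Source under analysis: never executed, only parsed and inspected.
source = "x = 10 / 0"
tree = ast.parse(source)

warnings = []
for node in ast.walk(tree):
    # Flag any division whose right-hand operand is the literal 0.
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
        if isinstance(node.right, ast.Constant) and node.right.value == 0:
            warnings.append(f"division by zero at line {node.lineno}")

print(warnings)  # ['division by zero at line 1']
```

Production static analyzers (linters, type checkers, and the analyses inside compilers) apply the same principle at far greater depth.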

9
Dynamic Analysis

• Dynamic analysis of a software system involves actual program execution in order to expose possible program failures.

• The behavioral and performance properties of the program are also observed.

• Programs are executed with both typical and carefully chosen input values.

10
Verification and Validation

• Verification: This kind of activity helps us in evaluating a software system by determining whether the
product of a given development phase satisfies the requirements established before the start of that
phase.

• One may note that a product can be an intermediate product, such as requirement specification, design
specification, code, user manual, or even the final product.

• Activities that check the correctness of a development phase are called verification activities.

• Verification activities review interim work products, such as requirements specification, design, code,
and user manual, during a project life cycle to ensure their quality.

• Verification activities are performed on interim products by applying mostly static analysis techniques,
such as inspection, walkthrough, and reviews, and using standards and checklists.

11
Verification and Validation

• Validation: Activities of this kind help us in confirming that a product meets its
intended use.

• Validation activities aim at confirming that a product meets its customer’s expectations.

• In other words, validation activities focus on the final product, which is extensively
tested from the customer point of view.

• Validation establishes whether the product meets overall expectations of the users.

12
Failure, Error, Fault, and Defect

• Failure: A failure is said to occur whenever the external behavior of a system does not
conform to that prescribed in the system specification.

• Error: An error is a state of the system. In the absence of any corrective action by the
system, an error state could lead to a failure which would not be attributed to any
event subsequent to the error.

• Fault: A fault is the adjudged cause of an error. A fault may remain undetected for a
long time, until some event activates it. When an event activates a fault, it first brings
the program into an intermediate error state.

• Defect: The term defect is commonly used as a synonym of fault; both refer to the
underlying cause of an error in the software.

13
Objectives of Testing

• The stakeholders in a test process are the programmers, the test engineers, the project
managers, and the customers.

• Different stakeholders view a test process from different perspectives as explained below:

• It does work: While implementing a program unit, the programmer may want to test whether
or not the unit works in normal circumstances. The programmer gets much confidence if the
unit works to his or her satisfaction.

• It does not work: Once the programmer (or the development team) is satisfied that a unit (or
the system) works to a certain degree, more tests are conducted with the objective of finding
faults in the unit (or the system). Here, the idea is to try to make the unit (or the system) fail.

14
Objectives of Testing

• Reduce the risk of failure: Most complex software systems contain faults, which cause
the system to fail from time to time. This concept of “failing from time to time” gives rise to the
notion of failure rate.

• Reduce the cost of testing: The different kinds of costs associated with a test process include:

• the cost of designing, maintaining, and executing test cases,

• the cost of analyzing the result of executing each test case,

• the cost of documenting the test cases, and

• the cost of actually executing the system and documenting it.

15
What is a Test case?

• In its most basic form, a test case is a simple pair of <input, expected outcome>.

• If a program under test is expected to compute the square root of nonnegative numbers,
then four examples of test cases are as shown in Figure.
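The referenced figure is not reproduced here; assuming a square-root program, such <input, expected outcome> pairs might look like the following sketch (the values are illustrative and chosen to be exactly representable):

```python
import math

# Each test case is a pair: (input, expected outcome).
test_cases = [
    (0, 0.0),
    (1, 1.0),
    (25, 5.0),
    (100, 10.0),
]

for x, expected in test_cases:
    actual = math.sqrt(x)
    assert actual == expected, f"sqrt({x}) = {actual}, expected {expected}"
print("all 4 test cases passed")
```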

16
Testing Activities

• Identify an objective to be tested: The first activity is to identify an objective to be tested. The
objective defines the intention, or purpose, of designing one or more test cases to ensure that
the program supports the objective. A clear purpose must be associated with every test case.

17
Testing Activities

• Select inputs: The second activity is to select test inputs. Selection of test inputs can be based
on the requirements specification, the source code, or our expectations. Test inputs are
selected by keeping the test objective in mind.

• Compute the expected outcome: The third activity is to compute the expected outcome of the
program with the selected inputs. In most cases, this can be done from an overall, high-level
understanding of the test objective and the specification of the program under test.

• Set up the execution environment of the program: The fourth step is to prepare the right
execution environment of the program. In this step all the assumptions external to the program
must be satisfied.
18
Testing Activities

• Execute the program: In the fifth step, the test engineer executes the program with the selected inputs
and observes the actual outcome of the program. To execute a test case, inputs may be provided to the
program at different physical locations at different times. The concept of test coordination is used in
synchronizing different components of a test case.

• Analyze the test result: The final test activity is to analyze the result of test execution. Here, the main
task is to compare the actual outcome of program execution with the expected outcome.

• There are three major kinds of test verdicts, namely, pass, fail, and inconclusive. If the program produces
the expected outcome and the purpose of the test case is satisfied, then a pass verdict is assigned.

• If the program does not produce the expected outcome, then a fail verdict is assigned.
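The last two activities can be sketched as a small driver that executes the unit and assigns a verdict; mapping an unobservable outcome to “inconclusive” is a simplifying assumption here, and all names are illustrative:

```python
def run_test_case(program, test_input, expected_outcome):
    """Execute the program with the selected input and assign a verdict."""
    try:
        actual = program(test_input)
    except Exception:
        # The outcome could not be observed, so no pass/fail decision is made.
        return "inconclusive"
    # Compare the actual outcome of execution with the expected outcome.
    return "pass" if actual == expected_outcome else "fail"

def square(x):
    return x * x

print(run_test_case(square, 5, 25))  # pass
print(run_test_case(square, 5, 26))  # fail
```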

19
Test Levels

• Testing is performed at different levels involving the complete system or parts of it throughout
the life cycle of a software product.

• A software system goes through four stages of testing before it is actually deployed.

• These four stages are known as unit, integration, system, and acceptance level testing.

• The first three levels of testing are performed by a number of different stakeholders in the
development organization, whereas acceptance testing is performed by the customers.

• In unit testing, programmers test individual program units, such as procedures, functions,
methods, or classes, in isolation. After ensuring that individual units work to a satisfactory
extent, modules are assembled to construct larger subsystems by following integration testing
techniques.
20
Test Levels

• Integration testing is jointly performed by software developers and integration test engineers.

• The objective of integration testing is to construct a reasonably stable system that can
withstand the rigor of system-level testing.

• System-level testing includes a wide spectrum of testing, such as functionality testing, security
testing, robustness testing, load testing, stability testing, stress testing, performance testing,
and reliability testing.

• System testing is a critical phase in a software development process because of the need to
meet a tight schedule close to delivery date, to discover most of the faults, and to verify that
fixes are working and have not resulted in new faults.
21
22
Test Levels

• System testing comprises a number of distinct activities: creating a test plan, designing a test
suite, preparing test environments, executing the tests by following a clear strategy, and
monitoring the process of test execution.

• Regression testing is another level of testing that is performed throughout the life cycle of a
system.

• Regression testing is performed whenever a component of the system is modified.

• The key idea in regression testing is to ascertain that the modification has not introduced any
new faults in the portion that was not subject to modification.

• To be precise, regression testing is not a distinct level of testing.
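As a minimal sketch of that key idea (the suite contents and the unit under test are illustrative), a saved suite is simply re-run after every modification:

```python
# Saved test cases from earlier levels of testing: (arguments, expected result).
regression_suite = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def add(a, b):
    # The recently modified unit; regression testing checks that the
    # modification has not broken previously working behavior.
    return a + b

failures = [(args, exp) for args, exp in regression_suite if add(*args) != exp]
print(f"{len(regression_suite)} cases re-run, {len(failures)} regressions")
```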


23
Test Levels

• After the completion of system-level testing, the product is delivered to the customer.

• The customer performs their own series of tests, commonly known as acceptance testing.

• The objective of acceptance testing is to measure the quality of the product, rather than
searching for defects, which is the objective of system testing.

• A key notion in acceptance testing is the customer’s expectations from the system.

• There are two kinds of acceptance testing:

• User acceptance testing (UAT) – By customer to ensure contractual acceptance

• Business acceptance testing (BAT) – Undertaken within the supplier’s development organization


24
Test Plan
• A test plan is a detailed document that outlines the strategy, scope, resources, and schedule for testing
activities.
• It serves as a blueprint to ensure that the software product meets the required quality standards before
release. The test plan typically includes the following components:

1. Test Objectives: Goals and purpose of testing.


2. Test Scope: What will be tested and what will not.
3. Test Strategy: The overall approach, including types of testing (e.g., functional, non-functional).
4. Test Deliverables: Documentation and reports to be provided (e.g., test cases, test results).
5. Test Environment: Hardware, software, and network configuration needed for testing.
6. Test Schedule: Timeframes for each phase of testing.
7. Resources: People, tools, and equipment required.
8. Test Criteria: Acceptance criteria, including pass/fail conditions.
9. Risk and Mitigation: Potential risks and plans to address them.
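The nine components above can be captured as a structured record kept alongside the test repository; the field values below are purely illustrative:

```python
# A minimal test-plan record mirroring the nine components listed above.
test_plan = {
    "objectives": "Verify that login and checkout meet the stated requirements",
    "scope": {"in": ["login", "checkout"], "out": ["admin console"]},
    "strategy": ["functional", "performance"],
    "deliverables": ["test cases", "test summary report"],
    "environment": {"os": "Ubuntu 22.04", "browser": "Chrome 120"},
    "schedule": {"start": "2025-01-10", "end": "2025-01-24"},
    "resources": ["2 test engineers", "1 CI server"],
    "criteria": {"pass": "no failed critical test cases"},
    "risks": [{"risk": "test environment downtime", "mitigation": "standby VM"}],
}

# A quick completeness check against the expected components.
required = {"objectives", "scope", "strategy", "deliverables", "environment",
            "schedule", "resources", "criteria", "risks"}
assert required <= set(test_plan), "test plan is missing components"
print("test plan covers all", len(required), "components")
```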
25
Test Plan and Design

• The purpose of system test planning, or simply test planning, is to get ready and organized for
test execution.

• A test plan provides a framework, scope, details of resources needed, effort required, schedule of
activities, and a budget.

• A framework is a set of ideas, facts, or circumstances within which the tests will be conducted.

• Test design is a critical phase of software testing.

• During the test design phase, the system requirements are critically studied, system features to
be tested are thoroughly identified, and the objectives of test cases and the detailed behavior of
test cases are defined.
26
Sources of information for Test Case selection

• A software development process generates a large body of information, such as requirements specification, design document, and source code.

• In order to generate effective tests at a lower cost, test designers analyze the following sources
of information:
• Requirements and functional specifications
• Source code
• Input and output domains
• Operational profile – Quantitative characterization of how a system will be used.
• Fault model – Previously encountered faults are an excellent source of information in designing new test cases.
The known faults are classified into different classes, such as initialization faults, logic faults, and interface
faults, and stored in a repository.

27
Test Scenarios

• Test scenarios are high-level descriptions of what needs to be tested in a software application.

• They provide a broad overview of a specific function or feature that needs to be validated during testing.

• Test scenarios are designed to cover the critical functionalities of the software and
guide the creation of detailed test cases.

28
Test Scenarios

• Key aspects of test scenarios:


1. Test Objective: What functionality or feature is being tested.

2. Conditions: The environment or setup conditions for executing the test.

3. Input Data: The data or parameters to be used for the test.

4. Expected Result: What the expected behavior or outcome of the test should be.

5. Test Coverage: The extent to which different functional areas of the application are covered.

29
Test Scenarios


• Importance of Test Scenarios:


1. Broad Coverage: They ensure that key functions are tested, reducing the risk of missing critical issues.

2. Foundation for Test Cases: Test scenarios provide a basis for creating detailed test cases.

3. Clarity: They help testers and stakeholders understand the scope of testing and expectations.
30
White-box testing

• White-box testing, also known as clear box testing, glass box testing, or structural testing, is a
software testing technique in which the internal structure, design, and implementation of the
application are tested.
• In white box testing, the tester has knowledge of the internal workings of the system,
including its code, architecture, and algorithms.
Key Characteristics:
• Internal Testing: The tester has access to the source code and internal logic of the
application.
• Focus: The focus is on testing individual functions, logic, paths, and code structures
(e.g., loops, conditions, branches).
31
White-box testing

Types of White Box Testing:


Unit Testing: Testing individual components or units of code.
Integration Testing: Verifying that different modules work together correctly.
Code Coverage: Ensuring all paths, branches, and conditions in the code are tested.
Path Testing: Testing all possible paths in the code to ensure each path works as
expected.
Branch Testing: Testing each branch or decision point in the code (e.g., if, else).
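Branch testing in particular can be made concrete with a tiny example: the function below has two decision points, so full branch coverage needs inputs that drive each one both ways (the function itself is illustrative, not from the slides):

```python
def classify(n):
    if n < 0:           # decision point 1
        return "negative"
    if n % 2 == 0:      # decision point 2
        return "even"
    return "odd"

# Three inputs together exercise every branch outcome:
assert classify(-3) == "negative"  # decision 1 true
assert classify(4) == "even"       # decision 1 false, decision 2 true
assert classify(7) == "odd"        # decision 1 false, decision 2 false
print("full branch coverage achieved with 3 test cases")
```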

32
White-box testing

Advantages:
Thorough Testing: Since it involves testing the internal workings of the application, it helps in identifying
hidden errors and potential vulnerabilities.
Early Detection of Bugs: White box testing can help detect issues at an early stage of development by
focusing on the internal code and logic.
Optimization: It helps identify inefficient or redundant code that can be optimized.
Disadvantages:
Requires Expertise: Testers need a deep understanding of the code and programming languages used,
which can make the process more complex.
Time-Consuming: Since it involves testing each internal component and path, it can be time-intensive.
Limited Scope: White box testing typically doesn’t cover the system’s user interface (UI) or behavior under
real-world usage, so it may miss user-centric issues.
33
Black-box testing

• Black-box testing is a software testing technique in which the tester evaluates the functionality of an
application without having knowledge of its internal code or structure.
• In this approach, the focus is entirely on the input and output of the system, rather than on
how the system processes the inputs.
Key Characteristics:
External Testing: The tester does not have access to the internal code, design, or
architecture of the system.
Focus: Testing is based on the system’s behavior, functionality, and user requirements.

34
Black-box testing

Types of Black Box Testing:


1. Functional Testing: Verifying that the system performs its intended functions
correctly.
2. Non-Functional Testing: Testing for performance, usability, reliability, etc.
3. System Testing: Validating the entire system's compliance with the requirements.
4. Acceptance Testing: Ensuring the system meets business requirements and is ready
for deployment.
5. Regression Testing: Ensuring that new changes don't break existing functionality.
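A black-box functional test can be sketched as follows: the tester derives cases purely from the specification “the function returns the magnitude of its argument,” without reading the implementation (Python's built-in abs is used here as the opaque system under test):

```python
# Cases derived only from the specification, not from the implementation.
spec_cases = [(-5, 5), (0, 0), (7, 7), (-2.5, 2.5)]

for given, expected in spec_cases:
    assert abs(given) == expected, f"abs({given}) should be {expected}"
print("all black-box cases passed")
```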

35
Black-box testing
Advantages:
No Need for Technical Knowledge: Testers do not require any knowledge of programming or the
system’s code.
Real-World Scenarios: The testing focuses on how the system will be used by actual users,
simulating real-world interactions.
Helps Find User-Centric Issues: Black box testing can uncover issues related to usability, user
interfaces, and other behavior-related bugs.
Disadvantages:
Limited Coverage: It doesn’t provide insight into the internal workings of the system, so certain
types of defects (e.g., performance or security issues) may be missed.
Redundancy: Without access to the code, it may be difficult to know if all possible scenarios have
been tested, leading to potential gaps.
Not Effective for Complex Logic: Testing complex algorithms or business logic can be challenging
without knowing the implementation details.
36
Unit Testing

• Unit testing is a type of software testing where individual components or units of code
are tested in isolation to ensure they function correctly.

• The primary focus of unit testing is to validate the behavior of a small, specific piece of
code (such as a function, method, or class) in isolation from the rest of the application.

37
Key Characteristics of Unit Testing

• Focus on Small Units: Unit tests focus on testing the smallest testable parts of the
application (e.g., functions, methods, or classes).

• Isolation: The unit being tested is isolated from other parts of the system to ensure the
test is focused solely on the specific functionality.

• Automated: Unit tests are typically automated, allowing for frequent and consistent
testing during the development cycle.

• Test Input and Output: The tests check if the unit works as expected with various
inputs and produces the correct outputs.
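A minimal sketch using Python's unittest framework (the unit under test and the test names are illustrative) shows these characteristics together: a small isolated unit, automated checks, and explicit input/output pairs:

```python
import unittest

def is_leap_year(year):
    """Unit under test: the standard Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    """Each test exercises the unit in isolation with a known input/output."""

    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_multiple_of_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Run the suite programmatically; test runners normally do this automatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```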
38
Purpose of Unit Testing

• Ensure Correct Functionality: To verify that the individual units of code perform their
intended tasks correctly.

• Early Detection of Bugs: Unit tests help catch bugs early in the development process,
making it easier to fix them before they escalate.

• Code Refactoring Support: Unit tests allow developers to refactor code with confidence,
ensuring that changes do not introduce new issues.

• Documentation: Unit tests can serve as documentation for how individual units of the
code are supposed to behave.
39
Integration Testing

• Integration Testing is a type of software testing where individual units or components of a system are combined and tested as a group.

• The purpose of integration testing is to verify that different modules or services work
together as expected after being integrated into a larger system.

• It helps detect issues that might not have been identified during unit testing, such as
problems with data flow, control flow, or interaction between components.

41
Key Concepts of Integration Testing

• Modules Integration: Integration testing occurs after unit testing and before system
testing. It focuses on the interaction between integrated units or modules, ensuring
that they work correctly together.

• Types of Integration Testing:


• Big Bang Integration Testing: All modules are integrated at once and tested together. It can
be challenging to identify the exact cause of failure because everything is integrated
simultaneously.

42
Key Concepts of Integration Testing

• Incremental Integration Testing: Modules are integrated and tested incrementally,
one at a time. This approach allows issues to be detected early. There are two main
approaches:
• Top-down: Testing starts from the top of the hierarchy (higher-level modules) and
proceeds down to lower-level modules.

• Bottom-up: Testing starts from the lower-level modules and proceeds upwards.

• Sandwich (Hybrid) Integration Testing: A combination of top-down and bottom-up
methods, where both high- and low-level modules are tested simultaneously.
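In a top-down step, for example, a stub stands in for a lower-level module that has not been integrated yet; the module names and values here are illustrative:

```python
def tax_rate_stub(country):
    # Stub: replaces the real, not-yet-integrated tax-rate module with a
    # fixed answer so the higher-level module can be tested now.
    return 0.20

def compute_total(price, country, rate_lookup=tax_rate_stub):
    """Higher-level module under test; depends on a lower-level lookup."""
    return round(price * (1 + rate_lookup(country)), 2)

# The high-level logic is verified against the stub before real integration.
assert compute_total(100.0, "XX") == 120.0
print("top-down step verified with stub")
```

In a bottom-up step the roles reverse: the real lower-level module is tested first, and a temporary driver plays the part of the missing higher-level caller.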

43
Key Concepts of Integration Testing

• Test Environment: Integration testing typically requires a more complex test environment than unit
testing. It may include external systems, databases, or APIs to simulate the real-world behavior of the
integrated system.

• Focus Areas:

• Interface Compatibility: Ensuring that data passed between modules is correctly processed and
formatted.

• Data Integrity: Verifying that data exchanged between modules remains consistent.

• Error Handling: Checking how modules handle errors or unexpected inputs.

• Performance: Ensuring that the system performs well when integrated components work together.

44
Acceptance testing

• Acceptance testing is a type of software testing that determines whether a system or
product meets the business requirements and is ready for deployment or release to
end-users. It typically comes after system testing and is the final step before the
product goes live or is handed over to the customer.

• Acceptance testing is often performed by the customer or end-users, though
sometimes the testing team or quality assurance (QA) team conducts it, depending on
the project's nature. The goal is to verify that the software satisfies the agreed-upon
requirements and is fit for the intended use.

45
Key Aspects of Acceptance Testing

1. Purpose: The primary goal of acceptance testing is to ensure the software functions as expected in real-world
scenarios and that it meets the business needs of the stakeholders. It focuses on validating the usability, functionality,
and business requirements of the system rather than its technical correctness (which is the focus of earlier tests like
system testing).

2. Who Performs Acceptance Testing?


• User Acceptance Testing (UAT): This is typically performed by the end-users or clients. UAT ensures that the product behaves
as expected in the hands of real users, and that it solves the business problem.
• Business Acceptance Testing (BAT): This is performed by business stakeholders to ensure that the system meets the business
processes and aligns with business goals.
• Contract Acceptance Testing (CAT): Conducted to verify that the product meets the requirements outlined in the contract or
service agreement.
• Regulatory Acceptance Testing (RAT): Verifies whether the product complies with regulatory standards or industry-specific
regulations (e.g., healthcare, finance).

46
Key Aspects of Acceptance Testing

3. Types of Acceptance Testing:


• Alpha Testing: Conducted by the internal development or QA team to ensure the product is
working before it is released to external users.

• Beta Testing: Performed by a limited group of end-users or customers to uncover issues that
weren't identified during alpha testing. Feedback gathered is used for final improvements
before full release.

47
Process of Acceptance Testing

• Requirement Review: Review the business and functional requirements to ensure they are clear and well-
defined.

• Test Plan Creation: Create an acceptance test plan outlining the scope, test cases, resources, and criteria for
success.

• Test Case Design: Design test cases based on real-world scenarios and end-user expectations.

• Test Execution: Perform the test cases by simulating actual user behavior and checking the system’s
functionality, performance, and usability.

• Issue Reporting: If any issues or defects are identified, they are reported to the development team for
resolution.

• Acceptance Decision: Based on the results, the client or business stakeholder decides whether the product is
accepted, rejected, or needs further development.
48
System Testing

• System testing is a type of software testing that focuses on verifying the complete and
integrated software system to ensure it meets the specified requirements.

• It is a high-level test performed after unit testing, integration testing, and before
acceptance testing.

• System testing checks the system's behavior as a whole in an environment that mimics
the real world and evaluates its functionality, performance, security, usability, and
compatibility.

49
Types of System Testing

• Functional Testing: Ensures the system works according to its specifications and requirements.
• This includes testing all user-facing features and functionalities.

• Non-functional Testing: Evaluates aspects like performance, scalability, security, usability, and
compatibility.
• Performance Testing: Tests how the system performs under different levels of load and stress.
• Security Testing: Checks for vulnerabilities and ensures that the system is secure.
• Usability Testing: Evaluates how easy and user-friendly the system is.
• Compatibility Testing: Assesses the system's compatibility with different operating systems, browsers,
and devices.
• Regression Testing: Ensures that recent changes (such as code updates or bug fixes) haven’t negatively
affected existing functionality.

50
System Testing Process

• Test Planning: Create a detailed test plan that outlines the scope, objectives, resources,
schedule, and deliverables of system testing.

• Test Design: Design test cases that cover all aspects of the system (both functional and non-
functional). These should align with the system requirements.

• Test Execution: Execute the test cases in a test environment that mirrors production as closely
as possible.

• Defect Reporting: Document any defects found during testing and communicate them to the
development team for resolution.

• Test Closure: After testing is complete, evaluate the results, prepare test reports, and close the
testing phase.
51
Goals of System Testing

• Verification: Ensure the system works as expected based on the defined requirements.

• Validation: Verify that the system meets business needs and is ready for production
deployment.

• Identifying Defects: Discover defects and bugs before the system is released to users.

• Assessing Quality: Ensure the software meets high standards of quality in terms of
functionality, performance, security, and usability.

52
Challenges in System Testing

• Complexity: System testing can be complex due to the interactions between multiple
components and subsystems.

• Environment Differences: The test environment may not always replicate the real-
world environment exactly, leading to discrepancies.

• Time Constraints: System testing can be time-consuming, especially for large, complex
systems, and there may be pressure to complete it quickly.

53
