UNIT - III
Test Objective Identification
• The question “What do I test?” must be answered with another
question: “What do I expect the system to do?”
• The first step in identifying the test objective is to read, understand,
and analyze the functional specification
• For a successful analysis, it is essential to have background familiarity with
the subject area, the goals of the system, business processes, and system users
• One must critically analyze the requirements to extract the inferred
requirements that are embedded in them
• An inferred requirement is one that a system is expected to support but
that is not explicitly stated
• Inferred requirements need to be tested just like the explicitly stated
requirements
Test Objective Identification
• The test objectives are put together to form a test group or a subgroup
after they have been identified
• A set of (sub)groups of test cases is logically combined to form a
larger group
• A hierarchical structure of test groups as shown in Figure 11.2 is
called a test suite
• It is necessary to identify the test groups based on test categories, and
refine the test groups into sets of test objectives
• Individual test cases are created for each test objective within the
subgroups
• Test groups may be nested to an arbitrary depth
• The test grouping may be used to aid system test planning and
execution
Test Objective Identification
Test Design Factors
The following factors must be taken into consideration during
the design of system tests
• Coverage metrics
• Effectiveness
• Productivity
• Validation
• Maintenance
• User skills
Requirement Identification
• Requirements are a description of the needs or desires of users that a
system is supposed to implement
• Two major challenges in defining requirements:
– Ensure that the right requirements are captured, which is essential for
meeting the expectations of the users
– Ensure that requirements are communicated unambiguously to the
developers and testers so that there are no surprises when the system is delivered
• Essential to have an unambiguous representation of the requirements
• The requirements must be available in a centralized place so that all
the stakeholders have the same interpretation of the requirements
A stakeholder is a person or an organization who influences a system’s behavior or
who is impacted by the system
Requirement Identification
• The state diagram of a simplified requirement life cycle, from the Submit state to
the Closed state, is shown in Figure 11.1, and the corresponding schema in Table 11.1
• At each of these states certain actions are taken by the owner, and the requirement is
moved to the next state after the actions are completed
• A requirement may be moved to the Decline state from any of the following states: Open,
Review, Assign, Implement, and Verification, for several reasons
• For example, a marketing manager may decide that the implementation of a particular
requirement may not generate revenue and may decline the requirement.
Figure 11.1: State transition diagram of a requirement
Requirement Identification
Table 11.1: Requirement schema field summary
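The life cycle described above can be viewed as a small state machine. Below is a minimal Python sketch, assuming the state names mentioned in the bullets (Submit, Open, Review, Assign, Implement, Verification, Closed, Decline) and an illustrative transition set; the authoritative transitions are those of Figure 11.1, and the field details are those of Table 11.1.

```python
# A minimal sketch of a requirement life-cycle state machine. The transition
# set is illustrative; consult Figure 11.1 for the authoritative diagram.

ALLOWED_TRANSITIONS = {
    "Submit": {"Open"},
    "Open": {"Review", "Decline"},
    "Review": {"Assign", "Decline"},
    "Assign": {"Implement", "Decline"},
    "Implement": {"Verification", "Decline"},
    "Verification": {"Closed", "Decline"},
    "Decline": set(),
    "Closed": set(),
}

def move_requirement(current_state: str, next_state: str) -> str:
    """Return the next state if the transition is allowed, else raise."""
    if next_state not in ALLOWED_TRANSITIONS.get(current_state, set()):
        raise ValueError(f"Illegal transition: {current_state} -> {next_state}")
    return next_state

# Example: a requirement declined at review time by a marketing manager.
state = "Submit"
for target in ("Open", "Review", "Decline"):
    state = move_requirement(state, target)
print(state)  # Decline
```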
Requirement Identification
Definition of requirements traceability:
Requirements traceability is the ability to describe and
follow the life of a requirement, in both the forward and backward
directions, i.e., from its origins, through its development and
specification, to its subsequent deployment and use, and
through periods of ongoing refinement and iteration in any of
these phases
Requirement Identification
• One can generate a traceability matrix from the requirement life-cycle
system, which gives one confidence about test coverage
• A traceability matrix allows one to find a two-way mapping between
requirements and test cases as follows
– From a requirement, to the functional specification, to the specific tests that
exercise the requirement
– From each test case back to the requirement and functional specifications
• A traceability matrix finds two applications:
– To identify and track the functional coverage of a test
– To identify which test cases must be exercised or updated when a system
evolves
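As a sketch of the two-way mapping just described, the following Python fragment keeps a forward map from requirements to test cases and derives the backward map from it. The requirement and test-case identifiers are hypothetical.

```python
# A minimal sketch of a two-way traceability matrix with hypothetical IDs.

from collections import defaultdict

req_to_tests = {
    "R1": ["TC-101", "TC-102"],   # R1 is exercised by two test cases
    "R2": ["TC-103"],
    "R3": [],                     # R3 has no coverage yet -> a coverage gap
}

# Derive the reverse mapping: from each test case back to its requirements.
test_to_reqs = defaultdict(list)
for req, tests in req_to_tests.items():
    for test in tests:
        test_to_reqs[test].append(req)

# Application 1: identify functional coverage gaps.
uncovered = [r for r, tests in req_to_tests.items() if not tests]
print("Uncovered requirements:", uncovered)

# Application 2: when requirement R1 changes, find the tests to re-run/update.
print("Tests affected by a change to R1:", req_to_tests["R1"])
```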
Testable Requirements
• One way to determine whether a requirement description is testable is as follows:
• Take the requirement description “The system must perform X.”
• Encapsulate the requirement description to create a test objective: “Verify
that the system performs X correctly.”
• Review this test objective and find out if it is possible to execute it
assuming that the system and the test environment are available
• If the answer to the above question is yes, then the requirement
description is sufficiently clear and detailed for testing purposes
• Otherwise, more work needs to be done to revise or supplement the
requirement description
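The encapsulation step above can be illustrated with a short sketch. The requirement texts below are hypothetical, and the behavior phrases are written in the third person so they drop straight into the "Verify that the system ... correctly" template.

```python
# A minimal sketch of turning requirement descriptions into review-ready test
# objectives. The requirement texts are hypothetical examples.

requirements = {
    "R1": "rejects a login after three failed password attempts",
    "R2": "exports the monthly report as a PDF file",
}

test_objectives = {
    req_id: f"Verify that the system {behavior} correctly."
    for req_id, behavior in requirements.items()
}

for req_id, objective in test_objectives.items():
    print(req_id, "->", objective)
```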
Characteristics of Testable Requirements
• The following items must be analyzed during the review of requirements:
– Safety
– Security
– Completeness
– Correctness
– Consistency
– Clarity
– Relevance
– Feasibility
– Verifiable
– Traceable
Characteristics of Testable Requirements
• A functional specification provides
– a precise description of the major functions the system must fulfill to meet the
requirements
– explanation of the technological risks involved
– external interfaces with other software modules
– data flow such as flowcharts, transaction sequence diagrams, and
finite-state machines describing the sequence of activities
– fault handling, memory utilization and performance estimates
– any engineering limitation
Characteristics of Testable Requirements
• The following are the objectives that are kept in mind while reviewing a
functional specification:
– Achieving requirements
– Correctness
– Extensible
– Comprehensive
– Necessity
– Implementable
– Efficient
– Simplicity
– Consistency with existing components
– Limitations
Characteristics of Testable Requirements
Table 11.4: Characteristics of testable functional specification.
Modeling a Test Design Process
• One test case is created for each test objective
• Each test case is designed as a combination of modular components called test
steps
• Test cases are clearly specified so that testers can quickly understand, borrow, and
re-use the test cases
• Figure 11.6 illustrates the life-cycle model of a test case in the form of a state
transition diagram
• One can easily implement a database of test cases using the test case schema
shown in Table 11.6
Figure 11.6: State-transition diagram of a test case.
Modeling a Test Design Process
Table 11.6: Test case schema summary.
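A minimal sketch of such a database record is shown below, assuming a few representative fields (id, objective, steps, state); the authoritative field list is the schema of Table 11.6, and the valid states are those of Figure 11.6.

```python
# A minimal sketch of a test case record built from modular test steps.
# Field names and the example state value are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str            # what the tester does
    expected_result: str   # what the system should do in response

@dataclass
class TestCase:
    tc_id: str
    objective: str
    steps: List[TestStep] = field(default_factory=list)
    state: str = "Created"  # current life-cycle state (illustrative value)

# One test case per test objective, built from modular, reusable steps.
tc = TestCase(
    tc_id="TC-101",
    objective="Verify that the system rejects a login after three failed attempts",
    steps=[
        TestStep("Enter a wrong password three times", "Account is locked"),
        TestStep("Enter the correct password", "Login is still refused"),
    ],
)
print(tc.tc_id, tc.state, len(tc.steps), "steps")
```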
Modeling Test Results
• A test suite schema shown in Table 11.7 can be used for testing a particular
release
• The schema requires a test suite id, a title, an objective and a list of test
cases to be managed by the test suite
• The idea is to gather a selected number of released test cases and
repackage them to form a test suite for a new project
• The results of executing those test cases are recorded in a database for
gathering and analyzing test metrics
• The result of test execution is modeled by using a state-transition diagram
as shown in Figure 11.7
• The corresponding schema is given in Table 11.8
Modeling Test Results
Table 11.7: Test suite field summary.
Modeling Test Results
Table 11.8: Test result schema summary.
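The following sketch models a test suite record and a per-execution result record, with a simple pass-rate metric; the field names are assumptions standing in for the schemas of Tables 11.7 and 11.8.

```python
# A minimal sketch of a test suite and of test results recorded per execution.
# Field names and verdict values are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestSuite:
    suite_id: str
    title: str
    objective: str
    test_case_ids: List[str] = field(default_factory=list)

@dataclass
class TestResult:
    tc_id: str
    release: str
    verdict: str   # e.g. "Passed", "Failed", "Blocked", "Untested"
    tester: str

# Repackage previously released test cases into a suite for a new project.
suite = TestSuite("TS-1", "Login regression", "Re-verify login requirements",
                  ["TC-101", "TC-102"])

# Record execution results and compute a simple pass-rate metric.
results = [TestResult("TC-101", "R2.0", "Passed", "alice"),
           TestResult("TC-102", "R2.0", "Failed", "bob")]
passed = sum(r.verdict == "Passed" for r in results)
print(f"Pass rate: {passed}/{len(results)}")
```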
Equivalence Class Testing
• The equivalence class test case design technique is typically used to reduce the
total number of test cases to a finite set of testable test cases, while still
covering the maximum number of requirements.
• In this method, the input domain data containing a range of input values is
divided into different equivalence data classes (at times also referred to as
split sets). Test cases should be written in such a way that they cover each
equivalence partition (also called equivalence data class) at least once.
• During test case execution, each value of an equivalence partition must
display the same output behavior as the others. This method is
based on the assumption that if one condition/value in an equivalence
partition passes, then all others will pass as well. Similarly, if one of the
partition’s conditions fails, the partition’s other conditions/values will
likewise fail.
• In short, it is the process of picking all possible test cases and carefully
placing them into equivalence classes. One test value is picked from each
class while testing.
Equivalence Class Testing
• Example #1: If you are testing for an input box accepting numbers from 1 to 1000,
then there is no use in writing thousands of test cases for all 1000 valid input
numbers plus other test cases for invalid data.
Using the Equivalence Partitioning method discussed above, test cases can be
divided into three sets of input data, called equivalence data classes. Each test case
is representative of the respective equivalence class.
So in the above example, we can divide our test cases into three equivalence
classes of some valid and invalid input values.
Test cases for input box accepting numbers between 1 and 1000 using
Equivalence Partitioning:
#1) Valid data test case (value between 1 and 1000): This includes one input data
class with all valid inputs. Pick a single value from the range of 1 to 1000 (say 99)
as a valid test case. If you select other values between 1 and 1000, then the result is
going to be the same. So one test case for valid input data should be sufficient.
#2) Invalid data test case (value less than 1): This is the input data class with all
values below the lower limit, i.e., any value below 1 (say zero), as an invalid input
data test case.
Equivalence Class Testing
#3) Invalid data test case (value greater than 1000): Input data with any
value greater than 1000 (say 1001) to represent the invalid input class and
the third equivalence class.
Example #2: Testing an input box for a mobile number accepting ten
digits (i.e. length of input value has to be ten).
Again, the input value range can be divided into three equivalence data
classes using the equivalence partitioning technique; one valid and two
invalid classes.
Test cases for mobile number input box accepting ten digits:
#1) Valid data test case (= 10 digits): Enter the exact 10 digits, say
1234056789
#2) Invalid data test case (< 10 digits): Enter phone number with 9 digits,
say 123456789
#3) Invalid data test case (> 10 digits): Enter phone number with 11
digits, say 12340567899
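The two examples can be written as a small runnable sketch. The two accepts_* functions below are stubs standing in for the real system under test, and one representative value is taken from each equivalence class.

```python
# A runnable sketch of the two equivalence-partitioning examples above:
# one representative value is picked from each equivalence class.

def accepts_number(value: int) -> bool:
    """System under test (stub): an input box accepting numbers 1..1000."""
    return 1 <= value <= 1000

def accepts_mobile(number: str) -> bool:
    """System under test (stub): a mobile-number box requiring exactly 10 digits."""
    return number.isdigit() and len(number) == 10

# Example #1: three equivalence classes, one representative each.
number_cases = [
    (99,   True),    # valid class: 1..1000
    (0,    False),   # invalid class: below 1
    (1001, False),   # invalid class: above 1000
]
for value, expected in number_cases:
    assert accepts_number(value) == expected

# Example #2: three equivalence classes for the mobile number length.
mobile_cases = [
    ("1234056789",  True),    # valid class: exactly 10 digits
    ("123456789",   False),   # invalid class: 9 digits
    ("12340567899", False),   # invalid class: 11 digits
]
for value, expected in mobile_cases:
    assert accepts_mobile(value) == expected

print("All equivalence-class representatives behaved as expected.")
```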
Boundary Value Testing
• The ‘Boundary Value Analysis’ testing technique is used to identify errors
at boundaries rather than finding those that exist in the center of the input
domain. This kind of testing of boundary values (or boundaries) is also
referred to as ‘boundary testing’.
• More application errors occur at the boundaries of the input domain,
meaning more failures occur at lower and upper limit values of input data.
The basic principle behind this technique is to choose input data values:
– Just below the minimum value
– Minimum value
– Just above the minimum value
– A normal (nominal) value
– Just below the maximum value
– Maximum value
– Just above the maximum value
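These seven categories of boundary inputs can be generated mechanically. The sketch below does so for the 1 to 1000 input box used in the earlier equivalence-class example (the range itself is an assumption carried over from that example).

```python
# A small sketch that derives the seven boundary-value test inputs listed above
# for a numeric input range (here 1..1000, taken from the earlier example).

def boundary_values(minimum: int, maximum: int) -> list:
    """Return just-below/at/just-above values for both limits plus a nominal value."""
    return [
        minimum - 1,               # just below the minimum
        minimum,                   # minimum value
        minimum + 1,               # just above the minimum
        (minimum + maximum) // 2,  # a normal (nominal) value
        maximum - 1,               # just below the maximum
        maximum,                   # maximum value
        maximum + 1,               # just above the maximum
    ]

print(boundary_values(1, 1000))  # [0, 1, 2, 500, 999, 1000, 1001]
```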
Bug Life Cycle
• A defect is an error or bug in an application that is
introduced during the building or designing of the software
and due to which the software starts to show abnormal
behavior during its use.
• It is one of the important responsibilities of the tester
to find as many defects as possible, to ensure that the quality
of the product is not affected, that the end product
fulfills all the requirements for which it has
been designed, and that it provides the required services to the
end-user.
Contd…
Defect Status
Defect status, or bug status, is the current state that
the defect is in. The
number of states a defect goes through varies from
project to project.
Contd…
1. New: When any new defect is identified by the tester, it falls into
the ‘New’ state. It is the first state of the Bug Life Cycle. The
tester provides a proper defect document to the development
team so that the development team can refer to the defect
document and fix the bug accordingly.
2. Assigned: A defect in the ‘New’ state is approved, and the newly
identified defect is assigned to the development team to work on
and resolve. When the defect is assigned to the development team, the
status of the bug changes to the ‘Assigned’ state.
Contd…
3. Open: In the ‘Open’ state the defect is being addressed by the
developer team, which works on fixing the bug. If, for some specific
reason, the developer team feels that the defect is not appropriate,
it is transferred to either the ‘Rejected’ or the ‘Deferred’ state.
4. Fixed: After making the necessary code changes and fixing the
identified bug, the developer team marks the state as ‘Fixed’.
5. Pending Retest: Once the fixing of the defect is completed,
the developer team passes the new code to the testing team for
retesting. The code/application is pending retest on
the tester’s side, so the status is assigned as ‘Pending Retest’.
Contd…
6. Retest: At this stage, the tester starts the work of retesting the
defect to check whether the defect has been fixed by the developer or
not, and the status is marked as ‘Retesting’.
7. Reopen: If, after ‘Retesting’, the test team finds that the bug
persists as before, even after the developer team has
fixed it, then the status of the bug is changed back to
‘Reopened’. The bug goes to the ‘Open’ state once again and goes
through the life cycle again; that is, it goes for re-fixing
by the developer team.
8. Verified: The tester re-tests the bug after it has been fixed by the
developer team, and if the tester does not find any kind of
defect/bug, then the bug is considered fixed and the status assigned is
‘Verified’.
Contd…
9. Closed: This is the final state of the defect life cycle. After the
defect has been fixed by the developer team and testing finds that the bug has
been resolved and does not persist, the defect is marked as
‘Closed’.
10. Rejected: If the developer team feels that a
defect is not a genuine defect, they reject it and mark the
status as ‘Rejected’. The cause of rejection may be any of these
three: Duplicate Defect, Not a Defect, or Non-Reproducible.
11. Deferred: Every defect has a negative impact on the developed software,
and each defect has a level based on its impact on the software. If
the developer team feels that the identified defect is not of
prime priority and can be fixed in a later update or release,
then the developer team can mark the status as ‘Deferred’.
Contd…
12. Duplicate: Sometimes the same defect is reported more than once, or
the defect is the same as another defect; it is then marked as
a ‘Duplicate’ and the defect is ‘Rejected’.
13. Not a Defect: If the defect has no impact or effect on other functions
of the software, it is marked as being in the ‘Not a Defect’ state and
‘Rejected’.
14. Non-Reproducible: If the defect is not reproduced due to platform
mismatch, data mismatch, build mismatch, or any other reason then
the developer marks the defect as in a ‘Non-Reproducible’ state.
15. Can’t be Fixed: If the developer team fails to fix the defect because of a lack of
technology support, because the cost of fixing the bug is too high, because of a lack of
required skill, or for any other reason, then the developer team marks the
defect as being in the ‘Can’t be Fixed’ state.
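The states described above can be captured as a simple state machine. The transition set below is an illustrative reading of the descriptions, not an authoritative workflow definition.

```python
# A minimal sketch of the defect life cycle as a state machine.
# The transition set is assembled from the state descriptions above.

DEFECT_TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Deferred", "Duplicate"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred", "Not a Defect",
             "Non-Reproducible", "Can't be Fixed"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Reopened": {"Open"},
    "Verified": {"Closed"},
}

def advance(defect_state: str, new_state: str) -> str:
    """Move a defect to a new state if the transition is allowed."""
    if new_state not in DEFECT_TRANSITIONS.get(defect_state, set()):
        raise ValueError(f"Illegal transition: {defect_state} -> {new_state}")
    return new_state

# A defect that is fixed, retested, and closed on the first attempt.
state = "New"
for nxt in ("Assigned", "Open", "Fixed", "Pending Retest",
            "Retest", "Verified", "Closed"):
    state = advance(state, nxt)
print(state)  # Closed
```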
Contd…
1. The tester finds the defect
2. Status assigned to the defect: New
3. The defect is forwarded to the Project Manager for analysis
4. The Project Manager decides whether the defect is valid
5. If the defect is not valid, it is given the status “Rejected”
6. So, the project manager assigns the status Rejected. If the defect is not rejected,
then the next step is to check whether it is in scope. Suppose we have
another function, say email functionality, for the same application, and you
find a problem with that. But it is not a part of the current release;
such defects are assigned a Postponed or Deferred status.
7. Next, the manager verifies whether a similar defect was raised earlier. If
yes, the defect is assigned the status Duplicate.
Contd…
8. If not, the defect is assigned to the developer, who starts fixing the code.
During this stage, the defect is assigned the status In Progress.
9. Once the code is fixed, the defect is assigned the status Fixed.
10. Next, the tester re-tests the code. If the test case passes, the
defect is closed. If the test case fails again, the defect is re-opened and
assigned to the developer.
Example:
Consider a situation where, during the first release of Flight Reservation, a
defect was found in Fax Order; it was fixed and assigned the status Closed.
During the second upgrade release, the same defect re-surfaced. In
such cases, a closed defect is re-opened.
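Steps 1 to 10 amount to a triage decision flow, sketched below. The boolean predicates stand in for the project manager's and tester's decisions, and the final print statements replay the Flight Reservation example.

```python
# A compact sketch of the triage flow in steps 1-10 above. The predicate
# arguments are placeholders for the project manager's / tester's decisions.

def triage(is_valid: bool, in_scope: bool, is_duplicate: bool,
           retest_passes: bool) -> str:
    """Return the final status of a newly reported defect."""
    if not is_valid:
        return "Rejected"
    if not in_scope:
        return "Deferred"          # e.g. belongs to a feature outside this release
    if is_duplicate:
        return "Duplicate"
    # Otherwise the defect is assigned, fixed, and retested.
    return "Closed" if retest_passes else "Reopened"

# The Fax Order example: the fix passes retest in release 1, so it is closed;
# when it re-surfaces in release 2, the closed defect is reopened.
print(triage(is_valid=True, in_scope=True, is_duplicate=False, retest_passes=True))   # Closed
print(triage(is_valid=True, in_scope=True, is_duplicate=False, retest_passes=False))  # Reopened
```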