Introduction To Software Testing
Software testing can be done by all technical and non-technical people associated with the
software. Testing in its various phases is done by:
Developer - The developer does the unit testing of the software and ensures that the individual
methods work correctly
Tester - Testers are the face of software testing. A tester verifies the functionality and
usability of the application as a functional tester, checks the performance of the
application as a performance tester, and automates the manual functional test cases into
test scripts as an automation tester
Test Managers/Lead/Architects - Define the test strategy and test plan
End users - A group of end users do the User Acceptance Testing (UAT) of the
application to make sure the software can work in the real world
Depending on the Software Development Life Cycle model selected for the software
project, the testing phase starts at a different point. There is a software myth that testing is
done only once some part of the software is built, but testing can (and should) be started even
before a single line of code is written. It can be done in parallel with the development phase,
e.g. in the case of the V Model:
Development Phase        | Testing Activity
Requirement Design       | UAT test preparation
Functional Specification | Functional test preparation
Implementation           | Unit test preparation
Code Complete            | Test case execution
This question - "When to stop testing" or "how much testing is enough" is very tricky to answer
as we can never be sure that the system is 100% bug-free. But still there are some markers that
help us in determining the closure of the testing phase of software development life cycle.
Sufficient pass percentage - Depending on the system, testing can be stopped when an
agreed upon test case pass percentage is reached.
After successful test case execution - Testing phase can be stopped when one complete
cycle of test cases is executed after the last known bug fix.
On meeting the deadline - Testing can be stopped when the deadline is reached and no
high-priority issues are left in the system.
Mean Time Between Failures (MTBF) - MTBF is the time interval between two inherent
failures. Based on the stakeholders' decision, if the MTBF is sufficiently large, one can stop
the testing phase (see the sketch after this list).
Based on Code coverage value - Testing phase can be stopped when the automated code
coverage reaches a certain acceptable value.
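As a rough illustration of the MTBF marker above, here is a minimal Python sketch (with hypothetical failure timestamps) of how MTBF could be estimated from failures observed during a test cycle:

```python
# Minimal sketch: estimate MTBF from observed failure timestamps.
# The timestamps below are hypothetical example data.
from datetime import datetime

failures = [
    datetime(2024, 1, 1, 9, 0),
    datetime(2024, 1, 3, 14, 30),
    datetime(2024, 1, 8, 11, 15),
]

# MTBF = mean of the time gaps between consecutive failures (in hours).
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(failures, failures[1:])]
mtbf_hours = sum(gaps) / len(gaps)
print(f"Estimated MTBF: {mtbf_hours:.1f} hours")
```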
Software testing can be done both manually and using automation tools. Manual effort
includes verification of the requirements and design; development of the test strategy and plan;
preparation of test cases and then the execution of tests. Automation effort includes preparation
of test scripts for UI automation and back-end automation, performance test script preparation
and use of other automation tools.
Software testing is basically the sum of two activities - Verification and Validation.
Verification is the process of evaluating the artifacts of software development in order to ensure
that the product being developed will comply with the standards. It is a static process of
analyzing the documents and not the actual end product.
Validation, on the other hand, is the process of confirming that the developed software product
conforms to the specified business requirements. It involves dynamic testing of the software
product by running it.
Verification | Validation
1. Verification involves evaluation of the artifacts of software development to ensure that the product being developed will comply with its requirements. | Validation involves validating the developed software product to check if it conforms to the specified business requirements.
2. It is a static process of analyzing the documents and not the actual end product. | It involves dynamic testing of the software product by running it.
3. Verification is a process oriented approach. | Validation is a product oriented approach.
4. Answers the question - "Are we building the product right?" | Answers the question - "Are we building the right product?"
5. Errors found during verification require less cost/resources to fix than errors found during the validation phase. | Errors found during validation require more cost/resources; the later the error is discovered, the higher the cost to fix it.
6. It involves activities like document reviews, test case reviews, walk-throughs, inspections etc. | It involves activities like functional testing, automation testing etc.
Verification basically asks if the program is correct. To use a simple example, a = x + y is
correct if x = 1 and y = 2 yields a = 3.
Validation asks if the correct program was produced. For example, suppose the requirement is to
calculate the area of a rectangle with length x and width y. If x = 1 and y = 2, the result should be
2. The first program is correct but not valid given the requirement: it is not the right program.
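To make the distinction concrete, here is a minimal sketch of the example above in Python (the function name and checks are illustrative only):

```python
# The program under test: it computes a = x + y.
def compute(x, y):
    return x + y

# Verification - "Are we building the product right?"
# The addition is implemented correctly, so this check passes.
assert compute(1, 2) == 3

# Validation - "Are we building the right product?"
# The requirement was the area of a rectangle (x * y), so this check
# fails: the program is correct, but it is not the right program.
assert compute(1, 2) == 1 * 2, "correct program, but not the one required"
```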
Correctness - Correctness measures the software quality for the conformance of the
software to its requirements
Reliability - Checks if the software performs its functions without any failure within the
expected conditions
Robustness - Robustness is the ability of the software to not crash when provided with
unexpected input
Usability - Usability is the ease of operating the software
Completeness - Completeness is the extent to which the software system meets its
specifications
Maintainability - Maintainability is the measure of the amount of effort required to
maintain the software after it has shipped to the end user
Portability - Portability is the ability of the software to be transferred from one platform or
infrastructure to another
Efficiency - Efficiency is the measure of resources required for the functioning of the
software
3. Seven Testing Principles
A number of testing principles have been suggested over the past 40 years, offering general
guidelines common to all testing.
1. Testing is context dependent
Different methodologies, techniques and types of testing are related to the type and nature of the
application. For example, a software application in a medical device needs more testing than a
gaming application. More importantly, medical device software requires risk-based testing, must
be compliant with medical industry regulations and possibly needs specific test design
techniques. By the same token, a very popular website needs to go through rigorous performance
testing as well as functionality testing to make sure the performance is not affected by the load
on the servers.
2. Exhaustive testing is impossible
Unless the application under test (AUT) has a very simple logical structure and limited input, it
is not possible to test all possible combinations of data and scenarios. For this reason, risks and
priorities are used to concentrate on the most important aspects to test.
3. Early testing
The sooner we start the testing activities, the better we can utilize the available time. As soon as
the initial products, such as the requirement or design documents, are available, we can start testing.
It is common for the testing phase to get squeezed at the end of the development lifecycle, i.e.
when development has finished, so by starting testing early, we can prepare testing for each level
of the development lifecycle.
Another important point about early testing is that when defects are found earlier in the lifecycle,
they are much easier and cheaper to fix. It is much cheaper to change an incorrect requirement
than to change functionality in a large system that is not working as requested or as
designed!
4. Defect clustering
During testing, it can be observed that most of the reported defects are related to a small number
of modules within a system, i.e. a small number of modules contain most of the defects in the
system. This is the application of the Pareto Principle to software testing: approximately 80% of
the problems are found in 20% of the modules.
5. Pesticide paradox
If you keep running the same set of tests over and over again, chances are that no new defects
will be discovered by those test cases, because as the system evolves, many of the previously
reported defects will have been fixed and the old test cases no longer apply. Any time a
fault is fixed or a new functionality added, we need to do regression testing to make sure the
changed software has not broken any other part of the software. However, those regression test
cases also need to change to reflect the changes made in the software, so that they stay
applicable and hopefully find new defects.
6. Testing shows presence of defects
Testing an application can only reveal that one or more defects exist in the application;
testing alone cannot prove that the application is error-free. Therefore, it is important to design
test cases which find as many defects as possible.
7. Absence of errors fallacy
Just because testing didn't find any defects in the software, it doesn't mean that the software is
ready to be shipped. Were the executed tests really designed to catch the most defects, or were
they designed to see if the software matched the user's requirements? There are many other
factors to be considered before making a decision to ship the software.
Some further widely cited testing guidelines:
Testing should not be performed by the person or team that developed the software, since they
tend to defend the correctness of the program.
Because testing requires high creativity and responsibility, only the best personnel should be
assigned to design, implement, and analyze test cases, test data and test results.
Test for invalid and unexpected input conditions as well as valid conditions. The program should
generate correct messages when an invalid test is encountered and should generate correct
results when the test is valid.
The program must not be modified during the execution of the set of designed test cases.
A necessary part of test documentation is the specification of expected results, even when
providing such results seems impractical.
4. What is Software Testing Life Cycle?
Testing software is not a single activity wherein we just validate the built product; instead it
comprises a set of activities performed throughout the application lifecycle. The software testing
life cycle, or STLC, refers to all these activities performed during the testing of a software
product.
Phases of STLC
Requirement Analysis - In this phase the requirements documents are analyzed and
validated. Along with that the scope of testing is defined.
Test Planning and Control - Test planning is one of the most important activities in the test
process. It involves defining the test specifications needed to achieve the project
requirements. Test control involves continuous monitoring of the test progress against
the plan and escalating any deviations to the concerned stakeholders.
Test Analysis and Design - This phase involves analyzing and reviewing requirement
documents, risk analysis reports and other design specifications. Apart from this, it also
involves setting up of test infrastructure, creation of high level test cases and creation of
traceability matrix.
Test Case Development - This phase involves the actual test case creation. It also
involves specification of test data and automated test scripts creation.
Test Environment Setup - This phase involves creation of a test environment closely
simulating the real world environment.
Test Execution - This phase involves manual and automated test case execution. During
test case execution, any deviation from the expected result leads to the creation of a defect in a
defect management tool or the manual logging of the bug in an Excel sheet.
Exit Criteria Evaluation and Reporting - This phase involves analyzing the test
execution result against the specified exit criteria and creation of test summary report.
Test Closure - This phase marks the formal closure of testing. It involves checking that all
the project deliverables are delivered, archiving the testware and test environment, and
documenting the lessons learned.
5. Different Levels of Software Testing
Software testing can be performed at different levels of the software development process.
Performing testing activities at multiple levels helps in early identification of bugs and better
quality of the software product. In this tutorial we will be studying the different levels of testing,
namely Unit Testing, Integration Testing, System Testing and Acceptance Testing.
First we will describe the different levels of testing in brief; in the next tutorials we
will explain each level individually, providing examples and detailed explanations.
Unit Testing
Unit testing is the first level of testing usually performed by the developers.
In unit testing a module or component is tested in isolation.
As the testing is limited to a particular module or component, exhaustive testing is
possible.
Advantage - Errors can be detected at an early stage, saving the time and money needed to fix
them.
Limitation - Integration issues are not detected in this stage; modules may work perfectly
in isolation but can have issues in the interfacing between the modules.
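For instance, a minimal unit test sketch using Python's built-in unittest module (the discount function is a made-up example for illustration):

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    # The unit (a single function) is tested in isolation from the
    # rest of the system.
    def test_ten_percent(self):
        self.assertAlmostEqual(discount(100, 10), 90.0)

    def test_zero_percent(self):
        self.assertAlmostEqual(discount(100, 0), 100.0)

if __name__ == "__main__":
    unittest.main()
```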
Integration Testing
In integration testing, individual modules are combined and tested as a group.
It aims at exposing defects in the interfaces and in the interactions between the
integrated modules.
System Testing
System Testing is the level of testing where the complete integrated application is tested
as a whole.
It aims at determining if the application conforms to its business requirements.
System testing is carried out in an environment which is very similar to the production
environment.
Acceptance Testing
Acceptance testing is the final and one of the most important levels of testing; on its
successful completion the application is released to production.
It aims at ensuring that the product meets the specified business requirements within the
defined standard of quality.
There are two kinds of acceptance testing - alpha and beta testing. When acceptance
testing is carried out by end users at the developer's site, it is known as alpha testing. User
acceptance testing done by end users at the end user's site is called beta testing.
6. Test Design Techniques
Test design techniques are standards of test design which allow the creation of systematic and
widely accepted test cases. These techniques are based on different scientific models and on
the years of experience of many QA professionals.
The test design techniques can be broadly categorized into two parts - "Static test design
technique" and "Dynamic test design technique".
6.1. Static Test Design Techniques
The static test design techniques are the testing techniques which involve testing without
executing the code or the software application. Basically, static testing deals with quality
assurance, involving reviewing and auditing of code and other design documents. The various
static test design techniques can be further divided into two parts - "static testing performed
manually" and "static testing using tools".
6.1.1. Static Testing Using Tools
Static analysis of code - The static analysis techniques for source code evaluation
using tools are:
o Control flow analysis - The control flow analysis requires analysis of all possible
control flows or paths in the code.
o Data flow analysis - The data flow analysis requires analysis of data in the
application and its different states.
Compliance to coding standard - This evaluates the compliance of the code with the
different coding standards.
Analysis of code metrics - The tool used for static analysis is required to evaluate the
different metrics like lines of code, complexity, code coverage etc.
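As a rough illustration of tool-based static analysis, the sketch below uses Python's ast module to collect crude code metrics without ever executing the analyzed source (real static analysis tools are far more sophisticated):

```python
import ast

source = '''
def absolute(x):
    if x > 0:
        return x
    return -x
'''

# Parse the source into a syntax tree; the code itself is never run.
tree = ast.parse(source)
functions = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
branches = [n for n in ast.walk(tree) if isinstance(n, (ast.If, ast.For, ast.While))]
loc = len([line for line in source.splitlines() if line.strip()])

print(f"functions: {len(functions)}, branch points: {len(branches)}, LOC: {loc}")
```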
6.2. Dynamic Test Design Techniques
Dynamic test design techniques involve testing by running the system under test. In these
techniques, the tester provides input data to the application and executes it, in order to verify its
different functional and non-functional requirements.
Specification based - Specification based test design techniques are also referred to as
black-box testing. These involve testing based on the specification of the system under
test without knowing its internal architecture. The different types of specification based
test design or black box testing techniques are - "Equivalence partitioning", "Boundary
value analysis", "Decision tables", "Cause-effect graph", "State transition testing" and
"Use case testing".
Structure based - Structure based test design techniques are also referred to as white-
box testing. In these techniques, knowledge of the code or internal architecture of the
system is required to carry out the testing. The various kinds of structure based or
white-box testing techniques are - "Statement testing", "Decision testing/branch testing",
"Condition testing", "Multiple condition testing", "Condition determination testing" and
"Path testing".
Experience based - The experience based techniques, as the name suggests, do not
require any systematic and exhaustive testing. They are completely based on the
experience or intuition of the tester. The two most common forms of experience based
testing are ad-hoc testing and exploratory testing.
6.2.1. Specification Based Test Design Techniques (Black Box Testing Techniques)
Equivalence partitioning - In equivalence partitioning, the input data is divided into
partitions (equivalence classes) of values that the system is expected to treat the same
way, and test values are picked from each partition.
Boundary value analysis - In boundary value analysis the boundary values of the
equivalence partitioning classes are taken as input to the application. E.g. for equivalence
classes limiting input between 0 and 100, the boundary values would be 0 and 100.
Decision tables - Decision table testing is used to test the application's behavior for
different combinations of input values. Each row of a decision table holds one
combination of inputs and its corresponding expected outcome (a sketch follows this list).
Cause-effect graph - A cause-effect graph testing is carried out using graphical
representation of input i.e. cause and output i.e. effect. We can find the coverage of cause
effect graphs based on the percentage of combinations of inputs tested out of the total
possible combinations.
State transition testing - The state transition testing is based on state machine model. In
this technique, we test the application by graphically representing the transition between
the different states of the application based on the different events and actions.
Use case testing - Use case testing is carried out using use cases. In this technique, we
test the application using use-cases, representing the interaction of the application with
the different actors.
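As a small illustration of decision table testing, here is a hypothetical Python sketch: each row pairs one input combination with its expected outcome, and the table drives the test loop (the landing_page function is invented for the example):

```python
# System under test (invented for this example).
def landing_page(logged_in, is_admin):
    if not logged_in:
        return "login"
    return "admin_dashboard" if is_admin else "home"

# Decision table: (logged_in, is_admin) -> expected page.
decision_table = [
    (False, False, "login"),
    (False, True,  "login"),
    (True,  False, "home"),
    (True,  True,  "admin_dashboard"),
]

for logged_in, is_admin, expected in decision_table:
    actual = landing_page(logged_in, is_admin)
    assert actual == expected, (logged_in, is_admin, actual)
print("all decision-table rows passed")
```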
Example - Equivalence Partitioning
Consider an input field that accepts integers. The input domain can be divided into partitions
such as negative integers, zero and positive integers, and the system is expected to behave the
same for values inside each partition, i.e. the way the system handles -6391 will be the same as
the way it handles -9. Likewise, the positive integers 5 and 3567 will be treated the same by the
system. In this particular example, the value 0 is a single-value partition. It is normally good
practice to treat the number zero as a special case.
It is important to note that this technique does not only apply to numbers. The technique can be
applied to any set of data that can be considered equivalent. E.g. for an application that reads in
images of only three types - .jpeg, .gif and .png - three sets of valid equivalence classes can be
identified.
Files of any other type would be classed as a set of invalid equivalent data. Trying to open the
application with non-acceptable or invalid file types is an example of negative testing, which is
useful when combined with the equivalence partitioning technique, which partitions the set of
equivalent, acceptable and valid data.
Each of these ranges has minimum and maximum boundary values. The negative range has a
lower value of -100 and an upper value of -1. The positive range has a lower value of 1 and an
upper value of 100.
While testing these values, one must see that when the boundary values for each partition are
selected, some of the values overlap. So, the overlapping values are bound to appear in the test
conditions when these boundaries are checked.
These overlapping values must be dismissed so that the redundant test cases can be eliminated.
So, the test cases for the input box that accepts the integers between -100 and +100 through BVA
are:
Test cases with the data same as the input boundaries of input domain: -100 and +100 in
our case.
Test data having values just below the extreme edges of input domain: -101 and 99
Test data having values just above the extreme edges of input domain: -99 and 101
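A minimal sketch of these boundary-value cases in Python, assuming a simple validator for the input box described above:

```python
# Assumed validator for an input box accepting integers in [-100, 100].
def accepts(n):
    return -100 <= n <= 100

# The boundaries themselves, plus the values just below and just above
# each extreme edge of the input domain.
bva_cases = {
    -101: False, -100: True, -99: True,
    99: True, 100: True, 101: False,
}

for value, expected in bva_cases.items():
    assert accepts(value) == expected, value
print("all boundary-value cases passed")
```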
6.2.2. Structure Based Test Design Techniques (White Box Testing Techniques)
Condition testing - Testing the condition outcomes (TRUE or FALSE). Achieving
100% condition coverage requires exercising each condition for both TRUE and FALSE
results (for n conditions, up to 2n test scripts; see the sketch below).
Path testing - Testing the independent paths in the system (paths are executable
statements from entry to exit points).
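A minimal condition-testing sketch in Python (the can_checkout function is invented): each atomic condition is driven to both TRUE and FALSE at least once. Note that three tests suffice here even though the 2n guideline would allow up to four, since a single test can exercise several conditions at once:

```python
# Invented function with two atomic conditions.
def can_checkout(cart_not_empty, payment_ok):
    return cart_not_empty and payment_ok

# cart_not_empty: TRUE in case A, FALSE in case B.
# payment_ok:     TRUE in case A, FALSE in case C.
assert can_checkout(True, True) is True     # case A
assert can_checkout(False, True) is False   # case B
assert can_checkout(True, False) is False   # case C
print("100% condition coverage achieved with 3 tests")
```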
7. What is a Test Case?
A test case is a set of conditions for evaluating a particular feature of a software product to
determine its compliance with the business requirements. A test case has pre-requisites, input
values and expected results in a documented form which cover the different test scenarios.
TestCaseId - This field uniquely identifies a test case. It is mapped with automation
scripts (if any) to keep a track of automation status. The same field can be used for
mapping with the test scenarios for generating a traceability matrix.
E.g. - GoogleSearch_1
Component/Module - This field specifies the specific component or module that the test
case belongs to. E.g. - Search_Bar_Module
Priority - This field is used to specify the priority of the test case. Normally the
convention followed for specifying the priority is either High, Medium, Low or P0, P1,
P2, P3 etc., with P0 being the most critical.
Description - In this field describe the test case in brief. E.g. - Verify that when a user
writes a search term and presses enter, search results should be displayed.
Pre-requisites - In this field specify the conditions or steps that must be followed before
the test steps executions. E.g. - Browser is launched.
Test Steps - In this field we mention each and every step for performing the test case.
The test steps should be clear and unambiguous. E.g.
1. Write the url - http://google.com in the browser's URL bar and press enter.
2. Once google.com is launched, write the search term - "Apple" in the google
search bar.
3. Press enter.
Test Data - In this field we specify the test data used in the test steps. E.g. in the above
test step example we could use the search term-"apple" as test data.
Expected Result - This field specifies the expected result after the test step execution. It
is used to assert the test case. E.g. - Search results related to 'apple' should be displayed.
Actual Result - In this field we specify the actual result after the test step execution. E.g.
- Search results with the 'apple' keyword were displayed.
Status/Test Result - In this field we mark the test case as pass or fail based on the
expected and actual results. Possible values can be - Pass, Fail, Not executed.
Test Executed by - In this field we specify the tester's name who executed the test case
and marked the test case as pass or fail.
Apart from these mandatory fields, there are many optional fields that can be added per the
organization's or application's needs, like Automation status - for marking a test as automated or
manual, TestScenarioId - for mapping the test case with its test scenario, AfterTest step - for
specifying any step required to be executed after performing the test case, TestType - to
specify if the test is applicable for Regression, Sanity, Smoke etc., and DefectId - the id of the
defect logged in a defect management tool.
Apart from these some other fields can be added for additional information like - Test Author,
Test Designed Date, Test Executed Date etc.
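To make the structure concrete, here is a hypothetical sketch of the fields above captured as a Python dataclass (the field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    test_case_id: str
    module: str
    priority: str
    description: str
    prerequisites: List[str]
    test_steps: List[str]
    test_data: str
    expected_result: str
    actual_result: str = ""
    status: str = "Not executed"
    executed_by: str = ""

tc = TestCase(
    test_case_id="GoogleSearch_1",
    module="Search_Bar_Module",
    priority="P0",
    description="Verify that search results are displayed for a search term.",
    prerequisites=["Browser is launched"],
    test_steps=[
        "Write the url http://google.com in the browser's URL bar and press enter",
        "Write the search term 'Apple' in the google search bar",
        "Press enter",
    ],
    test_data="apple",
    expected_result="Search results related to 'apple' should be displayed",
)
print(tc.status)  # "Not executed" until the test is run
```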
As we know, a test case is a set of conditions for evaluating a software product to determine
its compliance with the business requirements. Ill-formed test cases can lead to severe
defect leakage, which can cost both time and money. So, writing effective test cases is an utmost
requirement for the success of any software product.
1. Test design technique - Follow a test design technique best suited for your organization
or project specific needs like - boundary value analysis, equivalence class partitioning
etc. This ensures that well researched standards and practices are implemented during test
case creation.
2. Clear and concise tests - The test case summary, description, test steps, expected results
etc should be written in a clear and concise way, easily understandable by the different
stakeholders in testing.
3. Uniform nomenclature - In order to maintain consistency across the different test cases
a uniform nomenclature and set of standards should be followed while writing the test
cases.
4. Fundamental/Atomic Test cases - Create test cases as fundamental as possible, testing a
single unit of functionality without merging or overlapping multiple testable parts.
5. Leave no scope of ambiguity - Write test case with clear set of instruction e.g. instead of
writing "Open homepage", write - "Open homepage - http://www.{homepageURL}.com
in the browser and press enter".
6. No Assumptions - While writing test cases do not assume any functionality, pre-requisite
or state of the application. Instead, bound the whole test case creation activity to the
requirement documents - SRS, Use-case documents etc.
7. Avoid redundancy - Don't repeat the test cases, this leads to wastage of both time and
resources. This can be achieved by well-planned and categorized test cases.
8. Traceable tests - Use a traceability matrix to ensure that 100% of the application's features
in the scope of testing are covered by the test cases.
9. Ensure that different aspects of software are covered - Ensure that, apart from
functionality, the other aspects of the software - performance, usability,
robustness etc. - are covered, by creating performance test cases and
benchmarks, usability test cases, negative test cases etc.
10. Test data - The test data used in testing should be as diverse and as close to real
usage as possible. Diverse test data makes for more reliable test cases.
7.3. Test Management Tools
Test management tools are used by test teams to capture requirements, design test cases, report
test execution and much more. If you are having trouble managing test case execution, a test
management tool can help: it provides a friendly UI environment, making your work easier and
more convenient.
1. JIRA (with Zephyr)
Many IT developers know that JIRA is mainly a bug tracker aiming to control the development
process with tasks, bugs and other types of agile cards. JIRA has a few options for test case
management; Zephyr is one of the many JIRA plugins extending JIRA's capabilities, and it is
tightly integrated with JIRA's bug tracking.
2. TestRail
TestRail, made by the company Gurock Software GmbH, was the first tool our team used for
planning and testing. The company, founded in 2004, has created a range of test tools, but its
most successful product is TestRail. It is another test case management tool which provides a
platform to create and run test cases. TestRail integrates with a ticket management tool called
Gemini and with many other issue-tracking tools, and provides external links for its test case
creation and execution support.
Other test management tools include:
TestLink
qTest
TestCollab
TestLodge
QACoverage
EasyQA
QMetry
8. Life Cycle of a Defect
A defect life cycle is the movement of a bug or defect through the different stages of its lifetime,
right from the moment it is first identified till the time it is marked as verified and closed.
Depending on the defect management tool used and the organization, we can have different
states as well as different nomenclature for the states in the defect life cycle.
9. Software Development Life Cycle (SDLC)
9.1. SDLC Phases
Requirement Gathering
Design Specification
Coding/Implementation
Testing
Deployment
Maintenance
1. Requirement Gathering
Requirement gathering is one of the most critical phase of SDLC. This phase marks as the basis
of whole software development process. All the business requirements are gathered from the
client in this phase. A proper document is made which tells the purpose and the guidelines for the
other phases of the life cycle. For example- if we want to make a website for a restaurant. The
requirement analysis phase will answer the questions like-
2. Design Specification
A software design, or layout, is prepared in this phase according to the requirements
specified in the previous step. In this phase, the requirements are broken down into multiple
modules, like a login module, signup module, menu options and other modules. This design is
considered the input for the next phase, implementation.
3. Implementation
In this phase, the actual development gets started. The developers write code using different
programming languages depending upon the needs of the product. The main stakeholders in this
phase are the development team.
4. Testing
After the completion of the development phase, testing begins. Here testers test the software and
provide appropriate feedback to the development team. The testers check whether the software
developed fulfills the client's requirements as described in the requirement phase. Both
functional and non-functional testing are performed here before delivery.
5. Deployment
After testing is completed, the developed product goes live and is handed over to the
client. Now the client can publish it online and decide about customers' access.
6. Maintenance
In this phase, the maintenance of the software product is taken care of, such as making changes
to the software that are required to keep it working as intended over a period of time.
9.2. SDLC Models
There are various models in software development life cycle depending on the requirement,
budget, criticality and various other factors. Some of the widely used SDLC models are:
Waterfall model
Iterative model
Incremental model
Spiral model
V model
Agile model
A. Waterfall Model
Waterfall is the oldest and most straightforward of the structured SDLC methodologies — finish
one phase, then move on to the next. No going back. Each stage relies on information from the
previous stage and has its own project plan. Waterfall is easy to understand and simple to
manage. But early delays can throw off the entire project timeline. And since there is little room
for revisions once a stage is completed, problems can’t be fixed until you get to the maintenance
stage. This model doesn’t work well if flexibility is needed or if the project is long term and
ongoing.
B. Iterative Model
The Iterative model is repetition incarnate. Instead of starting with fully known requirements,
you implement a set of software requirements, then test, evaluate and pinpoint further
requirements. A new version of the software is produced with each phase, or iteration. Rinse and
repeat until the complete system is ready.
One advantage over other SDLC methodologies: This model gives you a working version early
in the process and makes it less expensive to implement changes. One disadvantage: Resources
can quickly be eaten up by repeating the process again and again.
C. Incremental Model
In the incremental model, the whole requirement is divided into various builds. Multiple
development cycles take place here, making the life cycle a "multi-waterfall" cycle. Cycles are
divided into smaller, more easily managed modules.
In this model, each module passes through the requirements, design, implementation and testing
phases. A working version of software is produced during the first module, so you have working
software early on during the software life cycle. Each subsequent release of the module adds
function to the previous release. The process continues till the complete system is achieved.
D. Spiral Model
One of the most flexible SDLC methodologies, the Spiral model takes a cue from the Iterative
model and its repetition; the project passes through four phases over and over in a “spiral” until
completed, allowing for multiple rounds of refinement. This model allows for the building of a
highly customized product, and user feedback can be incorporated from early on in the project.
But the risk you run is creating a never-ending spiral for a project that goes on and on.
E. V-Shaped Model
Also known as the Verification and Validation model, the V-shaped model grew out of Waterfall
and is characterized by a corresponding testing phase for each development stage. Like
Waterfall, each stage begins only after the previous one has ended. This model is useful when
there are no unknown requirements, as it’s still difficult to go back and make changes.
F. Agile Model
By breaking the product into cycles, the Agile model quickly delivers a working product and is
considered a very realistic development approach. The model produces ongoing releases, each
with small, incremental changes from the previous release. At each iteration, the product is
tested.
This model emphasizes interaction, as the customers, developers and testers work together
throughout the project. But since this model depends heavily on customer interaction, the project
can head the wrong way if the customer is not clear on the direction he or she wants to go.
10. Overview of Scrum Agile Development Methodology
Scrum is an agile development methodology for managing and completing projects. It is a way
for teams to work together to achieve a set of common goals.
Scrum is an iterative and incremental approach to software development, meaning that a large
project is split into a series of iterations called “Sprints”, where in each sprint, the goal is to
complete a set of tasks to move the project closer to completion.
Each sprint typically lasts 2 to 4 weeks or a calendar month at most. Building products one small
piece at a time encourages creativity and enables teams to respond to feedback and change and to
build exactly what is needed.
The scrum framework has three components: Roles, Events and Artifacts.
Roles
Product Owner
Scrum Master
Team
Sprint Events
Sprint review
Sprint retrospective
Artifacts
Product backlog
Sprint Backlog
A list of tasks identified by the Scrum team to be completed during the sprint.
The team selects the items and size of the sprint backlog
Sprint Burndown Charts
A chart, updated every day, that shows the work remaining within the sprint.
It gives an indication of the progress and of whether some stories need to be removed and
postponed to the next sprint.
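As a small illustration, the sketch below (with made-up numbers) computes the ideal burndown line for a ten-day sprint and compares it with the actual remaining story points recorded each day:

```python
# Hypothetical sprint: 40 story points committed, 10 working days.
total_points = 40
sprint_days = 10

# Ideal line: work burns down linearly to zero.
ideal = [total_points * (1 - day / sprint_days) for day in range(sprint_days + 1)]

# Actual remaining work recorded at each daily stand-up (made-up data).
actual = [40, 38, 35, 35, 30, 24, 20, 18, 12, 6, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    trend = "behind" if a > i else "on track"
    print(f"day {day:2d}: ideal {i:5.1f}  actual {a:3d}  ({trend})")
```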
More info about Agile Methodology and Scrum:
http://www.agilenutshell.com/
https://www.collab.net/sites/default/files/uploads/CollabNet_scrumreferencecard.pdf
11. The Bug Report
• Defect Identifier, ID - The identifier is very important in being able to refer to the defect
in the reports. If a defect reporting tool is used to log defects, the ID is normally a program
generated unique number which increments per defect log.
• Summary - The summary is an overall high level description of the defect and the
observed failure. This short summary should be a highlight of the defect as this is what the
developers or reviewers first see in the bug report.
• Description - The nature of the defect must be clearly written. If a developer reviewing
the defect cannot understand and cannot follow the details of the defect, then most probably the
report will be bounced back to the tester asking for more explanation and more detail which
causes delays in fixing the issue. The description should explain exactly the steps to take to
reproduce the defect, along with what the expected results were and what the outcome of the test
step was. The report should say at what step the failure was observed and what the actual result
is.
• Severity - The severity of the defect shows how severe the defect is in terms of damage
to other systems, businesses, the environment and people's lives, depending on the nature of the
application system. Severities are normally ranked and categorized in 4 or 5 levels,
depending on the organization's definitions.
S1 – Critical: This means the defect is a show stopper with high potential damages and has no
workaround to avoid the defect. An example could be the application does not launch at all and
causes the operating system to shut down. This requires immediate attention and action and fix.
S2 – Serious: This means that some major functionalities of the applications are either missing or
do not work and there is no workaround. Example, an image viewing application cannot read
some common image formats.
S3 – Normal: This means that some major functionality do not work, but, a workaround exists to
be used as a temporary solution.
S4 – Cosmetic / Enhancement: This means that the failure causes inconvenience and annoyance.
Example can be that there is a pop-up message every 15 minutes, or you always have to click
twice on a GUI button to perform the action.
S5 – Suggestion: This is not normally a defect but a suggestion to improve a functionality. This
can concern the GUI or viewing preferences.
• Priority - Once the severity is determined, the next step is to prioritize the resolution.
The priority determines how quickly the defect should be fixed. The priority normally reflects
business importance, such as the impact on the project and the likely success of the product in
the marketplace. Like severity, priority is also categorized into 4 or 5 levels.
P1 – Urgent: Means extremely urgent and requires immediate resolution
P2 – High: Resolution requirement for next external release
P3 – Medium: Resolution required for the first deployment (rather than all deployments)
P4 – Low: Resolution desired for the first deployment or subsequent future releases
It is important to note that a defect which has a high severity also bears a high priority, i.e. a
severe defect will require a high priority to resolve the issue as quickly as possible. There can
never be a high-severity, low-priority defect. However, a defect can have a low severity but
a high priority.
An example might be a company name misspelled on the splash screen as the application
launches. This does not cause severe damage to the environment or people's lives, but it can
potentially damage the company's reputation and harm business profits.
• Date and time - The date and time that the defect occurred or was reported is also essential.
This is normally useful when you want to search for defects that were identified in a particular
release of the software or from when the testing phase started.
• Version and Build of the Software Under Test - This is very important too. In most
cases, there are many versions of software; each version has many fixes and more functionality
and enhancements to the previous versions. Therefore, it is essential to note which version of the
software exhibited the failure that we are reporting. We may always refer to that version of
software to reproduce the failure.
• Reported by - Again, this is important, because if we may need to refer to the person
who raised the defect, we have to know who to contact.
• Related Requirement - Essentially, all features of a software application can be traced to
their respective requirements. Hence, when a failure is observed, we can see which requirements
have been impacted. This can help in reducing duplicate defect reports: if we can identify the
source requirement, then when another defect is logged with the same requirement number, we
may not need to report it again, if the defects are of a similar nature.
• Attachments/Evidence - Any evidence of the failure should be captured and submitted
with the defect report. This is a visual complement to the description of the defect and helps the
reviewer or developer to better understand the defect (screen-shots, video etc.).
As a tester tests an application and finds a defect, the life cycle of the defect starts, and it
becomes very important to communicate the defect to the developers in order to get it fixed,
keep track of the current status of the defect, find out if any similar defect was ever found in
earlier rounds of testing, and so on. For this purpose, manually created documents were
previously used and circulated to everyone associated with the software project (developers and
testers); nowadays many bug reporting tools are available, which help in tracking and managing
bugs in an effective way.
It’s a good practice to take screen shots of execution of every step during software testing. If any
test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be
reported/logged for the same.
The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or
fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug
should be attached to the test case that is failed.
At the time of reporting a bug, all the mandatory fields from the contents of bug (such as Project,
Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in
Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority
and Bug ID etc.) are filled and detailed description of the bug is given along with the expected
and actual results. The screen-shots taken at the time of execution of test case are attached to the
bug for reference by the developer.
After reporting a bug, a unique Bug ID is generated by the bug-reporting tool; this Bug ID is
then associated with the failed test case, linking the bug to the test case.
After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug
fixing process progresses.
If more than one tester is testing the software application, it is possible that some other tester
has already reported a bug for the same defect found in the application. In such a situation, it
becomes very important for the tester to find out if any bug has been reported for a similar type
of defect. If yes, then the test case has to be blocked against the previously raised bug (in this
case, the test case has to be executed once the bug is fixed). And if there is no such bug reported
previously, the tester can report a new bug and fail the test case against the newly raised bug.
If no bug-reporting tool is used, then in that case, the test case is written in a tabular manner in a
file with four columns containing Test Step No, Test Step Description, Expected Result and
Actual Result. The expected and actual results are written for each step and the test case is failed
for the step at which the test case fails.
This file containing test case and the screen shots taken are sent to the developers for reference.
As the tracking process is not automated, it becomes important to keep the information about
the bug updated from the time it is raised till the time it is closed.
A bug tracking system, also known as a defect tracking system, is a software application that
helps keep track of reported software bugs in software development projects. Bug tracking tools
are regarded as a type of issue tracking system: a program used by teams of application support
professionals to keep track of the various issues that software developers face.
More info about JIRA, one of the most widely used tracking tools in the software industry:
https://www.youtube.com/watch?v=9Z5ruL6JOHk
http://www.guru99.com/jira-tutorial-a-complete-guide-for-beginners.html
https://confluence.atlassian.com/jira064/jira-user-s-guide-720416011.html
12. Practical Tips for Software Testers
Below are a list of guidelines and tips for software testers and QA professionals when involved
in testing applications. These software testing tips are collected from many years of experience
in testing web applications in an agile environment. If you want to share your testing tips, then
add it in the comments field.
Don’t leave any questions unanswered. The acceptance criteria must be complete in order
to ensure you fully understand what the feature/story wants to achieve.
Consider the full end-to-end flows when thinking about test cases.
Consider all related error scenarios, e.g. web service connection down, invalid inputs, etc.
Consider mobile impact – mobile web and tablet – should any of the features behave
differently when used on a touch device, compared to using a keyboard to navigate?
Consider basics of security testing, such as https both URL and resources for protected
areas of the site.
Consider whether this story warrants being included in the automation test suite.
As a rough guide: only scenarios whose failure would result in a P1 or P2 in
production will be automated. This also includes scenarios with a lot of data to be
checked, which would be very repetitive to do manually.
When you find bugs related to a story, raise them as bug-subtasks, to ensure the link to
the story is kept.
When signing a story or bug off as testing complete, ensure a comment is added in Jira
which includes the test environment and code version on which the tests were signed off.
If the story or bug can’t, or won’t be tested by a QA and will be tested by a developer
instead, ensure you review the test approach and add a note in Jira that you approve of the
dev’s test approach, ideally with a short description. Ensure the dev adds which version
is being signed off.
On Daily Tasks
Identify the high-priority stories and prioritize work depending on the day of the sprint
On Sprint Planning
Be very proactive in the meeting by asking questions to get ideas for tests
Think of test cases to validate features, applying various test techniques: positive,
negative, boundary values, equivalence partitions, etc.
Use Mind maps to assist with test scenarios and user journeys
Consider risks – provide more test conditions around a feature of high risk
Always think about “What if”, “what else”, “how else” when designing test cases
Think about integration tests, how is this feature affecting nearest-neighbor features
Really understand what is going on when interacting with a feature, rather than just
looking at it from the surface. Think about which back-end systems / DB / web services are
being touched
When there are a lot of combinations of data to test, consider how the permutations can be
reduced without compromising quality/testing - e.g. using the pair-wise test technique (see
the sketch at the end of this section)
Peer reviews of test conditions – discussing with developers what test cases have been
designed
Maintain the test packs and ensure all tests are up to date
Review the current issues with the QA process and how they can be solved or
improved
Learn technical skills such as Databases, Coding, Web technologies to get a better
understanding of what is happening when testing
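Finally, here is a rough sketch of the pair-wise idea mentioned above: instead of testing the full cartesian product of parameter values, a greedy loop picks combinations until every pair of values across any two parameters is covered (a naive illustration with invented parameters, not a production all-pairs generator):

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs sketch: params maps parameter name -> values."""
    names = list(params)
    # Every (parameter-pair, value-pair) that must appear in some test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    suite = []
    while uncovered:
        # Pick the full combination that covers the most uncovered pairs.
        best = max(product(*(params[n] for n in names)),
                   key=lambda c: sum(c[i] == va and c[j] == vb
                                     for i, va, j, vb in uncovered))
        suite.append(best)
        uncovered = {(i, va, j, vb) for i, va, j, vb in uncovered
                     if not (best[i] == va and best[j] == vb)}
    return suite

params = {"browser": ["chrome", "firefox"],
          "os": ["windows", "mac", "linux"],
          "user": ["guest", "member"]}
suite = pairwise_suite(params)
print(f"full product: {2 * 3 * 2} cases, pairwise: {len(suite)} cases")
```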