Software Testing
What is testing?
To monitor and measure the strength of the development process, organizations follow the SQA (Software Quality Assurance) concept.
Technical:
Non-Technical:
SDLC stands for Software Development Life Cycle. It is a process that every organization follows to deliver quality software. Several stages come under the SDLC:
Different Stages of SDLC process:
Information Gathering
Analysis
Design
Coding
Testing
Maintenance
Verification is static testing. It starts from the BRS: the analysis is reviewed, the design is reviewed, and the code is reviewed (WBT).
BRS (Business Requirement Specification): Business people act as a bridge between the customer and technical people. This document defines the customer requirements to be developed as software. It is prepared by Business Analysts (BA people).
SRS (Software Requirement Specification): This document is defined with respect to the BRS and is also known as the FS (Functional Specification). It defines the functional requirements to be developed and the system requirements to be used. It is also prepared by Business Analyst category people.
Review:
HLD (High-Level Design): It is also known as External Design. This document defines the structure/hierarchy of all possible functionality to be developed as main modules. It is prepared by the project architect and software designers.
LLD (Low-Level Design): It is also known as Internal Design. This document defines the static logic of every sub-module. E.g. Entity Relationship (ER) Diagram, Class Diagram, DFD (Data Flow Diagram).
Prototype:
Unit Testing: It is a coding-level testing technique used to check the completeness and correctness of a program. It is done by the development team.
1. Program Technique Coverage: Checks that the program completes its task using less memory and fewer CPU cycles.
2. Operation Testing:
3. Mutation Testing:
Mutation means change. Developers deliberately make a change in the program to estimate the test coverage: if no test fails after the change, the coverage is inadequate.
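A minimal Python sketch of the idea (the functions and tests here are hypothetical, not taken from a real mutation tool such as mutmut): the developer flips an operator in a copy of the program and re-runs the tests; a "killed" mutant means the tests noticed the change.

```python
# Mutation-testing sketch (hypothetical program and test suite).
# The developer deliberately changes ("mutates") the program and
# re-runs the tests; a surviving mutant reveals weak coverage.

def original(a, b):
    return a + b                      # original program

def mutant(a, b):
    return a - b                      # mutated program: '+' flipped to '-'

def run_tests(func):
    """Tiny test suite; returns True when every check passes."""
    return func(2, 3) == 5 and func(0, 5) == 5

assert run_tests(original)            # the suite passes on the original
if run_tests(mutant):
    print("Mutant survived -> test coverage is inadequate")
else:
    print("Mutant killed -> the tests detected the change")
```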
Integration Testing:
After the completion of coding and its review, developers combine all the independent modules to form a system. During integration they apply integration testing on the coupled modules with respect to the HLD & LLD.
There are three approaches to conducting integration testing:
a. Top-Down Approach: In this approach the test engineer conducts testing on the main module without going to the sub-modules, using a STUB.
A STUB is a temporary program used in place of an under-construction sub-module so that the main module can be checked. It is also known as the 'called program'.
b. Bottom-Up Approach: In this approach the test engineer conducts testing on the sub-modules without involving the main module, using a DRIVER.
A DRIVER is a temporary program used instead of the main module. It is also known as the 'calling program'.
c. Sandwich (Hybrid) Approach: a combination of the top-down and bottom-up approaches; testing proceeds over the combination of sub-modules with respect to functionality.
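A minimal Python sketch of both temporary programs (the module names and logic are assumed for illustration): the stub stands in for an unfinished sub-module during top-down testing, and the driver stands in for the missing main module during bottom-up testing.

```python
# Sketch of a STUB and a DRIVER (hypothetical modules).

# --- Top-down: the main module is ready, a sub-module is not. ---
def discount_stub(order_total):
    """STUB: temporary stand-in for the unfinished discount sub-module.
    It is the 'called program' - the main module calls it."""
    return 0.0   # dummy value so the main module can be tested

def checkout(order_total, discount=discount_stub):
    """Main module under test; calls the (stubbed) sub-module."""
    return order_total - discount(order_total)

# --- Bottom-up: a sub-module is ready, the main module is not. ---
def tax(amount):
    """Finished sub-module under test."""
    return round(amount * 0.18, 2)

def driver():
    """DRIVER: temporary 'calling program' that replaces the main
    module and exercises the sub-module directly."""
    assert tax(100) == 18.0
    assert tax(0) == 0.0

assert checkout(100.0) == 100.0   # top-down test through the stub
driver()                          # bottom-up test through the driver
```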
System Testing: It is a build-level testing technique. During this testing, test engineers validate the internal functionality through the external interface.
After the completion of system integration testing and its review, we concentrate on system and functional testing through a set of BBT techniques to validate the functionality with respect to the customer requirements.
1) Usability Testing: During this testing we validate the user-friendliness of the screen, build, or GUI.
2) Functional Testing: During this testing we validate the completeness and correctness of the functionality with respect to the customer requirements.
3) Security Testing: During this testing we validate the privacy of user operations.
Usability Testing: In general the testing process starts with, i.e. we begin test execution with, usability testing. Usability testing checks the user-friendliness of the screen, build, or GUI.
1) User Interface/GUI Testing: fewer events needed to complete a task, i.e. ease of use.
E.g. if we type a keyword into the Google search box, it suggests references, anticipating what you want.
Functional Testing:
Functional testing validates the customer requirements in terms of (BIEBSC) coverage. It is the major part of the BBT techniques. During this testing we validate the completeness and correctness of the functionality with respect to the customer requirements.
a) Behavior Coverage: During this testing we check the properties of objects.
b) Input Domain Testing: During this testing we check the size and type of input objects, applying BVA and ECP.
BVA stands for Boundary Value Analysis: the size of the input object [Min & Max].
ECP stands for Equivalence Class Partition: the type of the data [Valid & Invalid].
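A hedged example of applying BVA and ECP together, assuming a hypothetical username box specified to accept 4-16 alphabetic characters:

```python
# BVA/ECP sketch for a hypothetical input object: a username box
# that must accept 4-16 alphabetic characters (assumed spec).
import re

def accepts_username(value):
    """System under test (assumed behaviour)."""
    return bool(re.fullmatch(r"[A-Za-z]{4,16}", value))

# BVA -> size of the input object [min & max boundaries]
bva_cases = {
    "abc":    False,  # min - 1  -> reject
    "abcd":   True,   # min      -> accept
    "abcde":  True,   # min + 1  -> accept
    "a" * 15: True,   # max - 1  -> accept
    "a" * 16: True,   # max      -> accept
    "a" * 17: False,  # max + 1  -> reject
}

# ECP -> type of the data [valid & invalid classes]
ecp_cases = {
    "abcdef": True,   # valid class: alphabets
    "123456": False,  # invalid class: digits
    "abc@#$": False,  # invalid class: special characters
}

for value, expected in {**bva_cases, **ecp_cases}.items():
    assert accepts_username(value) is expected, value
print("All BVA/ECP cases passed")
```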
2) Non-Functional Testing: During this testing we check all non-functional issues, i.e. the characteristics of how the functionality behaves and the impact of the system requirements.
1) Recovery Testing: It is also known as reliability testing. During this testing we validate whether the application recovers from an abnormal situation back to the normal situation.
2) Compatibility Testing: This is also known as portability testing. During this testing we validate whether the application runs on the customer-expected platforms or not.
Basically we are mostly involved in browser compatibility testing.
Customer-expected platforms means the operating system, compiler, browser, and other system software.
There are two types:
In general we map the Internet Explorer browser against the other browsers existing in the market.
E.g. Safari, Chrome, Mozilla Firefox, Opera.
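One way to automate such a check is to repeat the same functional test across browsers; this sketch assumes the third-party selenium package (and its browser drivers) is available, and the URL and assertion are placeholders.

```python
# Browser-compatibility sketch (assumes the 'selenium' package and
# matching browser drivers are installed).
from selenium import webdriver

def title_check(driver):
    """One functional check, repeated on every browser."""
    driver.get("https://example.com")
    assert "Example" in driver.title

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        title_check(driver)
        print(make_driver.__name__, "passed")
    finally:
        driver.quit()
```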
Inter-System (End-to-End) Testing: E.g. one bank's ATM card (an SBI ATM card) is accepted by another bank's ATM to withdraw money; here we validate the functionality with respect to the interconnection with other software.
Note: Generally End-to-End testing is performed after the completion of the functionality testing of each and every module.
5) Installation Testing: During this testing we validate the installation of our application, along with its co-existing software, in the customer-expected configuration (environment), to validate the functionality with respect to the customer requirements.
We concentrate on factors such as:
1. Set-up program execution before installation (whether all set-up files are available or not).
2. Easy interface during installation (sensible default radio buttons should be offered while installing).
3. Occupied disk space after installation (see the sketch below this list).
4. Verify un-installation.
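Factor 3 can be checked mechanically; a minimal sketch using only Python's standard library (the path is a placeholder and the installer step is elided):

```python
# Sketch for factor 3: occupied disk space after installation.
import shutil

free_before = shutil.disk_usage("/").free
# ... run the product installer here ...
free_after = shutil.disk_usage("/").free

print(f"installation consumed {(free_before - free_after) / 2**20:.1f} MiB")
```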
8) Globalization Testing: This testing ensures that our application supports multiple languages.
a) Localization Testing: ensures that our application supports the local language.
b) Nationalization Testing: ensures that our application supports national languages.
1. Load Testing: The execution of our application under the customer-expected configuration and the customer-expected load to estimate performance is called load testing.
2. Stress Testing (stress means the maximum load): The execution of our application under the customer-expected configuration, at and beyond the peak load, to estimate performance is called stress testing. It establishes the maximum load the application can handle.
E.g. a maximum of 700 users can use the application at a time.
3. Storage Testing (it finds the maximum data that can be stored): The execution of our application with a huge amount of resources to estimate the storage limitations of our application is called storage testing.
E.g. the volume in bytes; a mobile memory card handles only 2 GB of data as per the card limit.
4. Data Volume Testing: The execution of our application under the customer-expected configuration to estimate the peak limit of data is called data volume testing.
E.g. the number of records that can be stored in your database.
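A rough sketch of the load idea behind these techniques; real projects use dedicated tools such as JMeter or LoadRunner, and the user count and simulated request here are assumptions.

```python
# Load-testing sketch: fire N concurrent "requests" and measure
# response times. The sleep stands in for a real HTTP call.
import time
from concurrent.futures import ThreadPoolExecutor

def one_request():
    start = time.perf_counter()
    time.sleep(0.05)               # stand-in for a real request
    return time.perf_counter() - start

USERS = 100                        # customer-expected load (assumed)
with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(lambda _: one_request(), range(USERS)))

print(f"avg response: {sum(timings)/len(timings):.3f}s, "
      f"worst: {max(timings):.3f}s")
```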
Security Testing: It is an advanced-level testing technique and complex to conduct. During this technique we validate the privacy of user operations as per the customer requirements.
1. Authorization: Done by the test engineer; during this test we validate whether the user is a valid user or not.
E.g. an employee of Wipro.
2. Access Control (Authentication): Done by the test engineer; during this test we validate whether the user has permission for a specific operation or not.
E.g. an employee of Wipro who still needs permission for a specific operation.
3. Encryption & Decryption: Data conversion between client and server. It is done by the developers.
[Diagram: data is encrypted at the client, sent across the network, and decrypted at the server.]
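A small sketch of the client/server conversion, assuming the third-party cryptography package; the key exchange and message are hypothetical.

```python
# Encryption/decryption sketch (assumes the 'cryptography' package).
# Data is encrypted before leaving the client and decrypted on the
# server, so it never travels in plain text.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret (assumed pre-exchanged)
cipher = Fernet(key)

token = cipher.encrypt(b"account=12345;pin=9876")    # client side
print("on the wire:", token[:20], b"...")

plain = cipher.decrypt(token)                        # server side
assert plain == b"account=12345;pin=9876"
```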
After the completion of system and functional testing, our organization concentrates on user acceptance testing to collect feedback from the customer.
a) Alpha Testing: This testing is applicable to a software application. Alpha testing is done in a controlled environment in the presence of developers and testers.
b) Beta Testing: This testing is applicable to a software product. Beta testing is done at the customer side, in an uncontrolled environment, in the absence of developers and testers.
Basically the user manual is a main part of user acceptance testing, and we are involved in it to some extent.
After the completion of user acceptance testing and its review, our organization concentrates on the release, forming a team known as the build release team. This release team consists of some hardware engineers, some developers, and some test engineers.
Basically this release team applies port testing to the critical parts of the application to validate the functionality before releasing the build. In general, release testing is performed within 2 days, in a client-side-like environment (configuration).
After the completion of port testing, the release team provides training sessions to the customer-side people and then comes back to our organization. During maintenance, the customer-side people request changes or modifications from our organization; such a request is known as a change request.
# Impact analysis.
# Perform change.
A senior team called the Change Control Board analyzes the impact of the change and handles change requests during test execution. The Change Control Board comes under configuration management; configuration management means handling change requests during test execution.
Testing Terminology:
1. Monkey/Chimpanzee/Speed Testing:
Basically, monkey testing means executing the maximum number of test cases in the least amount of time. During monkey testing we concentrate on the high-priority test cases.
2. Adhoc Testing:
It is done by senior testers. The test engineer does not have sufficient test data, but with the help of past experience has to conduct the test.
In other words, we don't have test data but we do have domain knowledge.
3. Sanity Testing:
It is the initial stage of Black Box Testing. The development team estimates the stability of the build to check whether it is ready for testing; we as test engineers are also involved, checking the core functionality of the application. This is called sanity testing.
We conduct sanity testing after each build upgrade, i.e. after receiving a new build. In sanity testing we perform:
Smoke Testing:
Smoke testing is an extra shake-up of sanity testing. We try to identify and troubleshoot any environmental issue or runtime error found during execution.
We try to identify the invalid object, then identify the package that the object belongs to, and then request the database administration people to recompile that package. That is all about smoke testing.
A package is a combination (clubbing) of similar types of objects, and it is created by the database administrator.
[Diagram: testing levels — Unit, Integration, UAT.]
Retesting:
Retesting the same application or build with multiple test data to validate the functionality with respect to the customer requirements is called retesting.
In other words, it checks whether a failed functionality is now working fine; retesting is applied only to failed test cases.
E.g. to test a multiplication feature, the test engineer chooses different combinations of data in terms of positive, negative, integer, float, min, max and zero, i.e. the same build with multiple test data.
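The multiplication example as a data-driven sketch (the multiply function stands in for the build under retest):

```python
# Retesting sketch: the same build exercised with multiple test data
# (+ve, -ve, float, zero, large values).

def multiply(a, b):          # functionality under retest (assumed)
    return a * b

test_data = [
    (3, 4, 12),              # positive integers
    (-3, 4, -12),            # negative value
    (2.5, 4, 10.0),          # float
    (0, 99, 0),              # zero
    (10**9, 2, 2 * 10**9),   # large (max-side) value
]

for a, b, expected in test_data:
    assert multiply(a, b) == expected, (a, b)
print("Retesting passed for all data combinations")
```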
Regression Testing:
Regression comes from the word 'regress', i.e. to slip back to a previous state. Re-execution of tests on a modified build, to ensure that the bug fix works and to check for the occurrence of side effects, is known as regression testing.
In other words, it checks whether the unchanged functionality is still working fine; regression testing is applied only to passed test cases.
[Diagram: regression cycle — the test engineer reports the defect, the build is modified, and regression testing is applied to the modified build.]
# Scrum Meeting: The scrum meeting is run by the scrum master; each day starts with this meeting, and in it we discuss: What did you do yesterday? What are you going to do today? What are the roadblocks (issues)? The people involved in the scrum meeting are the BA, the scrum master, the development team and the testing team, i.e. all project members. The time span of this meeting is 15 to 20 minutes, but it can be extended depending on the issues.
Advantages of Agile:
# Less Cost.
# Fast Delivery.
Disadvantages of Agile: When a project is complex/big and has a lot of independent modules, it becomes difficult to implement.
Sprint Test Plan of Agile:
28 Dec = Estimation.
Requirements Analysis.
Test case design.
Test case review.
[Diagram: STLC stages and owners — Test Initiation (PM), Test Plan (TL), Test Design (TE), Test Execution with Defect Reporting (TE), Test Closure (TL).]
In many organizations the testing process starts with the test initiation stage. During this stage the project manager concentrates on the scope of the project, the requirements of the project and the risks involved in the project.
After that, in the test plan, the test lead mainly concentrates on job allocation in terms of:
# What to test?
# How to test?
# When to test?
# Who will test?
During test design, we test engineers prepare the test cases from the SRS/functional specification to validate the functionality with respect to the customer requirements.
After the completion of test design, we execute the test cases to validate the functionality. During this execution, if we find any mismatch or defect, we log the defect and send it to the development team to fix.
After fixing the bug, the developers send the modified build, and on that modified build we apply regression testing to check whether the defect is resolved and whether any side effect has occurred due to the bug fix. This is a cyclic process, and it continues until all defects are resolved.
After the completion of test execution we prepare the test report and send it to the test lead. During test closure, the test lead checks whether the whole testing process went correctly or not. That is all about the STLC.
[Diagram: defect life cycle — New → Open → Fix → Close, with Reject, Deferred and Re-Open branches.]
During system and functional testing we find a new defect; its status is 'new', and we log the defect in the defect tracking tool.
After discussion with a senior tester we set the defect status to 'open' and assign it to a developer.
After receiving the defect from the testing team, the developer analyzes and verifies the defect. If the developer does not accept it, he sets the defect status to 'rejected'.
If the developer decides to fix the defect in a later version, he sets the defect status to 'deferred'.
If the developer accepts the defect and fixes it, he sends the modified build to the testing team.
Then we test engineers verify whether the defect has been fixed or not. If the defect has been fixed, we set the defect status to 'closed'.
If the defect re-occurs, or a side effect appears due to the bug fix, we set the defect status to 'reopen' and send it back to the developer.
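The statuses above can be summarized as a small state machine; this sketch uses the status names from the notes, and the allowed transitions are the usual ones (assumed).

```python
# Defect life-cycle sketch: statuses and their allowed transitions.

TRANSITIONS = {
    "new":      {"open"},
    "open":     {"fixed", "rejected", "deferred"},
    "fixed":    {"closed", "reopen"},
    "reopen":   {"fixed"},
    "rejected": set(),
    "deferred": {"open"},     # picked up again in a later version
    "closed":   set(),
}

def move(status, new_status):
    """Validate one step in the defect life cycle."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal move: {status} -> {new_status}")
    return new_status

s = "new"
for step in ("open", "fixed", "reopen", "fixed", "closed"):
    s = move(s, step)
print("final status:", s)     # closed
```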
[Diagram: hierarchy of test documents and owners — Test Methodology, Test Plan (Team Lead), Test Case, Defect Report, Test Summary or Software Release Note (SRN).]
Test Policy
It is a company-level document, developed by the quality control people (mostly management). This document defines the "testing objectives" to be achieved.
Name of the company
Address of the company
Location of the company
Signature of the C.E.O.
FP – Functional point.
Example: No of screens / No of forms / No. of reports / No. of inputs / No. of outputs / No. of queries.
Basically, this document is shown to client-side people in order to gain new projects.
Test Strategy
It is also a company-level document, developed by the quality analyst people (project manager level). This strategy document defines the testing approach to be followed by the testing team.
1) Scope and objective: The definition & purpose of testing in our organization.
2) Business issues: Budget control for testing; the cost of testing is estimated according to the cost of the project (how much we are going to spend on testing).
3) Testing Approach: The mapping between development stages and testing issues (factors); the TRM is prepared here.
Note: The TRM is prepared by the PM and sent to the customer, and the budget is granted accordingly.
4) Test deliverables: The required testing tasks to be completed before testing starts, with respect to the hardware configuration and resource documents. Entry and exit criteria should be defined properly.
5) Roles and responsibilities: Names of jobs in testing team and responsibilities of every job during testing.
6) Communication and status reporting: Required negotiation between every two consecutive jobs in testing team.
7) Test automation and testing tool: Purpose of automation and availability of testing tools in your organization.
8) Defect reporting and tracking: The required negotiation between the testing team and the development team when testers find mismatches, because these may affect production or releases.
9) Testing measurements and metrics: This is the unit to measure testing process. QAM, TMM, PCM
10) Risks and Mitigations: If any problem occurs during testing then the solutions to overcome.
11) Change and configuration management: The ability to handle change requests during execution, i.e. how to handle sudden changes in the customer requirements during testing.
12) Training plan: Required number of sessions to understand customer requirements by testing team.
TEST FACTORS OR TESTING ISSUES
To define quality software, software engineering people use 15 factors/issues.
2) Access control: Whether a valid user has permission to use a specific service or not.
14) Maintainable: Whether our application build is serviceable for a long time at the customer site or not.
15) Methodology: Whether our testing team is following standards or not (during testing).
1) Authorization:
2) Access control:
3) Audit trail:
4) Continuity of processing:
12) Performance: Load testing, stress testing, storage testing, data volume testing.
Compliance testing: Checking whether the testing team follows the standards during testing is called compliance testing. Compliance means adherence to the plan.
Test Methodology
It is a project-level document, developed by the Project Manager/QA category people. In this document the QA/PM defines the required testing approach/issues/factors to be followed for the corresponding project, selecting the test issues or factors applicable to the current project requirements.
Note: The test strategy is overall, but the test methodology is applied to a selected area. So the test methodology [project level] is more important than the test strategy [company level].
To develop the test methodology, the project manager/quality analyst follows a set approach before starting every project's testing.
[Table fragment: project type vs. applicable development stages in the TRM, e.g. the Outsourcing and Maintenance rows.]
Note: Depending on the project type, the project manager deletes some of the columns from the TRM (Test Responsibility Matrix) for this project's testing.
Note: Depending on the requirements in the project, the PM deletes unwanted factors (rows) from the TRM for this project's testing.
Note: Depending on the expected future enhancements, the PM adds some of the previously deleted factors back to the TRM for this project's testing.
Note: Depending on the analyzed risks, the PM deletes some of the factors from the selected TRM for this project's testing, because they might not be supported by the organization's environment.
After the completion of test initiation and testing process finalization, the test lead category people concentrate on test plan document preparation in terms of "what to test?", "how to test?", "when to test?" and "who will test?".
1. Team Formation
Development Strategy (Resource Allocation)
Testing team formation: In general, the test planning process starts with testing team formation. In this stage the test lead depends on the factors below.
Case study:
Identify Tactical Risks: After the formation of the testing team, the test lead analyzes the risks at the selected-team level. This risk analysis is also known as Root Cause Analysis.
Ex: Risk 1: lack of knowledge of the testing team on that domain.
Prepare test plan: After the completion of testing team formation and risk analysis, the test lead concentrates on test plan document preparation in the IEEE format (Institute of Electrical and Electronics Engineers).
IEEE Format:
Features to be tested: The new module names for test design (what to test).
Features not to be tested: Which ones and why not (e.g. test cases copied from the server).
Approach: The list of testing techniques selected by the project manager to be applied to the above modules (the finalized TRM).
Feature pass/fail criteria: When a module passes and when a module fails.
Test environment: The required hardware and software to conduct testing on the above modules. Ex: WinRunner.
Test deliverables: Ex: test cases, test procedures, test scripts, test logs, defect reports for every module.
Staff and training needs: The names of the test engineers selected for this project's testing (work allocation).
After the completion of the first copy of the test plan document, the test lead conducts a review of that document for completeness and correctness. In this review meeting the test lead concentrates on coverage analysis.
Coverage analysis:
After the finalization of the test plan, the test lead provides some training sessions to the selected testing team on the project requirements.
After the finalization of the test plan and the completion of the training sessions, the test engineers concentrate on test case development for their responsible modules. There are three methods for preparing test cases:
Business logic based test case design (depending on the SRS or FS, for an application)
Input domain based test case design (depending on the design documents, for a product)
User interface based test case design (depending on usability conventions)
In general, test engineers prepare most test cases depending on the use cases in the SRS. Every use case describes a functionality in terms of inputs, process and outputs, and test cases are prepared depending on that use case. Every use case is also known as a functional specification. Every test case describes a testable condition to be applied to the build.
Step 2: Select a use case and its dependencies from the above collected list of use cases.
During test design, test engineers prepare test cases in the IEEE format. Through this format, test engineers document every test case.
Test suite ID: The corresponding batch ID; this case is a member of that batch.
Test environment: The required hardware and software, including the testing tool, to execute this test case.
Test procedure: The step-by-step procedure from the base state to the end state.
Test case pass/fail criteria: When this case passes and when this case fails.
NOTE: In general, test engineers do not maintain the complete format for every test case, but they try to maintain the test procedure as mandatory for every test case.
Input domain based test case design
In general, test engineers prepare test cases depending on the use cases or functional specifications in the SRS. Sometimes they also depend on the design documents, because use cases do not provide complete information about the size and type of the input objects. For this reason, test engineers study the data models in the design documents.
Ex: ER diagrams.
Step 2: Study every input attribute in terms of size and type, with constraints.
Step 3: Prepare BVA and ECP for every input attribute in the format below.
This table is called the DATA MATRIX. It provides information about every input object.
Critical inputs are involved in internal manipulations; non-critical inputs are used for printing purposes.
NOTE: If a test case covers an operation, test engineers prepare a step-by-step procedure from the base state to the end state. If a test case covers an object, test engineers prepare a data matrix.
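A sketch of such a data matrix as a simple structure in Python; the objects, equivalence classes and size limits are assumed for illustration.

```python
# DATA MATRIX sketch: one BVA/ECP row per input attribute taken from
# a (hypothetical) design-document data model.

data_matrix = [
    {"object": "amount",
     "ecp_valid": ["integer"],
     "ecp_invalid": ["alphabet", "special char"],
     "bva_min": 1, "bva_max": 100000},
    {"object": "pin",
     "ecp_valid": ["4-digit number"],
     "ecp_invalid": ["letters"],
     "bva_min": 4, "bva_max": 4},   # fixed size: min == max
]

for row in data_matrix:
    print(f"{row['object']:>8}: valid={row['ecp_valid']}, "
          f"invalid={row['ecp_invalid']}, "
          f"size=[{row['bva_min']}..{row['bva_max']}]")
```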
To conduct usability testing, test engineers prepare test cases depending on global user interface conventions, our organization's rules and the interests of the customer-side people.
Test case 2: Graphics check (alignment, font, style, color and the other "Microsoft six rules").
NOTE: Test case 1 to test case 6 indicate user interface testing, and test case 7 indicates manual-support testing.
Before receiving the build from the development team to start test execution, the test lead analyzes the completeness and correctness of the test cases prepared by the test engineers through a review meeting.
1) Self Review.
2) Peer Review (Along with colleague).
3) Internal Review (PM/TL/BA/TE)
4) External Review (Customer)
At the end of this review, the test lead prepares the Requirements Traceability Matrix (RTM). This matrix defines the mapping between the customer requirements and the prepared test cases. It is also known as the Requirements Validation Matrix (RVM).
The traceability matrix defines the mapping between the business requirements and the prepared test cases that validate the customer requirements. This matrix is prepared by the TL.
a) Forward Traceability Matrix: The mapping between prepared test cases and business requirements is called the Forward Traceability Matrix.
b) Backward Traceability Matrix: The mapping between defects and prepared test cases.
Requirement Validation Matrix: Sometimes a defect is valid and the written test case is also correct, but the defect still occurs; in that case we have to add an extra test case for that defect. This is called the Requirement Validation Matrix.
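A minimal sketch of a traceability matrix as a mapping, with hypothetical requirement and test case IDs; it shows the forward direction, the backward direction, and an uncovered-requirement check.

```python
# Traceability-matrix sketch (hypothetical IDs).

rtm = {
    "BR-001": ["TC-001", "TC-002"],   # forward traceability
    "BR-002": ["TC-003"],
    "BR-003": [],                     # gap: no test case yet
}

uncovered = [req for req, cases in rtm.items() if not cases]
print("requirements without coverage:", uncovered)   # ['BR-003']

# backward traceability: which requirement does a test case validate?
backward = {tc: req for req, cases in rtm.items() for tc in cases}
print("TC-003 validates:", backward["TC-003"])        # BR-002
```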
During level-1 and level-2 test execution, test engineers report mismatches to the development team in the IEEE format.
Format:
Build Version ID: The version number of the build in which the test engineer found this defect.
Feature: The corresponding module name in which the test engineer found this defect.
Test Case Name: The name of the test condition during whose execution the test engineer found this defect.
Reproducible: Yes means the defect appears every time in test execution; No means the defect appears rarely in test execution.
Assigned To: The responsible person on the development side who receives this defect.
High – Not able to continue test execution without resolving this defect.
Low – Able to continue the remaining testing, but resolution is optional (may/may not be fixed).
Resolution Type:
NOTE: In the above format, development people may try to change the priority of a defect with respect to its importance to the customer.
Defect age: The time gap between the defect reported date and the defect resolved date is called the defect age.
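For example, the defect age is simple date arithmetic (the dates here are hypothetical):

```python
# Defect-age sketch: gap between reported and resolved dates.
from datetime import date

reported = date(2024, 3, 1)
resolved = date(2024, 3, 9)
defect_age = (resolved - reported).days
print(f"defect age: {defect_age} days")   # 8 days
```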
Defect Resolution Type: During test execution, test engineers report mismatches to the development team as defects. After receiving the defect reports from the testing team, the development people conduct a bug-fixing review and send a resolution-type report back to the corresponding testing team. There are 12 types of resolutions reported to the testing team.
Enhancement: rejected because this defect relates to a future requirement of the customer.
Hardware limitation: rejected because this defect relates to the limitations of hardware devices.
Software limitation: rejected because this defect relates to the limitations of software technologies.
Functions as designed: rejected because the coding is correct with respect to the design logic.
Need more information: neither accepted nor rejected, but the developers require extra information to understand the defect.
Not reproducible: neither accepted nor rejected, but the developers require the correct procedure to reproduce the defect.
No plan to fix it: neither accepted nor rejected, but the developers require extra time to fix it.
Types of Bugs: During test execution, either manual or automated, test engineers find the following types of bugs.
User Interface Bugs: (low severity)
Ex1: Dependent outputs are wrong (application show-stopper) (high priority).
Ex2: The final output is wrong (module show-stopper) (low priority).
Ex1: Not able to establish a connection to a hardware device (high priority).
Ex: Wrong logo, missing logo, missing copyright window, wrong version number, software title mistake, missing team member names, etc.