WINSEM2024-25 BCSE301L TH VL2024250502249 2025-03-04 Reference-Material-I
VALIDATION
AND
VERIFICATION
Presented By,
Dr.Baiju B V
Assistant Professor
School of Computer Science and
Engineering, VIT Vellore
Strategic Approach to Software Testing
• Testing is a set of activities that can be planned in advance and
conducted systematically.
• A strategy for software testing must accommodate
– low-level tests that are necessary to verify that a small source
code segment has been correctly implemented
– high-level tests that validate major system functions against
customer requirements.
• It is a set of guidelines that an internal QA department or an
external QA team must adhere to in order to deliver the standard of
quality you have established.
1. Verification and Validation
• Verification refers to the set of tasks that ensure that software
correctly implements a specific function.
• Validation refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements.
• Moving inward along the spiral, you come to design and finally to coding.
• To develop software, start with high-level ideas and gradually refine them,
moving step by step toward detailed implementation.
• A strategy for software testing may also be viewed in the context of the
spiral.
• Unit testing starts at the core of development, focusing on testing
individual components like functions, classes, or modules to ensure they
work correctly in the source code.
• Testing moves outward in stages, reaching integration testing, where
the focus is on design and software architecture.
• Validation testing checks if the final software meets the initial
requirements. It ensures that the software does what it was designed to
do.
• Finally, in system testing, the entire software and its components are
tested together to ensure they work as a complete system.
• To test software, you start small and gradually expand testing in a spiral
pattern, covering more features with each step.
3. Strategic issues
• Tom Gilb says that a software testing strategy will succeed when software
testers
(i) Specify product requirements in a quantifiable manner long
before testing commences.
– A good testing strategy also assesses other quality characteristics such
as portability, maintainability, and usability .
– These should be specified in a way that is measurable so that testing
results are unambiguous.
– Example: The system should process 1,000 transactions per second with a
response time of under 2 seconds.
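A requirement stated this way can be checked mechanically. The sketch below is a minimal illustration (the helper name and the stand-in operation are assumptions, not from the slides): it times an operation and compares the result against a quantified limit.

```python
import time

def check_response_time(operation, limit_seconds=2.0):
    """Run an operation and report whether it met a quantified
    response-time requirement (illustrative helper)."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed < limit_seconds

# A stand-in "transaction" that should finish well under the 2-second limit.
elapsed, ok = check_response_time(lambda: sum(range(1000)))
print(ok)  # -> True
```

Because the limit is numeric, the test result is unambiguous: the operation either met the 2-second bound or it did not.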
(ii) State testing objectives explicitly
– Specific objectives of testing should be stated in measurable terms.
– For example,
• Test effectiveness (Identify at least 90% of critical defects before release)
• Test coverage
• Mean-time-to-failure (System should operate for at least 1000 hours
before encountering a failure)
• Cost to find and fix defects
(iii) Understand the users of the software and develop a profile for
each user category.
– Use cases that describe the interaction scenario for each class of user can
reduce overall testing effort by focusing testing on actual use of the
product
(iv) Develop a testing plan that emphasizes "rapid cycle testing."
– Gilb recommends that a software team "learn to test in rapid cycles (2 percent
of project effort) of customer-useful, at least field 'trialable,' increments of
functionality and/or quality improvement."
(v) Build "robust" software that is designed to test itself.
– Software should be designed in a manner that uses antibugging
techniques.
(vi) Use effective technical reviews as a filter prior to testing.
– Technical reviews can be as effective as testing in uncovering errors.
(vii) Conduct technical reviews to assess the test strategy and test
cases themselves.
– Technical reviews can uncover inconsistencies, omissions, and outright
errors in the testing approach. This saves time and also improves product
quality.
Testing Fundamentals
• The goal of testing is to find errors, and a good test is one that has a high
probability of finding an error.
(i) A good test has a high probability of finding an error.
– To achieve this goal, the tester must understand the software and
imagine possible ways it might fail.
(ii) A good test is not redundant.
– Testing time and resources are limited.
– There is no point in conducting a test that has the same purpose as
another test.
– Every test should have a different purpose.
(iii) A good test should be "best of breed".
– When time and resources are limited, only the tests most likely to
reveal major errors should be run.
Redundant Testing (Inefficient)
Test Case 1: Verify that a user can add an item to the shopping cart.
Test Case 2: Verify that clicking the "Add to Cart" button adds the item to
the cart.
Most of us use an email account; to use it, you need to enter the email
address and its associated password.
If both email and password match, the user is directed to the email
account's homepage; otherwise, the login page is shown again with an
error message such as "Incorrect Email" or "Incorrect Password."
4. State Transition Testing
• This technique is used when the software behavior depends on past
values of inputs.
• The software is considered to have a finite number of states.
• The Application Under Test (AUT) transitions from one state to another
in response to user actions.
• State transition testing helps understand the system's behavior and
covers all the conditions.
• The four main components of a state transition diagram are as follows:
• States
• Transition
• Events
• Actions
Consider a bank application that allows users to log in with valid
credentials. But, if the user doesn’t remember the credentials, the
application allows them to retry with up to three attempts. If they
provide valid credentials within those three attempts, it will lead to a
successful login. In case of three unsuccessful attempts, the
application will have to block the account.
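The bank-login rule above maps directly onto states, events, transitions, and actions. A minimal sketch (the class and method names are illustrative assumptions, not from the slides):

```python
class LoginStateMachine:
    """States: LOGIN_PAGE -> LOGGED_IN, or BLOCKED after 3 failures."""
    MAX_ATTEMPTS = 3

    def __init__(self, valid_password="secret"):
        self.valid_password = valid_password
        self.attempts = 0
        self.state = "LOGIN_PAGE"           # initial state

    def try_login(self, password):
        if self.state in ("LOGGED_IN", "BLOCKED"):
            return self.state               # terminal states: no further transition
        self.attempts += 1
        if password == self.valid_password:
            self.state = "LOGGED_IN"        # event: valid credentials
        elif self.attempts >= self.MAX_ATTEMPTS:
            self.state = "BLOCKED"          # action: block the account
        return self.state

m = LoginStateMachine()
m.try_login("wrong")          # attempt 1: still LOGIN_PAGE
m.try_login("wrong")          # attempt 2: still LOGIN_PAGE
print(m.try_login("wrong"))   # attempt 3 -> BLOCKED
```

State transition test cases then cover each transition: a valid login on the first try, a valid login within three attempts, and three failures leading to the blocked state.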
Example: Let's take a simple example of an application with just a login
functionality.
[Diagram: Functional Testing, Regression Testing, and Non-functional Testing]
In the second build, the previous defects are fixed. Now the test engineer
understands that the bug fixing in Module D has impacted some features
in Module A and Module C. Hence, the test engineer first tests the Module D
where the bug has been fixed and then checks the impact areas in Module A
and Module C.
3) Full Regression Testing [FRT]
• Full Regression Testing (FRT)
checks all the features of an
application, both new and old.
• It is usually done in later releases
and before launching.
• FRT ensures that modified
features work correctly without
breaking existing ones.
Test Plan
• A Test Plan is a comprehensive document outlining the policies, goals,
timeline, equipment, technology, estimates, due dates, and
manpower that will be used to perform testing for the software products.
• A test plan is a document that consists of all future testing-related
activities.
• In any company whenever a new project is taken up before the tester is
involved in the testing the test manager of the team would prepare a test
Plan.
• The test plan serves as the blueprint that changes according to the
progressions in the project and stays current at all times.
• It serves as a base for conducting testing activities and coordinating
activities among a QA team.
Types of Test Plan in Software Testing
• There are three types of test plans
• This includes the overall plan, roadmap, and software testing life
cycle outline.
| Test Id | Test Condition | Test Steps | Test Input | Expected Result | Actual Result | Status |
|---|---|---|---|---|---|---|
| 1. | Check that, with the correct username and password, the user is able to log in. | 1. Enter the username 2. Enter the password 3. Click the login button | username: abcdefghijklm, password: | Login successful, welcome | Login successful | Pass |
User Acceptance Test Case: User feedback is taken on whether the login
page is loading properly or not.

| Test Id | Test Condition | Test Steps | Test Input | Expected Result | Actual Result | Status | Remarks |
|---|---|---|---|---|---|---|---|
| 1. | Check if the loading page is loading efficiently for the client. | 1. Click on the login button. | None | Welcome to the login page. | Login page did not load. | Fail | The login page is not loaded due to a browser compatibility issue on the user's side. |
Test Cases for Module: Login
VTOP Login Page. Pre-conditions: User must have a registered VTOP account.
States Description
Pass Test case is executed, and the actual result matches the expected result
Fail Test case is executed, and the actual result does not match the expected result
Inconclusive Test case is executed, and there is no clear result
Block Test case cannot be executed because one of the test case
preconditions is not met.
Deferred Test case is not executed yet and will run in the future.
In progress Test case is currently running.
Not run Test case has not been executed yet.
3. Activities for Test Execution
• The following are the 5 main activities that should be carried out during
the test execution.
(i) Defect Finding and Reporting
• Defect finding is the process of identifying bugs or errors while testing
the code.
• If a test case fails or an error appears, it is recorded and reported to the
development team.
• End users may also find and report errors during user acceptance
testing.
• The respective team will review the recorded errors and work on fixing
them.
(ii) Defect Mapping
• After an error is detected and reported, the development team fixes it as
needed.
• Then, the testing team runs test cases again on the updated code to ensure
it works correctly.
(iii) Re-Testing
• Re-testing ensures a smooth release by testing modules or the entire
product again.
• If a new feature is added after release, all modules are re-tested to prevent
new defects.
(iv) Regression Testing
• Regression Testing checks if recent code changes work correctly.
• It ensures that new modules or functions do not affect the normal operation
of the application or product.
(v) System Integration Testing:
• System Integration Testing checks whether all components or modules of a
system work together as a whole.
• Instead of testing each part separately, it ensures everything functions
correctly in a single test environment.
4. Test Execution Process
• The test execution technique has three phases that help process the test
results and confirm their accuracy.
1. Creation of Test Cases
• The first phase is to create suitable test cases for each module or
function.
• Tester with good domain knowledge is essential to create suitable test
cases.
• Test cases should be simple and created on time to avoid delays in
product release
• Test cases should not duplicate one another.
• They should cover all the possible scenarios that can arise in the application.
2. Test Cases Execution
• After test cases have been created, execution of test cases will take place.
• Quality Analyst team will do automated or manual testing depending
upon the test case scenario.
• It is always preferable to do both automated as well as manual testing to
have 100% assurance of correctness.
• The selection of testing tools is also important to execute the test cases.
3. Validating Test Results
• Execute the test cases and record the results in a separate file or report.
• Compare the actual results with the expected results.
• Note down the time taken to complete each test case.
• If any test case fails or does not meet the expected outcome, report it to
the development team for validation.
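The three steps above (execute, compare actual with expected, record the time) can be sketched as a small harness; the structure of a test case here is an assumption for illustration, not from the slides.

```python
import time

def run_and_validate(test_cases):
    """Execute test cases, compare actual vs. expected results,
    and record the time taken for each.
    Each case is (test_id, function, input, expected)."""
    report = []
    for test_id, func, arg, expected in test_cases:
        start = time.perf_counter()
        actual = func(arg)
        elapsed = time.perf_counter() - start
        report.append({"id": test_id,
                       "actual": actual,
                       "expected": expected,
                       "status": "Pass" if actual == expected else "Fail",
                       "seconds": elapsed})
    return report

# The second case's expected value is deliberately wrong, so it is
# reported as Fail for the development team to review.
cases = [("TC_001", abs, -5, 5), ("TC_002", abs, 3, -3)]
for row in run_and_validate(cases):
    print(row["id"], row["status"])  # -> TC_001 Pass, then TC_002 Fail
```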
5. Test Execution Report
• The Test Execution Report is a document that contains all the information
about the test execution process. It is recorded and updated by the QA team
and contains various information:
– Who all are going to execute the test cases?
– Who is doing the unit testing, integration testing, system testing, etc.,
– Who is going to write test cases?
– The number of test cases executed successfully.
– The number of test cases failed during the testing.
– The number of test cases executed today.
– The number of test cases yet to be executed.
– What automation test tools are used for today's test execution?
– Which modules/functions are being tested today?
– Recording the issues while executing the test cases.
– What is today’s testing plan?
– What is tomorrow’s testing plan?
– Recording the pending plans.
– Overall success rate.
– Overall failure rate.
Test Review
• A Test Review is a formal process to check software and ensure that recent
changes work correctly.
• The reviewer verifies the correctness, accuracy, flow, and coverage of the
test case.
Test Case Repository
• A test case repository is a central place where all approved test cases are
stored and managed.
• It includes test cases that contain all the key possibilities of workflow
execution, thus ensuring all variations in the application and test
scenarios are covered.
Test Case Review Process
• This correction process continues until both the author and the reviewer
are satisfied.
• Once the review is successful, the reviewer sends it back to the test lead
for the final approval process.
• Once the test case is reviewed, the review comments are recorded in the
test case review template.
Test Execution Report
Review
The main objectives of Software Review
(i) Detecting Problems Early
– This early detection helps save time, effort, and resources down the
road.
(ii) Enhancing Quality
– They fine-tune the software to be reliable and high-quality, making
sure it works well and meets user needs.
(iii) Team Collaboration
– Software reviews bring team members together to share ideas, group
their expertise, and learn from each other.
– This teamwork leads to better outcomes and a stronger sense of
fellowship.
(iv) Following Standards
– Software reviews ensure that the software adheres to these standards,
making it consistent and aligned with best practices.
Types of Review in Software Testing
1. Software Peer Review
• Software peer review is a collaborative effort among professionals to
elevate the quality of their work.
a. Code Reviews
– This review process, like a team of skilled programmers checking code,
ensures it follows standards, works efficiently, and is free of errors.
b. Design Reviews
– Design reviews evaluate the software’s architecture, and design
choices.
– This guarantees efficient resource utilization, scalability, and
adherence to best practices.
c. Document Reviews
– Document reviews ensure technical documents, user guides,
manuals, test cases are well-written, clear, and user-friendly.
– This careful review makes the documentation more effective and helps
users understand and use the software easily.
2. Software Management Reviews
• The objective of this type of review is to evaluate the work status.
• These reviews help decide the next steps in the process.
a. Project Progress Review
– Project progress reviews monitor and evaluate the project's
advancement.
– These reviews provide valuable insights into project milestones,
potential delays, and the need for adjustments, enabling timely
decision-making and resource allocation.
b. Resource Allocation Review
– Resource allocation reviews examine the allocation of human
resources, tools, and budget.
– By ensuring efficient resource utilization, these reviews contribute to
streamlined project execution and cost-effectiveness.
3. Software Audit Reviews
• Software audit reviews are similar to regulatory audits in corporate
companies, ensuring compliance with industry standards and
regulations.
• These reviews encompass:
Regulatory Compliance Review:
– These ensure that the software aligns with specific industry
regulations, legal standards, and ethical practices.
Security Audit:
– Security audit reviews assess the software's vulnerability to breaches
and cyber threats.
– These reviews scrutinize the software's ability to withstand
cyberattacks, safeguard sensitive data, and protect user privacy.
Inspection and Auditing
1. INSPECTIONS
• Inspections are formal reviews where moderators check documents
thoroughly before a meeting.
• A meeting is then held to review the code and the design.
• Inspection meetings can be held both physically and virtually.
• The purpose of these meetings is to review the code and the design with
everyone and to report any bugs found.
• Software inspection is divided into two types:
1. Document Inspection
The documents produced for a given phase are inspected, further
focusing on their quality, correctness, and relevance.
2. Code inspection
The code, program source files, and test scenarios are inspected
and reviewed.
A. Participants and Roles
Participants Roles
Moderator A facilitator who organizes and reports on inspection.
Author A person (Programmer or designer) who produces the report.
Reader Presents the code at the inspection meeting, reading the
document line by line
Recorder/ A participant who is responsible for documenting the defects
Scribe found during the inspection process
Inspector The inspection team member responsible for identifying the
defects.
B. Software Inspection Process
1. Planning
• The planning phase starts with the selection of a group review team
(developers, testers, and analysts).
• A moderator plans the activities performed during the inspection and
verifies that the software entry criteria are met.
A software development company is reviewing a new mobile banking app.
The team assigns a moderator and selects reviewers, including a tester
and a security expert.
2. Overview Meeting
• The purpose is to provide background information about the software.
• A presentation is given to the inspector with some background
information needed to review the software product properly.
The development team explains how the login and transaction features
work in the mobile banking app, helping the reviewers understand the
system before inspection.
3. Preparation
• In the individual preparation phase, the inspector collects all the
materials needed for inspection.
• Reviewers use checklists and past defect records to guide their review.
A security expert analyzes the login mechanism and notices a potential
vulnerability in password encryption. A tester finds an issue where users
cannot reset their passwords correctly.
4. Meeting
• The moderator conducts the meeting to collect and review defects.
• The reader reads through the product line by line while the inspector
points out the flaws.
• All issues are raised, and suggestions may be recorded.
During the review meeting, the security expert raises concerns about weak
password hashing, and the tester reports a password reset bug. These
issues are documented for rework.
5. Rework
• Based on meeting notes, the author changes the work product.
The development team fixes the password hashing issue and corrects the
password reset bug in the mobile banking app.
6. Follow-up
• The moderator checks if all defects are resolved.
• A defect summary report is created to track fixes.
The moderator verifies that the password hashing now meets security
standards and the reset functionality works correctly. A summary report is
prepared for documentation.
SDLC without Inspection
SDLC with Inspection
2. AUDITING
• A software audit is a detailed review of a software product to check
its quality, progress, standards and regulations.
• It helps assess the product's overall health.
Types of Software Audit
(i) Audit to Verify Compliance:
• This audit checks if the process is within the given standards.
• If the testing has set standards, the audit makes sure they are followed.
(ii) Audit for process improvement:
• A software audit helps find any needed changes to improve the process.
• This involves checking each step, finding problems, and fixing them.
(iii) Audit for Root Cause Analysis
• Software audit helps find the root cause of a problem using various tests.
• It focuses on specific issues that need attention and fixing.
(iv) Internal audit:
• These audits are done within the organization
(v) External audit:
• These are done by independent contractors or external agencies
• There are various metrics that are monitored during an audit to
ensure that the expected outcome is being achieved.
1. Project Metrics
• Percentage of test case execution: Measures how many test cases have
been executed
Percent of Test Case Execution = (Number of Passed Tests + Number of
Failed Tests + Number of Blocked Tests) / Total Number of Test Cases × 100
If there are 100 test cases and 80 have been executed (passed, failed,
or blocked), the execution percentage is 80%.
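The metric can be computed directly from the counts; a one-line sketch of the formula above:

```python
def execution_percentage(passed, failed, blocked, total):
    """Percent of test-case execution: executed tests are those
    that passed, failed, or were blocked."""
    return (passed + failed + blocked) / total * 100

# 80 of 100 test cases executed, as in the example above.
print(execution_percentage(60, 15, 5, 100))  # -> 80.0
```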
2. Product Metrics
• Critical defects: Shows the number of serious issues in the product
• A higher cyclomatic complexity means the code has more decision points
(like loops and conditionals), making it harder to test and maintain.
Steps to follow when calculating cyclomatic complexity and designing test
cases:
| Path No. | Execution Flow |
|---|---|
| Path 1 | Start → a = 10 → a > b (Yes) → a = b → Print a, b, c → Stop |
| Path 2 | Start → a = 10 → a > b (No) → a > c (Yes) → b = c → Print a, b, c → Stop |
| Path 3 | Start → a = 10 → a > b (No) → a > c (No) → c = a → Print a, b, c → Stop |
Design the test cases
| Test Case ID | Test Condition | Test Input | Expected Output | Status |
|---|---|---|---|---|
| TC_001 | a > b is true, a > c is not checked | a = 10, b = 5, c = 8 | a = 5, b = 5, c = 8 | Pass/Fail |
| TC_002 | a > b is false, a > c is true | a = 10, b = 15, c = 8 | a = 10, b = 8, c = 8 | Pass/Fail |
| TC_003 | a > b is false, a > c is false | a = 10, b = 15, c = 20 | a = 10, b = 15, c = 10 | Pass/Fail |
• Cyclomatic Complexity = 3
• Three Independent Paths Identified
• Three Test Cases Created to Cover All Paths
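The decision structure behind the three paths can be transcribed and exercised with exactly those three test cases (the function below is a reconstruction from the path table, not code given in the slides):

```python
def adjust(a, b, c):
    """Nested decision structure matching the path table:
    Path 1: a > b; Path 2: a <= b and a > c; Path 3: neither."""
    if a > b:
        a = b          # Path 1
    elif a > c:
        b = c          # Path 2
    else:
        c = a          # Path 3
    return a, b, c

print(adjust(10, 5, 8))    # TC_001 -> (5, 5, 8)
print(adjust(10, 15, 8))   # TC_002 -> (10, 8, 8)
print(adjust(10, 15, 20))  # TC_003 -> (10, 15, 10)
```

Two decision points give V(G) = 2 + 1 = 3, matching the three independent paths covered by the three test cases.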
2. Calculate cyclomatic complexity for the given code

1. IF X = 300
2.   THEN IF Y > Z
3.     THEN X = Y
4.     ELSE X = Z
5.   END IF
6. END IF
7. PRINT X

Cyclomatic complexity (from the control flow graph):
• V(G) = E - N + 2 * P = 8 - 7 + 2 * 1 = 3
• V(G) = P + 1 = 2 + 1 = 3
• V(G) = R + 1 = 2 + 1 = 3

Identify the independent paths:
Path 1: 1 -> 2 -> 3 -> 5 -> 6 -> 7
Path 2: 1 -> 2 -> 4 -> 5 -> 6 -> 7
Path 3: 1 -> 6 -> 7
| Test Case ID | Scenario | Test Input | Expected Output |
|---|---|---|---|
| TC_001 | Path 1: X = 300, Y > Z | X = 300, Y = 500, Z = 200 | X = 500 |
| TC_002 | Path 2: X = 300, Y ≤ Z | X = 300, Y = 100, Z = 200 | X = 200 |
| TC_003 | Path 3: X ≠ 300 | X = 150, Y = 400, Z = 200 | X = 150 |
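The same fragment can be transcribed to confirm that each test case exercises its intended path (a sketch; the function name is an assumption):

```python
def classify(x, y, z):
    """The nested-IF fragment above: X is reassigned only when X = 300."""
    if x == 300:
        if y > z:
            x = y   # Path 1
        else:
            x = z   # Path 2
    return x        # Path 3 falls straight through

print(classify(300, 500, 200))  # TC_001 -> 500
print(classify(300, 100, 200))  # TC_002 -> 200
print(classify(150, 400, 200))  # TC_003 -> 150
```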
Calculate cyclomatic complexity for the given code

1.  begin int x, y, power;
2.  float z;
3.  input(x, y);
4.  if (y < 0)
5.      power = -y;
    else
6.      power = y;
7.  z = 1;
8.  while (power != 0)
    {
9.      z = z * x;
10.     power = power - 1;
    }
11. if (y < 0)
12.     z = 1/z;
13. output(z);
14. end

Method 1: Cyclomatic Complexity V(G) = E - N + 2 = 16 - 14 + 2 = 4
Method 2: Cyclomatic Complexity V(G) = R + 1 = 3 + 1 = 4
Method 3: Cyclomatic Complexity V(G) = P + 1 = 3 + 1 = 4
Identify the independent paths.
Path 1: 1-2-3-4-6-7-8-9-10-11-13-14
Path 2: 1-2-3-4-5-7-8-9-10-11-12-13-14
Path 3: 1-2-3-4-5-7-8-11-12-13-14
Path 4: 1-2-3-4-6-7-8-11-13-14
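Transcribing the routine makes it easy to pick inputs that drive each path; this sketch mirrors the numbered code above (it computes x raised to the power y, inverting the result for negative y):

```python
def power(x, y):
    """x raised to the power y, following the flow-graphed code."""
    p = -y if y < 0 else y   # nodes 4-6: take the magnitude of y
    z = 1.0                  # node 7
    while p != 0:            # node 8
        z = z * x            # node 9
        p = p - 1            # node 10
    if y < 0:                # node 11
        z = 1 / z            # node 12
    return z                 # node 13

print(power(2, 3))   # loop taken, y >= 0  -> 8.0
print(power(2, -2))  # loop taken, y < 0   -> 0.25
print(power(5, 0))   # loop skipped        -> 1.0
```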
class Main {
    public static void main(String[] args) {
        int sum;
        int max = 20;
        int pre = 0;
        int next = 1;
        System.out.println("The Fibonacci series is : " + pre);
        while (next <= max) {
            System.out.println(next);
            sum = pre + next;
            pre = next;
            next = sum;
        }
    }
}
Calculate cyclomatic complexity for the given code

{   int i, j, k;
    for (i = 0; i <= N; i++)
        p[i] = 1;
    for (i = 2; i <= N; i++)
    {
        k = p[i]; j = 1;
        while (a[p[j-1]] > a[k])
        {
            p[j] = p[j-1];
            j--;
        }
        p[j] = k;
    }
}

Decision points P = 3 (two for loops and one while loop), so V(G) = P + 1 = 3 + 1 = 4.
Object Oriented Testing
• Object-oriented testing is a type of software testing that focuses on verifying the
behaviour of individual objects or classes in an object-oriented system.
• The goal of object-oriented testing is to ensure that each object or class in the
system performs its functions correctly and interacts properly with other objects or
classes.
• Object-oriented programming emphasises the use of objects and classes to organise
and structure software, and object-oriented testing is built on these ideas.
• In object-oriented testing, the behaviour of an object or class is tested by creating
test cases that simulate different scenarios or inputs that the object or class might
encounter in the real world.
OBJECT-ORIENTED TESTING STRATEGIES
(i) Unit Testing in the OO Context
• In object-oriented software, units shift from individual modules to encapsulated
classes, which bundle data attributes and operations.
• Testing focuses on these encapsulated classes rather than isolated modules, altering
the approach due to the potential overlap of operations across various classes.
• In object-oriented software, class testing resembles unit testing in traditional
software.
• However, while conventional unit testing emphasizes module algorithms and data
flow, class testing in OO software centers on the encapsulated operations and state
behavior within each class.
(ii) Integration Testing in the OO Context
• There are two different strategies for integration testing of OO systems.
(a) Thread-based testing
– Integrates the set of classes required to respond to one input or event for the
system.
– Each thread is integrated and tested individually.
– Regression testing is applied to ensure that no side effects occur.
(b) Use-based testing
– Begins the construction of the system by testing those classes (called
independent classes) that use very few (if any) of server classes.
– After the independent classes are tested, the next layer of classes, called
dependent classes, that use the independent classes are tested.
– This sequence of testing layers of dependent classes continues until the entire
system is constructed.
Cluster testing
• A cluster of collaborating classes (determined by examining the CRC and object
relationship model) is exercised by designing test cases that attempt to uncover
errors in the collaborations.
(iii) Validation Testing in an OO Context
• At the validation or system level, the details of class connections disappear.
• Validation of OO software focuses on user-visible actions and user-recognizable
outputs from the system.
• To assist in the derivation of validation tests, the tester should draw upon use cases
that are part of the requirements model.
• The use case provides a scenario that has a high likelihood of uncovering
errors in user-interaction requirements.
• Conventional black-box testing methods can be used to drive validation tests.
• You may choose to derive test cases from the object behavior model and from an
event flow diagram created as part of OOA.
OBJECT-ORIENTED TESTING METHODS
• An overall approach to OO test-case design has been suggested by Berard
1. Each test case should be uniquely identified and explicitly associated with
the class to be tested.
2. The purpose of the test should be stated.
3. A list of testing steps should be developed for each test and should contain:
a. A list of specified states for the class that is to be tested
b. A list of messages and operations that will be exercised as a consequence
of the test
c. A list of exceptions that may occur as the class is tested
d. A list of external conditions (i.e., changes in the environment external to
the software that must exist in order to properly conduct the test)
e. Supplementary information that will aid in understanding or implementing
the test
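Berard's points can be illustrated with a unit-test framework; in the sketch below (the Account class and test IDs are hypothetical examples, not from the slides), each test is uniquely identified, tied to the class under test, and states its purpose, the states exercised, and the exception that may occur:

```python
import unittest

class Account:
    """Class under test: starts in an 'empty' state with zero balance."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

class TestAccount(unittest.TestCase):
    def test_tc_acc_01_deposit_updates_balance(self):
        """TC-ACC-01: deposit() moves the object from the empty state
        to a positive-balance state."""
        acc = Account()
        acc.deposit(50)
        self.assertEqual(acc.balance, 50)

    def test_tc_acc_02_deposit_rejects_non_positive(self):
        """TC-ACC-02: an exception that may occur as the class is
        tested: invalid input raises ValueError."""
        acc = Account()
        with self.assertRaises(ValueError):
            acc.deposit(0)

# Run with: python -m unittest <module_name>
```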
Object-Oriented Testing Levels /Techniques
(a) Fault-based testing:
• The main focus of fault-based testing is based on consumer specifications or code
or both.
• Test cases are created in a way to identify all possible faults and flush them all.
• This technique finds all the defects that include incorrect specification and interface
errors.
• In the traditional testing model, these types of errors can be detected through
functional testing.
• While Object Oriented Testing in Software Testing will require scenario-based
testing.
(b) Scenario-Based Test Design
• Fault-based testing misses two main types of errors:
(1) Incorrect specifications
(2) Interactions among subsystems.
• Scenario-based testing focuses on simulating user actions rather than solely testing
product functions.
• It involves capturing user tasks through use cases and using them as test scenarios.
• These scenarios help uncover errors in how different parts of the system interact.
• To do this effectively, test cases must be more complex and realistic compared to
simple fault-based tests.
• Scenario-based testing often involves testing multiple parts of the system at once,
reflecting real user behavior.
• Consider the design of scenario-based tests for a text editor by reviewing the use
cases that follow
Use Case: Fix the Final Draft
Background:
It’s not unusual to print the “final” draft, read it, and discover some annoying errors that
weren’t obvious from the on-screen image. This use case describes the sequence of events
that occurs when this happens.
1. Print the entire document.
2. Move around in the document, changing certain pages.
3. As each page is changed, it’s printed.
4. Sometimes a series of pages is printed.
This scenario describes two things: a test and specific user needs. The user needs
are obvious: (1) a method for printing single pages and (2) a method for printing a
range of pages. As far as testing goes, there is a need to test editing after printing
(as well as the reverse). Therefore, you work to design tests that will uncover errors
in the editing function that were caused by the printing function; that is, errors that
will indicate that the two software functions are not properly independent.
Testing Web Based System
• Quality is incorporated into a Web application as a consequence of good design.
• Both reviews and testing, examine one or more of the following quality dimensions.
(i) Content
• It is evaluated at both a syntactic and semantic level.
• At the syntactic level, spelling, punctuation, and grammar are assessed for text-based
documents.
• At a semantic level, correctness (of information presented), consistency (across the entire
content object and related objects), and lack of ambiguity are all assessed.
(ii) Function
• Tested to uncover errors that indicate lack of conformance to customer requirements.
• Each WebApp function is assessed for correctness, stability, and general conformance to
appropriate implementation standards (e.g., Java or AJAX language standards).
(iii) Structure
• Assessed to ensure that it properly delivers WebApp content and function, that it is
extensible, and that it can be supported as new content or functionality is added.
(iv) Usability
• Tested to ensure that each category of user is supported by the interface and can learn
and apply all required navigation syntax and semantics.
(v) Navigability
• Tested to ensure that all navigation syntax and semantics are exercised to uncover any
navigation errors (e.g., dead links, improper links, wrong links).
(vi) Performance
• Tested under a variety of operating conditions, configurations, and loading to ensure
that the system is responsive to user interaction and handles extreme loading without
unacceptable operational degradation.
(vii) Compatibility
• Tested by executing the WebApp in a variety of different host configurations on both the
client and server sides.
• The intent is to find errors that are specific to a unique host configuration.
(viii) Interoperability
• Tested to ensure that the WebApp properly interfaces with other applications and/or
databases.
(ix) Security
• Tested by assessing potential vulnerabilities and attempting to exploit each.
• Any successful penetration attempt is deemed a security failure.
TESTING PROCESS
1. Functional Testing
• Testing the features and operational behavior of a product to ensure they
correspond to its specifications.
• These are the most important factors to consider while assessing the website’s
functionality.
– Verify that all links and buttons function properly.
– Test form validation and submission.
– Check that your website is responsive and operates on various devices and
browsers.
– Ensure that your website’s navigation is simple.
Check out all the links
• Test the outgoing links from all the pages to the specific domain under test.
• Test all internal links.
• Test links jumping on the same page.
• Test links used to send emails to the admin or other users from web pages.
• Test to see if there are any orphan pages.
• Finally, link checking includes checking for broken links in all the above-mentioned
links.
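The link checks above can be sketched in code. The following is a minimal, illustrative Python sketch (class and page content are invented for the example) that classifies the anchors on a page into the categories listed above; actually verifying that each link resolves would additionally require issuing HTTP requests to every target.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags and classifies them."""
    def __init__(self):
        super().__init__()
        self.internal, self.external, self.mailto, self.anchors = [], [], [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("mailto:"):
            self.mailto.append(href)       # email links to admin or other users
        elif href.startswith("#"):
            self.anchors.append(href)      # links jumping on the same page
        elif urlparse(href).netloc:
            self.external.append(href)     # outgoing links to other domains
        else:
            self.internal.append(href)     # internal links within the site

page = ('<a href="/about">About</a><a href="#top">Top</a>'
        '<a href="mailto:admin@example.com">Admin</a>'
        '<a href="https://other.example.org/">Out</a>')
collector = LinkCollector()
collector.feed(page)
```

A page whose internal links never appear as targets anywhere else would be a candidate orphan page in a full crawl.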
Test forms on all pages
• Forms are an integral part of any website.
• Forms are used to receive information from users and interact with them.
– Check all the validations in each field.
– Check for default values in the fields.
– Check behavior for wrong inputs in the forms.
– Check options to create, view, modify, or delete forms, if available.
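The field-level checks above (required fields, default values, wrong inputs) can be exercised with a small validator. This is an illustrative sketch, not a real framework API; the `fields` rule format and the sample form are invented for the example.

```python
import re

def validate_form(fields, data):
    """Validate submitted data against per-field rules; return a list of errors.

    `fields` maps a field name to a dict with optional 'required',
    'default', and 'pattern' keys (an invented, illustrative rule format).
    """
    errors = []
    for name, rules in fields.items():
        # Default values fill in when the user submits nothing.
        value = data.get(name, rules.get("default", ""))
        if rules.get("required") and not value:
            errors.append(f"{name}: required")
        elif value and "pattern" in rules and not re.fullmatch(rules["pattern"], value):
            errors.append(f"{name}: invalid format")
    return errors

form = {
    "email": {"required": True, "pattern": r"[^@\s]+@[^@\s]+\.[^@\s]+"},
    "country": {"default": "India"},   # default value check
}
```

Feeding the validator a valid submission, an empty one, and a malformed one covers the three checklist cases above.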
Cookie Testing
• Cookies are small files stored on the user’s machine.
• This is basically used to maintain the session – mainly the login sessions.
• Test the application by enabling or disabling the cookies in your browser options.
• Cookie Testing will include
– Testing that cookies (sessions) are deleted either when the cache is cleared or when
they reach their expiry.
– Delete cookies (sessions) and test that login credentials are asked for when you
next visit the site.
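A cookie's expiry and scope attributes can be inspected directly from the `Set-Cookie` header the server sends. A minimal sketch using Python's standard `http.cookies` module (the header value is invented for the example):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header as a browser would and inspect the session cookie.
raw = "sessionid=abc123; Max-Age=1800; Path=/; HttpOnly; Secure"
cookie = SimpleCookie()
cookie.load(raw)
morsel = cookie["sessionid"]
```

Here the `Max-Age` attribute tells the tester when the session cookie should reach its expiry (1800 seconds); deleting the cookie and revisiting the site should then prompt for login credentials again.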
Validate your HTML/CSS
• Test HTML and CSS to ensure that search engines can crawl your site easily. This will
include
– Checking for Syntax Errors
– Readable Color Schemas
– Standards Compliance (ensure that standards such as W3C, ISO, ECMA, or WS-I are
followed)
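A rough first pass at HTML syntax checking can be automated. The sketch below (an illustrative, invented checker, not a substitute for the W3C validator) flags mismatched and unclosed tags:

```python
from html.parser import HTMLParser

VOID_TAGS = {"br", "img", "hr", "input", "meta", "link"}  # need no closing tag

class TagBalanceChecker(HTMLParser):
    """Flags mismatched closing tags and tags left unclosed."""
    def __init__(self):
        super().__init__()
        self.stack, self.errors = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()                       # properly nested close
        else:
            self.errors.append(f"unexpected </{tag}>")

    def unclosed(self):
        return list(self.stack)                    # tags never closed

checker = TagBalanceChecker()
checker.feed("<div><p>ok</p><span>oops</div>")
```

Badly nested markup of this kind is exactly what trips up search-engine crawlers.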
2. Database Testing
• Database testing in websites is all about ensuring data is getting in the correct
format in the database.
• The transactions, rollback events, speed of getting the huge list, and password
encryption are all maintained in database testing.
• The performance and speed of transactions are also an integral part of database testing.
• Data consistency is also very important in a web application.
– Test data integrity.
– Look for errors when updating, modifying, or performing any functionality
related to the database.
– Test all queries to see whether they are executing and retrieving data correctly.
3. Usability Testing:
• Usability testing is the process by which the human-computer interaction
characteristics of a system are measured, and weaknesses are identified for
correction.
– Ease of learning
– Navigation
– Subjective user satisfaction
– General Appearance
Test for Navigation
• Navigation means how a user surfs the web pages using different controls such as
buttons and boxes, and how the user follows links on the pages to reach different pages.
Test the Content:
• Content should be logical and easy to understand.
• Content should be legible with no spelling or grammatical errors.
• Images, if present, should contain "alt" text.
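The alt-text check is easy to automate. A minimal sketch (checker class and sample markup are invented for the example) that lists every `<img>` missing a non-empty `alt` attribute:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that are missing a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if not a.get("alt"):
                self.missing_alt.append(a.get("src", "?"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="banner.png">')
```

The same parser could be extended to run the spelling and readability checks on the extracted text content.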
4. Interface Testing
• For web testing, the server-side interface should be tested.
• This can be done by verifying that the communication is done properly.
• The compatibility of the server with software, hardware, network, and database
should be tested.
• The main interfaces are:
– Web server and application server interface
– Application server and Database server interface.
Application
• The application provides access via UI or REST/SOAP API.
Web Server
• It is responsible for handling all the incoming requests at the backend.
• It should ensure that every incoming request is handled properly and not declined by
the web server.
Database
• Data Integrity should not be violated, and the database should provide appropriate
outcomes to every query being thrown to it.
• Direct access should not be permitted, and a proper access restriction message should
be returned.
5. Compatibility Testing
• It ensures the compatibility of applications across various devices and browsers.
Device Compatibility
• Your application should be responsive enough to fit into different types of devices
of varying sizes and shapes.
• Device compatibility testing is necessary today as everyone carries a separate device
that suits their needs.
Browser Compatibility
• Application should be able to render itself across various browsers (Firefox,
Chrome, Internet Explorer, Safari, etc).
• Browser compatibility testing ensures there are no AJAX, JavaScript, HTML, or CSS issues.
OS Compatibility:
• Test your web application on different operating systems such as Windows, Unix, macOS,
Linux, and Solaris, including their different flavors.
6. Performance Testing
• It tests the application’s response time when put through varying load conditions.
• Performance testing can be grouped into the following categories of testing:
a. Stress Test
• It tests the maximum limit to which the web application can accept the load.
• The application is put through a load above limits, and its behavior is tested
afterward.
• Stress is generally given to input fields, login, and sign-up areas.
b. Load Test
• It tests the response time of the application under varying amounts of load.
• It also measures the application server and the database’s capacity.
c. Soak Test (Endurance testing)
• It measures memory utilization and CPU utilization under high load.
d. Spike Test
• Application is put through fluctuating load, and its performance is measured.
• For example, the number of users trying to access the application suddenly drops or
surges, and the test observes how the application handles these spikes.
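The load-test idea above can be sketched with the standard library. This is an illustrative harness, not a real tool: `handle_request` stands in for one HTTP request (the sleep simulates server work), and a real load test would hit the application's URL instead and raise the user count toward the stress limit.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Stand-in for one HTTP request; returns its observed latency."""
    start = time.perf_counter()
    time.sleep(0.01)                       # simulated server processing time
    return time.perf_counter() - start

def load_test(users, requests_per_user):
    """Fire requests from `users` concurrent workers and report latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(handle_request, range(users * requests_per_user)))
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "max_s": max(latencies),
    }

report = load_test(users=5, requests_per_user=4)
```

Rerunning `load_test` with increasing `users` values and watching where `mean_s` and `max_s` degrade is the essence of a stress test; holding a moderate load for hours while watching memory and CPU is the soak variant.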
7. Security testing
• Security testing is vital for e-commerce websites that store sensitive customer
information such as credit card numbers.
• Testing Activities will include
– Test unauthorized access to secure pages should not be permitted
– Restricted files should not be downloadable without appropriate access
– Check sessions are automatically killed after prolonged user inactivity
– When SSL certificates are in use, the website should redirect to encrypted SSL pages.
• The primary reason for testing the security of a web is to identify potential
vulnerabilities and subsequently repair them.
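Two of the checklist items, blocking unauthorized access and killing idle sessions, can be expressed as a small authorization check. This is an invented sketch: the timeout value, the session dict shape, and the use of status 440 (a nonstandard "login timeout" code some servers use) are all illustrative assumptions.

```python
import time

SESSION_TIMEOUT_S = 15 * 60   # assumed inactivity limit (illustrative)

def authorize(session, now):
    """Return an HTTP-style status for a request to a secure page:
    200 if the session is live, 401 if there is no session at all,
    440 if the session has been idle past the timeout."""
    if session is None:
        return 401                                   # unauthorized access blocked
    if now - session["last_seen"] > SESSION_TIMEOUT_S:
        return 440                                   # killed after prolonged inactivity
    return 200

t0 = time.time()
live = {"user": "alice", "last_seen": t0}
stale = {"user": "bob", "last_seen": t0 - 3600}      # idle for an hour
```

A security test would then attempt each of the three paths (no session, stale session, live session) against real secure pages and confirm the server's responses match.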
– Network Scanning
– Vulnerability Scanning
– Password Cracking
– Log Review
– Integrity Checkers
– Virus Detection
Types of Web Application Testing
1. Static Website Testing:
• Static websites are those whose pages display fixed content to every visitor; the UI is
not driven by user input.
• There is no user interaction via the form.
• There is no contact us, comment section, or login element.
• Static Website testing is about fonts, colors, images, and user experience.
2. Dynamic Website Testing:
• Any website with a form is dynamic.
• Forms indicate user interaction.
• The data is sent from UI to the server, which stores the data in the database.
• Dynamic site testing includes UI/UX testing, API testing to ensure API is working, and
API endpoints security testing.
• Finally, database testing tests that data is stored in the correct format.
3. E-Commerce Website Testing:
• E-commerce sites such as Amazon and Flipkart, from which users place their orders, are
heavy on user interaction: with every button press or location change, the data updates
in a fraction of a second.
• Payment integration is also a crucial element of the e-commerce site.
• Testing all these functionalities is part of e-commerce website testing.
4. Mobile-Based Web Testing:
• Nowadays, all the sites are responsive.
• That is, the site is adjustable to the device's screen size.
• For example, the same site offers a different user experience on a mobile screen, a
tablet, and a laptop, as the layout adapts to the screen dimensions.
• This is a crucial element in mobile-based web testing.
Mobile App testing
• Mobile App Testing refers to the process of validating a mobile app (Android or iOS)
for its functionality and usability before it is released publicly. Testing mobile apps
helps verify whether the app meets the expected technical and business requirements.