Software Testing Fundamentals
V Model
The V Model pairs each development phase on the left arm (verification activities) with a corresponding test level on the right arm (validation activities), with coding at the base. Test planning for each level begins in the matching development phase (e.g., UAT planning starts during the URS phase):
- URS (User Requirement Specification) -> User Acceptance Testing (UAT)
- SRS (Software Requirement Specification) -> System Testing
- HLD (High-Level Design) -> Integration Testing
- LLD (Low-Level Design) -> Unit Testing
- Coding (base of the V)
Role of a Tester
The tester assures that the software meets the users' needs and that it can be used with negligible risk. This is achieved through verification and validation.
Verification
Verification is the process of determining whether or not the product of a given phase fulfills the specifications from the previous phase. It uses reviews, inspections, and demonstrations throughout development to ensure the quality of each phase's product, including that it meets the requirements from the previous phase. It answers the question: Are we building the product right?
Validation
Validation is the process of evaluating the software at the end of development to ensure compliance with the specified requirements. It includes what is commonly thought of as testing: executing the software and comparing actual results to expected results. Validation occurs at the end of the development process. It answers the question: Are we building the right product?
Conclusions
White box testing does not guarantee 100% conformance to requirements. Black box testing does not concentrate on the logic of the program, but ensures conformance to requirements. Hence, both white box and black box testing are required to ensure product quality. All types of testing, whether static or dynamic, white box or black box, are part of verification and validation activities. Let us now look at these verification and validation activities.
STLC Activities
The STLC consists of the following activities: Test Requirements document, Test Planning, Test Design, Test Execution, and Defect Tracking.
Test Planning
The Test Plan mainly addresses:
- Scope and objectives of testing
- Schedule, resources, and reporting
- Types of testing and methodology
- Phases of testing applicable and the scope of testing in each phase
- Software and hardware requirements
- Identified risks and the strategy for mitigating those risks
- Information regarding tools used throughout the testing life cycle
Test Design
Test design is applicable to both white box and black box testing. The test design activity involves designing test cases for a given requirement (black box testing) or for a given program (white box testing). A test case is defined as a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement [IEEE].
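For illustration, a minimal sketch of how such a test case might be captured as a structured record (field names are hypothetical, not taken from any particular tool or standard):

    # A minimal, hypothetical representation of a single test case.
    # Field names are illustrative only.
    test_case = {
        "id": "TC-LOGIN-001",
        "objective": "Verify login rejects an invalid password",
        "preconditions": ["User 'alice' exists", "Application is reachable"],
        "inputs": {"username": "alice", "password": "wrong-password"},
        "steps": [
            "Open the login page",
            "Enter the username and password",
            "Click the Login button",
        ],
        "expected_result": "Error message shown; no session is created",
        "status": "Not Run",  # later updated to Pass / Fail / Unable to test / Deferred
    }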
Test Execution
Test execution involves executing the developed test cases either on a piece of program code (code-based test cases) or on the entire software application (requirements-based test cases). The status of each test case is updated during execution; possible states include Pass, Fail, Unable to test, and Deferred. Test execution statistics are collected and analyzed for test progress monitoring.
Defect Tracking
When the actual result obtained from the software application during testing deviates from the expected result written in the test case, it is termed a defect. The test case is marked as failed and a defect is posted against the software. The defect is fixed by the development team and the fix is provided in a subsequent release. The fix is then validated, and if it is found to be working, the test case passes and the defect is closed. Posting, tracking, and closing defects are done in a defect tracking tool.
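A minimal sketch of the kind of record a defect tracking tool keeps (field names and workflow states are illustrative; real tools define their own):

    # Hypothetical defect record; actual defect tracking tools differ.
    defect = {
        "id": "DEF-1042",
        "summary": "Login accepts an empty password",
        "detected_in_test_case": "TC-LOGIN-001",
        "severity": "High",      # impact on the product
        "priority": "P1",        # urgency of the fix
        "status": "Open",        # Open -> Fixed -> Retested -> Closed (or Reopened)
        "fixed_in_build": None,  # filled in by the development team
    }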
SDLC vs. STLC
Each SDLC phase has corresponding STLC activities:
- Requirements Phase -> Test Requirements document, Test Planning
- Design Phase -> Test Case Design
- Coding Phase -> Unit Test Execution
- Deployment Phase -> System Test Execution, Defect Tracking
Requirement Reviews
Requirement quality affects the work performed in subsequent phases of the system life cycle. Requirements of poor quality:
- Increase cost and schedule: effort is spent during design and implementation trying to figure out what the requirements are
- Decrease product quality: poor requirements cause the wrong product to be delivered, or force de-scoping to meet schedule or cost constraints
Requirement characteristic: Independent
The requirement does not rely on another requirement to be fully understood. Requirements that need proxies are not independent; parent requirements rely on their children to be fully defined, and in testing a parent is not satisfied until all its children are met. Why retain parent requirements at all? They may be source requirements that must be retained.
Also, using them to structure the proxies or children improves understandability. Example: "user friendly" can be used to assign, talk about, or locate the group of proxies defining "user friendly" for that particular project.
Requirement attributes
- Unique identifier
- Organizational information, for example, the parents/children of the requirement, its category or type
- Method of validation
- Item(s) that satisfy the requirement
- Source of the requirement (legal citation, business policy, etc.)
- Association with the test plan/test(s)
- Requirement owners (subject matter expert, analyst)
- Requirement status
Design Review
Reviews for software design focus on data design, architectural design, and procedural design. In general, there are two types of design reviews:
- Preliminary design review
- Design walkthrough
Code Reviews
Advantages of code review
Finding and correcting errors at this stage is relatively inexpensive. Code reviews tend to reduce the more expensive process of handling, locating, and fixing bugs during later stages of development or after code delivery to users.
Error handling
Are assertions used everywhere data is expected to have a valid value or range? Are errors properly handled each time a function returns? Are resources and memory released in all error paths? Are all thrown exceptions handled properly? Is the function caller notified when an error is detected? Has error handling code been tested?
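To make the checklist concrete, a small illustrative Python sketch showing errors handled on each call, resources released on every path, and the caller notified of failure (the function and file format are hypothetical):

    import logging

    def load_config(path):
        """Read a configuration file; the caller is notified of failure via an exception."""
        f = None
        try:
            f = open(path, encoding="utf-8")
            text = f.read()
            if not text.strip():
                # Assert-style check: the data is expected to be non-empty.
                raise ValueError(f"configuration file {path!r} is empty")
            return text
        except OSError as exc:
            logging.error("could not read %s: %s", path, exc)
            raise                  # propagate so the caller can decide what to do
        finally:
            if f is not None:
                f.close()          # resource released on every path, including errors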
Resource Leaks
Is allocated memory (non-garbage collected) freed? Are all objects (Database connections, Sockets, Files, etc.) freed even when an error occurs? Is the same object released more than once? Does the code accurately keep track of reference counting?
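An illustrative sketch of avoiding resource leaks by releasing a database connection exactly once, even when an error occurs (the database file and table name are hypothetical placeholders):

    import sqlite3
    from contextlib import closing

    def count_users(db_path):
        # closing() guarantees each object is released exactly once,
        # whether the query succeeds or raises an exception.
        with closing(sqlite3.connect(db_path)) as conn:
            with closing(conn.cursor()) as cur:
                cur.execute("SELECT COUNT(*) FROM users")  # hypothetical table
                return cur.fetchone()[0]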
Thread safety
Are all global variables thread-safe? Are objects accessed by multiple threads thread-safe? Are locks released in the same order they are obtained? Is there any possible deadlock or lock contention?
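A minimal sketch of the lock-ordering rule from the checklist: every code path acquires the two locks in the same order, which avoids the classic two-lock deadlock (the names and the balances structure are illustrative):

    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def transfer(amount, balances):
        # Always acquire lock_a before lock_b, in every code path.
        with lock_a:
            with lock_b:
                balances["a"] -= amount
                balances["b"] += amount

    def audit(balances):
        # Same order as transfer(); taking lock_b first here could deadlock.
        with lock_a:
            with lock_b:
                return balances["a"] + balances["b"]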
Control Structures
Are loop ending conditions accurate? Is the code free of unintended infinite loops?
Performance
Do recursive functions run within a reasonable amount of stack space? Are whole objects duplicated when only references are needed? Does the code have an impact on size, speed, or memory use? Are you using blocking system calls when performance is involved? Is the code doing busy waits instead of using synchronization mechanisms or timer events?
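An illustrative sketch contrasting a busy wait with blocking on a synchronization primitive, as the checklist recommends (the shared dictionary is a stand-in for real shared state):

    import threading

    data_ready = threading.Event()

    def consumer_busy(shared):
        # Busy wait: spins on a flag and burns CPU until the producer finishes.
        while not shared.get("ready"):
            pass
        return shared["value"]

    def consumer_blocking(shared):
        # Preferred: sleep until the producer signals the event, no CPU spin.
        data_ready.wait()
        return shared["value"]

    def producer(shared):
        shared["value"] = 42
        shared["ready"] = True
        data_ready.set()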
Functions
Are function parameters explicitly verified in the code? Are arrays explicitly checked for out-of-bound indexes? Are functions returning references to objects declared on the stack? Are variables initialized before they are used? Does the code re-write functionality that could be achieved by using an existing API?
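A short sketch of the first two questions in practice: parameters verified explicitly and indexes checked against bounds before use (the function is hypothetical):

    def nth_reading(readings, n):
        """Return the n-th sensor reading; inputs are validated explicitly."""
        if not isinstance(n, int):
            raise TypeError("n must be an integer")
        if n < 0 or n >= len(readings):          # explicit out-of-bounds check
            raise IndexError(f"n={n} is outside 0..{len(readings) - 1}")
        return readings[n]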
Bug fixes
Does a fix made to a function change the behavior of caller functions? Does the bug fix correct all the occurrences of the bug?
Statement coverage
Each statement in the program is executed at least once; 100% of the statements in the program should be executed at least once. Weakness: statement coverage is necessary but not sufficient. When there is a decision, you have to ensure that each of its outcomes is taken, and statement coverage does not guarantee that.
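A small illustration of this weakness (hypothetical code): a single test reaches 100% statement coverage yet never exercises the false outcome of the decision:

    def apply_discount(price, is_member):
        if is_member:
            price = price - 10   # statement inside the decision
        return price

    # One test executes every statement (the if body and the return)...
    assert apply_discount(100, True) == 90
    # ...yet the decision was never evaluated as False, so behaviour for
    # non-members is completely untested even at 100% statement coverage.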
Branch/Decision Coverage
Statement coverage does not address all outcomes of decisions. Branches such as if..else and do..while must be evaluated for both true and false; test each decision for a true and a false value. That is, each branch direction must be traversed at least once.
Example: for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are: A=6 and B=4, giving True (here A>=5 is true and B<2 is false); and A=2 and B=3, giving False (here both are false). That is, check how many decisions there are and, for each decision, write one test case for true and one test case for false.
Conditions Coverage
Every condition should be executed at least once for both its true and false outcomes; that is, the true and false outcome of each condition within a decision must be tested. Do not look for combinations.
Example: for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are: A=6 and B=3, giving True (here A>=5 is true and B<2 is false); and A=2 and B=1, giving True (here A>=5 is false and B<2 is true).
Condition/Decision coverage
Condition coverage alone may not always result in decision coverage; in such cases, go for combined condition/decision coverage. Multiple condition coverage goes further: test all combinations of condition outcomes. For example, for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are: A=6, B=1 (true, true); A=6, B=3 (true, false); A=2, B=1 (false, true); A=2, B=3 (false, false).
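The same decision written as code, with test sets corresponding to the coverage criteria discussed above (a sketch; the variable names follow the slides):

    def set_x(a, b):
        # The decision from the examples: (A >= 5) OR (B < 2)
        if a >= 5 or b < 2:
            return 1
        return 0

    # Branch/decision coverage: one true and one false outcome of the whole decision.
    assert set_x(6, 4) == 1   # decision True
    assert set_x(2, 3) == 0   # decision False

    # Condition coverage: each condition takes both values (no combinations required).
    assert set_x(6, 3) == 1   # a>=5 True,  b<2 False
    assert set_x(2, 1) == 1   # a>=5 False, b<2 True

    # Multiple condition coverage: all four combinations of the two conditions.
    for a, b, expected in [(6, 1, 1), (6, 3, 1), (2, 1, 1), (2, 3, 0)]:
        assert set_x(a, b) == expected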
Path Coverage
Errors are sometimes revealed only on a path formed by a particular combination of branches. More general coverage requires executing all possible paths, known as the path coverage criterion. The number of paths may be infinite if there are loops, so 100% path coverage is generally impossible.
Cyclomatic Complexity
Cyclomatic complexity provides a quantitative measure of the logical complexity of a program. It gives the number of linearly independent paths through the program, which is the minimum number of test cases needed to exercise every branch. Based on the cyclomatic complexity value obtained, a decision can be made on whether to accept the program for testing.
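A worked example (illustrative), using the common counting rule V(G) = number of decision points + 1, which is equivalent to E - N + 2 for the control-flow graph:

    def grade(score):
        if score < 0 or score > 100:   # decision 1
            raise ValueError("score out of range")
        if score >= 50:                # decision 2
            return "pass"
        return "fail"

    # Counting decision points: two if-statements, so V(G) = 2 + 1 = 3
    # (counting each simple condition separately would give 3 + 1 = 4).
    # Three independent paths, hence at least three test cases:
    try:
        grade(120)                     # path 1: out-of-range input raises
    except ValueError:
        pass
    assert grade(80) == "pass"         # path 2: score >= 50
    assert grade(20) == "fail"         # path 3: score < 50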
Unit Testing
Unit testing involves testing the individual modules and pages that make up the application. In general, unit tests check the behavior of a given page, i.e. does the application behave correctly and consistently given either good or bad input. Some of the types of checking include: invalid input (missing input, out-of-bound input, entering an integer when a float is expected and vice versa, control characters in strings, etc.) and alternate input formats (e.g., 0 instead of 0.0, 0.00000001 instead of 0, etc.).
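As an illustration, a minimal sketch of unit tests for invalid input and alternate input formats, written with Python's built-in unittest module (parse_amount is a hypothetical unit under test):

    import unittest

    def parse_amount(text):
        """Hypothetical unit under test: parse a monetary amount from user input."""
        if text is None or text.strip() == "":
            raise ValueError("missing input")
        value = float(text)            # raises ValueError for non-numeric input
        if value < 0 or value > 1_000_000:
            raise ValueError("out of bounds")
        return value

    class ParseAmountTests(unittest.TestCase):
        def test_missing_input(self):
            with self.assertRaises(ValueError):
                parse_amount("")

        def test_out_of_bound_input(self):
            with self.assertRaises(ValueError):
                parse_amount("2000000")

        def test_alternate_format(self):
            # 0, 0.0 and a near-zero value should all be accepted consistently.
            self.assertEqual(parse_amount("0"), 0.0)
            self.assertEqual(parse_amount("0.0"), 0.0)
            self.assertAlmostEqual(parse_amount("0.00000001"), 0.0, places=6)

    if __name__ == "__main__":
        unittest.main()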
Other checks include button-click testing, e.g., multiple clicking with and without pauses between clicks; immediate reload after a button click, before the response has been received; multiple reloads in the same manner as above; and random input and random click testing, where a user randomly presses buttons (including multiple clicks on "hrefs") and randomly picks checkboxes and selects them.
There are two forms of output screen expected: an error page indicating the type of error encountered, or a normal page showing either the results of the operation or the normal next page where more options may be selected. In no event should a catastrophic error occur.
Log into the system and then attempt to jump to any page in any order once a session has been established. Use bookmarks, and set up temporary web pages to redirect into the middle of the application using faked session information.
Usability testing
Usability testing ensures that all pages present a cohesive look to the user, including spelling, graphics, page size, response time, etc. Examples of usability testing include:
- Spelling checks
- Graphical user interface checks (colors, dithering, aliasing, size, etc.)
- Adherence to web GUI standards
- Meaningful error messages
- Accuracy of data displayed
- Page navigation
- Context sensitivity
- Editorial continuity
- Accessibility
- Accuracy of data in the database as a result of user input
- Accuracy of data in the database as a result of external factors (e.g. imported data)
- Meaningful help pages, including context-sensitive help
Functional Testing
Functional testing ensures conformance to the functional requirements of the application. Scenarios and test cases are designed to verify conformance to the requirements, and the whole business logic gets tested as part of functional testing.
Load Testing
Load testing the application involves generating varying loads (in terms of concurrent users) against the web server, the databases supporting the web server, and the middleware/application server logic connecting those pages to the databases. Load testing includes verification of data integrity on the web pages and within the back-end database, and also covers load ramping and surges in activity against the application.
"Does the site scale", "Is the site's response time deterministic, etc. Examples of load testing would include: Sustained low load test (50 users for around 48 hours). Sustained high load test (300+ users for 12 hours). Surge test (e.g. run 50 users, then surge to 500 users and then return to 50, no memory leaks, lost users, orphaned processes, etc., should be seen). The system should continue running with multiple surges at various times during the day. This test should run for 48 hours.
Performance Testing
Performance testing measures the response time taken by the software to process and present the requests made by end users. Performance depends on: the speed of the network; the hardware configuration of the application server, web server, database server, and the client system (processor, RAM, etc.); and the volume of data in the database.
Security Testing
Security testing involves verifying whether both the servers and the application are managing security correctly. From the server perspective: attempt to penetrate system security both internally and externally, to ensure the system that houses the application is secure from both internal and external attacks; and attempt to cause things like a buffer overflow to result in root access being given accidentally (such code does exist, but explaining it is beyond the scope of this document).
Attempt to cause the application to crash by giving it false or random information. Ensure that the server OS is at the correct patch levels from a security viewpoint, and that the server is physically secure.
Test faked sessions: session information must be valid and secure (e.g. a URL containing a session identifier cannot be copied from one system to another and the application then continued from the different system without being detected). Also test multiple logins by a single user from several clients.
Attempt to break into the application by running username/password checks using a password-cracking program. Perform a security audit, e.g. examine log files; no sensitive information should be left in raw text/human-readable form in any log file. Verify automatic logout after N minutes of inactivity, with positive feedback to the user.
Regression Testing
Regression testing ensures that, during the lifetime of the application, any fixes do not break other parts of the application. This type of testing typically involves running all the tests, or a relevant subset of those tests, when defect fixes are made or new functionality is added. The regression tests must also be kept up to date with planned changes in the application: as the application evolves, so must the tests.
External Testing
External testing deals with checking the effect of external factors on the application. Examples of external factors are the web server, the database server, the browser, network connectivity issues, etc. Examples of external testing are: a database unavailability test (e.g., is login or further access to the application permitted should the database go into a scheduled maintenance window?); and a database error detection and recovery test (e.g., simulate loss of database connectivity; the application should detect this and report an error accordingly, and should be able to recover without human intervention when the database returns).
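An illustrative sketch of the recovery behavior such a test looks for: the application detects the lost database connection, reports the error, and keeps retrying until the database returns (the database path and retry intervals are hypothetical):

    import logging
    import time
    import sqlite3
    from contextlib import closing

    DB_PATH = "app.db"   # placeholder database

    def query_with_recovery(sql, retries=5, wait_seconds=2):
        """Run a query, reporting the error and retrying if the database is unavailable."""
        for attempt in range(1, retries + 1):
            try:
                with closing(sqlite3.connect(DB_PATH, timeout=1)) as conn:
                    return conn.execute(sql).fetchall()
            except sqlite3.OperationalError as exc:
                # Detect the outage and report it instead of crashing the application.
                logging.error("database unavailable (attempt %d/%d): %s", attempt, retries, exc)
                time.sleep(wait_seconds)
        raise RuntimeError("database did not recover within the retry window")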
Further examples: a database authentication test (check access privileges to the database); a connection pooling test (ensure that database connections are used sparingly and will not run out under load); a web page authentication test; and browser compatibility tests, for example, does the application behave the same way in multiple browsers, does the JavaScript work the same way, etc.
Connectivity Testing
Connectivity testing involves determining whether the servers and clients behave appropriately under varying circumstances. This testing is difficult to accomplish from a server perspective, since the servers are expected to be operating with standby power supplies and in a highly available configuration. Thus the server tests need not be run using a power-off scenario; simply removing the network connection to the PC may be sufficient.
Defect life cycle states (from the state diagram): In work, Re-work, Terminated.
Reporting Defects
Reporting defects by phase helps to analyze how many bugs were uncovered during a particular phase of testing and facilitates comparison of the defects found across phases.
Case Study
Study the following defects observed while testing a software product, re-write them in the proper format, and assign appropriate severity and priority to each defect.
Thank You