Software Testing For Beginners
1. To discover defects.
2. To avoid users detecting problems.
3. To prove that the software has no faults.
4. To learn about the reliability of the software.
5. To avoid being sued by customers.
6. To ensure that the product works as the user expects.
7. To stay in business.
8. To detect defects early, which helps in reducing the cost of defect fixing.
Why start testing early? Introduction: You have probably heard and read in blogs that "testing should start early in the life cycle of development". In this chapter, we will discuss why to start testing early, very practically.
Fact One

Let's start with the regular software development life cycle:
First we've got a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project. After that, analysis will be done, followed by the code build. Now it's your turn: you can start testing. Do you think this is what is going to happen? Dream on. This is what's going to happen:
Planning, analysis and code build will take more time than planned. That would not be a problem if the total project time were prolonged accordingly. Forget it; it is most likely that you will have to perform the tests in a few days. The deadline is not going to be moved at all: promises have been made to customers, and project managers are going to lose their bonuses if they deliver past the deadline.

Fact Two

The earlier you find a bug, the cheaper it is to fix it.
Price of Buggy Code

If you are able to find a bug during requirements determination, it is going to be about 50 times cheaper (!!) than finding the same bug in testing. It will even be about 100 times cheaper (!!) than finding the bug after going live.
Easy to understand: if you find the bug in the requirements definitions, all you have to do is change the text of the requirements. If you find the same bug in final testing, analysis and code build have already taken place. Much more effort has been spent building something that nobody wanted. Conclusion: start testing early! This is what you should do:
Testing should be planned for each phase:
- Make testing part of each phase in the software life cycle.
- Start test planning the moment the project starts.
- Start finding bugs the moment the requirements are defined.
- Keep on doing that during the analysis and design phases.
- Make sure testing becomes part of the development process.
- Make sure all test preparation is done before you start final testing. If you have to start it then, your testing is going to be crap!
Want to know how to do this? Go to the Functional testing step by step page. (will be added later)
Black box testing checks the correctness of the functionality with the help of inputs and outputs. The user doesn't require knowledge of the software code. Black box testing is also called Functional Testing. It attempts to find errors in the following categories:
- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behavior or performance based errors
- Initialization or termination errors
Approaches used in Black Box Testing The following basic techniques are employed during black box testing:
Equivalence Partitioning:
- For each piece of the specification, generate one or more equivalence classes
- Label the classes as Valid or Invalid
- Generate one test case for each Invalid equivalence class
- Generate test cases that cover as many Valid equivalence classes as possible
An input condition may be:
- A specific numeric value
- A range of values
- A set of related values
- A Boolean condition
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are defined.

Boundary Value Analysis
Generate test cases for the boundary values:
- Minimum Value, Minimum Value + 1, Minimum Value - 1
- Maximum Value, Maximum Value + 1, Maximum Value - 1

Error Guessing
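The boundary values listed above can be generated mechanically. A minimal sketch, assuming an inclusive numeric range (the function name and range are made up for illustration):

```python
def boundary_values(minimum, maximum):
    """Generate boundary-value test inputs for an inclusive numeric range."""
    return sorted({
        minimum - 1, minimum, minimum + 1,   # lower boundary and its neighbours
        maximum - 1, maximum, maximum + 1,   # upper boundary and its neighbours
    })

# Example: a quantity field that accepts 1..999
print(boundary_values(1, 999))  # [0, 1, 2, 998, 999, 1000]
```

Values just outside the range (0 and 1000 here) must be rejected by the system under test; values on and just inside the boundary must be accepted.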
White box testing tests the internal program logic. White box testing is also called Structural Testing. The user does require knowledge of the software code. Purpose:
- Testing all loops
- Testing basis paths
- Testing conditional statements
- Testing data structures
- Testing logic errors
- Testing incorrect assumptions
Structure = 1 Entry + 1 Exit with certain constraints, conditions and loops. Logic errors and incorrect assumptions are most likely to be made while coding for special cases. We need to ensure these execution paths are tested.

Approaches / Methods / Techniques for White Box Testing

Basis Path Testing (Cyclomatic Complexity, McCabe method)
Measures the logical complexity of a procedural design. Provides flow-graph notation to identify independent paths of processing. Once paths are identified, tests can be developed for loops and conditions. The process guarantees that every statement will be executed at least once.
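For illustration (the function and values below are invented), a module with two decision points has cyclomatic complexity V(G) = 2 + 1 = 3, so at least three test cases are needed to exercise its independent paths:

```python
def classify(score):
    """Toy function with two decision points, so V(G) = 2 + 1 = 3."""
    if score < 0:          # decision 1
        return "invalid"
    if score >= 60:        # decision 2
        return "pass"
    return "fail"

# Three independent paths -> at least three test cases:
assert classify(-5) == "invalid"   # path 1: decision 1 taken
assert classify(75) == "pass"      # path 2: decision 2 taken
assert classify(40) == "fail"      # path 3: both decisions false
```

Each assertion forces execution down a different independent path of the flow graph, which is exactly what basis path testing asks for.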
Structure Testing:
- Condition Testing: all logical conditions contained in the program module should be tested.
- Data Flow Testing: selects test paths according to the locations of definitions and uses of variables.
- Loop Testing: simple loops, nested loops, concatenated loops, unstructured loops.
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur. The tester identifies various equivalence classes for partitioning. A class is a set of input conditions that are likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases erroneously.
WHY LEARN EQUIVALENCE PARTITIONING?
Equivalence partitioning significantly reduces the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps 1. Identify the equivalence classes 2. Design test cases
Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class). Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory (numeric field) can have a quantity of +1 to +999, identify the following classes:
1. One valid class: QTY is greater than or equal to +1 and less than or equal to +999, written as (1 <= QTY <= 999)
2. One invalid class: QTY is less than 1, written as (QTY < 1), i.e. 0, -1, -2 and so on
3. One invalid class: QTY is greater than 999, written as (QTY > 999), i.e. 1000, 1001, 1002 and so on
b) If the requirements state that the number of items input to the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs. For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are:
- the valid class: the number of purchase orders is greater than or equal to 1 and less than or equal to 4, written as (1 <= no. of purchase orders <= 4)
- the invalid class (no. of purchase orders > 4)
- the invalid class (no. of purchase orders < 1)

c) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with the same way, identify a valid class for values in the set and one invalid class representing values outside of the set.

For example, say the code accepts between 4 and 24 inputs, each a 3-digit integer:
- One partition: number of inputs
- Classes: x < 4, 4 <= x <= 24, 24 < x
- Chosen values: 3, 4, 5, 14, 23, 24, 25
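The inventory example above can be turned directly into test cases, using one representative value per equivalence class. A minimal sketch (the validator function is assumed, not taken from the text):

```python
def accept_qty(qty):
    """Hypothetical check for the inventory rule: 1 <= QTY <= 999."""
    return 1 <= qty <= 999

# One representative value per equivalence class is enough:
assert accept_qty(500) is True    # valid class: 1..999
assert accept_qty(0) is False     # invalid class: QTY < 1
assert accept_qty(1000) is False  # invalid class: QTY > 999
```

Three test cases cover all three classes; adding more values from the same class (say 501 or 502) would find no new kinds of defects.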
- Equivalence partitioning (mentioned before)
- Boundary value analysis
- Use case testing
- Decision tables
- Cause-effect graphing
- State transition testing
- Classification tree method
- Pair-wise testing
Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components. From these models test cases can be derived systematically.
Experience-based techniques:

Error Guessing and Exploratory Testing. Unscripted testing approaches for these are described below.
Why can one tester find more errors than another tester in the same piece of software? More often than not this is down to a technique called Error Guessing. To be successful at Error Guessing, a certain level of knowledge and experience is required. A tester can then make an educated guess at where potential problems may arise. This could be based on the tester's experience with a previous iteration of the software, or just a level of knowledge in that area of technology. This test case design technique can be very effective at pinpointing potential problem areas in software. It is often used by creating a list of potential problem areas/scenarios, then producing a set of test cases from it. This approach can often find errors that would otherwise be missed by a more structured testing approach. An example of how to use the Error Guessing method would be to imagine you had a software program that accepted a ten-digit customer code, where the software was designed to only accept numerical data.
Here are some example test case ideas that could be considered Error Guessing:
1. Input of a blank entry
2. Input of greater than ten digits
3. Input of a mixture of numbers and letters
4. Input of identical customer codes
What we are effectively trying to do when designing Error Guessing test cases is to think about what could have been missed during the software design. This testing approach should only be used to complement an existing formal test method, and should not be used on its own, as it cannot be considered a complete form of testing software.
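Those error-guessing ideas translate directly into checks against a validator. The function below is a hypothetical implementation of the ten-digit customer code rule, written only to show the test cases in action:

```python
def valid_customer_code(code):
    """Hypothetical validator: exactly ten characters, all numeric."""
    return isinstance(code, str) and len(code) == 10 and code.isdigit()

# Error-guessing test ideas from the list above:
assert valid_customer_code("") is False             # 1. blank entry
assert valid_customer_code("12345678901") is False  # 2. more than ten digits
assert valid_customer_code("12345abcde") is False   # 3. mixed digits and letters
assert valid_customer_code("0123456789") is True    # a well-formed code
```

Checking for duplicate customer codes (idea 4) would need system state, so it is not shown in this single-function sketch.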
Exploratory Testing
This type of testing is normally governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are few or no specifications available. It should only really be used to assist with, or complement, a more formal approach. It can basically ensure that major functionality is working as expected without fully testing it.
Ad-hoc Testing
This type of testing is considered to be the most informal, and by many it is considered to be the least effective. Ad-hoc testing is simply making up the tests as you go along. Often, it is used when there is only a very small amount of time to test something. A common mistake with ad-hoc testing is not documenting the tests performed and the test results. Even if this information is included, more often than not additional information is not logged, such as software versions, dates, test environment details, etc. Ad-hoc testing should only be used as a last resort, but if careful consideration is given to its usage then it can prove to be beneficial. If you have a very small window in which to test something, consider the following points:
1. Take some time to think about what you want to achieve.
2. Prioritize functional areas to test if under a strict amount of testing time.
3. Allocate time to each functional area when you want to test the whole item.
4. Log as much detail as possible about the item under test and its environment.
5. Log as much as possible about the tests and the results.
Random Testing
A tester normally selects test input data from what is termed an input domain in a structured manner. Random Testing is simply when the tester selects data from the input domain randomly. In order for random testing to be effective, there are some important open questions to be considered:
1. Is random data sufficient to prove the module meets its specification when tested?
2. Should random data only come from within the input domain?
3. How many values should be tested?
As you can tell, there is little structure involved in Random Testing. In order to avoid dealing with the above questions, a more structured black-box test design could be implemented instead. However, using a random approach could save valuable time and resources if used in the right circumstances. There has been much debate over the effectiveness of using random testing techniques over some of the more structured techniques. Most experts agree that using random test data provides little chance of producing an effective test. There are many tools available today that are capable of selecting random test data from a specified data value range. This approach is especially useful when it comes to tests at the system level. You often find in the real world that Random Testing is used in association with other structured techniques to provide a compromise between targeted testing and testing everything.
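A minimal sketch of random input selection, reusing the hypothetical 1..999 quantity rule from the equivalence partitioning example (the domain bounds and sample size are arbitrary choices):

```python
import random

def valid_qty(qty):
    """Hypothetical system rule: quantities from 1 to 999 are valid."""
    return 1 <= qty <= 999

random.seed(42)  # seeding keeps this 'random' run reproducible
# Draw inputs from a domain wider than the valid range, so that both
# valid and invalid values can be selected.
samples = [random.randint(-100, 1100) for _ in range(10)]
for q in samples:
    print(q, "->", "accepted" if valid_qty(q) else "rejected")
```

Note that nothing here targets the boundaries (0, 1, 999, 1000), which is exactly why random testing is usually combined with structured techniques.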
The knowledge and experience of people are used to derive the test cases:
- knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment
- knowledge about likely defects and their distribution
White-box techniques
Also referred to as structure-based techniques, these are based on the internal structure of the component. The tester must have knowledge of the internal structure or code of the software under test. Structural or structure-based techniques include:
- Statement testing
- Condition testing
- LCSAJ (Linear Code Sequence and Jump) testing
- Path testing
- Decision testing/branch testing
Information about how the software is constructed is used to derive the test cases, for example, code and design. The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.
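As a small illustration of statement and decision/branch testing (the function below is a made-up example, not from the text):

```python
def safe_div(a, b):
    if b == 0:
        return None
    return a / b

# Statement coverage requires every statement to execute at least once:
assert safe_div(1, 0) is None   # covers the 'return None' statement
assert safe_div(6, 3) == 2.0    # covers the division statement
# Together these two cases also give full decision/branch coverage,
# because the condition b == 0 is exercised both true and false.
```

For this tiny function statement and branch coverage coincide; in general branch coverage is the stronger criterion and may need extra test cases.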
Objective: [What is to be verified?]
Assumptions & Prerequisites:
Steps to be executed:
Test data (if any): [Variables and their values]
Expected result:
Status: [Pass or Fail, with details on Defect ID and proofs (output files, screenshots - optional)]
Comments:
Any CMMi company would have defined templates and standards to be adhered to while writing test cases.
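The template fields above can be captured in a simple record structure. This is only a sketch; the field names follow the list above, not any particular company's standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    objective: str                                   # what is to be verified
    prerequisites: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)    # variables and their values
    expected_result: str = ""
    status: str = "Not Run"                          # later: Pass, or Fail with a defect ID
    comments: str = ""

tc = TestCase(
    objective="Verify login with a valid user",
    prerequisites=["User account exists"],
    steps=["Open login page", "Enter credentials", "Click Login"],
    test_data={"username": "demo"},
    expected_result="Home page is displayed",
)
print(tc.status)  # Not Run
```

In practice the same fields usually live in a spreadsheet or a test management tool rather than code.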
Failure: external behavior is incorrect. Fault: a discrepancy in the code that causes a failure. Error: a human mistake that caused the fault.
The left side shows the classic software life cycle, and the right side shows the verification and validation for each phase.
Let's discuss the left side of the V-model:
- Global and detailed design: development translates the analysis documents into a technical design.
- Code / Build: developers program the application and build the application.
Note: in the classic waterfall software life cycle, testing would come at the end of the life cycle. The V-model is a little different: we have already added some testing and review to it.
The right side shows the different testing levels:
- Component & component integration testing: these are the tests development performs to make sure that all the issues of the technical and functional analysis are implemented properly.
- Component testing (unit testing): every time a developer finishes a part of the application, he should test it to see if it works properly.
- Component integration testing: once a set of application parts is finished, a member of the development team should test to verify whether the different parts do what they have to do. Once these tests pass successfully, system testing can start.
- System and system integration testing: in this testing level we check whether the features to test, distilled from the analysis documents, are realised properly. Best results will be achieved when these tests are performed by professional testers.
- System testing: in this testing level each part (use case, screen description) is tested separately.
- System integration testing: different parts of the application are now tested together to examine the quality of the application. This is an important (but sometimes difficult) step. Typical things to test: navigation between different screens, background processes started in one screen giving a certain output (PDF, updating a database), consistency in the GUI, etc. System integration testing also involves testing the interfacing with other systems. E.g. if you have a web shop, you probably will have to test whether the integrated online payment service works. These interface tests are usually not easy to realise, because you will have to make arrangements with parties outside the project group.
- Acceptance testing Here real users (= the people who will have to work with it) validate whether this application is what they really wanted. This comic explains why end users need to accept the application:
This is what the client actually needs :-( During the project a lot of interpretation has to be done. The analyst team has to translate the wishes of the customer into text. Development has to translate these into program code. Testers have to interpret the analysis to make the features-to-test list. Tell somebody a phrase. Make him tell this phrase to another person, and this person to another one... Do this 20 times. You'll be surprised how much the phrase has changed! This is exactly the same phenomenon you see in software development! Let the end users test the application with the real cases you listed in the test preparation sessions. Ask them to use real-life cases! And, instead of getting angry, listen when they tell you that the application is not doing what it should do. They are the people who will suffer the application's shortcomings for the next couple of years. They are your customer!
Fig 1: W Model
Fig 2: Each phase is verified/validated. Dotted arrow shows that every phase in brown is validated/tested through every phase in sky blue.
Point 1 refers to building the Test Plan and Test Strategy. Point 2 refers to scenario identification. Points 3 and 4 refer to test case preparation from the specification and design documents. Point 5 refers to review of test cases and updating them as per the review comments. So, the above 5 points cover static testing. Point 6 refers to the various testing methodologies (i.e. unit/integration testing, path testing, equivalence partitioning, boundary value analysis, specification-based testing, security testing, usability testing, performance testing). After this, there are regression test cycles and then user acceptance testing. Conclusion: the V model only shows dynamic test cycles, but the W model gives a broader view of testing. The connection between the various test stages and the basis for the tests is clear with the W model (which is not clear in the V model).
More comparison of W Model with other SDLC models>> Document PDF.
A tester takes nothing at face value, always asks the question "why?", seeks to drive out certainty where there is none, and seeks to illuminate the darker parts of the project with the light of inquiry. Sometimes this attitude can cause arguments with the development team. But the development team can be testers too! If they can accept and adopt this state of mind for a certain portion of the project, they can deliver excellent quality and reduce the cost of the project. Identifying the need for the testing mindset is the first step towards a successful test approach and strategy.
The domain of possible inputs of a program is too large to be completely used in testing a system. There are both valid inputs and invalid inputs. The program may have a large number of states. There may be timing constraints on the inputs, that is, an input may be valid at a certain time and invalid at other times. An input value which is valid but is not properly timed is called an inopportune input. The design issues may be too complex to completely test. The design may have included implicit design decisions and assumptions. For example, a programmer may use a global variable or a static variable to control program execution. It may not be possible to create all possible execution environments of the system. This becomes more significant when the behaviour of the software system depends on the real, outside world, such as weather, temperature, altitude, pressure, and so on. [From the book Software Testing and Quality Assurance: Theory and Practice by Kshirasagar Naik and Priyadarshi Tripathy]
Testing Limitations
You cannot test a program completely. We can only test against system requirements:
- We may not detect errors in the requirements.
- Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
Exhaustive (total) testing is impossible in the present scenario. Time and budget constraints normally require very careful planning of the testing effort, a compromise between thoroughness and budget. Test results are used to make business decisions for release dates. Moreover:
- Even if you do find the last bug, you'll never know it.
- You will run out of time before you run out of test cases.
- You cannot test every path.
- You cannot test every valid input.
- You cannot test every invalid input.
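A quick back-of-the-envelope calculation shows why exhaustive input testing is impossible. Take a hypothetical single ten-character field restricted to lowercase letters:

```python
inputs = 26 ** 10  # possible values of a ten-character lowercase-letter field
print(inputs)  # 141167095653376 (about 1.4e14 test cases)

# Even at a million test executions per second:
seconds = inputs / 1_000_000
years = seconds / (3600 * 24 * 365)
print(round(years, 1))  # ~4.5 years, for this single field alone
```

And that is one field of one screen, ignoring invalid characters, field combinations, timing and program state.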
The Impossibility of Complete Testing by Dr. Cem Kaner >>>> Document PDF.
After preparation of the Test Plan, the Test Lead distributes the work to the individual testers (white-box testers and black-box testers). The testers' work starts from this stage: based on the Software Requirement Specification/Functional Requirement Document they prepare Test Cases using a standard template or an automation tool, and then send them to the Test Lead for review. Once the Test Lead approves them, the testers prepare the Test Environment/Test Bed, which is used specifically for testing. Typically the Test Environment replicates the client-side system setup. Now we are ready for testing.

While the testing team works on the Test Strategy, Test Plan and Test Cases, the development team works in parallel on their individual modules. Three or four days before the First Release they give an interim release to the testing team, who deploy that software on the test machine, and the actual testing starts. The testing team handles configuration management of the builds. The testing team then tests against the Test Cases already prepared and reports bugs in a Bug Report template or an automation tool (depending on the organization). They track the bugs by changing the status of each bug at every stage. Once Cycle #1 testing is done, they submit the Bug Report to the Test Lead, who discusses these issues with the Development Team Lead, after which the developers work on those bugs and fix them. After all the bugs are fixed, the next build is released. Cycle #2 testing starts at this stage: now we have to run all the Test Cases and check whether all the bugs reported in Cycle #1 are fixed. Here we also do regression testing, meaning we check whether the changes in the code have any side effects on the already-tested code. We repeat the same process until the delivery date. Generally we document 4 cycles of information in the Test Case Document.
At the time of release there should not be any high-severity, high-priority bugs. It may still have some minor bugs, which are going to be fixed in the next iteration or release (generally called deferred bugs). At the end of delivery, the Test Lead and individual testers prepare some reports. Sometimes the testers also participate in code reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation and proper commenting. The testing team is also responsible for keeping track of change management to deliver a qualitative, bug-free product.
9. If any point of the specs is not clear, get your queries resolved by the Business Analyst or Product Manager as soon as possible.
10. If any mentioned scenario is complex, try to break it into points.
11. If there is any open issue (under discussion) in the specs (sometimes to be resolved by the client), keep track of those issues.
12. Always go through the revision history carefully.
13. After the specs are signed off and finalized, if any change comes, look at the impacted areas.
Defect Detection: In defect detection, the role of a tester includes implementing the most appropriate approach/strategy for testing, preparing and executing effective test cases, and conducting the necessary tests, like exploratory testing, functional testing, etc. To increase the defect detection rate, the tester should have a complete understanding of the application. Ad hoc/exploratory testing should go in parallel with test case execution, as a lot of bugs can be found through that means.
After analyzing the requirements, the development team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan and Software Quality Assurance Plan, and moves to the next phase of the software life cycle, i.e., Design.
Here they will prepare some important documents like the Detailed Design Document, updated Requirement Traceability Matrix, Unit Test Cases Document (which is prepared by the developers if there are no separate white-box testers), Integration Test Cases Document, System Test Plan Document, and review and SQA audit reports for all test cases.
Types of Traceability Matrix; disadvantages of not using a Traceability Matrix; benefits of using a Traceability Matrix in testing; a step-by-step process of creating an effective Traceability Matrix from requirements; sample formats of a Traceability Matrix, from a basic version to an advanced version.

In simple words: a requirements traceability matrix is a document that traces and maps user requirements [requirement IDs from the requirement specification document] to the test case IDs. The purpose is to make sure that all the requirements are covered in test cases, so that no functionality can be missed while testing. This document is prepared to satisfy the client that the coverage is complete end to end. It consists of the Requirement/Baseline document reference number, Test case/Condition, and Defect/Bug ID. Using this document a person can track a requirement based on the defect ID.

Note: we can make it a test case coverage checklist document by adding a few more columns. We will discuss this in later posts.

Types of Traceability Matrix:
- Forward Traceability: mapping of requirements to test cases
- Backward Traceability: mapping of test cases to requirements
- Bi-Directional Traceability: a good traceability matrix has references from test cases to basis documentation and vice versa
Why is Bi-Directional Traceability required? Bi-directional traceability contains both forward and backward traceability. Through the backward traceability matrix, we can see which requirements each test case is mapped to. This helps us identify test cases that do not trace to any coverage item; such a test case is not required and should be removed (or perhaps a specification, such as a requirement or two, should be added!). Backward traceability is also very helpful when you want to find out how many requirements a particular test case covers. Through forward traceability we can check which test cases cover each requirement, and whether every requirement is covered by a test case at all. The forward traceability matrix ensures we are building the right product; the backward traceability matrix ensures we are building the product right. A traceability matrix answers the following questions for any software project:
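The forward and backward checks described above can be sketched in a few lines of code. This is a minimal illustration, not a real tool; the requirement and test case IDs are made up for the example (TC 004 is a deliberately orphaned test case):

```python
# Sketch: forward/backward traceability checks over a requirement-to-test mapping.
# All IDs here are illustrative, not from a real project.

requirements = {"SR-1.1", "SR-1.2", "SR-1.3", "SR-1.5"}
test_cases = {
    "TC 001": {"SR-1.1", "SR-1.2", "SR-1.5"},
    "TC 002": {"SR-1.3"},
    "TC 003": {"SR-1.5"},
    "TC 004": set(),  # traces to nothing -- candidate for removal
}

# Forward traceability: every requirement should appear in at least one test case.
covered = set().union(*test_cases.values())
uncovered_requirements = requirements - covered

# Backward traceability: every test case should trace to at least one requirement.
orphan_test_cases = [tc for tc, reqs in test_cases.items() if not reqs]

print(uncovered_requirements)  # requirements with no test coverage
print(orphan_test_cases)       # test cases that trace to no requirement
```

Running the forward check tells you whether you are building the right product (every requirement is tested); the backward check flags tests that verify nothing anyone asked for.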
How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all of the customer's needs? How can I certify that the final software product meets the customer's needs? We can only make sure that requirements are captured in the test cases by using a traceability matrix. Disadvantages of not using a Traceability Matrix [some possible (observed) impacts]:
No traceability, or incomplete traceability, results in: 1. Poor or unknown test coverage, and more defects found in production. 2. Bugs missed in earlier test cycles that surface in later test cycles, followed by a lot of discussions and arguments with other teams and managers before release. 3. Difficult project planning and tracking, and misunderstandings between different teams over project dependencies, delays, etc. Benefits of using a Traceability Matrix
It makes obvious to the client that the software is being developed as per the requirements. It makes sure that all requirements are included in the test cases, and that developers are not building features that no one has requested. It makes missing functionality easy to identify. If there is a change request for a requirement, we can easily find out which test cases need to be updated. Without it, the completed system may have extra functionality that was never specified in the design specification, resulting in wasted manpower, time, and effort.
Steps to create a Traceability Matrix:
1. Use Excel to create the traceability matrix.
2. Define the following columns: base specification/requirement ID (if any), requirement ID, requirement description, TC 001, TC 002, TC 003, and so on.
3. Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are: use cases (with all flows captured), error messages, business rules, functional rules, the SRS, the FRS, and so on.
4. Identify all the test scenarios and test flows.
5. Map requirement IDs to the test cases. Assume (as per the table below) that test case TC 001 is one of your flows/scenarios, and that in this scenario requirements SR-1.1 and SR-1.2 are covered, so you mark an x against these requirements. From the table below you can then conclude: requirement SR-1.1 is covered in TC 001; requirement SR-1.2 is covered in TC 001; requirement SR-1.5 is covered in TC 001 and TC 003. [It is now easy to identify which test cases need to be updated if there is any change request.]
TC 001 covers SR-1.1 and SR-1.2 [we can easily identify which requirements each test case covers]; TC 002 covers SR-1.3; and so on.

Requirement ID | Requirement description                               | TC 001 | TC 002 | TC 003
SR-1.1         | User should be able to do this                        |   x    |        |
SR-1.2         | User should be able to do that                        |   x    |        |
SR-1.3         | On clicking this, the following message should appear |        |   x    |
SR-1.5         | ...                                                   |   x    |        |   x
This is a very basic traceability matrix format. You can make it more effective by adding columns such as: ID, associated ID, technical assumption(s) and/or customer need(s), functional requirement, status, architectural/design document, technical specification, system component(s), software module(s), test case number, tested in, implemented in, verification, and additional comments.
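The change-request lookup mentioned in step 5 above can be sketched by keeping the x-marks from the sample table as a simple requirement-to-test-cases map. This is an illustrative sketch using the hypothetical IDs from the table, not a prescribed format:

```python
# Sketch: change-impact lookup from a traceability matrix.
# The x-marks from the sample table are modelled as a requirement -> test cases map.
matrix = {
    "SR-1.1": ["TC 001"],
    "SR-1.2": ["TC 001"],
    "SR-1.3": ["TC 002"],
    "SR-1.5": ["TC 001", "TC 003"],
}

def impacted_tests(requirement_id):
    """Return the test cases that must be reviewed when a requirement changes."""
    return matrix.get(requirement_id, [])

print(impacted_tests("SR-1.5"))  # ['TC 001', 'TC 003']
```

When a change request arrives for SR-1.5, one lookup tells you that TC 001 and TC 003 need updating, which is exactly the benefit the matrix is built for.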
Use Cases
A use case defines a goal-oriented set of interactions between external actors and the system under consideration. Actors are parties outside the system that interact with the system (UML 1999, pp. 2.113-2.123). An actor may be a class of users, roles users can play, or other systems. Cockburn (1997) distinguishes between primary and secondary actors. A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one from which the system needs assistance. A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. It describes the sequence of interactions between actors and the system necessary to deliver the service that satisfies the goal. It also includes possible variants of this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure to complete the service because of exceptional behavior, error handling, etc. The system is treated as a "black box", and the interactions with the system, including system responses, are as perceived from outside the system. Thus, use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system. Generally, use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain. This is engaging for users, who can easily follow and validate the use cases, and the accessibility encourages users to be actively involved in defining the requirements.
Scenarios
A scenario is an instance of a use case, and represents a single path through the use case. Thus, one may construct a scenario for the main flow through the use case, and other scenarios for each possible variation of flow through the use case (e.g., triggered by options, error conditions, security breaches, etc.). Scenarios may be depicted using sequence diagrams.
Using these concepts you should design test cases that are capable of finding defects. Read: Art of Test Case Writing.
5. How do you launch test cases in Quality Center (Test Director), and where are they saved? You create the test cases in the Test Plan tab and link them to the requirements in the Requirements tab. Once the test cases are ready, you change their status to Ready, go to the Test Lab tab, create a test set, add the test cases to the test set, and run them from there. For automation, in the Test Plan tab you create a new automated test, launch the tool, create the script, and save it; you can then run it from the Test Lab the same way as the manual test cases. The test cases are stored in the Test Plan tab, or more precisely in Test Director's database. (Test Director is now referred to as Quality Center.)
6. How is the traceability of a bug followed? The traceability of a bug can be followed in several ways: 1. Mapping functional requirement scenarios (FS doc) - test case IDs - failed test cases (bugs). 2. Mapping requirements (RS doc) - test case IDs - failed test cases. 3. Mapping the test plan (TP doc) - test case IDs - failed test cases. 4. Mapping business requirements (BR doc) - test case IDs - failed test cases. 5. Mapping the high-level design (design doc) - test case IDs - failed test cases. Usually the traceability matrix maps between the client requirements, functional specification, test plan, and test cases.
7. What is the difference between a use case, a test case, and a test plan? Use case: it is prepared by the business analyst in the Functional Requirement Specification (FRS), and is nothing but the steps given by the customer.
Test case: it is prepared by the test engineer, based on the use cases from the FRS, to check the functionality of the application thoroughly. Test plan: the team lead prepares the test plan; in it he defines the scope of testing (what to test and what not to test), scheduling, what to test using automation, and so on.
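The bug-traceability mappings in question 6 above all follow the same chain: document - test case ID - failed test case. A minimal sketch of following that chain backwards, with made-up bug, test case, and requirement IDs:

```python
# Sketch: following a bug back to its requirement via the test case ID.
# All IDs are illustrative; a real project would pull these from the RTM.
test_to_requirement = {"TC 001": "SR-1.1", "TC 002": "SR-1.3"}
bug_to_test = {"BUG-101": "TC 002"}

def trace_bug(bug_id):
    """Bug -> failed test case -> originating requirement."""
    tc = bug_to_test[bug_id]
    return tc, test_to_requirement[tc]

print(trace_bug("BUG-101"))  # ('TC 002', 'SR-1.3')
```

Whichever source document you start from (FS, RS, TP, BR, or design doc), the test case ID is the pivot that links a failed test back to what it was supposed to verify.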
Create your requirement structure; create the test case structure and the test cases; map the test cases to the application requirements; then run the test cases and report bugs from the Test Lab module. The database structure in Quality Center maps test cases to defects only if you have created the bug from a test case run. We may be able to update the mapping with some code in the Bug Script module (from the Customize Project function), but as far as I know it is not possible to map defects directly to requirements.
3. How do you run reports from Quality Center? This is how you do it: 1. Open the Quality Center project. 2. Display the Requirements module. 3. Choose Analysis > Reports > Standard Requirements Report.
4. Can we upload test cases from an Excel sheet into Quality Center? Yes. Go to the Quality Center Add-ins menu, find the Excel add-in, and install it on your machine. Now open Excel, and you will find the new menu option Export to Quality Center. The rest of the procedure is self-explanatory.
5. Can we export a file from Quality Center to an Excel sheet? If yes, then how? Requirements tab: right-click on the main requirement, click Export, and save as Word, Excel, or another template; this saves all the child requirements as well. Test Plan tab: only individual tests can be exported; no parent-child export is possible. Select a test script, click on the Design Steps tab, right-click anywhere on the open window, click Export, and save. Test Lab tab: select a child group, click on the Execution Grid if it is not already selected, and right-click anywhere; the default save option is Excel, but it can be saved in documents and other formats, with an all-or-selected option. Defects tab: right-click anywhere on the window, export all or selected defects, and save as an Excel sheet or document.
6. How many tabs are there in Quality Center? Explain. There are four tabs: 1. Requirements: to track the customer requirements. 2. Test Plan: to design the test cases and store the test scripts. 3. Test Lab: to execute the test cases and track the results. 4. Defects: to log defects and track the logged defects.
7. How do you map requirements to test cases in Quality Center? 1. In the Requirements tab, select Coverage View. 2. Select a requirement by clicking on a parent, child, or grandchild. 3. On the right-hand side (in the Coverage View window) another window will appear with two tabs: a) Tests Coverage and b) Details. The Tests Coverage tab is selected by default, or you can click on it. 4. Click on the Select Tests button; a new window will appear on the right-hand side with a list of all tests, and you can select any test case you want to map to your requirement.
8. How do you use Quality Center in a real-time project? Once test case preparation is complete: 1. Export the test cases into Quality Center (the export involves 8 steps in total). 2. The test cases are loaded into the Test Plan module. 3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module. 4.
In the Test Lab, we execute the test cases and mark them as passed, failed, or incomplete. We generate graphs in the Test Lab for the daily report and send it to the onsite team (or wherever you need to deliver it). 5. If we find any defects, we raise them in the Defects module; when raising a defect, attach a screenshot of it.
9. What is the difference between WebInspect and QAInspect? QAInspect finds and prioritizes security vulnerabilities in an entire web application, or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability. WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application. With WebInspect, auditors, compliance officers, and security experts can perform security assessments on a web-enabled application. WebInspect enables users to perform security assessments of any web application or web service, including the industry-leading application platforms.
10. How can we add requirements to test cases in Quality Center? You can use the Add Requirements option. Two kinds of requirements are available in Test Director: 1. Parent requirements. 2. Child requirements.
A parent requirement is nothing but the title of a requirement; it covers the high-level functions of the requirement. A child requirement is nothing but a subtitle of a requirement; it covers the low-level functions of the requirement.