Manual Testing FAQs Part I: Q: How Do You Introduce A New Software QA Process?
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Q: What should be done after a bug is found? A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to verify that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q: What is configuration management? A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, the changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q: What if the software is so buggy it can't be tested at all? A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.
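As a rough sketch of the kind of record such a problem-tracking system keeps, here is a minimal Python data structure; the field names and status values are invented for illustration and are not taken from any particular tool:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BugReport:
        bug_id: str
        summary: str
        steps_to_reproduce: List[str]   # so developers can reproduce the bug
        severity: str                   # e.g. "critical", "major", "minor"
        assigned_to: str                # developer responsible for the fix
        status: str = "open"            # open -> fixed -> retested -> closed
        regression_areas: List[str] = field(default_factory=list)

        def mark_fixed(self) -> None:
            # After the fix, the bug goes back to test for re-testing and
            # regression testing of the areas listed above.
            self.status = "fixed"

The point of such a record is exactly what the answer above describes: enough detail that a developer can understand, reproduce and fix the bug, and that testers know what to re-test.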
Q: What if the project isn't big enough to justify extensive testing? A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously? A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
* Ensure the code is well commented and well documented; this makes changes easier for the developers.
* Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
* Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.

Q: How do you know when to stop testing? A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are...
* Deadlines, e.g. release deadlines or testing deadlines;
* Test cases completed with a certain percentage passed;
* Test budget has been depleted;
* Coverage of code, functionality, or requirements reaches a specified point;
* Bug rate falls below a certain level; or
* Beta or alpha testing period ends.
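These exit criteria can be made mechanical. A hedged sketch in Python, with metric names and thresholds invented for illustration; every project sets its own:

    def should_stop_testing(pass_rate: float, coverage: float,
                            bug_rate: float, budget_left: float) -> bool:
        # Thresholds below are placeholders, not recommendations.
        criteria_met = (pass_rate >= 0.95      # % of test cases passed
                        and coverage >= 0.80   # code/functionality/requirements
                        and bug_rate <= 2.0)   # new bugs found per week
        return criteria_met or budget_left <= 0.0  # or budget depleted

    print(should_stop_testing(0.97, 0.85, 1.0, 5000.0))  # -> True

In practice the decision is rarely this clean; the numbers inform a judgment call rather than replace it.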
Q: What if the application has functionality that wasn't in the requirements? A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, and such functionality would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.
Q: Why do you recommend that we test during the design phase? A: Because testing during the design phase can prevent defects later on. We recommend verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module).
3. Verify the design provides enough memory and I/O capacity, and a fast enough runtime, for the final product.

Q: What is software quality assurance? A: Software Quality Assurance, when Rob Davis does it, is oriented to 'prevention'. It involves the entire software development process.
Q: What is unit testing? A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.

Q: What is functional testing? A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q: What is usability testing? A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q: What is incremental integration testing? A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Incremental testing may be performed by programmers, software engineers, or test engineers.

Q: What is parallel/audit testing? A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q: What is integration testing? A: Upon completion of unit testing, integration testing begins. Integration testing is black-box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
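For instance, unit testing compares actual results against expected results. A minimal sketch using Python's built-in unittest module; the function under test is made up for illustration:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical unit under test.
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_expected_result_is_met(self):
            # The unit passes when actual output matches the expected result.
            self.assertEqual(apply_discount(100.0, 15), 85.0)

    if __name__ == "__main__":
        unittest.main()

A functional or integration test looks structurally similar, but exercises requirements-level behavior or the interfaces between components rather than a single unit.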
Q: What is security/penetration testing? A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing? A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
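A toy illustration of the recovery-testing idea, assuming a made-up "system" whose state lives in a JSON file; the test corrupts the file to simulate a catastrophic failure and checks that recovery degrades to a known-safe default:

    import json, os, tempfile

    def save_state(path: str, value: int) -> None:
        with open(path, "w") as f:
            json.dump({"value": value}, f)

    def recover_state(path: str) -> int:
        # Recovery behavior under test: fall back to a safe default
        # when the stored state is missing or corrupt.
        try:
            with open(path) as f:
                return json.load(f)["value"]
        except (OSError, ValueError, KeyError):
            return 0

    path = os.path.join(tempfile.mkdtemp(), "state.json")
    save_state(path, 42)
    with open(path, "w") as f:
        f.write("garbage")           # simulate a crash that corrupts the file
    assert recover_state(path) == 0  # system degrades to the safe default

Real recovery testing operates on whole systems (killed processes, pulled power, failed disks), but the pattern is the same: inject the failure, then verify the documented recovery behavior.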
Q: What is compatibility testing? A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q: What is installation testing? A: Installation testing includes the inventory of configuration items, performed by the application's System Administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.
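Compatibility testing is usually organized as a matrix of environments. A hedged sketch; the environments listed and the smoke-test stub are examples only:

    import itertools

    operating_systems = ["Windows 11", "Ubuntu 24.04", "macOS 14"]
    browsers = ["Chrome", "Firefox", "Edge"]

    def run_smoke_test(os_name: str, browser: str) -> bool:
        # Placeholder: a real run would execute the suite in that
        # environment, e.g. on a VM or in a device lab.
        return True

    for os_name, browser in itertools.product(operating_systems, browsers):
        outcome = "PASS" if run_smoke_test(os_name, browser) else "FAIL"
        print(f"{os_name} / {browser}: {outcome}")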
Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q: What is a Test Engineer? A: We, test engineers, are engineers who specialize in testing. We create test cases, procedures and scripts, and generate data. We execute test procedures and scripts, analyze standards of measurements, and evaluate results of system/integration/regression testing. We also...
* Speed up the work of the development staff;
* Reduce your organization's risk of legal liability;
* Give you the evidence that your software is correct and operates properly;
* Improve problem tracking and reporting;
* Maximize the value of your software;
* Maximize the value of the devices that use it;
* Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
* Help the work of your development staff, so the development team can devote its time to building up your product;
* Promote continual improvement;
* Provide documentation required by the FDA, the FAA, other regulatory agencies and your customers;
* Save money by discovering defects 'early' in the design process, before failures occur in production or in the field;
* Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.
Q: What is a Test Build Manager? A: Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator? A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.
Inputs for this process:
* Approved Test Strategy Document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions, and test data.
* Reports of software design issues, given to software developers for correction.
Q: How do you execute tests? A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.
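A minimal sketch of what a test execution log entry might look like, assuming a plain tab-separated text file; real projects typically record this in a test management tool:

    import datetime
    from typing import Optional

    def log_execution(log_path: str, procedure_id: str, passed: bool,
                      defect_id: Optional[str] = None) -> None:
        # One line per executed test procedure: timestamp, procedure ID,
        # and whether the procedure uncovered a defect.
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        outcome = "PASS" if passed else f"FAIL defect={defect_id}"
        with open(log_path, "a") as log:
            log.write(f"{stamp}\t{procedure_id}\t{outcome}\n")

    log_execution("execution.log", "TP-014", True)
    log_execution("execution.log", "TP-015", False, defect_id="BUG-102")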
Inputs for this process:
* Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
* Test tools, including automated test tools, if applicable.
* Developed scripts.
* Changes to the design, i.e. Change Request Documents.
* Test data.
* Availability of the test team and project team.
* General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
* Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
* Test Readiness Document.
* Document updates.
Outputs for this process:
* Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
* Changes to the code, also known as test fixes.
* Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.
* Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
* Formal record of test incidents, usually part of problem tracking.
* Base-lined package, also known as tested source and object code, ready for migration to the next level.
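Continuing the hypothetical log format sketched earlier, the "log and summary of the test results" might be rolled up for the Test Report like this:

    from collections import Counter

    def summarize_log(log_path: str) -> Counter:
        # Tally PASS/FAIL outcomes from the tab-separated execution log.
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                outcome = line.rstrip("\n").split("\t")[2]
                counts["PASS" if outcome == "PASS" else "FAIL"] += 1
        return counts

    totals = summarize_log("execution.log")
    print(f"Executed: {sum(totals.values())}, "
          f"Passed: {totals['PASS']}, Failed: {totals['FAIL']}")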
Outputs for this process:
* An approved and signed-off test strategy document and test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
Q: What is security clearance? A: Security clearance is a process of determining your trustworthiness and reliability before granting you access to national security information.

Q: What are the levels of classified access? A: The levels of classified access are confidential, secret, top secret, and sensitive compartmented information, of which top secret is the highest.

Q: What is a 'test plan'? A: A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
* Title
* Identification of software, including version/release numbers
* Revision history of document, including authors, dates, approvals
* Table of contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their impact on test validity
* Test environment setup and configuration issues
* Software migration processes
* Software CM processes
* Description of problem cause
* Description of fix
* Code section/file/module/class/method that was fixed
* Date of fix
* Application version that contains the fix
* Tester responsible for retest
* Retest date
* Retest results
* Regression testing requirements
* Tester responsible for regression tests
* Regression testing results
* A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.