UNIT 4: Defect Management (Detailed Summary)
What is a Defect?
A defect is an error or flaw in software introduced by mistakes during design, coding, or
other development stages. Such flaws cause the software to deviate from its expected
functionality or performance. Defects can arise for various reasons, such as:
- Miscommunication of requirements.
- Unrealistic development deadlines.
- Inadequate experience in design or coding practices.
- Human errors during implementation.
- Poor version control or buggy third-party tools.
- Late-stage requirement changes or insufficient testing skills.
Defect Classification
Defects are classified based on their nature, source, and impact. The classifications
include:
1. By Severity
- Major: Causes noticeable product failure or deviation from requirements.
- Minor: Does not significantly impact the product’s execution or functionality.
- Fatal: Leads to system crashes, abrupt closures, or interference with other
applications.
2. By Work Product
Defects originating from various stages such as:
- System Study Document (SSD)
- Functional Specification Document (FSD)
- Architectural Design Document (ADD)
- Source Code
- Test Plan or Test Cases
- User Documentation (manuals).
3. By Type of Errors
- Computational Errors: Incorrect formulae or business validations in the code.
- Database Errors: Mistakes in schema design or data operations.
- Logic Errors: Missing or ambiguous functionality in the source code.
- Interface Errors: Issues in handling parameters, alignment, or screen design.
- Boundary Conditions Neglected: Errors in handling edge cases.
- Performance Errors: Suboptimal code impacting performance.
- Ambiguous Requirements: Requirements unclear to stakeholders or developers.
- Standards Violations: Deviation from design or coding standards.
4. By Status
- Open: Awaiting action or review.
- Closed: Successfully resolved.
- Deferred: Postponed for future releases.
- Rejected: Determined to be invalid or non-issues.
Defect Management Process
Managing defects involves systematically identifying and resolving defects and minimizing
their occurrence. The process includes:
1. Defect Prevention: Adopting techniques, methodologies, and processes to avoid
defects early.
2. Deliverable Baseline: Establishing checkpoints where deliverables are marked as
complete. Errors detected after baseline are classified as defects.
3. Defect Discovery: Identifying defects through testing and reporting them to
developers.
4. Defect Resolution: Prioritizing, fixing, and retesting defects to ensure they no longer
exist.
5. Process Improvement: Analyzing processes to identify root causes of defects and
implementing improvements to prevent recurrence.
6. Management Reporting: Summarizing defect data to assist in decision-making, risk
management, and improving development practices.
Defect Life Cycle
Defects progress through various states in a life cycle (modeled in the sketch after this
list):
1. New: Reported for the first time.
2. Open: Validated by the lead tester and marked for action.
3. Assigned: Allocated to a developer for resolution.
4. Fixed: Developer resolves the issue and sends it for testing.
5. Test/Retest: The testing team verifies the fix.
6. Deferred: Scheduled for future resolution due to low priority or resource constraints.
7. Rejected: Determined not to be a defect.
8. Verified: Confirmed resolved by the tester.
9. Reopened: Returned for rework because the fix was found incomplete or faulty.
10. Closed: Confirmed resolved with no recurrence.
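The states and transitions above can be modeled as a small state machine. The following is a
minimal Python sketch that assumes exactly the states and flow listed here; real defect
trackers define their own workflows, so the transition table is illustrative only.

from enum import Enum

class DefectState(Enum):
    NEW = "New"
    OPEN = "Open"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    RETEST = "Test/Retest"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"
    VERIFIED = "Verified"
    REOPENED = "Reopened"
    CLOSED = "Closed"

# Allowed moves, following the life cycle described above.
TRANSITIONS = {
    DefectState.NEW: {DefectState.OPEN, DefectState.REJECTED, DefectState.DEFERRED},
    DefectState.OPEN: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.RETEST},
    DefectState.RETEST: {DefectState.VERIFIED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.ASSIGNED},
    DefectState.DEFERRED: {DefectState.OPEN},
    DefectState.VERIFIED: {DefectState.CLOSED},
    DefectState.REJECTED: set(),
    DefectState.CLOSED: set(),
}

def move(current, target):
    # Advance a defect to a new state, refusing transitions the cycle does not allow.
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

# Example: a defect that is fixed, fails retesting, and goes back to the developer.
state = DefectState.NEW
for nxt in (DefectState.OPEN, DefectState.ASSIGNED, DefectState.FIXED,
            DefectState.RETEST, DefectState.REOPENED, DefectState.ASSIGNED):
    state = move(state, nxt)
print(state.value)  # Assigned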
Conclusion
Defect Management is crucial for delivering high-quality software. It combines
systematic detection, classification, resolution, and prevention strategies to minimize
risks and optimize resources. A well-documented process and continuous improvement
efforts lead to fewer defects and more reliable products.
Unit 1: Basics of Software Testing and Testing Methods
Basics of Software Testing
1. Definition: Software testing involves verifying and validating the software to ensure it
meets the required standards and works as intended. It is executed with the goal of
identifying and fixing errors.
2. Purpose: The main goal is to uncover bugs, improve the quality of the software, and
ensure it aligns with business and user expectations.
Objectives of Testing
1. Find Bugs: Detect defects or issues in the software early.
2. Build Confidence: Provide assurance that the software is reliable and meets quality
standards.
3. Prevent Defects: Minimize errors by identifying them early.
4. Ensure Requirements Are Met: Confirm the software fulfills business and user needs.
5. Satisfy Specifications: Make sure the software aligns with Business Requirement
Specifications (BRS) and System Requirement Specifications (SRS).
Key Terms in Testing
1. Error: A mistake made by a person while coding or designing software.
2. Bug: An error discovered when the software is running.
3. Fault: A flaw in the software caused by an error.
4. Failure: The software does not perform as expected.
5. Defect: A broader term for bugs or faults that can affect the software’s functionality.
Skills Needed for Testers
• Communication Skills: Clearly share findings and collaborate with teams.
• Technical Knowledge: Understand software systems and tools.
• Curiosity: Think critically and explore possible failure points.
• User Perspective: See the software through the eyes of an end-user.
• Planning and Analysis: Strategize testing procedures and analyze results.
Verification vs. Validation
1. Verification:
- Ensures the product is built correctly according to specifications.
- Involves reviewing documents, designs, and plans without executing the software.
- Examples: Reviews, walkthroughs, inspections.
2. Validation:
- Ensures the right product is being built for user needs.
- Involves running the software to check for functional correctness.
- Examples: Functional testing, integration testing, and system testing.
Testing Methods
1. Static Testing:
- Done without executing the software.
- Focuses on finding defects in documents, code, or designs through inspections,
walkthroughs, or reviews.
- Advantages: Early detection of errors, cost-effective.
2. Dynamic Testing:
- Involves running the software to check its behavior and outputs.
- Includes methods like unit testing, system testing, and regression testing.
- Advantages: Identifies runtime errors and ensures functionality.
Test Design Techniques
1. Boundary Value Analysis:
- Tests the software at boundary values (e.g., minimum and maximum inputs).
- Ensures the software behaves correctly at edge cases.
- Example: If an input range is 1–100, test with values like 0, 1, 100, and 101 (see the
sketch after this list).
2. Equivalence Partitioning:
- Groups input values into partitions where behavior is expected to be the same.
- Reduces the number of test cases while maintaining coverage.
- Example: For age partitions <35, 35–59, and ≥60, test with representative values like 20,
50, and 70.
3. Requirement-based Testing:
- Designs tests based on specific requirements (functional and non-functional).
- Ensures all requirements are tested, and gaps are minimized.
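The following minimal Python sketch illustrates techniques 1 and 2 using the example values
above; accepts_quantity and age_group are hypothetical stand-ins for whatever input-handling
code is actually under test.

def accepts_quantity(value):
    # Hypothetical system under test: valid quantities are 1-100 inclusive.
    return 1 <= value <= 100

# Boundary Value Analysis: probe just outside, on, and just inside each boundary.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts_quantity(value) == expected, f"boundary case {value} failed"

def age_group(age):
    # Hypothetical system under test: classifies ages into the three partitions.
    if age < 35:
        return "young"
    elif age <= 59:
        return "middle"
    return "senior"

# Equivalence Partitioning: one representative value stands in for each partition.
representatives = {20: "young", 50: "middle", 70: "senior"}
for age, expected in representatives.items():
    assert age_group(age) == expected, f"partition representative {age} failed"

print("all boundary and partition checks passed")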
Quality Assurance (QA) vs. Quality Control (QC)
1. Quality Assurance (QA):
- Focuses on planning and processes to ensure quality during development.
- Activities include defining standards, creating plans, and ensuring adherence.
2. Quality Control (QC):
- Focuses on the product and involves inspecting and testing it to meet defined
standards.
- Activities include inspections, measurements, and performance checks.
Unit 2: Types and Levels of Testing
Levels of Testing
1. Unit Testing:
- Tests individual units or components of the software.
- Ensures each unit works as designed, such as functions or methods.
- Performed by developers using White Box Testing.
- Example: Testing if a loop or function works correctly (see the sketch after this list).
2. Integration Testing:
- Verifies the interaction between integrated components or modules.
- Approaches include:
• Incremental: Combines and tests modules step-by-step using stubs and drivers.
• Non-Incremental (Big-Bang): Integrates all modules at once and tests them as a whole.
• Top-Down: Starts with high-level modules, using stubs for incomplete submodules.
• Bottom-Up: Starts with low-level modules, using drivers for main module simulation.
• Bi-Directional (Sandwich): Combines both top-down and bottom-up approaches.
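The following minimal sketch uses Python's built-in unittest module to show a unit test of a
single function, and a top-down integration test in which a stub (a Mock object) simulates a
lower-level pricing service that is not yet built. The discount and pricing logic are
hypothetical examples, not part of the syllabus material.

import unittest
from unittest.mock import Mock

def apply_discount(price, percent):
    # Unit under test: a hypothetical discount calculation.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class OrderTotaler:
    # High-level module that depends on a lower-level pricing service.
    def __init__(self, pricing_service):
        self.pricing = pricing_service

    def total(self, item_id, qty):
        return self.pricing.unit_price(item_id) * qty

class TestApplyDiscount(unittest.TestCase):
    # Unit testing: the function is exercised in isolation.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

class TestOrderTotalerTopDown(unittest.TestCase):
    # Top-down integration: a stub simulates the unfinished pricing service.
    def test_total_with_stubbed_pricing(self):
        stub = Mock()
        stub.unit_price.return_value = 5.0
        self.assertEqual(OrderTotaler(stub).total("SKU-1", 3), 15.0)

if __name__ == "__main__":
    unittest.main()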
Performance Testing
- Ensures the software performs well under expected workloads.
- Focuses on:
• Speed: Checks if the application responds quickly.
• Scalability: Tests the maximum user load the system can handle.
• Stability: Verifies stability under varying loads.
1. Load Testing: Measures performance under steadily increasing load to find the system's
limits (see the sketch after this list).
2. Stress Testing: Pushes the system beyond its expected limits to observe how it degrades
and recovers under extreme conditions.
3. Security Testing: Verifies security principles such as confidentiality, integrity, and
availability.
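A minimal load-testing sketch in Python follows; handle_request is a hypothetical stand-in
for a real service call, and in practice dedicated tools such as JMeter or Locust would
generate the load.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Hypothetical operation under load; replace with a real service call.
    time.sleep(0.01)  # simulate 10 ms of work

def run_load(concurrent_users, requests_per_user):
    # Fire requests from simulated concurrent users and report throughput.
    total = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(handle_request, range(total)))
    return total / (time.perf_counter() - start)

# Step the load up gradually (load testing); pushing far beyond the expected
# maximum and watching how the system fails and recovers is stress testing.
for users in (1, 10, 50):
    print(f"{users} users -> {run_load(users, 20):.0f} requests/second")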
Acceptance Testing
- Validates the system’s compliance with business requirements.
- Ensures the software is acceptable for delivery to end users.
- Types include:
• User Acceptance Testing: Conducted by end-users.
• Operational Acceptance Testing: Focuses on operational requirements.
• Contract Acceptance Testing: Ensures compliance with contract criteria.
• Compliance Testing: Adheres to legal or safety regulations.
Special Tests
1. Regression Testing:
- Ensures new code changes do not break existing functionalities.
- Methods:
• Retest All: Re-execute all tests (time-consuming).
• Regression Test Selection: Execute selected test cases.
• Prioritization: Test critical functionalities first (see the sketch at the end of this
section).
2. GUI Testing:
- Tests graphical interfaces like buttons, menus, and icons.
- Focuses on usability, design consistency, and user experience.
- Example: Testing navigation and layout in a login form or web page.
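The following minimal Python sketch, referenced under Regression Testing above, combines
regression test selection with prioritization; the test-case records, module names, and
priority scheme are all hypothetical.

from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    covers_modules: set
    priority: int  # 1 = most critical

def select_and_prioritize(suite, changed_modules):
    # Regression Test Selection: keep only tests that touch changed code.
    selected = [t for t in suite if t.covers_modules & changed_modules]
    # Prioritization: run the most critical of the selected tests first.
    return sorted(selected, key=lambda t: t.priority)

suite = [
    RegressionTest("test_login", {"auth"}, 1),
    RegressionTest("test_report_layout", {"reports"}, 3),
    RegressionTest("test_checkout", {"cart", "payments"}, 1),
]

for t in select_and_prioritize(suite, {"auth", "cart"}):
    print("run:", t.name)  # test_login and test_checkout; the reports test is skipped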
Alpha and Beta Testing
- Alpha Testing:
• Conducted in-house by skilled testers before the product's public release.
• Detects issues early in a controlled environment.
- Beta Testing:
• Conducted by real users in a real-world environment.
• Provides feedback for final adjustments before launch.
Unit 3: Test Management
Test Plan
A test plan is a document outlining the scope, approach, resources, and schedule for testing
activities. It includes test items, features to be tested, testing tasks, responsibilities, test
environment, and entry/exit criteria. A well-prepared test plan ensures clarity and efficient
test management.
Steps for Preparing a Test Plan
1. Analyze the product and understand it thoroughly.
2. Develop a test strategy to define scope, risks, and issues.
3. Define test objectives and criteria (entry/exit, pass/fail).
4. Plan resources, test environment, and scheduling.
5. List test deliverables like test cases, reports, and results.
Types of Test Plans
- Master Test Plan: A high-level plan covering all other test plans.
- Testing Level Specific Plans: Separate plans for Unit, Integration, System, and Acceptance
Testing.
- Testing Type Specific Plans: Plans for specific types like Performance or Security Testing.
Test Plan Guidelines
- Be concise and avoid redundancy.
- Specify details like OS version when defining test environments.
- Use lists and tables instead of long paragraphs.
- Review and update the test plan regularly.
Test Deliverables
Test deliverables include:
- Test cases, plans, and strategies.
- Test scripts, data, traceability matrix, and results.
- Summary reports, defect logs, and release notes.
Test Reporting
Test reporting ensures effective communication during the testing process. Types include:
1. Test Incident Report: Logs issues found during testing.
2. Test Cycle Report: Summarizes activities and defects for each test cycle.
3. Test Summary Report: Final evaluation of testing, including phase-wise and final
assessments.
Test Process Management
Effective test management includes resource and environment planning. Key elements:
1. Test Infrastructure Management: Maintains test case databases and defect repositories.
2. Test People Management: Involves hiring, training, and motivating the team.
3. Test Lead Responsibilities: Includes planning, resource allocation, and creating a
productive environment.
Defect Management (Chapter 5)
1. What is a Defect?
A defect, or bug, is an error in the software caused by mistakes during its design or
development. These errors indicate flaws in the system, which can affect functionality,
performance, or user experience. Defects can occur at various stages of the Software
Development Life Cycle (SDLC), such as requirements gathering, design, coding, testing, or
deployment. The cost and impact of defects depend on the stage in which they arise and
when they are detected. The earlier a defect is found, the easier and cheaper it is to fix.
2. Causes of Software Defects
1. Miscommunication of Requirements: Incomplete or incorrect understanding of
requirements leads to errors in implementation.
2. Unrealistic Development Timelines: Tight deadlines force developers to rush, increasing
the likelihood of mistakes.
3. Lack of Design and Coding Experience: Inadequate knowledge or skills can introduce
flaws in the design or code.
4. Human Factors: Simple errors caused by developers, such as typos or logic mistakes.
5. Lack of Version Control: Poor version management can result in mismatched or outdated
code.
6. Defective Third-Party Tools: Bugs in external libraries or tools can introduce issues in the
software.
7. Last-Minute Requirement Changes: Sudden changes can disrupt existing workflows and
introduce errors.
8. Inadequate Testing: Poor testing skills or insufficient coverage can leave defects
unnoticed.
3. Defect Classification
Defects are categorized based on Severity (the impact of the defect on the system) and
Priority (the urgency of fixing it); a sketch combining both dimensions follows the lists
below:
a) Severity-Wise Classification:
1. Critical: Causes complete system failure or blocks key functionality. Needs immediate
fixing.
2. Major: Causes noticeable product failure or deviation from expected behavior, but the
system is still usable.
3. Minor: Causes small, non-critical issues that do not affect core functionality.
4. Cosmetic: Issues related to UI or appearance that have no impact on functionality.
b) Priority-Wise Classification:
1. High Priority: Must be fixed immediately as it affects critical functionality or the delivery
schedule.
2. Medium Priority: Important but can be scheduled for a later fix.
3. Low Priority: Non-urgent issues that can be resolved in future releases.
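A minimal Python sketch of how a defect tracker might record these two independent
dimensions and order a triage queue; all field names and records are hypothetical.

from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    # Impact on the system (lower number = more severe).
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3
    COSMETIC = 4

class Priority(IntEnum):
    # Urgency of the fix (lower number = more urgent).
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: Severity   # severity and priority are assigned independently:
    priority: Priority   # e.g., a cosmetic logo bug can still be high priority

backlog = [
    Defect("D-101", "App crashes on save", Severity.CRITICAL, Priority.HIGH),
    Defect("D-102", "Logo slightly misaligned", Severity.COSMETIC, Priority.LOW),
    Defect("D-103", "Report totals rounded wrongly", Severity.MAJOR, Priority.MEDIUM),
]

# Triage order: most urgent first, with severity breaking ties.
for d in sorted(backlog, key=lambda d: (d.priority, d.severity)):
    print(d.defect_id, d.summary)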
4. Importance of Defect Management
1. Quality Assurance: Ensures the final product meets user expectations and specifications.
2. Cost Control: Identifying defects early reduces the cost of fixing them.
3. Improved Processes: Helps identify weaknesses in the development process and
implement corrective actions.
4. Customer Satisfaction: Delivering defect-free software enhances user trust and
experience.