T&V DHRUV PA1 v1
Bachelor of Engineering
Subject Code: 3171111
Semester – VII
Subject Name: Testing and Verification
DHRUV MAMTORA
VLSI Testing, Levels of Abstraction in VLSI Testing, Historical Review of VLSI Test
Technology.
1. Introduction to VLSI Testing: Understanding the importance of testing in the VLSI
lifecycle, challenges, and abstraction levels.
2. Design for Testability (DFT): Concepts like scan design, testability analysis, and scan
architecture.
3. Logic and Fault Simulation: Techniques for logic simulation and fault modeling.
4. Verification: The process and importance of verification, including different methods
and tools.
5. Verification Techniques using SystemVerilog: Advanced verification concepts like
code coverage, functional coverage, and assertions.
Practical Assignments:
counters, and verification of designs using assertion-based methodologies.
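As a hedged illustration of this kind of assignment, the sketch below pairs a small counter with a concurrent SystemVerilog assertion; the module, signal, and property names are illustrative and not taken from the assignment list.

module counter4 (
    input  logic       clk,
    input  logic       rst_n,      // asynchronous, active-low reset
    output logic [3:0] count
);
    // Free-running 4-bit up-counter
    always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) count <= '0;
        else        count <= count + 4'd1;

    // Assertion-based check: outside reset, the counter advances by exactly one
    // each clock; 4-bit arithmetic makes the 15 -> 0 wrap-around satisfy this too.
    property p_increment;
        @(posedge clk) disable iff (!rst_n)
            rst_n && $past(rst_n) |-> count == $past(count) + 4'd1;
    endproperty
    assert_increment: assert property (p_increment)
        else $error("counter did not increment as expected");
endmodule

Embedding the property next to the design (or in a separate bound checker) is what is usually meant by an assertion-based methodology: the checker runs automatically in every simulation instead of relying on hand-written expected values.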
• Testing vs. Verification: Found in almost every paper, it's essential to clearly differentiate between testing (applying test patterns to the fabricated chip and checking its responses to detect manufacturing defects) and verification (checking, before fabrication, that the design behaves according to its specification).
• Fault Terminology: Yield, fault coverage, error, defect, failure, repair time, reliability, reject rate,
fault detection efficiency. These are basic but crucial concepts.
• VLSI Testing Challenges: This involves fault models, test pattern generation, and handling
complexities of large-scale integration.
• Single Stuck-at Faults: A frequently asked problem involves calculating the number of single
stuck-at faults and determining test vectors to detect them.
• Bridging Faults: Explanation of bridging fault models (common question).
• Transistor Faults: Faults in gates like two-input CMOS NOR or NAND, and transistor-level testing
methodologies.
• Delay Faults, Coupling Faults, Pattern Sensitivity Faults: Different fault types and their relevance
in testing VLSI circuits.
• SCOAP: Controllability and Observability using the SCOAP technique for logic gates and full
adders.
• Scan Design & Scan Chains: Muxed-D scan cells, LSSD scan cells, and clocked scan cells are often
asked, including scan configuration and how to address challenges in sequential circuit testing.
• Control Point Insertion: Used to improve testability by adding control points in circuits.
• Fault Simulation Algorithms: Serial, parallel, deductive fault simulation—these simulation
methods are key topics across multiple papers.
• Compiled Code Simulation: Explaining its flow and its role in testing and verification.
5. Verification:
• Assertions and Verification Plan: A consistent focus across papers. Knowing the role and
implementation of assertions, and how to create a comprehensive verification plan, is vital.
• Code Coverage: White-box vs. black-box testing, coverage concepts such as branch coverage and functional coverage, and what it means to achieve 100% coverage.
• Simulation and Functional Verification: Different functional verification approaches, such as equivalence checking and grey-box vs. white-box verification.
• Test Bench Design: Designing and writing test benches for common digital circuits like multiplexers (4x1), demultiplexers (1x4), adders, etc., using Verilog or SystemVerilog (a minimal 4x1 multiplexer and test bench are sketched after this list).
• Practical Applications: Design-based questions also often include constructing testable circuits,
determining fault models, and simulating these circuits for testing.
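As a hedged illustration of the test bench item above, here is a minimal 4x1 multiplexer and a directed SystemVerilog test bench; the module and signal names and the stimulus pattern are illustrative, not taken from any paper.

module mux4x1 (
    input  logic [3:0] d,    // data inputs d[0]..d[3]
    input  logic [1:0] sel,  // select lines
    output logic       y
);
    assign y = d[sel];
endmodule

module tb_mux4x1;
    logic [3:0] d;
    logic [1:0] sel;
    logic       y;

    mux4x1 dut (.d(d), .sel(sel), .y(y));  // device under test

    initial begin
        d = 4'b1010;                       // fixed data pattern
        for (int i = 0; i < 4; i++) begin  // walk through every select value
            sel = i[1:0];
            #1;                            // let the combinational output settle
            assert (y == d[i])
                else $error("mux mismatch: sel=%0d y=%b expected=%b", i, y, d[i]);
        end
        $finish;
    end
endmodule

The same structure (instantiate the DUT, drive inputs, check outputs with assertions or $display) carries over directly to the 1x4 demultiplexer and adder test benches mentioned above.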
Great! Based on the syllabus you've provided for the progressive assessment, the topics are
relatively focused. Here's a study guide tailored to help you prepare efficiently:
1. Introduction to VLSI Testing:
Key Topics:
• Importance of Testing: Why testing is essential during the design and manufacturing
process of VLSI circuits (to ensure functionality, reduce costs, and improve yield).
• VLSI Lifecycle Testing: Testing at different stages such as pre-fabrication, post-
fabrication, and during operation (online/offline testing).
• Challenges in VLSI Testing: Scale and complexity of circuits, defect density, process
variations, power consumption, test pattern generation, time constraints, etc.
• Levels of Abstraction in VLSI Testing:
o Gate-level: Testing logic gates.
o Register-transfer level (RTL): Testing functionality at a higher design
abstraction.
o Transistor-level: Testing transistors and their interactions (shorts, opens).
• Historical Review of VLSI Test Technology: Understanding the evolution of VLSI
testing technologies, including automatic test pattern generation (ATPG) and
advancements in fault models.
Potential Questions:
2. Design and Testability:
Key Topics:
• Testability Analysis: How easy it is to control and observe internal nodes of a circuit
(controllability & observability).
• Design for Testability (DFT) Basics:
o Controllability: The ability to set a specific internal state of the circuit.
o Observability: The ability to observe outputs that reveal internal faults.
o Techniques such as adding test points or using scan-based testing improve
testability.
• Scan Cell Designs:
o Muxed-D Scan Cells: Used in scan design to improve controllability and
observability in sequential circuits.
o LSSD (Level-Sensitive Scan Design): A scan design methodology that ensures
predictable and controllable testing of sequential logic.
• Scan Design Rules: Rules to be followed for incorporating scan design (e.g., no
combinational feedback loops, latch usage restrictions).
• Scan Design Flow: Steps involved in scan design, including insertion, verification, and
testing.
• Special Purpose Scan Designs: Custom scan designs for particular challenges in
testability.
• RTL Design for Testability: How to structure RTL code for better testability (e.g.,
avoiding asynchronous resets, simplifying control logic).
Potential Questions:
3. Logic and Fault Simulation:
Key Topics:
• Simulation Models:
o Event-driven simulation: Used for simulating digital circuits at the gate level.
o Cycle-based simulation: Focuses on simulating circuit behavior cycle-by-cycle,
often faster but less detailed.
• Logic Simulation: Simulating the logic function of a circuit to verify that it behaves as
expected (often used before fault simulation).
• Fault Simulation: Used to determine the effectiveness of test patterns by simulating
various fault models.
o Stuck-at Fault Model: One of the most common fault models where a signal is
assumed to be permanently stuck at 0 or 1.
o Delay Fault Model: Focuses on timing issues, where a signal transition takes
longer than expected.
• Fault Simulation Algorithms:
o Serial Fault Simulation: Simulates one fault at a time.
o Parallel Fault Simulation: Simulates multiple faults simultaneously for faster
testing.
o Deductive Fault Simulation: A more advanced method where the effect of a
fault is deduced based on its impact on logic values.
Potential Questions:
• What are the key simulation models used in VLSI testing, and what are their benefits?
• Explain the difference between logic simulation and fault simulation.
• Describe the stuck-at fault model and its importance in VLSI testing.
• Compare serial fault simulation and parallel fault simulation.
Study Strategy:
1. Review Key Definitions: Be sure you can clearly define important terms like
controllability, observability, stuck-at fault, scan cell, and fault simulation models.
2. Understand Examples: Practice with example circuits (like a simple NAND or NOR
gate) and calculate controllability, observability, and possible fault scenarios.
3. Work on Flow and Diagrams: Be able to sketch and explain the flow of a scan design,
logic simulation, and fault simulation algorithms.
4. Review Past Papers: Use the past GTU papers to identify frequently asked questions
(many of which have been provided already). This will help you focus on areas that are
more likely to appear in your exam.
Expected Question Formats:
• Short Answer: Definitions and comparisons (e.g., Testing vs. Verification, Logic vs. Fault Simulation).
• Descriptive: Explaining processes (e.g., Scan design flow, Fault simulation algorithms).
• Calculation/Diagram-based: Problems related to controllability/observability (SCOAP),
fault analysis for circuits (e.g., stuck-at faults).
By focusing on these areas and practicing with real examples, you should be well-prepared for
your progressive assessment.
Let’s dive into the first topic, Introduction to VLSI Testing. I'll break down each key area as
outlined.
Importance of Testing
• Functionality Assurance: Ensures that the integrated circuit (IC) performs its intended
functions. If the design has flaws, the final product may not work correctly, leading to
failures in systems that rely on it.
• Cost Reduction: Identifying and fixing defects early in the design or manufacturing
process is cheaper than doing so post-production. This reduces the overall cost of
development and production.
• Yield Improvement: Testing can identify defective parts of the manufacturing process,
allowing manufacturers to improve yields by eliminating sources of defects. This is
particularly important in VLSI, where even small defects can significantly affect
performance.
Challenges in VLSI Testing
• Scale and Complexity: As circuits grow in size and complexity, testing becomes more challenging. Ensuring comprehensive coverage of all possible states and conditions can be difficult.
• Defect Density: High-density circuits are more prone to defects, which increases the
likelihood of failure. Testing must be designed to identify subtle defects that may not be
easily detectable.
• Process Variations: Variations in manufacturing processes can lead to differences in the
behavior of ICs, making it essential to test for a wide range of conditions.
• Power Consumption: Testing may consume a significant amount of power, particularly
in large circuits. Managing power consumption during testing is critical to prevent
overheating or other issues.
• Test Pattern Generation: Creating effective test patterns that can adequately cover all
possible faults in a circuit is a complex task and requires sophisticated algorithms.
• Time Constraints: With tight schedules for product releases, there is often a limited
amount of time available for testing, which can lead to incomplete testing and undetected
defects.
Levels of Abstraction in VLSI Testing
• Gate-level Testing: Focuses on individual logic gates, verifying their functionality and interconnections. Fault models, such as stuck-at faults, are commonly used at this level.
• Register-transfer level (RTL) Testing: Involves testing the functionality of the design
at a higher abstraction level where the focus is on data flow between registers. It checks if
the design meets specifications without delving into gate-level details.
• Transistor-level Testing: Tests the individual transistors and their interactions. This
level addresses issues like short circuits and open circuits that can affect overall
performance.
Great! Let’s move on to the next topic: Design and Testability.
Testability Analysis
Testability analysis focuses on how easily the internal states of a circuit can be controlled and
observed. Two key concepts in testability analysis are:
• Controllability: This refers to the ability to set a specific internal state of the circuit
using test inputs. High controllability means that the test engineer can control the states
of the circuit easily, which is essential for effective testing.
• Observability: This is the ability to observe outputs that can reveal the presence of
internal faults. A circuit is said to have high observability if it is easy to determine
whether a fault exists based on the output responses.
Testability analysis aims to enhance both controllability and observability through design
strategies, ensuring that defects can be detected and localized efficiently.
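As a small worked illustration using the standard SCOAP combinational measures (assuming a 2-input AND gate whose inputs a and b are primary inputs and whose output z is a primary output): primary inputs are assigned controllabilities CC0 = CC1 = 1, a primary output gets observability CO = 0, and lower values mean easier to control or observe. Then:
CC1(z) = CC1(a) + CC1(b) + 1 = 1 + 1 + 1 = 3 (both inputs must be driven to 1)
CC0(z) = min(CC0(a), CC0(b)) + 1 = 1 + 1 = 2 (a 0 on either input suffices)
CO(a) = CO(z) + CC1(b) + 1 = 0 + 1 + 1 = 2 (to observe a at z, b must be held at 1)
Deep internal nodes accumulate larger values, flagging exactly the parts of the circuit where design-for-testability help is needed.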
Design for Testability (DFT)
Design for Testability involves incorporating features into the design of an integrated circuit that make it easier to test. Key concepts include:
• Test Points: These are specific locations within a circuit where test signals can be
applied or outputs can be observed. Adding test points can significantly enhance
observability.
• Scan-Based Testing: This method utilizes scan chains, which are specific configurations
of flip-flops that allow for easy shifting of test data into and out of the circuit. This
improves both controllability and observability.
Scan Cell Designs
Scan cells are specialized flip-flops that are used in scan-based testing. They can function as regular storage elements during normal operation but can also be reconfigured to facilitate testing. Key types of scan cell designs include:
• Muxed-D Scan Cells: These cells use multiplexers to switch between normal operation and test mode. They improve controllability and observability by allowing external test patterns to be applied and observed (a minimal RTL sketch follows this list).
• LSSD (Level-Sensitive Scan Design): This is a scan design methodology that ensures
predictable and controllable testing of sequential logic. It employs level-sensitive latches
to enhance testing efficiency.
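A minimal RTL sketch of a muxed-D scan cell follows, assuming a simple positive-edge flip-flop; the module and port names are illustrative, not a standard library cell.

module muxed_d_scan_cell (
    input  logic clk,
    input  logic scan_en,   // 1 = shift (test) mode, 0 = normal operation
    input  logic d,         // functional data input
    input  logic scan_in,   // serial scan input, from the previous cell in the chain
    output logic q          // functional output, also feeds scan_in of the next cell
);
    // A multiplexer in front of the D flip-flop selects functional or scan data
    always_ff @(posedge clk)
        q <= scan_en ? scan_in : d;
endmodule

In shift mode the cells behave as one long shift register, which is what gives scan design its controllability (shift a state in) and observability (shift the captured response out).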
Scan Design Rules
• No Combinational Feedback Loops: Feedback loops can complicate testing as they can
create states that are difficult to control or observe.
• Latch Usage Restrictions: Certain types of latches may introduce complexity into the
scan design, so their usage should be minimized or regulated.
Following these rules helps ensure that the scan design remains effective and that test patterns
can be applied successfully.
Scan Design Flow
The scan design flow involves several steps to ensure that scan-based testing is implemented correctly:
1. Insertion: Adding scan cells into the design and connecting them appropriately to form
scan chains.
2. Verification: Ensuring that the scan design operates correctly and that it does not
introduce new faults into the circuit.
3. Testing: Conducting tests using the scan chains to validate the functionality of the
design.
This structured approach helps maintain the integrity of the design while enhancing its
testability.
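As a rough sketch of what the insertion step produces, the fragment below wires three muxed-D style cells into a single chain; the width, port names, and single-chain assumption are illustrative only.

module scan_chain3 (
    input  logic       clk,
    input  logic       scan_en,   // 1 = shift test data, 0 = capture functional data
    input  logic       scan_in,   // serial test stimulus
    input  logic [2:0] d,         // functional data from the combinational logic
    output logic [2:0] q,         // functional outputs back into the logic
    output logic       scan_out   // serial test response
);
    always_ff @(posedge clk) begin
        q[0] <= scan_en ? scan_in : d[0];  // first cell in the chain
        q[1] <= scan_en ? q[0]    : d[1];  // each cell shifts from its predecessor
        q[2] <= scan_en ? q[1]    : d[2];
    end
    assign scan_out = q[2];                // last cell drives the chain output
endmodule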
Special Purpose Scan Designs
In certain cases, standard scan designs may not be sufficient to address specific testing challenges. Special purpose scan designs may be created to target unique scenarios, such as:
• Built-in Self-Test (BIST): This technique incorporates self-testing capabilities into the design, allowing the circuit to perform tests on itself (a small pattern-generator sketch follows this list).
• Hybrid Scan Designs: These may combine different scan methodologies to address
particular constraints or requirements.
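As a hedged illustration of the pattern-generation side of BIST, here is a small linear feedback shift register (LFSR) of the kind typically used to produce pseudo-random test patterns on-chip; the width, tap positions, and seed are illustrative.

module lfsr4 (
    input  logic       clk,
    input  logic       rst_n,
    output logic [3:0] q          // pseudo-random pattern output
);
    always_ff @(posedge clk or negedge rst_n)
        if (!rst_n)
            q <= 4'b0001;                  // non-zero seed so the register never locks at 0
        else
            q <= {q[2:0], q[3] ^ q[2]};    // shift left, feedback from bits 3 and 2
endmodule

A BIST controller would typically pair a generator like this with a response compactor (for example, a multiple-input signature register) so that the circuit can grade its own responses.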
RTL Design for Testability
Register-Transfer Level (RTL) design can also be structured for better testability. Techniques include avoiding asynchronous resets where possible, simplifying control logic, and keeping internal state easy to initialize, so that scan insertion and test generation remain straightforward.
Great! Let’s proceed to the next topic: Logic and Fault Simulation.
Simulation Models
Simulation models are essential tools used in VLSI testing to verify the functionality of circuits.
The two primary types of simulation models include:
• Event-Driven Simulation: This model simulates digital circuits at the gate level by
tracking events (changes in signal states) as they occur. It is efficient for large circuits as
it only processes events that change rather than evaluating the entire circuit in every
simulation cycle.
• Cycle-Based Simulation: This focuses on simulating circuit behavior in discrete time
cycles, processing all signals simultaneously at each clock cycle. While this method can
be faster for certain analyses, it is often less detailed than event-driven simulations.
Logic Simulation
Logic simulation is the process of simulating the logic function of a circuit to verify that it
behaves as expected. This typically involves:
• Testing Functional Correctness: Before applying fault simulations, engineers use logic
simulation to ensure that the design behaves correctly under various input conditions.
• Verification Against Specifications: Logic simulation helps verify that the design meets
its specifications by checking output responses for given inputs.
Fault Simulation
Fault simulation is a crucial step to evaluate the effectiveness of test patterns by simulating
various fault models. It is used to determine how well a design can detect faults.
• Stuck-at Fault Model: This is one of the most commonly used fault models, in which a signal is assumed to be permanently stuck at either a logical 0 (ground) or a logical 1 (VDD). For example, if a wire is stuck at 0, it can no longer transmit a high signal. Stuck-at faults are simpler to simulate and provide a basic understanding of fault coverage (a small worked fault count follows this list).
• Delay Fault Model: This model focuses on timing issues, where a signal transition takes
longer than expected. It simulates real-world conditions where timing paths can be
affected by various factors, such as temperature or process variations.
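A small worked fault count (illustrative gate, standard single stuck-at bookkeeping): a single 2-input AND gate has three signal lines (two inputs and one output), so it has 2 × 3 = 6 possible single stuck-at faults. The vector (a, b) = (1, 1) detects a stuck-at-0 on either input or on the output, because any of those faults forces the output to 0 while the fault-free output is 1; the input stuck-at-1 faults need (0, 1) and (1, 0) respectively, and either of those vectors also catches the output stuck-at-1 fault.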
Fault simulation algorithms help determine how different faults affect the output of the circuit.
Key algorithms include:
• Serial Fault Simulation: In this approach, faults are simulated one at a time. While straightforward, it can be time-consuming for large circuits because each fault must be tested separately (see the sketch after this list).
• Parallel Fault Simulation: This method simulates multiple faults simultaneously. It is
faster than serial simulation, especially for large circuits, as it can quickly identify
multiple fault conditions.
• Deductive Fault Simulation: This advanced method deduces the effect of a fault based
on its impact on logic values. It efficiently narrows down the potential faults that could
affect the output by analyzing how changes in input states propagate through the circuit.
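The sketch below illustrates the serial idea on a single 2-input NAND gate: one fault is injected at a time with force/release into a faulty copy of the circuit, and its response is compared against a fault-free copy. The module and signal names, and the two-instance setup, are illustrative choices, not a prescribed method.

module nand_gate (
    input  logic a,
    input  logic b,
    output logic y
);
    assign y = ~(a & b);
endmodule

module tb_serial_fault_sim;
    logic a, b;
    logic y_good, y_faulty;

    nand_gate dut_good  (.a(a), .b(b), .y(y_good));    // fault-free reference copy
    nand_gate dut_fault (.a(a), .b(b), .y(y_faulty));  // copy used for fault injection

    initial begin
        // Inject the single fault "input a stuck-at-0" into the faulty copy only
        force dut_fault.a = 1'b0;
        // Apply the candidate test vector (a, b) = (1, 1)
        a = 1'b1; b = 1'b1; #1;
        // Fault-free NAND(1,1) = 0, faulty NAND(0,1) = 1, so the outputs differ
        if (y_good !== y_faulty)
            $display("fault a stuck-at-0 detected by vector 11");
        release dut_fault.a;
        // Looping this inject/apply/compare sequence over every fault in the fault
        // list, one fault at a time, is exactly the serial approach described above.
        $finish;
    end
endmodule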
Questions
1. Why is testing important during the VLSI lifecycle?
• Functionality Verification: Testing ensures that the VLSI design meets its specified
functional requirements. It helps identify logical errors and design flaws that may prevent
the circuit from performing as intended.
• Cost Reduction: Early detection of faults can save significant costs associated with
rework, recalls, and product failures. By testing at various stages, manufacturers can
catch defects before they escalate into more significant issues.
• Yield Improvement: Testing improves manufacturing yield by allowing the
identification of defects in the production process. By ensuring that only fully functional
chips are shipped, companies can maximize their return on investment.
• Reliability Assurance: Testing helps validate the reliability and robustness of VLSI
circuits under different operating conditions. This is particularly important in safety-
critical applications like automotive or medical devices, where failures can have severe
consequences.
• Design Iteration: Throughout the VLSI lifecycle, iterative design improvements can be
guided by testing feedback. This continuous loop of design and testing fosters better
designs and innovations.
Summary: Testing at various stages of the VLSI lifecycle not only ensures that the final product
meets quality standards but also plays a pivotal role in cost management, reliability, and design
improvement.
2. What are the major challenges in VLSI testing?
VLSI testing faces several challenges, which impact the testing process:
• Complexity and Scale: Modern VLSI circuits are highly complex, often containing
millions or billions of transistors. This complexity makes it difficult to design effective
tests that cover all possible fault scenarios.
• Defect Density: As technology scales down, the defect density in chips increases. Small
manufacturing defects can lead to significant failures, requiring robust testing strategies
to identify them.
• Process Variations: Variability in semiconductor manufacturing can result in different
behavior in chips made from the same design. Testing must account for these variations
to ensure all chips perform reliably.
• Power Consumption: Test patterns often cause much higher switching activity than normal operation, so testing can consume significant power and lead to overheating. Managing power during testing is crucial to avoid false failures or damage to the device.
• Time Constraints: The demand for faster product release cycles puts pressure on testing
teams to develop and execute tests quickly. Balancing thorough testing with time
efficiency can be a significant challenge.
• Test Pattern Generation: Generating effective test patterns that can detect a wide range
of faults without excessive time or cost is challenging. Automatic test pattern generation
(ATPG) techniques must be refined to improve test quality.
3. What are the levels of abstraction in VLSI testing, and why is each relevant?
VLSI testing involves multiple levels of abstraction, each playing a vital role:
• Gate-Level Testing:
o Description: At the gate level, testing focuses on individual logic gates (AND,
OR, NOT, etc.).
o Relevance: This level allows for detailed fault simulation (like stuck-at faults)
and helps verify the logical correctness of the circuit at the lowest level.
• Register-Transfer Level (RTL) Testing:
o Description: RTL testing involves the representation of the circuit's functionality
in terms of data transfers between registers.
o Relevance: It abstracts the complexity of lower-level details while allowing the
verification of functionality. It is crucial for testing large designs quickly and
efficiently, ensuring that the overall behavior matches specifications.
• Transistor-Level Testing:
o Description: This level focuses on the physical implementation of transistors and
their interactions.
o Relevance: It is essential for identifying physical defects (like shorts and opens)
and ensuring the correct functioning of the individual transistors in various
operating conditions.
Summary: Each level of abstraction is relevant for different types of testing and fault
identification. Gate-level testing is precise, RTL testing simplifies functional verification, and
transistor-level testing addresses physical design issues.
Conclusion
Testing during the VLSI lifecycle is essential for ensuring functionality, reducing costs,
improving yield, and assuring reliability. However, the complexity and challenges associated
with VLSI testing require a strategic approach across different levels of abstraction, ensuring
thorough coverage and effective fault detection.
1. What is Testability Analysis, and Why is it Important?
Testability Analysis refers to evaluating how easily a circuit can be tested to identify faults. It
assesses two key metrics: controllability and observability.
• Controllability:
o Definition: Controllability refers to the ability to set an internal signal (node) of a
circuit to a specific logical value (0 or 1). It assesses how easily a test pattern can
control the internal states of the circuit.
o Example: If a circuit has a path from the input to a flip-flop that can be driven by
a specific input signal, it is considered controllable.
• Observability:
o Definition: Observability refers to the ability to observe the internal states of a
circuit from its outputs. It assesses how well the outputs of a circuit reflect the
internal states.
o Example: If a fault occurs within the circuit and this fault can be detected through
the outputs (like failing to match expected output), then the internal state is
considered observable.
Summary: Controllability deals with the ability to set internal states, while observability focuses
on the ability to detect those states from the outputs. Both are essential for effective testing and
diagnosis of faults.
2. Muxed-D Scan Cells and LSSD
Summary: Both muxed-D scan cells and LSSD are crucial in enhancing the testability of sequential circuits by providing mechanisms to shift test data in and out while maintaining reliable operation.
Scan Design Flow consists of several key steps that must be followed to successfully implement
scan testing in VLSI designs. The typical steps include:
1. Insertion: Adding scan cells into the design, replacing some flip-flops with scan flip-
flops.
2. Verification: Checking the integrity of the scan design to ensure that it correctly
implements the intended functionality and testability.
3. Testing: Applying test patterns to the design, using the scan path to capture responses.
4. Analysis: Evaluating the test results to identify any faults in the design.
Importance:
• Ensures Coverage: A well-defined scan design flow ensures that all internal states can
be tested, maximizing fault coverage.
• Streamlines Testing: It facilitates efficient testing by providing structured steps that
simplify the testing process and minimize errors.
• Improves Debugging: A systematic approach allows for better fault localization, making
it easier to identify and fix defects.
Scan Design Rules
• No Combinational Feedback Loops: Avoid feedback loops that can complicate test pattern application and affect observability.
• Latch Usage Restrictions: Ensure that latches are used in ways that do not interfere with
scan operations, as this can lead to timing issues and unpredictable behavior.
• Single Scan Chain: If possible, create a single scan chain to simplify the design and
testing process. However, multiple chains can be used if they are necessary for design
constraints.
Conclusion
Understanding testability analysis, the concepts of controllability and observability, and the
design rules surrounding scan testing is crucial for efficient VLSI design. Implementing effective
scan cell designs and following a structured scan design flow allows engineers to ensure that
their circuits can be thoroughly tested, leading to higher reliability and better performance.
1. Key Simulation Models Used in VLSI Testing and Their Benefits
Simulation Models are essential tools in VLSI testing that help analyze and verify circuit
behavior before fabrication. Here are some of the key simulation models:
• Event-Driven Simulation:
o Description: This model reacts to changes in circuit signals (events), updating
only the affected parts of the circuit. It is widely used for digital circuit
simulation.
o Benefits:
▪ Efficient for large circuits since it only simulates changes, reducing
computational load.
▪ Allows for accurate timing analysis by considering signal propagation
delays.
• Cycle-Based Simulation:
o Description: This model simulates the circuit in discrete time cycles, analyzing
the circuit's behavior at each clock cycle.
o Benefits:
▪ Often faster than event-driven simulation because it evaluates the circuit once per clock cycle and ignores intra-cycle timing and individual signal events.
▪ Suitable for performance evaluation of synchronous circuits.
• Logic Simulation:
o Description: Verifies the logical correctness of the circuit without considering
physical parameters like timing or electrical behavior.
o Benefits:
▪ Ensures that the design functions as intended by verifying logical
relationships between inputs and outputs.
▪ Often used in the early design phases for functional verification.
• Fault Simulation:
o Description: Simulates the effects of various fault models to assess the
effectiveness of test patterns. It helps identify how well a design can detect faults.
o Benefits:
▪ Evaluates the fault coverage of test patterns, providing insights into
potential weaknesses in the design.
▪ Facilitates optimization of test patterns to improve fault detection rates.
2. Difference Between Logic Simulation and Fault Simulation
• Logic Simulation:
o Purpose: To verify that a circuit behaves correctly according to its design specifications.
o Focus: Tests the logical functionality of the circuit without considering fault
scenarios. It checks whether the outputs match expected results for given inputs.
o Use Case: Primarily used during the design phase to ensure that the circuit logic
is correct.
• Fault Simulation:
o Purpose: To evaluate how well a test pattern can detect faults in a circuit.
o Focus: Simulates specific faults (e.g., stuck-at faults) to assess the fault coverage
of test patterns. It helps determine whether the circuit can detect and isolate faults
during testing.
o Use Case: Utilized after logic simulation to validate the effectiveness of testing
strategies and improve test patterns.
Conclusion
Understanding the various simulation models, their applications, and differences is crucial for
effective VLSI testing. Logic simulation focuses on verifying design correctness, while fault
simulation assesses the circuit's ability to detect faults. The stuck-at fault model serves as a
foundational tool for test pattern generation, and choosing between serial and parallel fault
simulation depends on the needs for detail versus efficiency.