UNIT-5
Functional Coverage and UVM
Dr. Aruru Sai Kumar
Assistant Professor
VNR VJIET
saikumar_a@[Link]
7013251431
Part-A
Functional Coverage
1. Functional Coverage:
• Functional coverage is a measure of which design features have been
exercised by the tests. Start with the design specification and create a
verification plan with a detailed list of what to test and how.
• For example, if your design connects to a bus, your tests need to exercise all the
possible interactions between the design and bus, including relevant design states,
delays, and error modes.
• In many complex systems, you may never achieve 100% coverage, as
schedules don't allow you to reach every possible corner case. There is
rarely time to write enough directed tests for sufficient coverage, and even
CRT (constrained-random testing) is limited by the time it takes you to create
and debug test cases and analyze the results.
• Figure shows the feedback loop to analyze the coverage results and decide
on which actions to take in order to converge on 100% coverage. Your first
choice is to run existing tests with more seeds; the second is to build new
constraints.
• Explicit coverage is described directly in the test environment using System
Verilog features. Implicit coverage is implied by a test — when the “register
move” directed test passes, you have hopefully covered all register
transactions.
• With CRT, you are freed from handcrafting every line of input stimulus, but
now you need to write code that tracks the effectiveness of the test with
respect to the verification plan. You are still more productive, as you are
working at a higher level of abstraction.
• You have moved from tweaking individual bits to describing the interesting
design states. Reaching for 100% functional coverage forces you to think
more about what you want to observe and how you can direct the design into
those states.
• Each individual simulation generates a database of functional coverage
information, the trail of footprints from the random walk. You can then
merge all this information together to measure your overall progress using
functional coverage as shown in Figure.
• Each simulation vendor has its own format for storing coverage data, as
well as its own analysis tools. You need to perform the following actions
with those tools.
• Run a test with multiple seeds:
• For a given set of constraints and coverage groups, compile the testbench and design
into a single executable. Now you need to run this constraint set over and over with
different random seeds.
• You can use the Unix system clock as a seed, but be careful, as your batch system
may start multiple jobs simultaneously.
• These jobs may run on different servers or may start on a single server with multiple
processors. So combine all these values to make a truly unique seed. The seed must
be saved with the simulation and coverage results for repeatability.
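• As a hedged sketch, the testbench can accept the chosen seed as a plusarg and log it, so any run can be reproduced later (the +seed plusarg name is an assumption, not a standard):

module tb;
  int unsigned seed;

  initial begin
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                            // fixed fallback seed
    $display("Running with seed = %0d", seed);
    process::self().srandom(seed);         // seed this process's RNG
  end
endmodule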
• Check for pass/fail:
• Functional coverage information is only valid for a successful simulation. When a simulation
fails because there is a design bug, the coverage information must be discarded.
• The coverage data measures how many items in the verification plan are complete, and this
plan is based on the design specification. If the design does not match the specification, the
coverage values are useless.
• Some verification teams periodically measure all functional coverage from scratch so that it
reflects the current state of the design.
• Analyze coverage across multiple runs:
• You need to measure how successful each constraint set is, over time. If you are not yet getting
100% coverage for the areas that are targeted by the constraints, but the amount is still
growing, run more seeds.
• If the coverage level has plateaued, with no recent progress, it is time to modify the
constraints. Only if you think that reaching the last few test cases for one particular section
may take too long for constrained-random simulation should you consider writing a directed
test.
• Even then, continue to use random stimulus for the other sections of the design, in case this
“background noise” finds a bug.
• Functional coverage is a crucial concept in verification methodologies, especially in System
Verilog, which focuses on ensuring that the design under test (DUT) has been exercised across
all functional scenarios defined in the specification.
• It provides a measure of how much of the intended functionality has been verified.
• Key Concepts of Functional Coverage:
• Definition: Functional coverage checks what has been tested rather than how the tests were
executed. It complements code coverage, which measures structural aspects.
• Usage:
• Functional coverage is used to identify missing scenarios in the testbench.
• It ensures that the DUT is exercised across all specified use cases.
• Components:
• Covergroup: A container for coverage-related constructs.
• Coverpoint: Defines a variable or expression whose values are tracked.
• Bins: Specify the range or values of interest for coverpoints.
• Cross Coverage: Tracks coverage of combinations of values across multiple coverpoints.
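• A minimal sketch tying these constructs together (the Transaction fields and bin names are illustrative, not from any specific design):

class Transaction;
  rand bit [2:0] dst;                   // destination port, 8 values
  rand bit [1:0] kind;                  // transaction kind
endclass

class Coverage;
  Transaction tr;

  covergroup CovTrans;                  // covergroup: the container
    DST : coverpoint tr.dst;            // coverpoint: tracked variable
    KIND: coverpoint tr.kind {          // bins: values of interest
      bins idle = {0};
      bins data = {[1:2]};
      bins ctrl = {3};
    }
    DSTxKIND: cross DST, KIND;          // cross coverage: combinations
  endgroup

  function new();
    CovTrans = new();                   // covergroups must be constructed
  endfunction

  function void sample_tr(Transaction t);
    tr = t;
    CovTrans.sample();                  // sample when a transaction completes
  endfunction
endclass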
2. Coverage Types:
• Coverage is a generic term for measuring progress to complete design
verification. The coverage tools gather information during a simulation and
then post-process it to produce a coverage report.
• Functional coverage is a vital aspect of verification in hardware design and
system testing. It measures how much of the design's intended functionality has
been exercised during simulation or testing.
• There are various types of functional coverage, categorized based on what
aspect of the design is being covered. These types include:
1. Code Coverage
• The easiest way to measure verification progress is with code coverage. Here you are
measuring how many lines of code have been executed (line coverage), which paths
through the code and expressions have been executed (path coverage), which single-bit
variables have had the values 0 or 1 (toggle coverage), and which states and transitions
in a state machine have been visited (FSM coverage).
• Code coverage measures how thoroughly your tests exercised the “implementation” of
the design specification, but not the verification plan.
• Though not functional coverage itself, it's often considered a starting point. It checks
how much of the code has been exercised during simulation:
• Statement Coverage: Verifies if every line of code is executed.
• Branch Coverage: Ensures that all branches (true/false conditions) in the code are
taken.
• Condition Coverage: Tests all logical conditions within expressions.
• FSM (Finite State Machine) Coverage: Ensures all states and transitions of state
machines are covered.
2. Functional Coverage
• The goal of verification is to ensure that a design behaves correctly in its real
environment. Functional coverage is tied to the design intent and is sometimes
called “specification coverage,” while code coverage measures how well you
have tested the RTL code and is known as, “implementation coverage.” These
are two very different metrics.
• Consider what happens if a block of code is missing from the design. Code
coverage cannot catch this mistake and could report that you have executed
100% of the lines, but functional coverage will show that the functionality does
not exist.
Bug Rate:
• An indirect way to measure coverage is to look at the rate at which fresh bugs are
found, as shown in the graph.
3. Expression Coverage
• Focuses on evaluating all possible values of expressions, especially complex
combinatorial logic. It ensures that all combinations of inputs are tested.
4. Toggle Coverage
• Ensures that every signal (bit) in the design toggles at least once (0 → 1 and 1
→ 0). It helps identify dead code or unused portions of the design.
5. Path Coverage
• Measures the coverage of all possible paths between start and end points in the
design. It's especially useful in finite state machines and sequential logic.
6. Cross Coverage
• Checks combinations of multiple related variables or conditions to ensure that
all possible scenarios are tested.
7. Scenario Coverage
• Measures how many predefined test scenarios or sequences of events have been
executed.
8. Assertion Coverage
• Ensures that all assertions (used for design checking) are triggered during
simulation. This helps verify the correctness of specific conditions in the
design.
9. Parameterized Coverage
• Verifies designs with parameter variations to ensure functionality is preserved
across a range of configurations.
10. Interface Coverage
• Focuses on interactions between modules, ensuring all types of communication
are tested, such as handshakes, bus transactions, or protocols.
3. Functional Coverage Strategies:
• Functional coverage strategies define systematic approaches to achieve
comprehensive verification of a design.
• These strategies aim to ensure that all functional aspects of the design are
adequately exercised and validated. Below are key functional coverage
strategies commonly used:
1. Define Coverage Goals Early
Purpose:
• Identify and prioritize key functionality and scenarios that need to be verified during the
design and test planning phase.
Steps:
• Analyze the design specification and identify key features and behaviors.
• Define coverage points for critical functionalities, boundary conditions, and expected
corner cases.
2. Use a Coverage-Driven Verification Plan
Purpose:
• Align verification efforts with measurable coverage metrics.
Steps:
• Develop a test plan outlining all functional aspects to be verified.
• Define measurable coverage metrics for each functionality.
• Iterate and refine the plan based on simulation results.
3. Divide and Conquer (Modular Approach)
Purpose:
• Simplify the verification process by breaking it into smaller, manageable parts.
Steps:
• Divide the design into functional blocks or modules.
• Define and track coverage points for each block independently.
• Combine results for overall system-level coverage.
4. Use Randomized and Directed Testing
• Randomized Testing: Generate random input stimuli within constrained
environments. Use functional coverage to identify gaps and refine constraints.
• Directed Testing: Target specific scenarios or edge cases identified in the test
plan. Ensure specific corner cases or critical paths are explicitly covered.
5. Focus on Corner Cases
Purpose:
• Validate the behavior of the design under extreme or unexpected conditions.
Steps:
• Analyze the design for boundary conditions and edge cases.
• Define coverage points for these conditions.
• Use directed or constrained-random testing to ensure these cases are exercised.
6. Cross Coverage for Combinations
Purpose:
• Ensure interactions between multiple variables or scenarios are adequately covered.
Steps:
• Identify variables or conditions with potential interdependencies.
• Define cross-coverage bins to track all possible combinations.
• Prioritize testing uncommon or critical combinations.
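• A sketch of these steps (mode, len, and the bin names are illustrative):

module cross_cov_sketch (input logic clk);
  bit       mode;                       // 0 = read, 1 = write
  bit [3:0] len;                        // burst length

  covergroup cg @(posedge clk);
    MODE: coverpoint mode { bins rd = {0}; bins wr = {1}; }
    LEN : coverpoint len  { bins short_len = {[1:4]};
                            bins long_len  = {[5:15]}; }
    MXL : cross MODE, LEN {
      // named cross bin singling out the uncommon, critical combination
      bins long_wr = binsof(MODE.wr) && binsof(LEN.long_len);
    }
  endgroup

  cg cov = new();
endmodule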
7. Assertions and Functional Coverage Synergy
Purpose:
• Combine assertion-based verification with functional coverage for thorough testing.
Steps:
• Use assertions to validate specific properties or conditions in the design.
• Monitor functional coverage points to ensure all scenarios are exercised.
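• A sketch of this synergy: one property is both asserted (checked) and covered (counted). The req/gnt handshake is an assumed example:

module req_gnt_check (input logic clk, req, gnt);
  property p_req_gets_gnt;
    @(posedge clk) req |-> ##[1:3] gnt;         // grant within 1-3 cycles
  endproperty

  a_req_gnt: assert property (p_req_gets_gnt);  // flags violations
  c_req_gnt: cover  property (p_req_gets_gnt);  // records each occurrence
endmodule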
8. Use Coverage Automation Tools
Purpose:
• Improve efficiency by leveraging tools for automated coverage monitoring and analysis.
Popular Tools:
• Synopsys VCS
• Cadence Incisive/Xcelium
• Mentor Graphics Questa
• Open-source tools like Cocotb
9. Perform Coverage Closure Analysis
Purpose:
• Ensure that all gaps in coverage are analyzed and addressed before sign-off.
Steps:
• Review uncovered scenarios to determine their relevance.
• Address gaps by refining constraints or adding directed tests.
• Document any exclusions with justification (e.g., unreachable states).
4. Simple Functional Coverage Example:
• To measure functional coverage, you begin with the verification plan
and write an executable version of it for simulation.
• In your System Verilog testbench, sample the values of variables and
expressions.
• These sample locations are known as cover points. Multiple cover
points that are sampled at the same time (such as when a transaction
completes) are placed together in a cover group.
• Sample 9.2 creates a random transaction and drives it out to an
interface. The testbench samples the value of the dst field using the
CovDst2 cover group. Eight possible values, 32 random transactions
— did your testbench generate them all?
• Samples 9.3 and 9.4 show part of a coverage report from VCS.
Because of randomization, every simulator will give different results.
• As you can see, the testbench generated dst values of 1, 2, 3, 4, 5, 6,
and 7, but never generated a 0. The "at least" column specifies how
many hits are needed before a bin is considered covered.
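• The referenced sample is along these lines; this is a hedged reconstruction, and the busifc interface with its cb clocking block is an assumption:

program automatic test(busifc.TB ifc);
  class Transaction;
    rand bit [31:0] data;
    rand bit [ 2:0] dst;               // eight possible values
  endclass

  Transaction tr;

  covergroup CovDst2;
    coverpoint tr.dst;                 // measure coverage of dst
  endgroup

  initial begin
    CovDst2 ck = new();                // instantiate the cover group
    tr = new();
    repeat (32) begin                  // 32 random transactions
      assert(tr.randomize());
      ifc.cb.dst <= tr.dst;            // drive dst onto the interface
      ck.sample();                     // gather coverage
      @ifc.cb;                         // wait a cycle
    end
  end
endprogram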
Part-B
UVM
UVM TEST BENCH ARCHITECTURE:
• The Universal Verification Methodology (UVM) is an industry-
standard methodology used for verifying complex digital designs.
• It provides a structured and reusable framework for building System
Verilog testbenches.
• The UVM testbench architecture typically follows a layered
approach and consists of several key components, each with specific
responsibilities.
• Below is an overview of the UVM testbench architecture:
1. UVM Testbench Architecture Overview:
• The UVM testbench is divided into different layers, which include:
• Test Layer
• Environment Layer
• Component Layer
• Sequence Layer
• Driver, Monitor, and Scoreboard Layer
2. UVM Testbench Components
a. Test
• Defines the configuration and initialization of the testbench.
• Instantiates the top-level environment.
• Controls the sequences to be executed.
• Overrides default behavior using UVM factory overrides.
b. Environment (uvm_env)
• Encapsulates all the components of the testbench.
• Acts as a container for reusable components such as agents, scoreboards, and analysis ports.
c. Agent (uvm_agent)
• Contains a Driver, Monitor, and Sequencer.
• Can operate in active or passive mode:
• Active Mode: the Sequencer, Driver, and Monitor are all active.
• Passive Mode: only the Monitor is active.
3. Key Components
i. Sequencer (uvm_sequencer)
• Controls the flow of test sequences.
• Sends stimulus to the Driver.
ii. Driver (uvm_driver)
• Converts high-level sequence items into low-level signals.
• Interfaces directly with the Device Under Test (DUT).
iii. Monitor (uvm_monitor)
• Observes the signals on the DUT interface.
• Extracts transactions and sends them to other components like scoreboards or coverage
collectors.
iv. Scoreboard (uvm_scoreboard)
• Compares expected results with actual results.
• Detects functional mismatches or errors.
v. Coverage Collector
• Tracks functional and code coverage.
• Ensures the verification objectives are met.
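• A minimal sketch of this layering (the class names my_env and my_test are illustrative):

import uvm_pkg::*;
`include "uvm_macros.svh"

class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  // agents, scoreboard, and coverage collector would be built here
endclass

class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  my_env env;
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    env = my_env::type_id::create("env", this);  // factory creation
  endfunction
endclass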
UVM Factory:
• The UVM Factory is a core feature of the Universal Verification
Methodology (UVM). It provides a powerful and flexible mechanism
for creating and customizing UVM components and sequences at
runtime.
• This flexibility enables the reuse of testbench components and allows
for easy configuration and modification of testbench behavior
without altering the source code.
Key Features of the UVM Factory
• Object Creation: The UVM Factory creates UVM objects (e.g., components, sequences,
or transactions) dynamically.
• Override Mechanism: Enables replacing default object types with user-specified types
without modifying the code.
• Reusable Framework: Promotes code reuse and abstraction, enabling the use of base
classes with specialized derived classes.
• Centralized Control: Manages object creation across the testbench from a single location.
How the UVM Factory Works
1. Object Registration
• Every UVM component or object that uses the factory must be registered with it.
• Use macros like:
• uvm_component_utils for components.
• uvm_object_utils for objects.
2. Object Creation
• The factory uses the create() method to instantiate objects dynamically.
• The type of object to be created can be decided at runtime.
3. Overrides
• The factory allows you to specify an alternative class to be instantiated instead of the default.
• Two types of overrides:
• Type Override: Replaces one type with another.
• Instance Override: Replaces a specific instance of a type with another.
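• A sketch of both override styles (base_driver, err_driver, and the instance path are illustrative names):

class base_driver extends uvm_driver;
  `uvm_component_utils(base_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class err_driver extends base_driver;   // e.g., a driver that injects errors
  `uvm_component_utils(err_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class override_test extends uvm_test;
  `uvm_component_utils(override_test)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Type override: every base_driver created via the factory
    // becomes an err_driver
    base_driver::type_id::set_type_override(err_driver::get_type());
    // Instance override: replace only the driver at one path
    // base_driver::type_id::set_inst_override(err_driver::get_type(),
    //                                         "env.agent.drv");
  endfunction
endclass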
Advantages of Using the UVM Factory
• Dynamic Behavior Modification: Modify the behavior of the testbench at runtime
without changing the source code.
• Flexible and Scalable: Easily switch between different configurations or scenarios.
• Promotes Code Reusability: Develop generic components that can be specialized as
needed.
• Centralized Object Management: Simplifies debugging and testbench maintenance.
UVM Components:
1. uvm_test
• Top-level component of a testbench.
• Specifies test configurations, sequences, and DUT-specific settings.
• Instantiates the testbench environment.
2. uvm_env
• Represents the verification environment.
• Serves as a container for UVM components like agents, scoreboards, and
coverage collectors.
• Promotes reusability by encapsulating related components.
3. uvm_agent
• Encapsulates a driver, monitor, and sequencer.
• Acts as an interface between the DUT and the testbench.
• Operates in:
• Active Mode: Generates and drives stimulus.
• Passive Mode: Monitors DUT activity without driving signals.
4. uvm_driver
• Converts high-level transactions into low-level signal activities on the DUT
interface.
• Interacts with the DUT via a virtual interface.
• Pulls transactions from the sequencer.
5. uvm_sequencer
• Manages the generation and sequencing of transactions.
• Provides transactions to the driver.
6. uvm_monitor
• Observes DUT activity and converts low-level signal activities into
high-level transactions.
• Sends transactions to analysis components (like scoreboards).
7. uvm_scoreboard
• Compares expected results with actual DUT outputs.
• Detects functional mismatches or errors.
A typical UVM testbench hierarchy is shown in the figure.
Summary:
1. uvm_test: Top-level testbench configuration and control.
2. uvm_env: Encapsulates the environment, agents, and scoreboards.
3. uvm_agent: Contains driver, sequencer, and monitor.
4. uvm_driver: Drives transactions to the DUT.
5. uvm_sequencer: Generates and sequences transactions.
6. uvm_monitor: Observes DUT behavior and converts signals into
transactions.
7. uvm_scoreboard: Checks the correctness of DUT output.
Part-C
Modeling Finite State Machines with System Verilog
Modeling Finite State Machines (FSM) with System Verilog:
• Modeling Finite State Machines (FSMs) with System Verilog involves
representing sequential logic designs that transition between different states
based on input signals and conditions.
• FSMs are commonly used in digital design for controllers, protocols, and
state-dependent systems.
1. Types of FSMs
• Moore FSM: Outputs depend only on the current state.
• Mealy FSM: Outputs depend on both the current state and inputs.
2. FSM Components
• States: Enumerated conditions (e.g., IDLE, READ, WRITE).
• Inputs: Signals that determine state transitions.
• Outputs: Signals generated based on the state (and possibly inputs).
• State Transitions: Defined logic for moving between states.
3. Steps to Design FSM in System Verilog:
• Step 1: Declare the states
• Step 2: Define state registers
• Step 3: Create a state transition block
• Step 4: Implement the state transition logic
• Step 5: Define output logic
Example (Moore FSM) and Example (Mealy FSM):
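• A minimal sketch contrasting the two output styles (signal names are illustrative):

module fsm_outputs (
  input  logic clk, rst_n, start,
  output logic moore_out, mealy_out
);
  typedef enum logic {IDLE, RUN} state_t;
  state_t state;

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)     state <= IDLE;
    else if (start) state <= RUN;
    else            state <= IDLE;

  assign moore_out = (state == RUN);            // Moore: state only
  assign mealy_out = (state == IDLE) && start;  // Mealy: state and input
endmodule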
Modeling state machines with enumerated types:
• Modeling state machines using enumerated types in System Verilog
enhances readability, maintainability, and robustness.
• Enumerated types allow developers to define human-readable state names
instead of binary-encoded values, simplifying the design and debugging
process.
• The typedef enum construct assigns meaningful names to states with unique
binary encodings.
typedef enum logic [1:0] {
IDLE = 2'b00, // Initial state
READ = 2'b01, // Reading state
PROCESS = 2'b10, // Processing data
WRITE = 2'b11 // Writing state
} state_t;
• Advantages of Using Enumerated Types
• Readability: Meaningful state names make the design easier to understand.
• Error Reduction: Eliminates errors caused by manual binary encoding.
• Maintainability: Adding or modifying states is straightforward.
• Debugging: Simulation outputs show state names instead of binary values,
making debugging more intuitive.
• Flexibility: Easy to add, remove, or reorder states without changing the
encoding logic.
Complete FSM Example:
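• A hedged sketch of a complete FSM built from the state_t above; the req/ack/wr_en signals are illustrative:

module cmd_fsm (
  input  logic clk, rst_n, req, ack,
  output logic wr_en
);
  typedef enum logic [1:0] {
    IDLE    = 2'b00,
    READ    = 2'b01,
    PROCESS = 2'b10,
    WRITE   = 2'b11
  } state_t;

  state_t state, next;

  // State register
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) state <= IDLE;
    else        state <= next;

  // Next-state logic
  always_comb begin
    next = state;
    case (state)
      IDLE:    if (req) next = READ;
      READ:             next = PROCESS;
      PROCESS: if (ack) next = WRITE;
      WRITE:            next = IDLE;
    endcase
  end

  // Moore output
  assign wr_en = (state == WRITE);
endmodule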
Representing state encoding with enumerated types
• State encoding with enumerated types in System Verilog is an efficient way
to define and manage FSM states.
• Enumerated types automatically assign unique binary values to each state,
abstracting away the manual effort of encoding states with binary values.
• System Verilog's typedef enum is used to declare an enumeration of states.
Each state is assigned a unique binary value, either explicitly or implicitly.
• Syntax:
typedef enum logic [N-1:0] {
STATE1, // Assigned 0 by default
STATE2, // Assigned 1 by default
STATE3, // Assigned 2 by default
STATE4 // Assigned 3 by default
} state_t;
Examples of State Encoding:
1. Implicit State Encoding
System Verilog assigns consecutive binary values starting from 0.
typedef enum logic [1:0] { // 2 bits required for 4 states
IDLE, // 2'b00
READ, // 2'b01
PROCESS, // 2'b10
WRITE // 2'b11
} state_t;
2. Explicit State Encoding
The user can explicitly assign binary values to each state. This is
useful when specific encodings are required, such as in hardware
optimization.
typedef enum logic [2:0] {
IDLE = 3'b000,
READ = 3'b001,
PROCESS = 3'b010,
WRITE = 3'b100
} state_t;
3. One-Hot Encoding
One-hot encoding assigns a unique bit position (only one bit is set to
1) to each state. It requires more bits but typically yields simpler,
faster next-state logic in hardware.
typedef enum logic [3:0] {
IDLE = 4'b0001,
READ = 4'b0010,
PROCESS = 4'b0100,
WRITE = 4'b1000
} state_t;
Reversed case statements with enumerated types.
• Reversing case statements with enumerated types means prioritizing
the conditions in the logic rather than the states, allowing for more
concise and input-driven transition logic.
• This approach is helpful in scenarios where transitions are more
naturally defined by specific input conditions rather than by the
current state.
• In the reversed structure:
• The case statement evaluates input conditions or events instead of states.
• The state transitions depend on these conditions, with the next state assigned
accordingly.
• Standard approach (state-centric): the case statement switches on the current state.
• Reversed approach (condition-centric): the case statement switches on input conditions; see the sketch below.
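• A condition-centric sketch of the reversed style (start, done, and err are illustrative inputs):

module reversed_case_fsm (
  input logic clk, rst_n, start, done, err
);
  typedef enum logic [1:0] {IDLE, RUN, FINISH} state_t;
  state_t state, next;

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) state <= IDLE;
    else        state <= next;

  // The case evaluates input conditions rather than states; the first
  // matching item wins, so err takes precedence in every state.
  always_comb begin
    next = state;
    priority case (1'b1)
      err:                      next = IDLE;
      done  && (state == RUN):  next = FINISH;
      start && (state == IDLE): next = RUN;
      default: ;
    endcase
  end
endmodule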
Advantages of Reversed Case Statements:
• Input-Driven Logic:
• Aligns closely with the input conditions.
• Useful when conditions are complex and span multiple states.
• Readable for Priority Handling:
• Easier to see which condition takes precedence.
• Simplifies Certain Designs:
• Particularly helpful when the same condition may affect transitions across
multiple states.
When to Use Reversed Case Statements
• Condition-Driven FSMs: When input conditions primarily determine the
next state.
• Dynamic Transitions: For systems where multiple conditions can influence
transitions.
• Debugging: Easier to pinpoint why a specific transition occurred based on
conditions.