Cache Memory Mapping Procedures Explained


Explain various mapping procedures of cache memory.

Cache memory is a small, fast memory that stores frequently accessed data from the main
memory (RAM) to speed up CPU access. To efficiently manage which data from main memory
resides in the cache, and where it's placed, various mapping procedures are employed. These
procedures determine how a main memory block is mapped to a cache line.

The three primary cache mapping procedures are:

1. Direct Mapping
2. Fully Associative Mapping
3. Set-Associative Mapping

Let's explore each in detail:

1. Direct Mapping

Concept: In direct mapping, each block of main memory can only be placed into one specific
cache line. This is the simplest mapping technique.

How it works: The main memory address is divided into three parts:

 Tag: Identifies which main memory block currently occupies the cache line.
 Index (Cache Line Number): Determines the specific cache line where the block will be
stored.
 Block Offset (Word Offset): Specifies the exact word within the block.

The cache line number for a main memory block is typically calculated using the modulo operation:

Cache Line Number = Main Memory Block Number mod Number of Cache Lines

When the CPU requests data:

1. The index bits from the memory address point to a specific cache line.
2. The tag stored in that cache line is compared with the tag bits of the requested address.
3. If the tags match and the valid bit (indicating valid data) is set, it's a cache hit, and the
data is retrieved using the block offset.
4. If there's no match (a cache miss), the entire block containing the requested data is
fetched from main memory and loaded into the designated cache line, replacing any
existing data.
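The lookup steps above can be sketched in Python. The cache geometry (8 lines, 4-word blocks, word-addressed memory) and the `split_address`/`access` helpers are illustrative assumptions, not part of any real hardware interface:

```python
# Illustrative direct-mapped cache: 8 lines, 4 words per block, word-addressed.
NUM_LINES = 8
BLOCK_SIZE = 4

# Each cache line holds a valid bit, a tag, and the data block.
cache = [{"valid": False, "tag": None, "block": None} for _ in range(NUM_LINES)]

def split_address(addr):
    """Split a word address into (tag, index, offset) fields."""
    offset = addr % BLOCK_SIZE
    block_number = addr // BLOCK_SIZE
    index = block_number % NUM_LINES   # cache line number (modulo mapping)
    tag = block_number // NUM_LINES    # identifies the block held in that line
    return tag, index, offset

def access(addr, main_memory):
    """Return ("hit"/"miss", word) for a CPU read of word address addr."""
    tag, index, offset = split_address(addr)
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return ("hit", line["block"][offset])
    # Miss: fetch the whole block and overwrite whatever occupied the line.
    start = (addr // BLOCK_SIZE) * BLOCK_SIZE
    line["valid"], line["tag"] = True, tag
    line["block"] = main_memory[start:start + BLOCK_SIZE]
    return ("miss", line["block"][offset])
```

Two blocks whose block numbers differ by a multiple of NUM_LINES map to the same line, so alternating between them produces the conflict misses (thrashing) described below, even while other lines sit empty.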

Advantages:

 Simple to implement: The mapping logic is straightforward, requiring minimal hardware.
 Fast lookup: There's only one possible location to check for a block, making the lookup process quick.

Disadvantages:

 High conflict misses (Thrashing): If multiple frequently accessed main memory blocks
map to the same cache line, they will repeatedly evict each other, even if other cache
lines are empty. This phenomenon is called "thrashing" and can significantly reduce
cache performance.
 Inefficient cache utilization: Some cache lines might be heavily used while others
remain empty, leading to suboptimal use of cache space.

2. Fully Associative Mapping

Concept: In fully associative mapping, any block from main memory can be placed into any
available cache line. This offers the most flexibility in terms of block placement.

How it works: The main memory address is divided into two parts:

 Tag: Identifies the main memory block. This tag is essentially the entire block address,
excluding the block offset.
 Block Offset (Word Offset): Specifies the exact word within the block.

When the CPU requests data:

1. The cache controller simultaneously compares the tag of the requested address with the
tags of all cache lines.
2. If a match is found, it's a cache hit. The data is retrieved using the block offset.
3. If no match is found (a cache miss), the block is fetched from main memory and can be
placed into any empty cache line.
4. If all cache lines are occupied, a replacement algorithm (e.g., Least Recently Used
(LRU), First-In-First-Out (FIFO), Random) is used to decide which existing block to
evict to make space for the new block.
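A fully associative lookup can be sketched the same way. Here an `OrderedDict` keyed by tag stands in for the parallel tag comparators, and its ordering gives a simple LRU policy; the cache size is an illustrative assumption:

```python
from collections import OrderedDict

NUM_LINES = 4   # illustrative: a tiny 4-line fully associative cache
BLOCK_SIZE = 4  # words per block

cache = OrderedDict()  # tag -> data block, ordered least- to most-recently used

def access(addr, main_memory):
    """Return ("hit"/"miss", word); any block may occupy any line."""
    offset = addr % BLOCK_SIZE
    tag = addr // BLOCK_SIZE  # the tag is the entire block address
    if tag in cache:
        cache.move_to_end(tag)        # refresh LRU position on a hit
        return ("hit", cache[tag][offset])
    if len(cache) == NUM_LINES:
        cache.popitem(last=False)     # cache full: evict least recently used
    start = tag * BLOCK_SIZE
    cache[tag] = main_memory[start:start + BLOCK_SIZE]
    return ("miss", cache[tag][offset])
```

Note that the dictionary lookup hides the real hardware cost: an actual fully associative cache needs one comparator per line to do this check in parallel.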

Advantages:

 Lowest conflict misses: Since a block can go anywhere, it greatly reduces the chances of
thrashing compared to direct mapping. This leads to a higher hit rate.
 Efficient cache utilization: All cache lines can potentially be used, leading to better
overall utilization.

Disadvantages:

 High hardware complexity and cost: Requires a large number of comparators (one for each cache line) to compare tags in parallel. This makes it expensive and power-intensive, especially for large caches.
 Slower lookup: The parallel comparison process, while conceptually fast, introduces complexity that can slightly increase access time for very large caches.

3. Set-Associative Mapping

Concept: Set-associative mapping is a compromise between direct mapping and fully associative mapping, aiming to combine the benefits of both while mitigating their drawbacks. The cache is divided into a number of "sets," and each set contains a fixed number of cache lines (each line in a set is a "way"; the number of ways per set is the associativity). A main memory block maps to exactly one set, but within that set it can be placed in any of the available cache lines.

How it works: The main memory address is divided into three parts:

 Tag: Uniquely identifies the main memory block within a specific set.
 Set Index: Determines which set in the cache the block maps to.
 Block Offset (Word Offset): Specifies the exact word within the block.

The set index for a main memory block is typically calculated as:

Set Index = Main Memory Block Number mod Number of Sets

When the CPU requests data:

1. The set index bits from the memory address point to a specific set in the cache.
2. The cache controller then simultaneously compares the tag of the requested address with
the tags of all the cache lines within that particular set.
3. If a match is found, it's a cache hit. The data is retrieved using the block offset.
4. If no match is found (a cache miss), the block is fetched from main memory and loaded
into any available line within that set.
5. If all lines within the set are occupied, a replacement algorithm (like LRU) is used to
choose which block to evict from that set.
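Combining the two earlier sketches gives set-associative lookup: the set index is computed as in direct mapping, and an LRU-managed search runs only within the selected set. The 2-way, 4-set geometry is an illustrative assumption:

```python
from collections import OrderedDict

NUM_SETS = 4    # illustrative geometry: 4 sets x 2 ways, 4-word blocks
WAYS = 2
BLOCK_SIZE = 4

# One small LRU-ordered map (tag -> data block) per set.
sets = [OrderedDict() for _ in range(NUM_SETS)]

def access(addr, main_memory):
    """Return ("hit"/"miss", word) for a CPU read of word address addr."""
    offset = addr % BLOCK_SIZE
    block_number = addr // BLOCK_SIZE
    set_index = block_number % NUM_SETS  # which set the block maps to
    tag = block_number // NUM_SETS
    ways = sets[set_index]
    if tag in ways:
        ways.move_to_end(tag)            # hit: refresh LRU order in this set
        return ("hit", ways[tag][offset])
    if len(ways) == WAYS:
        ways.popitem(last=False)         # evict LRU line within this set only
    start = block_number * BLOCK_SIZE
    ways[tag] = main_memory[start:start + BLOCK_SIZE]
    return ("miss", ways[tag][offset])
```

Only WAYS comparators are needed per lookup, and two conflicting blocks can now coexist in the same set instead of evicting each other as in the direct-mapped case.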

Advantages:

 Reduced conflict misses: Provides more flexibility than direct mapping, significantly
reducing thrashing.
 Reasonable hardware complexity: The number of comparators needed is equal to the
associativity (number of ways per set), which is much less than fully associative mapping
but more than direct mapping.
 Good balance of performance and cost: Offers a good hit rate without the excessive
hardware cost of fully associative caches. This is the most common mapping technique in
modern CPUs.

Common Associativity Levels: Set-associative caches are often described as 2-way, 4-way, 8-way, or 16-way set-associative, indicating the number of cache lines within each set. Higher associativity generally leads to better performance (lower miss rates) but increases complexity.
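The effect of associativity on the address fields can be checked with a small helper. The 32 KiB cache with 64-byte blocks below is a hypothetical example, not taken from any particular CPU:

```python
def field_widths(cache_bytes, block_bytes, ways, addr_bits=32):
    """Return (tag_bits, index_bits, offset_bits) for a byte-addressed cache.

    All sizes are assumed to be powers of two.
    """
    num_sets = (cache_bytes // block_bytes) // ways
    offset_bits = block_bytes.bit_length() - 1  # log2(block size)
    index_bits = num_sets.bit_length() - 1      # log2(number of sets)
    return addr_bits - index_bits - offset_bits, index_bits, offset_bits

# 32 KiB cache, 64-byte blocks: raising the associativity shrinks the set
# index and grows the tag, until fully associative needs no index bits at all.
print(field_widths(32 * 1024, 64, 1))    # direct mapped      -> (17, 9, 6)
print(field_widths(32 * 1024, 64, 8))    # 8-way              -> (20, 6, 6)
print(field_widths(32 * 1024, 64, 512))  # fully associative  -> (26, 0, 6)
```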

Summary Table

Feature               | Direct Mapping                | Fully Associative         | Set-Associative Mapping
----------------------|-------------------------------|---------------------------|------------------------------------
Placement             | Fixed, one possible location  | Any available location    | Any location within a specific set
Address Fields        | Tag, Index, Offset            | Tag, Offset               | Tag, Set Index, Offset
Comparators           | One                           | Many (one per cache line) | Few (one per way)
Complexity            | Low                           | High                      | Moderate
Cost                  | Low                           | High                      | Moderate
Flexibility           | Least                         | Most                      | Moderate
Conflict Misses       | High                          | Lowest                    | Reduced
Replacement Algorithm | Not required (fixed eviction) | Required                  | Required (within set)

The choice of mapping procedure impacts cache performance (hit rate, miss rate), hardware
complexity, and cost. Modern CPUs typically employ set-associative caches as they offer the
best balance of these factors.
