Important Topics Answers

The document provides an overview of key concepts in computer organization, including program interrupts, subroutines, pipelining advantages and disadvantages, and cache miss types. It also discusses memory organization, mapping strategies, block replacement policies, and CPU performance metrics like MIPS. Additionally, it outlines the stages of pipelining and various write strategies in memory management.

Uploaded by Swati Shaw

Important Topics and Answers

1. Program Interrupts Mod6_1 - 1,7


Definition:
A program interrupt is a transfer of program control from the currently running program to a service program in response to an external or internal request. Control returns to the original program after the service program finishes executing.

Types of Program Interrupts:


- External Interrupts - Caused by input-output (I/O) devices, timers, power failure, etc.
- Internal Interrupts (Traps) - Arise due to illegal or erroneous instruction use, such as divide-by-zero errors.
- Software Interrupts - Initiated by executing special call instructions within the program.

2. Subroutine Mod6_1 - 39
A subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. It can be called
from different parts of a program to avoid repetition.

Recursive Subroutine: Mod6_1 - 48


A subroutine that calls itself. To prevent overwriting return addresses, a stack is used to store each return address when
calling the subroutine recursively.
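The role of the stack can be seen in any recursive routine: each call gets its own frame, so earlier return addresses are never overwritten. A minimal sketch in Python (the interpreter manages the call stack automatically):

```python
# Recursive subroutine: each call pushes a new frame (including its
# return address) onto the call stack, so nested calls do not clobber
# earlier return addresses.
def factorial(n):
    if n <= 1:                       # base case stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive call; state saved on the stack

print(factorial(5))  # 120
```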

3. Advantages and Disadvantages of Pipelining


Advantages:
- Increases instruction throughput (more instructions completed per unit time).
- Reduces overall program execution time, although the latency of an individual instruction is not reduced.
- Allows overlapping execution of multiple instructions.
- Enables higher clock speeds, since each stage performs a shorter piece of work.

Disadvantages:
- Introduces hazards (structural, data, and control hazards).
- Requires complex hardware for efficient pipeline management.
- Performance gain is limited if dependencies exist between instructions.

4. Hazards in Pipelining
Types of Hazards:
- Structural Hazards: Occur when hardware resources (e.g., memory, registers, ALU) are not available for concurrent
instruction execution.
- Data Hazards: Occur when one instruction depends on the result of another still in the pipeline.
  - Read After Write (RAW): an instruction reads a value before an earlier instruction has written it.
  - Write After Read (WAR): an instruction writes a value before an earlier instruction has read it.
  - Write After Write (WAW): two instructions write the same location out of program order.
- Control Hazards: Occur when the pipeline fetches an instruction that should not be executed due to branch
instructions.
5. Conflict Miss and Compulsory Miss (Cache Misses)
- Conflict Miss: Occurs when multiple memory blocks map to the same cache set, causing frequent replacements.
- Compulsory Miss (Cold Start Miss): Occurs when data is accessed for the first time and is not in the cache.
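The two miss types can be demonstrated with a toy direct-mapped cache. In this hypothetical 4-line cache, block addresses 0 and 4 map to the same line (index = address mod 4), so alternating between them produces conflict misses after the initial compulsory misses:

```python
# Toy direct-mapped cache used to classify misses (sketch, not a real cache).
NUM_LINES = 4
cache = [None] * NUM_LINES   # one block number per line
seen = set()                 # blocks ever brought into the cache

def access(block):
    index = block % NUM_LINES
    if cache[index] == block:
        return "hit"
    # First-ever reference -> compulsory miss; otherwise the block was
    # evicted by a competitor for the same line -> conflict miss.
    kind = "compulsory miss" if block not in seen else "conflict miss"
    seen.add(block)
    cache[index] = block
    return kind

print([access(b) for b in [0, 4, 0, 4]])
# ['compulsory miss', 'compulsory miss', 'conflict miss', 'conflict miss']
```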

6. Flynn's Classification of Computer Organization


- SISD (Single Instruction, Single Data): Traditional sequential execution.
- SIMD (Single Instruction, Multiple Data): Parallel processing where the same instruction operates on multiple data sets
(e.g., vector processors).
- MISD (Multiple Instruction, Single Data): Rarely used, where multiple instructions operate on the same data.
- MIMD (Multiple Instruction, Multiple Data): Most powerful, with multiple processors executing different instructions on
different data (e.g., multicore CPUs).

7. 2-Level and 3-Level Memory Organization Derivation


2-Level Memory Organization:
M (Effective Memory Access Time) = H × C + (1 - H) × Mmain
Where:
- H = Hit ratio
- C = Cache memory access time
- Mmain = Main memory access time

3-Level Memory Organization:


M = H1 × C + (1 - H1) × (H2 × M1 + (1 - H2) × Mmain)
Where:
- H1, H2 = Hit ratios for cache levels
- C, M1, Mmain = Access times for cache, level-1 memory, and main memory
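Plugging hypothetical numbers into the two formulas above (all times in ns, hit ratios chosen only for illustration):

```python
# Effective access time for the 2-level and 3-level formulas above.
def eat_2level(H, C, Mmain):
    return H * C + (1 - H) * Mmain

def eat_3level(H1, H2, C, M1, Mmain):
    return H1 * C + (1 - H1) * (H2 * M1 + (1 - H2) * Mmain)

print(eat_2level(0.95, 2, 100))          # 0.95*2 + 0.05*100  = approx. 6.9 ns
print(eat_3level(0.9, 0.8, 2, 10, 100))  # 1.8 + 0.1*(8 + 20) = approx. 4.6 ns
```

Note how a small drop in hit ratio sharply increases the effective access time, since main-memory accesses dominate the miss term.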

8. Direct Mapping vs. Associative Mapping


Direct Mapping:
- Each block maps to a fixed location in the cache.
- Advantages: Simple and fast lookup.
- Disadvantages: High conflict misses when multiple blocks compete for the same location.

Associative Mapping:
- Any block can be placed in any cache line.
- Advantages: Reduces conflict misses.
- Disadvantages: Expensive and requires complex hardware for searching.
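The lookup difference can be sketched with two toy caches holding block numbers (hypothetical contents; in hardware the associative search is done in parallel by one comparator per line, which is what makes it expensive):

```python
# Direct-mapped: a block can live in exactly one line, so a lookup
# checks a single fixed location.
direct = {0: 8, 1: 5, 2: 2, 3: 7}   # line index -> block stored there

def direct_lookup(block, num_lines=4):
    return direct.get(block % num_lines) == block

# Fully associative: a block can live in any line, so every line's tag
# must be compared against the requested block.
assoc = {8, 5, 2, 7}

def assoc_lookup(block):
    return block in assoc

print(direct_lookup(8), direct_lookup(4))  # True False (4 maps to line 0, occupied by 8)
print(assoc_lookup(5))                     # True
```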

9. What is TAT Bit?


The TAT (Tag Address Translation) bit is used in cache memory to indicate whether a cache line is valid and correctly maps to a particular block of main memory.

10. Block Replacement Policies


When a new block needs to be loaded into a full cache, one of the existing blocks must be replaced. Policies include:
- FIFO (First In, First Out): The oldest block in the cache is replaced.
- LIFO (Last In, First Out): The most recently added block is replaced.
- LRU (Least Recently Used): The block that has gone unused for the longest time is replaced.

11. Different Write Strategies in Memory Management


- Write-Through: Data is written to both cache and main memory simultaneously.
- Advantage: Ensures memory consistency.
- Disadvantage: Slower write performance.
- Write-Back: Data is written only to the cache and later updated in memory.
- Advantage: Faster writes.
- Disadvantage: Risk of data loss if the system crashes before writing to memory.
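A minimal sketch of the two strategies, using dictionaries for cache and memory and a dirty set in place of per-line dirty bits (all names hypothetical):

```python
cache, memory = {}, {}
dirty = set()   # addresses written in cache but not yet in memory

def write_through(addr, value):
    cache[addr] = value
    memory[addr] = value          # memory is updated on every write

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)               # memory is updated only on eviction

def evict(addr):
    if addr in dirty:             # flush the dirty block before dropping it
        memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)

write_through(0, 10)
write_back(1, 20)
print(memory)   # {0: 10} -- the write-back value is not in memory yet
evict(1)
print(memory)   # {0: 10, 1: 20}
```

The gap between the two `print` lines is exactly the write-back risk: a crash before the eviction would lose the value at address 1.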

12. Five Stages of Pipelining


- Instruction Fetch (IF): CPU fetches the instruction from memory.
- Instruction Decode (ID): CPU decodes the instruction and identifies operands.
- Execute (EX): ALU performs the operation.
- Memory Access (MEM): Load/store operations are performed.
- Write Back (WB): The result is written back to registers.
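With these five stages overlapped, n instructions finish in k + (n - 1) cycles instead of k × n, assuming one stage per clock cycle and no stalls. A quick sketch of that count:

```python
# Ideal pipeline timing: the first instruction takes k cycles, and every
# subsequent instruction completes one cycle later (no hazards/stalls).
def cycles(n, k=5, pipelined=True):
    return k + (n - 1) if pipelined else k * n

print(cycles(100, pipelined=False))  # 500 cycles without pipelining
print(cycles(100))                   # 104 cycles with a 5-stage pipeline
```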

13. What is MIPS?


MIPS (Million Instructions Per Second) is a measure of CPU performance: the number of instructions a processor
executes per second, expressed in millions. MIPS = instruction count / (execution time × 10^6).
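The formula as a one-line computation, with hypothetical workload numbers:

```python
# MIPS = instruction count / (execution time * 10**6)
def mips(instruction_count, exec_time_s):
    return instruction_count / (exec_time_s * 1e6)

# e.g. 8 million instructions executed in 2 seconds:
print(mips(8_000_000, 2))  # 4.0 MIPS
```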

14. Different Block Replacement Policies


- Random Replacement: Randomly selects a block for replacement.
- FIFO (First In, First Out): Oldest block is removed.
- LRU (Least Recently Used): The block that has gone unused for the longest time is replaced.
- LFU (Least Frequently Used): The block with the fewest accesses is replaced.
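LRU is the most commonly implemented of these policies; a compact sketch uses an OrderedDict to keep blocks in recency order (capacity and access pattern are hypothetical):

```python
from collections import OrderedDict

# LRU replacement: on a hit, the block is moved to the "most recent" end;
# on a miss with a full cache, the block at the "least recent" end is evicted.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)    # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        self.blocks[block] = True
        return "miss"

c = LRUCache(2)
print([c.access(b) for b in [1, 2, 1, 3, 2]])
# ['miss', 'miss', 'hit', 'miss', 'miss'] -- block 2 is evicted when 3 arrives
```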
