CO & A All Modules Notes 21CS34 PDF
BASIC CONCEPTS
• Computer Architecture (CA) is concerned with the structure and behaviour of the computer.
• CA includes the information formats, the instruction set and techniques for addressing memory.
• In general, CA covers 3 aspects of computer design, namely: 1) Computer Hardware, 2)
Instruction Set Architecture and 3) Computer Organization.
1. Computer Hardware
It consists of electronic circuits, displays, magnetic and optical storage media and
communication facilities.
2. Instruction Set Architecture
It is programmer visible machine interface such as instruction set, registers, memory
organization and exception handling.
Two main approaches are 1) CISC and 2) RISC.
(CISC → Complex Instruction Set Computer, RISC → Reduced Instruction Set Computer)
3. Computer Organization
It includes the high level aspects of a design, such as
→ memory-system
→ bus-structure &
→ design of the internal CPU.
It refers to the operational units and their interconnections that realize the architectural
specifications.
It describes the function of and design of the various units of digital computer that store and
process information.
FUNCTIONAL UNITS
• A computer consists of 5 functionally independent main parts:
1) Input
2) Memory
3) ALU
4) Output &
5) Control units.
BUS STRUCTURE
• A bus is a group of lines (wires) that serves as a connecting path for several devices.
• The lines carry data, address or control signals.
• There are 2 types of Bus structures: 1) Single Bus Structure and 2) Multiple Bus Structure.
1) Single Bus Structure
Because the bus can be used for only one transfer at a time, only 2 units can actively use the
bus at any given time.
Bus control lines are used to arbitrate multiple requests for use of the bus.
Advantages:
1) Low cost &
2) Flexibility for attaching peripheral devices.
2) Multiple Bus Structure
Systems that contain multiple buses achieve more concurrency in operations.
Two or more transfers can be carried out at the same time.
Advantage: Better performance.
Disadvantage: Increased cost.
PERFORMANCE
• The most important measure of performance of a computer is how quickly it can execute programs.
• The speed of a computer is affected by the design of
1) Instruction-set.
2) Hardware & the technology in which the hardware is implemented.
3) Software including the operating system.
• Because programs are usually written in a HLL, performance is also affected by the compiler that
translates programs into machine language. (HLL High Level Language).
• For best performance, it is necessary to design the compiler, machine instruction set and hardware in
a co-ordinated way.
• Let us examine the flow of program instructions and data between the memory & the processor.
• At the start of execution, all program instructions are stored in the main-memory.
• As execution proceeds, instructions are fetched into the processor, and a copy is placed in the cache.
• Later, if the same instruction is needed a second time, it is read directly from the cache.
• A program will be executed faster
if movement of instruction/data between the main-memory and the processor is minimized
which is achieved by using the cache.
PROCESSOR CLOCK
• Processor circuits are controlled by a timing signal called a Clock.
• The clock defines regular time intervals called Clock Cycles.
• To execute a machine instruction, the processor divides the action to be performed into a sequence
of basic steps such that each step can be completed in one clock cycle.
• Let P = Length of one clock cycle
R = Clock rate.
• Relation between P and R is given by
R = 1/P
• Let T = Processor time required to execute a program
N = Number of machine instructions executed
S = Average number of basic steps needed to execute one machine instruction.
• Then the program execution time is given by
T = (N × S) / R ------(1)
• Equ1 is referred to as the basic performance equation.
• To achieve high performance, the computer designer must reduce the value of T, which means
reducing N and S, and increasing R.
The value of N is reduced if source program is compiled into fewer machine instructions.
The value of S is reduced if instructions have a smaller number of basic steps to perform.
• Care has to be taken while modifying values since changes in one parameter may affect the other.
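A quick way to check the basic performance equation is to plug in sample values. The following C sketch is illustrative only: N, S and R follow the definitions above, and the particular numbers (1000 instructions, 20 steps per instruction, an 800 MHz clock) are assumed for the example.

#include <stdio.h>

int main(void) {
    double N = 1000.0;      /* number of machine instructions executed (assumed) */
    double S = 20.0;        /* average basic steps per instruction (assumed)     */
    double R = 800.0e6;     /* clock rate in cycles per second (assumed 800 MHz) */

    double T = (N * S) / R; /* basic performance equation: T = (N x S) / R       */

    printf("Execution time T = %.2f microseconds\n", T * 1e6);   /* prints 25.00 */
    return 0;
}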
CLOCK RATE
• There are 2 possibilities for increasing the clock rate R:
1) Improving the IC technology makes logic-circuits faster.
This reduces the time needed to compute a basic step. (IC integrated circuits).
This allows the clock period P to be reduced and the clock rate R to be increased.
2) Reducing the amount of processing done in one basic step also reduces the clock period P.
• In presence of a cache, the percentage of accesses to the main-memory is small.
Hence, much of performance-gain expected from the use of faster technology can be realized.
The value of T will be reduced by the same factor as R is increased, since S & N are not affected.
PERFORMANCE MEASUREMENT
• Benchmark refers to standard task used to measure how well a processor operates.
• The Performance Measure is the time taken by a computer to execute a given benchmark.
• SPEC selects & publishes the standard programs along with their test results for different application
domains. (SPEC System Performance Evaluation Corporation).
• SPEC Rating is given by
SPEC rating = Running time on the reference computer / Running time on the computer under test
• The overall SPEC rating for the computer is the geometric mean of the SPEC ratings obtained for the n programs in the suite:
SPEC rating = (SPEC1 * SPEC2 * ... * SPECn)^(1/n)
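As a rough illustration of how an overall rating is formed from the per-program ratios, the C sketch below computes the geometric mean. The benchmark running times are invented values, not SPEC data; compile with -lm for the math library.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hypothetical running times (in seconds) for n benchmark programs. */
    double ref_time[]  = {500.0, 300.0, 800.0};   /* on the reference computer  */
    double test_time[] = { 50.0,  60.0, 100.0};   /* on the computer under test */
    int n = 3;

    /* SPEC rating for one program = reference time / test time.
       Overall rating = geometric mean of the individual ratings. */
    double product = 1.0;
    for (int i = 0; i < n; i++)
        product *= ref_time[i] / test_time[i];

    printf("Overall SPEC rating = %.2f\n", pow(product, 1.0 / n));
    return 0;
}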
Problem 1:
List the steps needed to execute the machine instruction:
Load R2, LOC
in terms of transfers between the components of processor and some simple control commands.
Assume that the address of the memory-location containing this instruction is initially in register PC.
Solution:
1. Transfer the contents of register PC to register MAR.
2. Issue a Read command to memory.
And, then wait until it has transferred the requested word into register MDR.
3. Transfer the instruction from MDR into IR and decode it.
4. Transfer the address LOC from IR to MAR.
5. Issue a Read command and wait until MDR is loaded.
6. Transfer the contents of MDR into register R2.
7. Transfer the contents of PC to the ALU.
8. Add 1 to the operand in the ALU and transfer the incremented address to PC.
Problem 2:
List the steps needed to execute the machine instruction:
Add R4, R2, R3
in terms of transfers between the components of processor and some simple control commands.
Assume that the address of the memory-location containing this instruction is initially in register PC.
Solution:
1. Transfer the contents of register PC to register MAR.
2. Issue a Read command to memory.
And, then wait until it has transferred the requested word into register MDR.
3. Transfer the instruction from MDR into IR and decode it.
4. Transfer the contents of R2 and R3 to the ALU.
5. Perform addition of the two operands in the ALU and transfer the result into R4.
6. Transfer contents of PC to ALU.
7. Add 1 to operand in ALU and transfer incremented address to PC.
Problem 3:
(a) Give a short sequence of machine instructions for the task “Add the contents of memory-location A
to those of location B, and place the answer in location C”. Instructions:
Load Ri, LOC
and
Store Ri, LOC
are the only instructions available to transfer data between memory and the general purpose registers.
Add instructions are described in Section 1.3. Do not change contents of either location A or B.
(b) Suppose that Move and Add instructions are available with the formats:
Move Location1, Location2
and
Add Location1, Location2
These instructions move or add a copy of the operand at the second location to the first location,
overwriting the original operand at the first location. Either or both of the operands can be in the memory
or the general-purpose registers. Is it possible to use fewer instructions of these types to accomplish the
task in part (a)? If yes, give the sequence.
Solution:
(a)
Load R0, A
Load R1, B
Add R0, R1
Store R1, C
(b) Yes;
Move C, B
Add C, A
Problem 4:
A program contains 1000 instructions. Out of these, 25% of the instructions require 4 clock cycles, 40%
require 5 clock cycles and the remaining 35% require 3 clock cycles for execution. Find the total time
required to execute the program on a 1 GHz machine.
Solution:
N = 1000
25% of N= 250 instructions require 4 clock cycles.
40% of N =400 instructions require 5 clock cycles.
35% of N=350 instructions require 3 clock cycles.
T = (N1*S1 + N2*S2 + N3*S3)/R = (250*4 + 400*5 + 350*3)/(1×10^9) = (1000+2000+1050)/(1×10^9) = 4.05 μs.
Problem 5:
For the following processor, obtain the performance.
Clock rate = 800 MHz
No. of instructions executed = 1000
Average no of steps needed / machine instruction = 20
Solution:
T = (N × S)/R = (1000 × 20)/(800 × 10^6) = 25 μs.
Problem 6:
(a) Program execution time T is to be examined for a certain high-level language program. The program
can be run on a RISC or a CISC computer. Both computers use pipelined instruction execution, but
pipelining in the RISC machine is more effective than in the CISC machine. Specifically, the effective
value of S in the T expression for the RISC machine is 1.2, but it is only 1.5 for the CISC machine. Both
machines have the same clock rate R. What is the largest allowable value for N, the number of
instructions executed on the CISC machine, expressed as a percentage of the N value for the RISC
machine, if the time for execution on the CISC machine is to be no longer than that on the RISC machine?
(b) Repeat Part (a) if the clock rate R for the RISC machine is 15 percent higher than that for the CISC
machine.
Solution:
(a) Let TR = (NR × SR)/RR & TC = (NC × SC)/RC be the execution times on the RISC and CISC processors.
With equal clock rates (RR = RC) and TC = TR, we have
1.2NR = 1.5NC
Then
NC/NR = 1.2/1.5 = 0.8
Therefore, the largest allowable value for NC is 80% of NR.
(b) Here RR = 1.15RC. Setting TC = TR gives
1.5NC/RC = 1.2NR/(1.15RC)
NC/NR = 1.2/(1.5 × 1.15) ≈ 0.7
Therefore, the largest allowable value for NC is about 70% of NR.
Problem 7:
(a) Suppose that execution time for a program is proportional to instruction fetch time. Assume that
fetching an instruction from the cache takes 1 time unit, but fetching it from the main-memory takes
10 time units. Also, assume that a requested instruction is found in the cache with probability 0.96.
Finally, assume that if an instruction is not found in the cache it must first be fetched from the main-
memory into the cache and then fetched from the cache to be executed. Compute the ratio of program
execution time without the cache to program execution time with the cache. This ratio is called the
speedup resulting from the presence of the cache.
(b) If the size of the cache is doubled, assume that the probability of not finding a requested
instruction there is cut in half. Repeat part (a) for a doubled cache size.
Solution:
(a) Let the cache access time be 1 time unit and the main-memory access time be 10 time units. Every
instruction that is executed must be fetched from the cache, and an additional fetch from the
main-memory must be performed for 4% of these cache accesses.
Therefore,
Time without cache / Time with cache = 10 / (0.96*1 + 0.04*(10 + 1)) = 10/1.4 ≈ 7.1
(b) With a doubled cache, the probability of a miss is 0.02, so
Time without cache / Time with cache = 10 / (0.98*1 + 0.02*(10 + 1)) = 10/1.2 ≈ 8.3
BYTE-ADDRESSABILITY
• In byte-addressable memory, successive addresses refer to successive byte locations in the memory.
• Byte locations have addresses 0, 1, 2. . . . .
• If the word-length is 32 bits, successive words are located at addresses 0, 4, 8, . . . with each word
having 4 bytes.
Consider a 32-bit integer (in hex): 0x12345678 which consists of 4 bytes: 12, 34, 56, and 78.
Hence this integer will occupy 4 bytes in memory.
Assume, we store it at memory address starting 1000.
On little-endian, memory will look like
Address Value
1000 78
1001 56
1002 34
1003 12
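This byte ordering can be observed directly with a short C program. The sketch below assumes it is run on a little-endian processor (e.g. x86), where it prints 78 56 34 12; on a big-endian machine the bytes would appear in the opposite order.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x12345678;
    unsigned char *p = (unsigned char *)&value;   /* view the 32-bit word as 4 bytes */

    for (int i = 0; i < 4; i++)                   /* increasing byte addresses       */
        printf("byte at offset %d = %02X\n", i, p[i]);
    return 0;
}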
WORD ALIGNMENT
• Words are said to be Aligned in memory if they begin at a byte-address that is a multiple of the
number of bytes in a word.
• For example,
If the word length is 16 (2 bytes), aligned words begin at byte-addresses 0, 2, 4, . . .
If the word length is 64 (8 bytes), aligned words begin at byte-addresses 0, 8, 16, . . .
• Words are said to have Unaligned Addresses, if they begin at an arbitrary byte-address.
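Whether an address is aligned can be tested by checking that it is a multiple of the word size. A minimal C sketch, assuming a 4-byte word as in the 32-bit example above:

#include <stdio.h>
#include <stdint.h>

/* An address is aligned for a given word size if it is a multiple of that size. */
static int is_aligned(uintptr_t address, unsigned word_size) {
    return (address % word_size) == 0;
}

int main(void) {
    unsigned word_size = 4;                        /* assumed 32-bit words */
    uintptr_t addresses[] = {0, 2, 4, 1000, 1001};

    for (int i = 0; i < 5; i++)
        printf("address %lu : %s\n", (unsigned long)addresses[i],
               is_aligned(addresses[i], word_size) ? "aligned" : "unaligned");
    return 0;
}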
MEMORY OPERATIONS
• Two memory operations are:
1) Load (Read/Fetch) &
2) Store (Write).
• The Load operation transfers a copy of the contents of a specific memory-location to the processor.
The memory contents remain unchanged.
• Steps for Load operation:
1) Processor sends the address of the desired location to the memory.
2) Processor issues 'read' signal to memory to fetch the data.
3) Memory reads the data stored at that address.
4) Memory sends the read data to the processor.
• The Store operation transfers the information from the register to the specified memory-location.
This will destroy the original contents of that memory-location.
• Steps for Store operation are:
1) Processor sends the address of the memory-location where it wants to store data.
2) Processor issues 'write' signal to memory to store the data.
3) Content of register(MDR) is written into the specified memory-location.
Program Explanation
• Consider the program for adding a list of n numbers (Figure 2.9).
• The Address of the memory-locations containing the n numbers are symbolically given as NUM1,
NUM2…..NUMn.
• Separate Add instruction is used to add each number to the contents of register R0.
• After all the numbers have been added, the result is placed in memory-location SUM.
BRANCHING
• Consider the task of adding a list of 'n' numbers (Figure 2.10).
• The number of entries in the list, 'n', is stored in memory-location N.
• Register R1 is used as a counter to determine the number of times the loop is executed.
• The contents of memory-location N are loaded into register R1 at the beginning of the program.
• The Loop is a straight line sequence of instructions executed as many times as needed.
The loop starts at location LOOP and ends at the instruction Branch>0.
• During each pass,
→ address of the next list entry is determined and
→ that entry is fetched and added to R0.
• The instruction Decrement R1 reduces the contents of R1 by 1 each time through the loop.
• Then Branch Instruction loads a new value into the program counter. As a result, the processor
fetches and executes the instruction at this new address called the Branch Target.
• A Conditional Branch Instruction causes a branch only if a specified condition is satisfied. If the
condition is not satisfied, the PC is incremented in the normal way, and the next instruction in sequential
address order is fetched and executed.
CONDITION CODES
• The processor keeps track of information about the results of various operations. This is
accomplished by recording the required information in individual bits, called Condition Code Flags.
• These flags are grouped together in a special processor-register called the condition code register (or
status register).
• Four commonly used flags are:
1) N (negative) set to 1 if the result is negative, otherwise cleared to 0.
2) Z (zero) set to 1 if the result is 0; otherwise, cleared to 0.
3) V (overflow) set to 1 if arithmetic overflow occurs; otherwise, cleared to 0.
4) C (carry) set to 1 if a carry-out results from the operation; otherwise cleared to 0.
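The following C sketch mirrors these definitions for an 8-bit addition. The flag-update rules are coded directly from the descriptions above; the sample operands are arbitrary.

#include <stdio.h>
#include <stdint.h>

/* Add two 8-bit operands and report the N, Z, V and C condition-code flags. */
static void add_and_report(uint8_t a, uint8_t b) {
    uint16_t wide = (uint16_t)a + (uint16_t)b;    /* keep the carry-out bit */
    uint8_t  sum  = (uint8_t)wide;

    int C = wide > 0xFF;                          /* carry-out from the MSB          */
    int Z = sum == 0;                             /* result is zero                  */
    int N = (sum & 0x80) != 0;                    /* result is negative (MSB is 1)   */
    /* Overflow: the operands have the same sign but the result has the other sign. */
    int V = (~(a ^ b) & (a ^ sum) & 0x80) != 0;

    printf("%3u + %3u = %3u   N=%d Z=%d V=%d C=%d\n",
           (unsigned)a, (unsigned)b, (unsigned)sum, N, Z, V, C);
}

int main(void) {
    add_and_report(0x70, 0x70);   /* 112 + 112: arithmetic overflow, V = 1     */
    add_and_report(0xFF, 0x01);   /* 255 + 1: carry-out C = 1 and zero result  */
    return 0;
}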
ADDRESSING MODES
• The different ways in which the location of an operand is specified in an instruction are referred to as
Addressing Modes (Table 2.1).
• To execute the Add instruction in fig 2.11 (a), the processor uses the value which is in register R1, as
the EA of the operand.
• It requests a read operation from the memory to read the contents of location B. The value read is
the desired operand, which the processor adds to the contents of register R0.
• Indirect addressing through a memory-location is also possible as shown in fig 2.11(b). In this case,
the processor first reads the contents of memory-location A, then requests a second read operation using
the value B as an address to obtain the operand.
Program Explanation
• In above program, Register R2 is used as a pointer to the numbers in the list, and the operands are accessed
indirectly through R2.
• The initialization-section of the program loads the counter-value n from memory-location N into R1 and uses the
immediate addressing-mode to place the address value NUM1, which is the address of the first number in the list,
into R2. Then it clears R0 to 0.
• The first two instructions in the loop implement the unspecified instruction block starting at LOOP.
• The first time through the loop, the instruction Add (R2), R0 fetches the operand at location NUM1 and adds it to
R0.
• The second Add instruction adds 4 to the contents of the pointer R2, so that it will contain the address value
NUM2 when the above instruction is executed in the second pass through the loop.
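The same loop can be written in C with a pointer standing in for R2: dereferencing the pointer corresponds to the indirect access in Add (R2), R0, and advancing it by one element corresponds to Add #4, R2. The list values are made up; this is only an illustrative parallel, not generated code.

#include <stdio.h>

int main(void) {
    int num[] = {10, 20, 30, 40, 50};   /* the list NUM1..NUMn (sample values) */
    int n = 5;                          /* counter loaded from N        (R1)   */
    int *p = num;                       /* pointer to the list          (R2)   */
    int sum = 0;                        /* running total                (R0)   */

    while (n > 0) {
        sum += *p;   /* Add (R2), R0 : operand fetched indirectly through the pointer */
        p++;         /* Add #4, R2   : advance the pointer to the next 4-byte word    */
        n--;         /* Decrement R1                                                  */
    }                /* Branch>0 LOOP                                                 */

    printf("SUM = %d\n", sum);          /* Move R0, SUM */
    return 0;
}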
• Fig(a) and fig(b) illustrate two ways of using the Index mode. In fig(a), the index register, R1, contains the
address of a memory-location, and the value X defines an offset (also called a displacement) from this
address to the location where the operand is found.
• To find the EA of the operand:
Eg: Add 20(R1), R2
If [R1] = 1000, then EA = 1000 + 20 = 1020.
• An alternative use is illustrated in fig(b). Here, the constant X corresponds to a memory address, and
the contents of the index register define the offset to the operand. In either case, the effective-address
is the sum of two values; one is given explicitly in the instruction, and the other is stored in a register.
RELATIVE MODE
• This is similar to index-mode with one difference:
The effective-address is determined using the PC in place of the general purpose register Ri.
• The operation is indicated as X(PC).
• X(PC) denotes an effective-address of the operand which is X locations above or below the current
contents of PC.
• Since the addressed-location is identified "relative" to the PC, the name Relative mode is associated
with this type of addressing.
• This mode is used commonly in conditional branch instructions.
• An instruction such as
Branch > 0 LOOP ;Causes program execution to go to the branch target location
identified by name LOOP if branch condition is satisfied.
ASSEMBLY LANGUAGE
• We generally use symbolic-names to write a program.
• A complete set of symbolic-names and rules for their use constitute an Assembly Language.
• The set of rules for using the mnemonics in the specification of complete instructions and programs is
called the Syntax of the language.
• Programs written in an assembly language can be automatically translated into a sequence of
machine instructions by a program called an Assembler.
• The user program in its original alphanumeric text format is called a Source Program, and the
assembled machine language program is called an Object Program.
For example:
MOVE R0,SUM ;The term MOVE represents OP code for operation performed by instruction.
ADD #5,R3 ;Adds number 5 to contents of register R3 & puts the result back into registerR3.
ASSEMBLER DIRECTIVES
• Directives are the assembler commands to the assembler concerning the program being assembled.
• These commands are not translated into machine opcode in the object-program.
• EQU informs the assembler about the value of an identifier (Figure: 2.18).
Ex: SUM EQU 200 ;Informs assembler that the name SUM should be replaced by the value 200.
• ORIGIN tells the assembler about the starting-address of memory-area to place the data block.
Ex: ORIGIN 204 ;Instructs assembler to initiate data-block at memory-locations starting from 204.
• DATAWORD directive tells the assembler to load a value into the location.
Ex: N DATAWORD 100 ;Informs the assembler to load data 100 into the memory-location N(204).
• RESERVE directive is used to reserve a block of memory.
Ex: NUM1 RESERVE 400 ;declares a memory-block of 400 bytes is to be reserved for data.
• END directive tells the assembler that this is the end of the source-program text.
• RETURN directive identifies the point at which execution of the program should be terminated.
• Any statement that makes instructions or data being placed in a memory-location may be given a
label. The label(say N or NUM1) is assigned a value equal to the address of that location.
• Each statement in an assembly-language program typically has four fields:
1) Label Field is an optional name that is assigned the address of the statement.
2) Operation Field contains the OP-code mnemonic of an instruction or an assembler directive.
3) Operand Field gives the addressing information for the operands (registers, memory addresses or constants),
depending on the type of instruction.
4) Comment Field is used for documentation purposes to make program easier to understand.
MEMORY-MAPPED I/O
• Some address values are used to refer to peripheral device buffer-registers such as DATAIN &
DATAOUT.
• No special instructions are needed to access the contents of the registers; data can be transferred
between these registers and the processor using instructions such as Move, Load or Store.
• For example, contents of the keyboard character buffer DATAIN can be transferred to register R1 in
the processor by the instruction
MoveByte DATAIN,R1
• The MoveByte operation code signifies that the operand size is a byte.
• The Testbit instruction tests the state of one bit in the destination, where the bit position to be
tested is indicated by the first operand.
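The pattern of testing a status bit and then reading DATAIN can be sketched in C. Everything below is simulated: the register layout, the SIN bit position and the 'hardware' that deposits a character are assumptions for illustration (on real hardware the registers would be fixed memory-mapped addresses accessed through volatile pointers).

#include <stdio.h>
#include <stdint.h>

/* Simulated device registers; real ones would live at fixed bus addresses. */
static volatile uint8_t STATUS = 0;     /* status register, bit 0 = SIN (assumed) */
static volatile uint8_t DATAIN = 0;     /* keyboard input buffer                  */
#define SIN_MASK 0x01

/* Program-controlled input: test the SIN flag, then read DATAIN. */
static uint8_t read_keyboard_char(void) {
    while ((STATUS & SIN_MASK) == 0)    /* Testbit-style busy wait                */
        ;
    STATUS &= (uint8_t)~SIN_MASK;       /* SIN is cleared when the char is read   */
    return DATAIN;                      /* MoveByte DATAIN, R1                    */
}

int main(void) {
    DATAIN = 'A';                       /* pretend the keyboard deposited 'A'     */
    STATUS |= SIN_MASK;

    printf("Read character: %c\n", read_keyboard_char());
    return 0;
}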
STACKS
• A stack is a special type of data structure where elements are inserted from one end and elements
are deleted from the same end. This end is called the top of the stack (Figure: 2.14).
• The various operations performed on stack:
1) Insert: An element is inserted from top end. Insertion operation is called push operation.
2) Delete: An element is deleted from top end. Deletion operation is called pop operation.
• A processor-register is used to keep track of the address of the element of the stack that is at the top
at any given time. This register is called the Stack Pointer (SP).
• If we assume a byte-addressable memory with a 32-bit word length,
1) The push operation can be implemented as
Subtract #4, SP
Move NEWITEM, (SP)
2) The pop operation can be implemented as
Move (SP), ITEM
Add #4, SP
• A safe push operation must first check that the stack is not full (stack overflow), and a safe pop
operation must first check that the stack is not empty (stack underflow).
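A rough C sketch of this idea follows; the stack size, the element type and the error handling are arbitrary choices made for the illustration.

#include <stdio.h>

#define STACK_WORDS 4
static int  stack[STACK_WORDS];
static int *sp = stack + STACK_WORDS;   /* SP: the stack grows toward lower addresses */

/* Safe push: refuse to push when the stack is full (SP at the lowest address). */
static int safe_push(int item) {
    if (sp == stack) {
        printf("stack overflow\n");
        return -1;
    }
    *(--sp) = item;     /* Subtract #4, SP ; Move NEWITEM, (SP) */
    return 0;
}

/* Safe pop: refuse to pop when the stack is empty (SP at its initial value). */
static int safe_pop(int *item) {
    if (sp == stack + STACK_WORDS) {
        printf("stack underflow\n");
        return -1;
    }
    *item = *sp++;      /* Move (SP), ITEM ; Add #4, SP */
    return 0;
}

int main(void) {
    int v;
    safe_push(17);
    if (safe_pop(&v) == 0) printf("popped %d\n", v);
    safe_pop(&v);       /* demonstrates the underflow check */
    return 0;
}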
QUEUE
• Data are stored in and retrieved from a queue on a FIFO basis.
• Differences between a stack and a queue:
1) One end of the stack is fixed while the other end rises and falls as data are pushed and popped.
2) In stack, a single pointer is needed to keep track of top of the stack at any given time.
In queue, two pointers are needed to keep track of both the front and end for removal
and insertion respectively.
3) Without further control, a queue would continuously move through the memory of a computer
in the direction of higher addresses. One way to limit the queue to a fixed region in memory is to
use a circular buffer.
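A minimal C sketch of such a circular buffer, with one index for insertion at the back and one for removal at the front; the capacity of 4 is an arbitrary choice.

#include <stdio.h>

#define QSIZE 4                 /* capacity of the circular buffer (arbitrary) */
static int q[QSIZE];
static int front = 0, back = 0; /* removal index and insertion index           */
static int count = 0;           /* number of items currently stored            */

/* Insert at the back; wrap around instead of marching through memory. */
static int enqueue(int item) {
    if (count == QSIZE) return -1;          /* queue full         */
    q[back] = item;
    back = (back + 1) % QSIZE;              /* circular increment */
    count++;
    return 0;
}

/* Remove from the front, on a FIFO basis. */
static int dequeue(int *item) {
    if (count == 0) return -1;              /* queue empty        */
    *item = q[front];
    front = (front + 1) % QSIZE;
    count--;
    return 0;
}

int main(void) {
    int v;
    for (int i = 1; i <= 3; i++) enqueue(i * 10);
    while (dequeue(&v) == 0) printf("%d ", v);  /* prints 10 20 30 */
    printf("\n");
    return 0;
}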
SUBROUTINES
• A subtask consisting of a set of instructions which is executed many times is called a Subroutine.
• A Call instruction causes a branch to the subroutine (Figure: 2.16).
• At the end of the subroutine, a return instruction is executed.
• Program resumes execution at the instruction immediately following the subroutine call.
• The way in which a computer makes it possible to call and return from subroutines is referred to as
its Subroutine Linkage method.
• The simplest subroutine linkage method is to save the return-address in a specific location, which
may be a register dedicated to this function. Such a register is called the Link Register.
• When the subroutine completes its task, the Return instruction returns to the calling-program by
branching indirectly through the link-register.
• The Call Instruction is a special branch instruction that performs the following operations:
→ Store the contents of PC into link-register.
→ Branch to the target-address specified by the instruction.
• The Return Instruction is a special branch instruction that performs the operation:
→ Branch to the address contained in the link-register.
PARAMETER PASSING
• The exchange of information between a calling-program and a subroutine is referred to as
Parameter Passing (Figure: 2.25).
• The parameters may be placed in registers or in memory-location, where they can be accessed by
the subroutine.
• Alternatively, parameters may be placed on the processor-stack used for saving the return-address.
• Following is a program for adding a list of numbers using subroutine with the parameters passed
through registers.
STACK FRAME
• Stack Frame refers to locations that constitute a private work-space for the subroutine.
• The work-space is
→ created at the time the subroutine is entered &
→ freed up when the subroutine returns control to the calling-program (Figure: 2.26).
• Fig: 2.27 show an example of a commonly used layout for information in a stack-frame.
• When SUB1 executes the return instruction, the main-program stores the answer in memory-location RESULT and
continues its execution.
LOGIC INSTRUCTIONS
• Logic operations such as AND, OR, and NOT are applied to individual bits.
• These are the basic building blocks of digital-circuits.
• It is also useful to be able to perform logic operations in software, which is done using instructions
that apply these operations to all bits of a word or byte independently and in parallel.
• For example, the instruction
Not dst
complements all bits contained in the destination operand, changing 0s to 1s and 1s to 0s.
ROTATE OPERATIONS
• In shift operations, the bits shifted out of the operand are lost, except for the last bit shifted out
which is retained in the Carry-flag C.
• To preserve all bits, a set of rotate instructions can be used.
• They move the bits that are shifted out of one end of the operand back into the other end.
• Two versions of both the left and right rotate instructions are usually provided.
In one version, the bits of the operand are simply rotated.
In the other version, the rotation includes the C flag.
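Rotation can be expressed in C with shifts and ORs. The sketch below shows a 32-bit rotate-left without the carry and a rotate-left through the carry; since C gives no direct access to the processor's C flag, the carry is modelled as a separate variable.

#include <stdio.h>
#include <stdint.h>

/* Rotate left without the carry: bits leaving the MSB re-enter at the LSB. */
static uint32_t rotate_left(uint32_t x, unsigned n) {
    n &= 31;
    return n ? (x << n) | (x >> (32 - n)) : x;
}

/* Rotate left through the carry: the carry acts as a 33rd bit of the operand. */
static uint32_t rotate_left_with_carry(uint32_t x, unsigned *carry) {
    unsigned new_carry = (x >> 31) & 1;     /* bit shifted out of the MSB */
    uint32_t result = (x << 1) | (*carry & 1);
    *carry = new_carry;
    return result;
}

int main(void) {
    unsigned carry = 0;
    printf("%08X\n", (unsigned)rotate_left(0x80000001u, 1));       /* 00000003 */
    printf("%08X carry=%u\n",
           (unsigned)rotate_left_with_carry(0x80000001u, &carry),
           carry);                                                 /* 00000002 carry=1 */
    return 0;
}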
Problem 1:
Write a program that can evaluate the expression A*B+C*D in a single-accumulator processor.
Assume that the processor has Load, Store, Multiply, and Add instructions and that all values fit in the
accumulator
Solution:
A program for the expression is:
Load A
Multiply B
Store RESULT
Load C
Multiply D
Add RESULT
Store RESULT
Problem 2:
Registers R1 and R2 of a computer contain the decimal values 1200 and 4600. What is the effective-
address of the memory operand in each of the following instructions?
(a) Load 20(R1), R5
(b) Move #3000,R5
(c) Store R5,30(R1,R2)
(d) Add -(R2),R5
(e) Subtract (R1)+,R5
Solution:
(a) EA = [R1]+Offset=1200+20 = 1220
(b) EA = 3000
(c) EA = [R1]+[R2]+Offset = 1200+4600+30=5830
(d) EA = [R2] - 4 = 4596 (autodecrement mode; R2 is first decremented by 4, assuming 32-bit operands)
(e) EA = [R1] = 1200
Problem 3:
Registers R1 and R2 of a computer contain the decimal values 2900 and 3300. What is the effective-
address of the memory operand in each of the following instructions?
(a) Load R1,55(R2)
(b) Move #2000,R7
(c) Store 95(R1,R2),R5
(d) Add (R1)+,R5
(e) Subtract -(R2),R5
Solution:
a) Load R1,55(R2) This is indexed addressing mode. So EA = 55+R2=55+3300=3355.
b) Move #2000,R7 This is an immediate addressing mode. So, EA = 2000
c) Store 95(R1,R2),R5 This is a variation of indexed addressing mode, in which contents of 2
registers are added with the offset or index to generate EA. So,
95+R1+R2=95+2900+3300=6255.
d) Add (R1)+,R5 This is Autoincrement mode. Contents of R1 are the EA so, 2900 is the EA.
e) Subtract -(R2),R5 This is Autodecrement mode. Here, R2 is decremented by 4 bytes
(assuming a 32-bit processor) to generate the EA; so, EA = 3300-4 = 3296.
Problem 4:
Given a binary pattern in some memory-location, is it possible to tell whether this pattern represents a
machine instruction or a number?
Solution:
No; any binary pattern can be interpreted as a number or as an instruction.
Problem 5:
Both of the following statements cause the value 300 to be stored in location 1000, but at different
times.
ORIGIN 1000
DATAWORD 300
And
Move #300,1000
Explain the difference.
Solution:
The assembler directives ORIGIN and DATAWORD cause the object program memory image
constructed by the assembler to indicate that 300 is to be placed at memory word location 1000
at the time the program is loaded into memory prior to execution.
The Move instruction places 300 into memory word location 1000 when the instruction is
executed as part of a program.
Problem 6:
Register R5 is used in a program to point to the top of a stack. Write a sequence of instructions using
the Index, Autoincrement, and Autodecrement addressing modes to perform each of the following tasks:
(a) Pop the top two items off the stack, add them, and then push the result onto the stack.
(b) Copy the fifth item from the top into register R3.
(c) Remove the top ten items from the stack.
Solution:
(a) Move (R5)+,R0
Add (R5)+,R0
Move R0,-(R5)
(b) Move 16(R5),R3
(c) Add #40,R5
Problem 7:
Consider the following possibilities for saving the return address of a subroutine:
(a) In the processor register.
(b) In a memory-location associated with the call, so that a different location is used when the
subroutine is called from different places
(c) On a stack.
Which of these possibilities supports subroutine nesting and which supports subroutine recursion(that
is, a subroutine that calls itself)?
Solution:
(a) Neither nesting nor recursion is supported.
(b) Nesting is supported, because different Call instructions will save the return address at
different memory-locations. Recursion is not supported.
(c) Both nesting and recursion are supported.
COMPUTER ORGANIZATION & ARCHITECTURE MODULE 2
ACCESSING I/O-DEVICES
• A single bus-structure can be used for connecting I/O-devices to a computer (Figure 7.1).
• Each I/O device is assigned a unique set of addresses.
• Bus consists of 3 sets of lines to carry address, data & control signals.
• When processor places an address on address-lines, the intended-device responds to the command.
• The processor requests either a read or write-operation.
• The requested-data are transferred over the data-lines.
• There are 2 ways to deal with I/O-devices: 1) Memory-mapped I/O & 2) I/O-mapped I/O.
1) Memory-Mapped I/O
Memory and I/O-devices share a common address-space.
Any data-transfer instruction (like Move, Load) can be used to exchange information.
For example,
Move DATAIN, R0; This instruction sends the contents of location DATAIN to register R0.
Here, DATAIN is the address of the input-buffer of the keyboard.
2) I/O-Mapped I/O
Memory and I/0 address-spaces are different.
Special instructions named IN and OUT are used for data-transfer.
Advantage of separate I/O space: I/O-devices deal with fewer address-lines.
I/O Interface for an Input Device
1) Address Decoder: enables the device to recognize its address when this address
appears on the address-lines (Figure 7.2).
2) Status Register: contains information relevant to operation of I/O-device.
3) Data Register: holds data being transferred to or from processor. There are 2 types:
i) DATAIN Input-buffer associated with keyboard.
ii) DATAOUT Output data buffer of a display/printer.
INTERRUPTS
• There are many situations where other tasks can be performed while waiting for an I/O device to
become ready.
• A hardware signal called an Interrupt will alert the processor when an I/O device becomes ready.
• Interrupt-signal is sent on the interrupt-request line.
• The processor can be performing its own task without the need to continuously check the I/O-device.
• The routine executed in response to an interrupt-request is called ISR.
• The processor must inform the device that its request has been recognized by sending INTA signal.
(INTR Interrupt Request, INTA Interrupt Acknowledge, ISR Interrupt Service Routine)
• For example, consider COMPUTE and PRINT routines (Figure 3.6).
INTERRUPT HARDWARE
• Most computers have several I/O devices that can request an interrupt.
• A single interrupt-request (IR) line may be used to serve n devices (Figure 4.6).
• All devices are connected to IR line via switches to ground.
• To request an interrupt, a device closes its associated switch.
• Thus, if all IR signals are inactive, the voltage on the IR line will be equal to Vdd.
• When a device requests an interrupt, the voltage on the line drops to 0.
• This causes the INTR received by the processor to go to 1.
• The value of INTR is the logical OR of the requests from individual devices.
INTR=INTR1+ INTR2+ ............................ +INTRn
• Special gates known as open-collector (or open-drain) gates are used to drive the INTR line.
• The output of an open-collector gate is equivalent to a switch to ground that is
→ open when the gate's input is in the '0' state and
→ closed when the gate's input is in the '1' state.
• Resistor R is called a Pull-up Resistor because
it pulls the line voltage up to the high-voltage state when the switches are open.
HANDLING MULTIPLE DEVICES
• While handling multiple devices, the issues concerned are:
1) How can the processor recognize the device requesting an interrupt?
2) How can the processor obtain the starting address of the appropriate ISR?
3) Should a device be allowed to interrupt the processor while another interrupt is being
serviced?
4) How should 2 or more simultaneous interrupt-requests be handled?
POLLING
• Information needed to determine whether device is requesting interrupt is available in status-register
• Following condition-codes are used:
DIRQ Interrupt-request for display.
KIRQ Interrupt-request for keyboard.
KEN keyboard enable.
DEN Display Enable.
SIN, SOUT status flags.
• For an input device, the SIN status flag is used.
SIN = 1 when a character is entered at the keyboard.
SIN = 0 when the character is read by processor.
IRQ = 1 when a device raises an interrupt-request (Figure 4.3).
• Simplest way to identify interrupting-device is to have ISR poll all devices connected to bus.
• The first device encountered with its IRQ bit set is serviced.
• After servicing first device, next requests may be serviced.
• Advantage: Simple & easy to implement.
Disadvantage: More time spent polling IRQ bits of all devices.
VECTORED INTERRUPTS
• A device requesting an interrupt identifies itself by sending a special-code to processor over bus.
• Then, the processor starts executing the ISR.
• The special-code indicates starting-address of ISR.
• The special-code length ranges from 4 to 8 bits.
• The location pointed to by the interrupting-device is used to store the starting address of the ISR.
• The starting address of the ISR is called the interrupt vector.
• Processor
→ loads interrupt-vector into PC &
→ executes appropriate ISR.
• When processor is ready to receive interrupt-vector code, it activates INTA line.
• Then, I/O-device responds by sending its interrupt-vector code & turning off the INTR signal.
• The interrupt vector also includes a new value for the Processor Status Register.
INTERRUPT NESTING
• A multiple-priority scheme is implemented by using separate INTR & INTA lines for each device
• Each INTR line is assigned a different priority-level (Figure 4.7).
• Priority-level of processor is the priority of program that is currently being executed.
• Processor accepts interrupts only from devices that have higher-priority than its own.
• At the time of execution of ISR for some device, priority of processor is raised to that of the device.
• Thus, interrupts from devices at the same level of priority or lower are disabled.
Privileged Instruction
• Processor's priority is encoded in a few bits of PS word. (PS Processor-Status).
• Encoded-bits can be changed by Privileged Instructions that write into PS.
• Privileged-instructions can be executed only while processor is running in Supervisor Mode.
• Processor is in supervisor-mode only when executing operating-system routines.
Privileged Exception
• User program cannot
→ accidently or intentionally change the priority of the processor &
→ disrupt the system-operation.
• An attempt to execute a privileged-instruction while in user-mode leads to a Privileged Exception.
SIMULTANEOUS REQUESTS
• The processor must have some mechanisms to decide which request to service when simultaneous
requests arrive.
• INTR line is common to all devices (Figure 4.8a).
• INTA line is connected in a daisy-chain fashion.
• INTA signal propagates serially through devices.
• When several devices raise an interrupt-request, INTR line is activated.
• Processor responds by setting INTA line to 1. This signal is received by device 1.
• Device-1 passes signal on to device 2 only if it does not require any service.
• If device-1 has a pending-request for interrupt, the device-1
→ blocks INTA signal &
→ proceeds to put its identifying-code on data-lines.
• Device that is electrically closest to processor has highest priority.
• Advantage: It requires fewer wires than the individual connections.
Arrangement of Priority Groups
• Here, the devices are organized in groups & each group is connected at a different priority level.
• Within a group, devices are connected in a daisy chain. (Figure 4.8b).
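The daisy-chain acknowledgement described above can be simulated in software. In the C sketch below the number of devices and the request pattern are made up; INTA is passed from the device closest to the processor toward the others, and the first requesting device it reaches blocks it.

#include <stdio.h>

#define NUM_DEVICES 4

int main(void) {
    int requesting[NUM_DEVICES] = {0, 1, 0, 1};   /* devices 2 and 4 request (sample) */
    int inta = 1;                                 /* INTA asserted by the processor   */
    int winner = -1;

    for (int d = 0; d < NUM_DEVICES && inta; d++) {
        if (requesting[d]) {
            winner = d + 1;   /* this device blocks INTA and puts its code on the bus */
            inta = 0;
        }
        /* otherwise the device simply passes INTA to the next one downstream */
    }

    if (winner > 0)
        printf("Device %d is acknowledged (closest requesting device wins)\n", winner);
    return 0;
}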
EXCEPTIONS
• An interrupt is an event that causes
→ execution of one program to be suspended &
→ execution of another program to begin.
• Exception refers to any event that causes an interruption. For ex: I/O interrupts.
1. Recovery from Errors
• These are techniques to ensure that all hardware components are operating properly.
• For ex: Many computers include an ECC in memory which allows detection of errors in stored-data.
(ECC Error Checking Code, ESR Exception Service Routine).
• If an error occurs, control-hardware
→ detects the errors &
→ informs processor by raising an interrupt.
• When exception processing is initiated (as a result of errors), the processor
→ suspends program being executed &
→ starts an ESR. This routine takes appropriate action to recover from the error.
2. Debugging
• Debugger
→ is used to find errors in a program and
→ uses exceptions to provide 2 important facilities: i) Trace & ii) Breakpoints
i) Trace
• When a processor is operating in trace-mode, an exception occurs after execution of every instruction
(using debugging-program as ESR).
• Debugging-program enables user to examine contents of registers, memory-locations and so on.
• On return from debugging-program,
next instruction in program being debugged is executed,
then debugging-program is activated again.
• The trace exception is disabled during the execution of the debugging-program.
ii) Breakpoints
• Here, the program being debugged is interrupted only at specific points selected by user.
• An instruction called Trap (or Software interrupt) is usually provided for this purpose.
• When program is executed & reaches breakpoint, the user can examine memory & register contents.
3. Privilege Exception
• To protect OS from being corrupted by user-programs, Privileged Instructions are executed only
while processor is in supervisor-mode.
• For e.g.
When the processor runs in user-mode, it will not execute instructions that change the priority of the processor.
• An attempt to execute privileged-instruction will produce a Privilege Exception.
• As a result, processor switches to supervisor-mode & begins to execute an appropriate routine in OS.
DIRECT MEMORY ACCESS (DMA)
• The transfer of a block of data directly b/w an external device & main-memory w/o continuous
involvement by processor is called DMA.
• DMA controller
→ is a control circuit that performs DMA transfers (Figure 8.13).
→ is a part of the I/O device interface.
→ performs the functions that would normally be carried out by processor.
• While a DMA transfer is taking place, the processor can be used to execute another program.
BUS ARBITRATION
• The device that is allowed to initiate data-transfers on bus at any given time is called bus-master.
• There can be only one bus-master at any given time.
• Bus Arbitration is the process by which
→ next device to become the bus-master is selected &
→ bus-mastership is transferred to that device.
• The two approaches are:
1) Centralized Arbitration: A single bus-arbiter performs the required arbitration.
2) Distributed Arbitration: All devices participate in selection of next bus-master.
• A conflict may arise if both the processor and a DMA controller or two DMA controllers try to use the
bus at the same time to access the main-memory.
• To resolve this, an arbitration procedure is implemented on the bus to coordinate the activities of all
devices requesting memory transfers.
• The bus arbiter may be the processor or a separate unit connected to the bus.
CENTRALIZED ARBITRATION
• A single bus-arbiter performs the required arbitration (Figure: 4.20).
• Normally, processor is the bus-master.
• Processor may grant bus-mastership to one of the DMA controllers.
• A DMA controller indicates that it needs to become bus-master by activating BR line.
• The signal on the BR line is the logical OR of bus-requests from all devices connected to it.
• Then, processor activates BG1 signal indicating to DMA controllers to use bus when it becomes free.
• BG1 signal is connected to all DMA controllers using a daisy-chain arrangement.
• If DMA controller-1 is requesting the bus,
Then, DMA controller-1 blocks propagation of grant-signal to other devices.
Otherwise, DMA controller-1 passes the grant downstream by asserting BG2.
• Current bus-master indicates to all devices that it is using bus by activating BBSY line.
• The bus-arbiter is used to coordinate the activities of all devices requesting memory transfers.
• Arbiter ensures that only 1 request is granted at any given time according to a priority scheme.
(BR Bus-Request, BG Bus-Grant, BBSY Bus Busy).
• The timing diagram shows the sequence of events for the devices connected to the processor.
• DMA controller-2
→ requests and acquires bus-mastership and
→ later releases the bus. (Figure: 4.21).
• After DMA controller-2 releases the bus, the processor resumes bus-mastership.
DISTRIBUTED ARBITRATION
• All devices participate in the selection of the next bus-master (Figure 4.22).
• Each device on bus is assigned a 4-bit identification number (ID).
• When 1 or more devices request bus, they
→ assert Start-Arbitration signal &
→ place their 4-bit ID numbers on four open-collector lines ARB0 through ARB3.
• A winner is selected as a result of interaction among signals transmitted over these lines.
• Net-outcome is that the code on 4 lines represents request that has the highest ID number.
• Advantage:
This approach offers higher reliability since operation of bus is not dependent on any single device.
For example:
Assume 2 devices A & B have IDs 5 (0101) and 6 (0110); the code on the arbitration lines is their OR, 0111.
Each device compares the pattern on the arbitration lines to its own ID, starting from the MSB.
If the device detects a difference at any bit position, it disables its drivers at that bit position and all lower-order bit positions.
A driver is disabled by placing '0' at the input of the driver.
In this example, A detects a difference on line ARB1; hence it disables its drivers on lines ARB1 & ARB0.
This causes the pattern on the arbitration lines to change to 0110, which means that B has won the
contention.
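This interaction can be simulated as well. The C sketch below uses the IDs from the example (5 and 6) and models the open-collector lines as the bitwise OR of whatever each device currently drives; a device that sees a 1 on a line where its own ID has a 0 disables its drivers from that bit downward, and the surviving code is the highest ID.

#include <stdio.h>

#define NUM_LINES 4     /* arbitration lines ARB3..ARB0 */

int main(void) {
    unsigned id[] = {0x5, 0x6};          /* device IDs from the example          */
    unsigned driven[] = {0x5, 0x6};      /* pattern each device currently drives */
    int n = 2;

    unsigned bus, prev_bus = 0xFFFF;     /* impossible value to start the loop   */
    for (;;) {
        /* Open-collector lines: the bus carries the OR of all driven patterns. */
        bus = 0;
        for (int d = 0; d < n; d++) bus |= driven[d];
        if (bus == prev_bus) break;      /* nothing changed: arbitration settled */
        prev_bus = bus;

        /* Each device compares the bus with its own ID starting from the MSB.
           At the first position where the bus carries 1 but its ID has 0, it
           disables its drivers for that bit and all lower-order bits.          */
        for (int d = 0; d < n; d++) {
            for (int bit = NUM_LINES - 1; bit >= 0; bit--) {
                unsigned mask = 1u << bit;
                if ((bus & mask) && !(id[d] & mask)) {
                    driven[d] &= ~(mask | (mask - 1));
                    break;
                }
            }
        }
    }

    printf("code on the ARB lines = 0x%X (the device with this ID wins)\n", bus);
    return 0;
}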
BUS
• Bus → is used to inter-connect main-memory, processor & I/O-devices
→ includes lines needed to support interrupts & arbitration.
SYNCHRONOUS BUS
• All devices derive timing-information from a common clock-line.
• Equally spaced pulses on this line define equal time intervals.
• During a 'bus cycle', one data-transfer can take place.
A sequence of events during a read-operation
• At time t0, the master (processor)
→ places the device-address on address-lines &
→ sends an appropriate command on control-lines (Figure 7.3).
• The command will
→ indicate an input operation &
→ specify the length of the operand to be read.
• Information travels over bus at a speed determined by physical & electrical characteristics.
• The clock pulse width (t1 - t0) must be longer than the maximum propagation-delay between devices connected to the bus.
• The clock pulse width should also be long enough to allow the devices to decode the address & control signals.
• The addressed slave takes no action and places no data on the bus before t1.
• Information on bus is unreliable during the period t0 to t1 because signals are changing state.
• Slave places requested input-data on data-lines at time t1.
• At end of clock cycle (at time t2 ), master strobes (captures) data on data-lines into its input-buffer
• For data to be loaded correctly into a storage device,
data must be available at input of that device for a period greater than setup-time of device.
A Detailed Timing Diagram for the Read-operation
• The figure shows two views of each signal, except the clock (Figure 7.4).
• One view shows the signal as seen by the master & the other as seen by the slave.
• The master sends the address & command signals on the rising edge at the beginning of the clock period (t0).
• These signals do not actually appear on the bus until tAM.
• Some time later, at tAS, the signals reach the slave.
• The slave decodes the address.
• At t1, the slave sends the requested-data.
• At t2, the master loads the data into its input-buffer.
• Hence, the period t2 - tDM is the setup time for the master's input-buffer.
• The data must continue to be valid after t2 for a period equal to the hold time of that buffer.
Disadvantages
• The master has no way of knowing whether the addressed device has actually responded.
• If the slave does not respond, the error will not be detected.
Multiple Cycle Transfer for Read-operation
• During clock cycle-1, the master sends address/command information on the bus requesting a 'read' operation.
• The slave receives & decodes the address/command information (Figure 7.5).
• At the active edge of the clock, i.e. the beginning of clock cycle-2, the slave makes a decision to respond,
but it cannot supply the requested data immediately.
• The data become ready & are placed on the bus in clock cycle-3.
• At the same time, the slave asserts a control signal called Slave-ready.
• The master strobes the data into its input-buffer at the end of clock cycle-3.
• The bus transfer operation is now complete.
• The master may send a new address to start a new transfer in clock cycle-4.
• The slave-ready signal is an acknowledgement from the slave to the master.
ASYNCHRONOUS BUS
• This method uses handshake-signals between master and slave for coordinating data-transfers.
• There are 2 control-lines:
1) Master-Ready (MR) is used to indicate that master is ready for a transaction.
2) Slave-Ready (SR) is used to indicate that slave is ready for a transaction.
The Read Operation proceeds as follows:
• At t0, master places address/command information on bus.
• At t1, master sets MR-signal to 1 to inform all devices that the address/command-info is ready.
MR-signal =1 causes all devices on the bus to decode the address.
The delay t1 – t0 is intended to allow for any skew that may occurs on the bus.
Skew occurs when 2 signals transmitted from 1 source arrive at the destination at different times.
Therefore, the delay t1 – t0 should be larger than the maximum possible bus skew.
• At t2, slave
→ performs required input-operation &
→ sets SR signal to 1 to inform all devices that it is ready (Figure 7.6).
• At t3, SR signal arrives at master indicating that the input-data are available on bus.
• At t4, master removes address/command information from bus.
• At t5, when the device-interface receives the 1-to-0 transition of MR signal, it removes data and SR
signal from the bus. This completes the input transfer.
• A change of state in one signal is followed by a change in the other signal. Hence, this scheme is
called a Full Handshake.
• Advantage: It provides a higher degree of flexibility and reliability.
INTERFACE-CIRCUITS
• An I/O Interface consists of the circuitry required to connect an I/O device to a computer-bus.
• On one side of the interface, we have bus signals.
On the other side, we have a data path with its associated controls to transfer data between the
interface and the I/O device; this side is known as a port.
• Two types are:
1. Parallel Port transfers data in the form of a number of bits (8 or 16) simultaneously to or
from the device.
2. Serial Port transmits and receives data one bit at a time.
• Communication with the bus is the same for both formats.
• The conversion from the parallel to the serial format, and vice versa, takes place inside the interface-
circuit.
• In parallel-port, the connection between the device and the computer uses
→ a multiple-pin connector and
→ a cable with as many wires.
• This arrangement is suitable for devices that are physically close to the computer.
• A serial port is much more convenient and cost-effective where longer cables are needed.
Functions of I/O Interface
1) Provides a storage buffer for at least one word of data.
2) Contains status-flags that can be accessed by the processor to determine whether the buffer
is full or empty.
3) Contains address-decoding circuitry to determine when it is being addressed by the
processor.
4) Generates the appropriate timing signals required by the bus control scheme.
5) Performs any format conversion that may be necessary to transfer data between the bus and
the I/O device (such as parallel-serial conversion in the case of a serial port).
PARALLEL-PORT
KEYBOARD INTERFACED TO PROCESSOR
INPUT-INTERFACE-CIRCUIT
• Output-lines of DATAIN are connected to the data-lines of bus by means of 3-state drivers (Fig 4.29).
• Drivers are turned on when
→ processor issues a read signal and
→ address selects DATAIN.
• SIN signal is generated using a status-flag circuit (Figure 4.30).
SIN signal is connected to line D0 of the processor-bus using a 3-state driver.
• Address-decoder selects the input-interface based on bits A1 through A31.
• Bit A0 determines whether the status or data register is to be read, when Master-ready is active.
• The interface activates the Slave-ready signal when either Read-status or Read-data is equal to 1.
PRINTER INTERFACED TO PROCESSOR
GENERAL 8-BIT PARALLEL INTERFACE
• Data-lines P7 through P0 can be used for either input or output purposes (Figure 4.34).
• For increased flexibility,
→ some lines can be used as inputs and
→ some lines can be used as outputs.
• The DATAOUT register is connected to data-lines via 3-state drivers that are controlled by a DDR.
• The processor can write any 8-bit pattern into DDR. (DDR Data Direction Register).
• If a bit in DDR = 1,
Then, the corresponding data-line acts as an output-line;
Otherwise, the data-line acts as an input-line.
• Two lines, C1 and C2 are used to control the interaction between interface-circuit and I/0 device.
Two lines, C1 and C2 are also programmable.
• Line C2 is bidirectional to provide different modes of signaling, including the handshake.
• The Ready and Accept lines are the handshake control lines on the processor-bus side.
Hence, the Ready and Accept lines can be connected to Master-ready and Slave-ready.
• The input signal My-address should be connected to the output of an address-decoder.
The address-decoder recognizes the address assigned to the interface.
• There are 3 register select lines: RS0-RS2.
Three register select lines allows up to eight registers in the interface.
• An interrupt-request INTR is also provided.
INTR should be connected to the interrupt-request line on the computer-bus.
STANDARD I/O INTERFACE
• Consider a computer system using different interface standards.
• Let us look in to Processor bus and Peripheral Component Interconnect (PCI) bus (Figure 4.38).
• These two buses are interconnected by a circuit called Bridge.
• The bridge translates the signals and protocols of one bus into another.
• The bridge-circuit introduces a small delay in data transfer between processor and the devices.
PCI
• PCI was developed as a low-cost bus that is truly processor independent.
• PCI supports high speed disk, graphics and video devices.
• PCI has plug and play capability for connecting I/O devices.
• To connect new devices, the user simply connects the device interface board to the bus.
DEVICE CONFIGURATION OF PCI
• The PCI has a configuration ROM that stores information about that device.
• The configuration ROMs of all devices are accessible in the configuration address-space.
• The initialization software reads these ROMs whenever the system is powered up or reset.
• In each case, it determines whether the device is a printer, keyboard or disk controller.
• Devices are assigned address during initialization process.
• Each device has an input signal called IDSEL# (Initialization Device Select), which is connected to one of
the 21 upper address-lines (AD11 to AD31).
• During a configuration operation,
The address is applied to the AD input of the device and
the corresponding AD line is set to 1 while all other lines are set to 0.
AD11 - AD31 → Upper address-lines.
AD00 - AD10 → Lower address-lines: specify the type of operation and access the
contents of the device's configuration ROM.
• The configuration software scans all 21 locations. PCI bus has interrupt-request lines.
• Each device may request an address in the I/O space or the memory space.
SCSI Bus
• SCSI stands for Small Computer System Interface.
• SCSI refers to the standard bus which is defined by ANSI (American National Standard Institute).
• The SCSI bus offers several options, e.g. a narrow (8-bit) or wide (16-bit) data bus, and single-ended (SE)
or low-voltage differential (LVD) signaling.
• Because of these various options, a SCSI connector may have 50, 68 or 80 pins. The data transfer rate
ranges from 5 MB/s to 640 MB/s. The transfer rate depends on,
1) Length of the cable
2) Number of devices connected.
• To achieve high transfer rate, the bus length should be 1.6m for SE signaling and 12m for LVD
signaling.
• The SCSI bus is connected to the processor-bus through the SCSI controller. The data are
stored on a disk in blocks called sectors.
Each sector contains several hundreds of bytes. These data will not be stored in contiguous
memory-location.
• SCSI protocol is designed to retrieve the data in the first sector or any other selected sectors.
• Using SCSI protocol, the burst of data are transferred at high speed.
• The controllers connected to the SCSI bus are of 2 types. They are: 1) Initiator & 2) Target.
1) Initiator
It has the ability to select a particular target & to send commands specifying the operation to
be performed.
They are the controllers on the processor side.
2) Target
The disk controller operates as a target.
It carries out the commands it receive from the initiator.
The initiator establishes a logical connection with the intended target.
Steps for Read-operation
1) The SCSI controller contends for control of the bus (initiator).
2) When the initiator wins the arbitration-process, the initiator
→ selects the target controller and
→ hands over control of the bus to it.
3) The target starts an output operation; in response, the initiator sends a command specifying the
required read-operation.
4) The target
→ sends a message to initiator indicating that it will temporarily suspend connection b/w them.
→ then releases the bus.
5) The target controller sends a command to the disk drive to move the read head to the first sector
involved in the requested read-operation.
6. The target
→ transfers the contents of the data buffer to the initiator and
→ then suspends the connection again.
7) The target controller sends a command to the disk drive to perform another seek operation.
8) As the initiator controller receives the data, it stores them into the main-memory using the DMA
approach.
9) The SCSI controller sends an interrupt to the processor indicating that the data are now available.
PHASES IN SCSI BUS
• The phases in SCSI bus operation are:
1) Arbitration
2) Selection
3) Information transfer
4) Reselection
1) Arbitration
• When the –BSY signal is in inactive state,
→ the bus will be free &
→ any controller can request the use of bus.
• SCSI uses distributed arbitration scheme because
each controller may generate requests at the same time.
• Each controller on the bus is assigned a fixed priority.
• When –BSY becomes active, all controllers that are requesting the bus
→ examine the data-lines &
→ determine whether a higher-priority device is requesting the bus at the same time.
• The controller using the highest numbered line realizes that it has won the arbitration-process.
• At that time, all other controllers disconnect from the bus & wait for –BSY to become inactive again.
2) Selection
• The controller that has won arbitration
→ asserts –BSY and –DB6 (the data-line corresponding to the target controller's ID, 6 in this example).
• The selected target controller responds by asserting –BSY.
• This informs the initiator that the connection it requested has been established.
3) Information Transfer
• The information transferred between two controllers may consist of
→ commands from the initiator to the target
→ status responses from the target to the initiator or
→ data transferred to/from the I/O device.
• Handshake signaling is used to control information transfers, with the target controller taking the role
of the bus-master.
4) Reselection
• The connection between the two controllers has been reestablished, with the target in control of the
bus as required for data transfer to proceed.
USB
• USB stands for Universal Serial Bus.
• USB supports 3 speed of operation. They are,
1) Low speed (1.5 Mbps)
2) Full speed (12 Mbps) &
3) High speed (480 Mbps).
• The USB has been designed to meet the key objectives. They are,
1) Provide a simple, low-cost and easy to use interconnection system.
This overcomes difficulties due to the limited number of I/O ports available on a computer.
2) Accommodate a wide range of data transfer characteristics for I/O devices.
For e.g. telephone and Internet connections
3) Enhance user convenience through a “plug-and-play” mode of operation.
• Advantage: USB helps to add many devices to a computer system at any time without opening the
computer-box.
Port Limitation
Normally, the system has a few limited ports.
To add new ports, the user must open the computer-box to gain access to the internal
expansion bus & install a new interface card.
The user may also need to know how to configure the device & the software.
Plug & Play
The main objective: USB provides a plug & play capability.
The plug & play feature allows a new device to be connected at any time, while the system
is in operation.
The system should
→ Detect the existence of the new device automatically.
→ Identify the appropriate device driver s/w.
→ Establish the appropriate addresses.
→ Establish the logical connection for communication.
USB ARCHITECTURE
• To accommodate a large number of devices that can be added or removed at any time, the USB has
the tree structure as shown in the figure 7.17.
• Each node of the tree has a device called a Hub.
• A hub acts as an intermediate control point between the host and the I/O devices.
• At the root of the tree, a Root Hub connects the entire tree to the host computer.
• The leaves of the tree are the I/O devices being served (for example, keyboard or speaker).
• A hub copies a message that it receives from its upstream connection to all its downstream ports.
• As a result, a message sent by the host computer is broadcast to all I/O devices, but only the
addressed-device will respond to that message.
USB ADDRESSING
• Each device may be a hub or an I/O device.
• Each device on the USB is assigned a 7-bit address.
• This address
→ is local to the USB tree and
→ is not related in any way to the addresses used on the processor-bus.
• A hub may have any number of devices or other hubs connected to it, and addresses are assigned
arbitrarily.
• When a device is first connected to a hub, or when it is powered-on, it has the address 0.
• The hardware of the hub detects the device that has been connected, and it records this fact as part
of its own status information.
• Periodically, the host polls each hub to
→ collect status information and
→ learn about new devices that may have been added or disconnected.
• When the host is informed that a new device has been connected, it uses sequence of commands to
→ send a reset signal on the corresponding hub port.
→ read information from the device about its capabilities.
→ send configuration information to the device, and
→ assign the device a unique USB address.
• Once this sequence is completed, the device
→ begins normal operation and
→ responds only to the new address.
USB PROTOCOLS
• All information transferred over the USB is organized in packets.
• A packet consists of one or more bytes of information.
• There are many types of packets that perform a variety of control functions.
• The information transferred on USB is divided into 2 broad categories: 1) Control and 2) Data.
• Control packets perform tasks such as
→ addressing a device to initiate data transfer.
→ acknowledging that data have been received correctly or
→ indicating an error.
• Data-packets carry information that is delivered to a device.
• A packet consists of one or more fields containing different kinds of information.
• The first field of any packet is called the Packet Identifier (PID), which identifies the type of that
packet.
• The PID bits are transmitted twice:
1) The first time they are sent with their true values and
2) The second time with each bit complemented.
• The four PID bits identify one of 16 different packet types.
• Some control packets, such as ACK (Acknowledge), consist only of the PID byte.
• Control packets used for controlling data transfer operations are called Token Packets.
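• Illustrative Python sketch (not part of any USB library; the helper names and the nibble layout are assumptions for illustration) of the PID check described above: the 4 PID bits are followed by their bit-wise complements, so a receiver can detect a corrupted PID field.

    def make_pid_byte(pid4):
        """Build the 8-bit PID field: 4 PID bits followed by their complements."""
        assert 0 <= pid4 < 16
        check = (~pid4) & 0xF            # each PID bit complemented
        return (check << 4) | pid4       # low nibble = PID bits, high nibble = check bits (layout assumed)

    def pid_is_valid(pid_byte):
        """A received PID field is valid only if the two nibbles are complements of each other."""
        pid4 = pid_byte & 0xF
        check = (pid_byte >> 4) & 0xF
        return check == ((~pid4) & 0xF)

    b = make_pid_byte(0b0010)            # one of the 16 possible packet types
    print(bin(b), pid_is_valid(b))       # valid PID field
    print(pid_is_valid(b ^ 0x10))        # a flipped check bit is detected -> False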
Problem 1:
The input status bit in an interface-circuit is cleared as soon as the input data register is read. Why is
this important?
Solution:
After reading the input data, it is necessary to clear the input status flag before the program
begins a new read-operation. Otherwise, the same input data would be read a second time.
Problem 2:
What is the difference between a subroutine and an interrupt-service routine?
Solution:
A subroutine is called by a program instruction to perform a function needed by the calling
program.
An interrupt-service routine is initiated by an event such as an input operation or a hardware
error. The function it performs may not be at all related to the program being executed at the
time of interruption. Hence, it must not affect any of the data or status information relating to
that program.
Problem 3:
Three devices A, B, & C are connected to the bus of a computer. I/O transfers for all 3 devices use
interrupt control. Interrupt nesting for devices A & B is not allowed, but interrupt-requests from C may
be accepted while either A or B is being serviced. Suggest different ways in which this can be
accomplished in each of the following cases:
(a) The computer has one interrupt-request line.
(b) Two interrupt-request lines INTR1 & INTR2 are available, with INTR1 having higher priority.
Specify when and how interrupts are enabled and disabled in each case.
Solution:
(a) Interrupts should be enabled, except when C is being serviced. The nesting rules can be
enforced by manipulating the interrupt-enable flags in the interfaces of A and B.
(b) A and B should be connected to INTR2, and C to INTR1. When an interrupt-request is received
from either A or B, interrupts from the other device will be automatically disabled until the request
has been serviced. However, interrupt-requests from C will always be accepted.
Problem 4:
Consider a computer in which several devices are connected to a common interrupt-request line. Explain
how you would arrange for interrupts from device j to be accepted before the execution of the interrupt
service routine for device i is completed. Comment in particular on the times at which interrupts must
be enabled and disabled at various points in the system.
Solution:
Interrupts are disabled before the interrupt-service routine is entered. Once device i turns off
its interrupt-request, interrupts may be safely enabled in the processor. If the interface-circuit
of device i turns off its interrupt-request when it receives the interrupt acknowledge signal,
interrupts may be enabled at the beginning of the interrupt-service routine of device i. Otherwise,
interrupts may be enabled only after the instruction that causes device i to turn off its interrupt-
request has been executed.
Problem 5:
Consider the daisy chain arrangement. Assume that after a device generates an interrupt-request, it
turns off that request as soon as it receives the interrupt acknowledge signal. Is it still necessary to
disable interrupts in the processor before entering the interrupt service routine? Why?
Solution:
Yes, because other devices may keep the interrupt-request line asserted.
COMPUTER ORGANIZATION & ARCHITECTURE MODULE 3
BASIC CONCEPTS
• The maximum size of the memory that can be used in any computer is determined by the addressing
scheme, i.e. the number of address bits (for example, a 16-bit address can refer to at most 2^16 = 64K
memory locations).
• The data-input and data-output of each Sense/Write circuit are connected to a single bidirectional
data-line.
• Data-line can be connected to a data-bus of the computer.
• Following 2 control lines are also used:
1) R/W' → Specifies the required operation.
2) CS' → Chip Select input; selects a given chip in a multi-chip memory-system.
CMOS Cell
• Transistor pairs (T3, T5) and (T4, T6) form the inverters in the latch (Figure 8.5).
• In state 1, the voltage at point X is maintained high by having T3 and T6 ON, while T4 and T5 are OFF.
• Thus, if T1 and T2 are turned ON (closed), bit-lines b and b' will have high and low signals, respectively.
• Advantages:
1) It has low power consumption, because current flows in the cell only when the cell is being accessed.
2) Static RAMs can be accessed very quickly; their access time is a few nanoseconds.
• Disadvantage: SRAMs are volatile memories, because their contents are lost when power
is interrupted.
ASYNCHRONOUS DRAM
• Less expensive RAMs can be implemented if simple cells are used.
• Such cells cannot retain their state indefinitely. Hence they are called Dynamic RAM (DRAM).
• The information in a dynamic memory-cell is stored in the form of a charge on a capacitor.
• This charge can be maintained only for tens of milliseconds.
• The contents must be periodically refreshed by restoring this capacitor charge to its full value.
• In order to store information in the cell, the transistor T is turned ON (Figure 8.6).
• The appropriate voltage is applied to the bit-line, which charges the capacitor.
• After the transistor is turned off, the capacitor begins to discharge.
• Hence, the information stored in the cell can be retrieved correctly only if it is read before the charge
on the capacitor drops below a threshold value.
• During a read-operation,
→ the transistor is turned ON &
→ a sense amplifier detects whether the charge on the capacitor is above the threshold value.
If (charge on capacitor) > (threshold value) → the bit-line will have logic value 1.
If (charge on capacitor) < (threshold value) → the bit-line will be set to logic value 0.
• During Read/Write-operation,
→ row-address is applied first.
→ row-address is loaded into row-latch in response to a signal pulse on RAS’ input of chip.
(RAS = Row-address Strobe CAS = Column-address Strobe)
• When a Read-operation is initiated, all cells on the selected row are read and refreshed.
• Shortly after the row-address is loaded, the column-address is
→ applied to the address pins &
→ loaded into the column-address latch in response to a signal pulse on the CAS' input of the chip.
• The information in the latch is decoded.
• The appropriate group of 8 Sense/Write circuits is selected.
R/W'=1 (read-operation) → Output values of the selected circuits are transferred to data-lines D0-D7.
R/W'=0 (write-operation) → Information on D0-D7 is transferred to the selected circuits.
• RAS' & CAS' are active-low, so they cause latching of the address when they change from high
to low.
• To ensure that the contents of DRAMs are maintained, each row of cells is accessed periodically.
• A special memory-circuit provides the necessary control signals RAS‟ & CAS‟ that govern the timing.
• The processor must take into account the delay in the response of the memory.
Fast Page Mode
Transferring consecutive bytes is achieved by applying a consecutive sequence
of column-addresses under the control of successive CAS' signals.
This scheme allows a block of data to be transferred at a faster rate.
This block-transfer capability is called fast page mode.
SYNCHRONOUS DRAM
• The operations are directly synchronized with clock signal (Figure 8.8).
• The address and data connections are buffered by means of registers.
• The output of each sense amplifier is connected to a latch.
• A Read-operation causes the contents of all cells in the selected row to be loaded in these latches.
• Data held in latches that correspond to selected columns are transferred into data-output register.
• Thus, data become available on the data-output pins.
• First, the row-address is latched under control of RAS‟ signal (Figure 8.9).
• The memory typically takes 2 or 3 clock cycles to activate the selected row.
• Then, the column-address is latched under the control of CAS‟ signal.
• After a delay of one clock cycle, the first set of data bits is placed on the data-lines.
• SDRAM automatically increments column-address to access next 3 sets of bits in the selected row.
MEMORY-SYSTEM CONSIDERATION
MEMORY CONTROLLER
• To reduce the number of pins, the dynamic memory-chips use multiplexed-address inputs.
• The address is divided into 2 parts:
1) High-Order Address Bits
Select a row in the cell array.
They are provided first and latched into the memory-chips under control of the RAS' signal.
2) Low-Order Address Bits
Select a column.
They are provided on the same address pins and latched using the CAS' signal.
• The Multiplexing of address bit is usually done by Memory Controller Circuit (Figure 5.11).
• The Controller accepts a complete address & R/W‟ signal from the processor.
• A Request signal indicates a memory access operation is needed.
• Then, the Controller
→ forwards the row & column portions of the address to the memory.
→ generates RAS‟ & CAS‟ signals &
→ sends R/W‟ & CS‟ signals to the memory.
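• A minimal Python sketch of the address multiplexing performed by the memory controller (the 24-bit address and the 12/12 row/column split are assumed purely for illustration):

    ROW_BITS, COL_BITS = 12, 12          # assumed field widths for illustration

    def split_address(addr):
        """Return (row, column): the row is presented first (latched by RAS'),
        then the column (latched by CAS')."""
        col = addr & ((1 << COL_BITS) - 1)
        row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
        return row, col

    row, col = split_address(0xABC123)
    print(hex(row), hex(col))            # 0xabc 0x123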
RAMBUS MEMORY
• The usage of a wide bus is expensive.
• Hence, Rambus developed an implementation based on a narrow bus.
• Rambus technology is a fast signaling method used to transfer information between chips.
• The signals consist of much smaller voltage swings around a reference voltage Vref.
• The reference voltage is about 2V.
• The two logical values are represented by 0.3V swings above and below Vref.
• This type of signaling is generally known as Differential Signaling.
• Rambus provides a complete specification for design of communication called as Rambus Channel.
• Rambus memory has a clock frequency of 400 MHz.
• The data are transmitted on both edges of the clock, so that the effective data-transfer rate is 800 MHz.
• Circuitry needed to interface to Rambus channel is included on chip. Such chips are called RDRAM.
(RDRAM = Rambus DRAMs).
• Rambus channel has:
1) 9 Data-lines (lines 1-8 → transfer the data; line 9 → parity checking).
2) Control-Line &
3) Power line.
• A two-channel Rambus has 18 data-lines. There are no separate address-lines.
• Communication between processor and RDRAM modules is carried out by means of packets
transmitted on the data-lines.
• There are 3 types of packets:
1) Request
2) Acknowledge &
3) Data.
TYPES OF ROM
• Different types of non-volatile memory are
1) PROM
2) EPROM
3) EEPROM &
4) Flash Memory (Flash Cards & Flash Drives)
FLASH MEMORY
• In EEPROM, it is possible to read & write the contents of a single cell.
• In a Flash device, it is possible to read the contents of a single cell, but writes must be performed on an entire block.
• Prior to writing, the previous contents of the block are erased.
Eg. In MP3 player, the flash memory stores the data that represents sound.
• Single flash chips cannot provide sufficient storage capacity for embedded-system.
• Advantages:
1) Flash devices have greater density, which leads to higher capacity & lower cost per bit.
2) It requires single power supply voltage & consumes less power.
• There are 2 methods for implementing larger memory: 1) Flash Cards & 2) Flash Drives
1) Flash Cards
One way of constructing larger module is to mount flash-chips on a small card.
Such flash-card have standard interface.
The card is simply plugged into a conveniently accessible slot.
Memory-size of the card can be 8, 32 or 64MB.
Eg: A minute of music can be stored in 1MB of memory. Hence 64MB flash cards can store an
hour of music.
2) Flash Drives
Larger flash memory can be developed by replacing the hard disk-drive.
The flash drives are designed to fully emulate the hard disk.
The flash drives are solid state electronic devices that have no movable parts.
Advantages:
1) They have shorter seek & access time which results in faster response.
2) They have low power consumption; therefore, they are attractive for battery-driven
applications.
3) They are insensitive to vibration.
Disadvantages:
1) The capacity of flash drive (<1GB) is less than hard disk (>1GB).
2) It leads to higher cost per bit.
3) Flash memory deteriorates after it has been written a number of times (it typically
endures at least 1 million write cycles).
CACHE MEMORIES
• The effectiveness of the cache mechanism is based on the property of Locality of Reference.
Locality of Reference
• Many instructions in localized areas of the program are executed repeatedly during some time period,
while the remainder of the program is accessed relatively infrequently (Figure 8.15).
• There are 2 types:
1) Temporal
The recently executed instructions are likely to be executed again very soon.
2) Spatial
Instructions in close proximity to recently executed instruction are also likely to be executed soon.
• If active segment of program is placed in cache-memory, then total execution time can be reduced.
• Block refers to the set of contiguous address locations of some size.
• The cache-line is used to refer to the cache-block.
MAPPING-FUNCTION
• Here we discuss about 3 different mapping-function:
1) Direct Mapping
2) Associative Mapping
3) Set-Associative Mapping
DIRECT MAPPING
• The block-j of the main-memory maps onto block-j modulo-128 of the cache (Figure 8.16).
• Thus, whenever one of the memory-blocks 0, 128, 256, ... is loaded into the cache, it is stored in cache-block 0.
Similarly, memory-blocks 1, 129, 257, ... are stored in cache-block 1.
• Contention may arise
1) when the cache is full or
2) when more than one memory-block is mapped onto a given cache-block position.
• The contention is resolved by
allowing the new block to overwrite the currently resident block.
• Memory-address determines placement of block in the cache.
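• A small Python sketch of the direct-mapped address fields (the 16-bit address with 16 words per block is an assumption; the 128 cache-block positions are from the description above):

    WORD_BITS, BLOCK_BITS = 4, 7         # 16 words/block (assumed), 128 cache blocks

    def direct_map(address):
        word  = address & 0xF                     # word within the block
        block = (address >> WORD_BITS) & 0x7F     # cache position = (memory block) modulo 128
        tag   = address >> (WORD_BITS + BLOCK_BITS)
        return tag, block, word

    # Memory blocks 0, 128 and 256 all land in cache-block 0, but with different tags:
    for mem_block in (0, 128, 256):
        print(direct_map(mem_block * 16))         # (0,0,0), (1,0,0), (2,0,0)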
ASSOCIATIVE MAPPING
• The memory-block can be placed into any cache-block position. (Figure 8.17).
• 12 tag-bits are required to identify a memory-block when it is resident in the cache.
• Tag-bits of an address received from processor are compared to the tag-bits of each block of cache.
• This comparison is done to see if the desired block is present.
SET-ASSOCIATIVE MAPPING
• It is the combination of direct and associative mapping. (Figure 8.18).
• The blocks of the cache are grouped into sets.
• The mapping allows a block of the main-memory to reside in any block of the specified set.
• In this example, the cache has 2 blocks per set, so memory-blocks 0, 64, 128, ..., 4032 map into cache set 0.
• A memory-block can occupy either of the two block positions within its set.
6-bit set field → Determines which set of the cache contains the desired block.
6-bit tag field → The tag field of the address is compared to the tags of the two blocks of the set.
This comparison is done to check if the desired block is present.
• A cache that has one block per set is equivalent to direct mapping.
• A cache that has k blocks per set is called a k-way set-associative cache.
• Each block contains a control-bit called a valid-bit.
• The Valid-bit indicates that whether the block contains valid-data.
• The dirty bit indicates that whether the block has been modified during its cache residency.
Valid-bit=0 When power is initially applied to system.
Valid-bit=1 When the block is loaded from main-memory at first time.
• If a main-memory block is updated by a source (such as a DMA transfer from a disk) and the block
also exists in the cache, the valid-bit of the corresponding cache-block is cleared to 0.
• The problem of keeping the cache and main-memory copies of data identical when both the processor
& DMA can modify them is called the Cache-Coherence Problem.
• Advantages:
1) Contention problem of direct mapping is solved by having few choices for block placement.
2) The hardware cost is decreased by reducing the size of associative search.
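• A small Python sketch of the set-associative address fields for the 2-way example above (the 6-bit set and tag fields are from the description; the 4-bit word field is an assumption):

    WORD_BITS, SET_BITS = 4, 6           # 64 sets = 128 blocks / 2 blocks per set

    def set_assoc_map(address):
        word = address & ((1 << WORD_BITS) - 1)
        s    = (address >> WORD_BITS) & ((1 << SET_BITS) - 1)   # which set to search
        tag  = address >> (WORD_BITS + SET_BITS)                # compared with both blocks of the set
        return tag, s, word

    for mem_block in (0, 64, 128, 4032):
        tag, s, _ = set_assoc_map(mem_block * 16)
        print(mem_block, "-> set", s, "tag", tag)               # all of these map to set 0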
REPLACEMENT ALGORITHM
• In direct mapping method,
the position of each block is pre-determined and there is no need of replacement strategy.
• In associative & set associative method,
The block position is not pre-determined.
If the cache is full and if new blocks are brought into the cache,
then the cache-controller must decide which of the old blocks has to be replaced.
• When a block is to be overwritten, the block with longest time w/o being referenced is over-written.
• This block is called Least recently Used (LRU) block & the technique is called LRU algorithm.
• The cache-controller tracks the references to all blocks with the help of block-counter.
• Note: Performance of the LRU algorithm can be improved by introducing a small amount of randomness
in deciding which block is to be overwritten.
Eg:
Consider 4 blocks/set in set associative cache.
2 bit counter can be used for each block.
When a hit occurs, the counter of the referenced block is set to 0; counters with values originally
lower than the referenced one are incremented by 1, and all others remain unchanged.
When a miss occurs and the set is full, the block with counter value 3 is removed, the
new block is put in its place with its counter set to 0, and the other three block counters are incremented
by 1.
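• A Python sketch of the 2-bit-counter policy described in the example above, for one 4-block set (counter value 0 = most recently used, 3 = least recently used):

    def lru_hit(counters, hit_block):
        h = counters[hit_block]
        for i in range(len(counters)):
            if counters[i] < h:              # blocks more recent than the hit one age by 1
                counters[i] += 1
        counters[hit_block] = 0              # the referenced block becomes the most recent

    def lru_miss(counters):
        victim = counters.index(3)           # the block with counter value 3 is replaced
        for i in range(len(counters)):
            counters[i] = 0 if i == victim else counters[i] + 1
        return victim

    counters = [0, 1, 2, 3]
    lru_hit(counters, 2); print(counters)    # [1, 2, 0, 3]
    print(lru_miss(counters), counters)      # victim is block 3; counters become [2, 3, 1, 0]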
PERFORMANCE CONSIDERATION
• Two key factors in the commercial success are 1) performance & 2) cost.
• In other words, the best possible performance at low cost.
• A common measure of success is called the Price/Performance ratio.
• Performance depends on
→ how fast the machine instructions are brought to the processor &
→ how fast the machine instructions are executed.
• To achieve parallelism, interleaving is used.
• Parallelism means performing two or more operations at the same time.
INTERLEAVING
• The main-memory of a computer is structured as a collection of physically separate modules.
• Each module has its own
1) ABR (address buffer register) &
2) DBR (data buffer register).
• So, memory access operations may proceed in more than one module at the same time (Fig 5.25).
• Thus, the aggregate-rate of transmission of words to/from the main-memory can be increased.
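• A minimal Python sketch of interleaving (4 modules assumed for illustration): the low-order address bits select the module, so consecutive words fall in different modules and can be accessed in parallel.

    NUM_MODULES = 4                          # assumed number of memory modules

    def module_and_offset(address):
        return address % NUM_MODULES, address // NUM_MODULES

    for addr in range(6):
        print(addr, "-> module", *module_and_offset(addr))
    # Words 0,1,2,3 go to modules 0,1,2,3; word 4 wraps back to module 0.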
VIRTUAL MEMORY
• It refers to a technique that automatically move program/data blocks into the main-memory when
they are required for execution (Figure 8.24).
• The address generated by the processor is referred to as a virtual/logical address.
• The virtual-address is translated into physical-address by MMU (Memory Management Unit).
• During every memory-cycle, MMU determines whether the addressed-word is in the memory.
If the word is in the memory,
the word is accessed and execution proceeds.
Otherwise, a page containing the desired word is transferred from the disk to the memory.
• Using DMA scheme, transfer of data between disk and memory is performed.
• When OS changes contents of page-table, the control-bit will invalidate corresponding entry in TLB.
• Given a virtual-address, the MMU looks in TLB for the referenced-page.
If page-table entry for this page is found in TLB, the physical-address is obtained immediately.
Otherwise, the required entry is obtained from the page-table & TLB is updated.
Page Faults
• Page-fault occurs when a program generates an access request to a page that is not in memory.
• When the MMU detects a page-fault, it raises an interrupt (exception) to ask the OS to intervene.
• The OS
→ suspends the execution of the task that caused the page-fault and
→ begins execution of another task whose pages are in memory.
• When the task resumes, the interrupted instruction must continue from the point of interruption.
• If a new page is brought from the disk when the memory is full, it must replace one of the resident pages.
In this case, the LRU algorithm is used to remove the least recently referenced page from memory.
• A modified page has to be written back to the disk before it is removed from the memory.
In this case, a write-back approach is used; the write-through protocol used with caches is not
suitable for virtual memory, because writing every update to the disk would be too slow.
SECONDARY-STORAGE
• The semi-conductor memories do not provide all the storage capability.
• The secondary-storage devices provide larger storage requirements.
• Some of the secondary-storage devices are:
1) Magnetic Disk
2) Optical Disk &
3) Magnetic Tapes.
MAGNETIC DISK
• A magnetic disk system consists of one or more disks mounted on a common spindle.
• A thin magnetic film is deposited on each disk (Figure 8.27).
• Disk is placed in a rotary-drive so that magnetized surfaces move in close proximity to R/W heads.
• Each R/W head consists of 1) Magnetic Yoke & 2) Magnetizing-Coil.
• Digital information is stored on magnetic film by applying current pulse to the magnetizing-coil.
• Only changes in the magnetic field under the head can be sensed during the Read-operation.
• Therefore, if the binary states 0 & 1 are represented by two opposite states of magnetization,
a voltage is induced in the head only at 0-1 and at 1-0 transitions in the bit stream.
• Consecutive 0s or 1s are determined with the help of a clock.
• Manchester Encoding technique is used to combine the clocking information with data.
• R/W heads are maintained at small distance from disk-surfaces in order to achieve high bit densities.
• When the disk rotates at its steady rate, air pressure develops between the disk-surface & the head.
This air pressure forces the head away from the surface.
• The flexible spring connection between head and its arm mounting permits the head to fly at the
desired distance away from the surface.
Winchester Technology
• Read/Write heads are placed in a sealed, air-filtered enclosure; this approach is known as Winchester technology.
• The read/write heads can operate closer to the magnetic track surfaces, because
the dust particles which are a problem in unsealed assemblies are absent.
Advantages
• It has a larger capacity for a given physical size.
• The data density is high because
the storage medium is not exposed to contaminating elements.
• The read/write heads of a disk system are movable.
• The disk system has 3 parts: 1) Disk Platter (Usually called Disk)
2) Disk-drive (spins the disk & moves Read/write heads)
3) Disk Controller (controls the operation of the system.)
Latency = 3 ms; internal data transfer rate = 34 MB/s.
DATA BUFFER/CACHE
• A disk-drive that incorporates the required SCSI circuit is referred as SCSI Drive.
• The SCSI bus can transfer data at a higher rate than the rate at which data can be read from the disk tracks.
• A data buffer can be used to deal with the possible difference in transfer rate b/w disk and SCSI bus
• The buffer is a semiconductor memory.
• The buffer can also provide cache mechanism for the disk.
i.e. when a read request arrives at the disk, the controller first checks whether the data are available in
the cache/buffer.
If the data are available in the cache,
they can be accessed & placed on the SCSI bus immediately.
Otherwise, the data are retrieved from the disk.
DISK CONTROLLER
• The disk controller acts as interface between disk-drive and system-bus (Figure 8.13).
• The disk controller uses DMA scheme to transfer data between disk and memory.
• When the OS initiates a transfer by issuing a R/W' request, the controller's registers are loaded with the
following information:
1) Memory Address: Address of first memory-location of the block of words involved in the
transfer.
2) Disk Address: Location of the sector containing the beginning of the desired block of words.
3) Word Count: Number of words in the block to be transferred.
Problem 1:
Consider the dynamic memory cell. Assume that C = 30 femtofarads (10^-15 F) and that the leakage current
through the transistor is about 0.25 picoamperes (10^-12 A). The voltage across the capacitor when it is
fully charged is 1.5 V. The cell must be refreshed before this voltage drops below 0.9 V. Estimate the
minimum refresh rate.
Solution:
The cell must be refreshed before the capacitor voltage drops by ΔV = 1.5 − 0.9 = 0.6 V. The time
available is t = C·ΔV / I = (30 × 10^-15 F × 0.6 V) / (0.25 × 10^-12 A) = 72 ms. Hence each cell must be
refreshed at least once every 72 ms, i.e. a minimum refresh rate of about 14 refreshes per second per cell.
Problem 2:
Consider a main-memory built with SDRAM chips. Data are transferred in bursts & the burst length is
8. Assume that 32 bits of data are transferred in parallel. If a 400-MHz clock is used, how much time
does it take to transfer:
(a) 32 bytes of data
(b) 64 bytes of data
What is the latency in each case?
Solution:
(a) It takes 5 + 8 = 13 clock cycles.
(b) It takes twice as long to transfer 64 bytes, because two independent 32-byte transfers have
to be made. The latency is the same, i.e. 38 ns.
Problem 3:
Give a critique of the following statement: “Using a faster processor chip results in a corresponding
increase in performance of a computer even if the main-memory speed remains the same.”
Solution:
A faster processor chip will result in increased performance, but the amount of increase will not
be directly proportional to the increase in processor speed, because the cache miss penalty will
remain the same if the main-memory speed is not improved.
Problem 4:
A block-set-associative cache consists of a total of 64 blocks, divided into 4-block sets. The main-
memory contains 4096 blocks, each consisting of 32 words. Assuming a 32-bit byte-addressable address-
space,
(a) how many bits are there in main-memory address
(b) how many bits are there in each of the Tag, Set, and Word fields?
Solution:
(a) 4096 blocks of 128 bytes each (32 words × 4 bytes per word) require 12 + 7 = 19 bits for the main-memory address.
(b) TAG field is 8 bits. SET field is 4 bits. WORD field is 7 bits.
Problem 5:
The cache block size in many computers is in the range of 32 to 128 bytes. What would be the main
advantages and disadvantages of making the size of cache blocks larger or smaller?
Solution:
Larger size:
→ Fewer misses if most of the data in the block are actually used.
→ Wasteful if much of the data are not used before the cache block is ejected from the cache.
Smaller size:
→ More misses.
Problem 6:
Consider a computer system in which the available pages in the physical memory are divided among
several application programs. The operating system monitors the page transfer activity and dynamically
adjusts the number of pages allocated to various programs. Suggest a suitable strategy that the
operating system can use to minimize the overall rate of page transfers.
Solution:
The operating system may increase the main-memory pages allocated to a program that has a
large number of page faults, using space previously allocated to a program with a few page faults
Problem 7:
In a computer with a virtual-memory system, the execution of an instruction may be interrupted by a
page fault. What state information has to be saved so that this instruction can be resumed later? Note
that bringing a new page into the main-memory involves a DMA transfer, which requires execution of
other instructions. Is it simpler to abandon the interrupted instruction and completely re-execute it later?
Can this be done?
Solution:
Continuing the execution of an instruction interrupted by a page fault requires saving the entire
state of the processor, which includes saving all registers that may have been affected by the
instruction as well as the control information that indicates how far the execution has progressed.
The alternative of re-executing the instruction from the beginning requires a capability to reverse
any changes that may have been caused by the partial execution of the instruction.
Problem 8:
When a program generates a reference to a page that does not reside in the physical main-memory,
execution of the program is suspended until the requested page is loaded into the main-memory from
a disk. What difficulties might arise when an instruction in one page has an operand in a different
page? What capabilities must the processor have to handle this situation?
Solution:
The problem is that a page fault may occur during intermediate steps in the execution of a
single instruction. The page containing the referenced location must be transferred from the
disk into the main-memory before execution can proceed.
Since the time needed for the page transfer (a disk operation) is very long, as compared to
instruction execution time, a context-switch will usually be made.
(A context-switch consists of preserving the state of the currently executing program, and
"switching" the processor to the execution of another program that is resident in the main-
memory.) The page transfer, via DMA, takes place while this other program executes. When the
page transfer is complete, the original program can be resumed.
Therefore, one of two features is needed in a system where the execution of an individual
instruction may be suspended by a page fault. The first possibility is to save the state of instruction
execution. This involves saving more information (temporary programmer-transparent registers,
etc.) than is needed when a program is interrupted between instructions. The second possibility is
to "unwind" the effects of the portion of the instruction completed when the page fault occurred,
and then execute the instruction from the beginning when the program is resumed.
Problem 9:
Magnetic disks are used as the secondary storage for program and data files in most virtual-memory
systems. Which disk parameter(s) should influence the choice of page size?
Solution:
The sector size should influence the choice of page size, because the sector is the smallest directly
addressable block of data on the disk that is read or written as a unit. Therefore, pages should
be some small integral number of sectors in size.
Problem 10:
A disk unit has 24 recording surfaces. It has a total of 14,000 cylinders. There is an average of 400
sectors per track. Each sector contains 512 bytes of data.
(a) What is the maximum number of bytes that can be stored in this unit?
(b) What is the data-transfer rate in bytes per second at a rotational speed of 7200 rpm?
(c) Using a 32-bit word, suggest a suitable scheme for specifying the disk address.
Solution:
(a) The maximum number of bytes that can be stored on this disk is 24 × 14000 × 400 × 512 =
68.8 × 10^9 bytes.
(b) The data-transfer rate is (400 × 512 × 7200)/60 = 24.58 × 10^6 bytes/s.
(c) Need 9 bits to identify a sector, 14 bits for a track, and 5 bits for a surface.
Thus, a possible scheme is to use address bits A8-0 for sector, A22-9 for track, and A27-23 for surface
identification. Bits A31-28 are not used.
MODULE 4: ARITHMETIC
CARRY-LOOKAHEAD ADDITIONS
• The logic expressions for si (sum) and ci+1 (carry-out) of stage i are
si = xi ⊕ yi ⊕ ci ……(1)    ci+1 = xiyi + xici + yici ……(2)
• Factoring (2) into
ci+1 = xiyi + (xi + yi)ci
we can write
ci+1 = Gi + Pici, where Gi = xiyi and Pi = xi + yi
• The expressions Gi and Pi are called the generate and propagate functions (Figure 9.4).
• If Gi = 1, then ci+1 = 1, independent of the input carry ci. This occurs when both xi and yi are 1.
The propagate function means that an input-carry will produce an output-carry when either xi = 1 or yi = 1.
• All Gi and Pi functions can be formed independently and in parallel in one logic-gate delay.
• Expanding ci in terms of i-1 subscripted variables and substituting into the ci+1 expression, we obtain
ci+1 = Gi + PiGi-1 + PiPi-1Gi-2 + … + PiPi-1…P1G0 + PiPi-1…P0c0
• Conclusion: Delay through the adder is 3 gate delays for all carry-bits &
4 gate delays for all sum-bits.
• Consider the design of a 4-bit adder. The carries can be implemented as
c1=G0+P0c0
c2=G1+P1G0+P1P0c0
c3=G2+P2G1+P2P1G0+P2P1P0c0
c4=G3+P3G2+P3P2G1+P3P2P1G0+P3P2P1P0c0
• The carries are implemented in the block labeled carry-lookahead logic. An adder implemented in this
form is called a Carry-Lookahead Adder.
• Limitation: If we try to extend the carry-lookahead adder for longer operands, we run into a problem
of gate fan-in constraints.
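• The 4-bit carry-lookahead equations above can be checked with a short Python sketch (bit-lists are given LSB first; this only illustrates the equations and is not a hardware description):

    def cla_4bit(x, y, c0=0):
        G = [x[i] & y[i] for i in range(4)]            # generate functions Gi = xi.yi
        P = [x[i] | y[i] for i in range(4)]            # propagate functions Pi = xi + yi
        c = [c0, 0, 0, 0, 0]
        c[1] = G[0] | (P[0] & c[0])
        c[2] = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c[0])
        c[3] = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c[0])
        c[4] = (G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1]) |
                (P[3] & P[2] & P[1] & G[0]) | (P[3] & P[2] & P[1] & P[0] & c[0]))
        s = [x[i] ^ y[i] ^ c[i] for i in range(4)]     # si = xi XOR yi XOR ci
        return s, c[4]                                 # sum bits and carry-out c4

    s, cout = cla_4bit([1, 1, 1, 0], [1, 0, 1, 0])     # 7 + 5
    print(s, cout)                                     # [0, 0, 1, 1] and 0, i.e. 1100 = 12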
ARRAY MULTIPLICATION
• The main component in each cell is a full adder (FA).
• The AND gate in each cell determines whether a multiplicand bit mj is added to the incoming partial-
product bit, based on the value of the multiplier bit qi (Figure 9.6).
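• A Python sketch of the bit products formed by the AND gates and the row-by-row accumulation of partial products (integer additions stand in for the adder cells of the array):

    def array_multiply(m, q):
        """m, q: lists of bits, LSB first. Returns the unsigned product as an integer."""
        product = 0
        for i, qi in enumerate(q):                       # one row of the array per multiplier bit
            row = [mj & qi for mj in m]                  # AND gates form the bit products mj.qi
            row_value = sum(bit << j for j, bit in enumerate(row))
            product += row_value << i                    # each row enters shifted left by i positions
        return product

    print(array_multiply([1, 0, 1, 1], [1, 1, 0, 1]))    # 13 x 11 = 143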
FAST MULTIPLICATION
BIT-PAIR RECODING OF MULTIPLIERS
• This method
→ is derived from the Booth algorithm &
→ reduces the number of summands by a factor of 2.
• Group the Booth-recoded multiplier bits in pairs (Figures 9.14 & 9.15).
• The pair (+1 -1) is equivalent to the pair (0 +1).
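• A minimal Python sketch of bit-pair recoding (the multiplier is given LSB first in 2's-complement form; each recoded digit multiplies the multiplicand by a power of 4):

    def bit_pair_recode(q_bits):
        """Return one signed digit in {-2,-1,0,+1,+2} per pair of multiplier bits."""
        digits = []
        for i in range(0, len(q_bits), 2):
            b_right = q_bits[i - 1] if i > 0 else 0      # implied 0 to the right of the LSB
            digit = -2 * q_bits[i + 1] + q_bits[i] + b_right
            digits.append(digit)                         # this digit has weight 2**i
        return digits

    # Multiplier 110110 (2's-complement value -10), written LSB first:
    print(bit_pair_recode([0, 1, 1, 0, 1, 1]))           # [-2, 2, -1] -> -2 + 2*4 - 1*16 = -10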
CARRY-SAVE ADDITION OF SUMMANDS
• In the carry-save array, each full adder in the first row takes three partial-product bits as inputs.
• Multiplication requires the addition of several summands.
• CSA speeds up the addition process.
• Consider the array for 4x4 multiplication shown in fig 9.16.
• The first row consists of just the AND gates that implement the bit products m3q0, m2q0, m1q0 and m0q0.
• The delay through the carry-save array is somewhat less than delay through the ripple-carry array.
This is because the S and C vector outputs from each row are produced in parallel in one full-adder delay.
• Consider the addition of many summands in fig 9.18.
• Group the summands in threes and perform carry-save addition on each of these groups in parallel to
generate a set of S and C vectors in one full-adder delay
• Group all of the S and C vectors into threes, and perform carry-save addition on them, generating a
further set of S and C vectors in one more full-adder delay
• Continue with this process until there are only two vectors remaining
• They can be added in a RCA or CLA to produce the desired product.
• When the number of summands is large, the time saved is proportionally much greater.
• Delay: one AND-gate delay + 2 gate delays per CSA level + the delay of the final CLA. For example,
adding the summands for 6-bit numbers with a CSA tree requires about 15 gate delays, whereas a 6x6
ripple-carry array requires 6(n-1) - 1 = 29 gate delays.
• In general, it takes approximately 1.7 log2 k - 1.7 levels of CSA to reduce k summands to two vectors.
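• A Python sketch of carry-save reduction: three summands are reduced to a sum vector S and a carry vector C in one full-adder delay, and the process is repeated until only two vectors remain.

    def carry_save(a, b, c):
        s = a ^ b ^ c                        # bit-wise sum, no carry propagation
        carry = (a & b) | (a & c) | (b & c)  # bit-wise carry
        return s, carry << 1                 # the carry vector is shifted left one position

    def csa_reduce(summands):
        while len(summands) > 2:
            a, b, c = summands[:3]
            s, carry = carry_save(a, b, c)
            summands = summands[3:] + [s, carry]
        return sum(summands)                 # final two vectors added with an ordinary RCA/CLA adder

    print(csa_reduce([13, 22, 7, 9, 40, 5])) # 96, the same as 13+22+7+9+40+5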
INTEGER DIVISION
• An n-bit positive-divisor is loaded into register M.
An n-bit positive-dividend is loaded into register Q at the start of the operation.
Register A is set to 0 (Figure 9.21).
• After division operation, the n-bit quotient is in register Q, and
the remainder is in register A.
NON-RESTORING DIVISION
• Procedure:
Step 1: Do the following n times
i) If the sign of A is 0, shift A and Q left one bit position and subtract M from A;
otherwise, shift A and Q left and add M to A (Figure 9.23).
ii) Now, if the sign of A is 0, set q0 to 1; otherwise set q0 to 0.
Step 2: If the sign of A is 1, add M to A (restore).
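• A direct Python sketch of the non-restoring procedure above for n-bit positive operands (A and Q are modeled as integers; A:Q behaves as one left-shifting register pair):

    def nonrestoring_divide(dividend, divisor, n):
        A, Q, M = 0, dividend, divisor
        for _ in range(n):
            msb_q = Q >> (n - 1)                         # bit shifted out of Q into A
            if A >= 0:                                   # sign of A is 0
                A = ((A << 1) | msb_q) - M               # shift A,Q left, then A = A - M
            else:                                        # sign of A is 1
                A = ((A << 1) | msb_q) + M               # shift A,Q left, then A = A + M
            Q = ((Q << 1) & ((1 << n) - 1)) | (1 if A >= 0 else 0)   # set q0
        if A < 0:
            A += M                                       # Step 2: final restore
        return Q, A                                      # quotient, remainder

    print(nonrestoring_divide(0b10101, 0b00101, 5))      # 21 / 5 -> (4, 1), as in Problem 7 below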
RESTORING DIVISION
• Procedure: Do the following n times
1) Shift A and Q left one binary position (Figure 9.22).
2) Subtract M from A, and place the answer back in A
3) If the sign of A is 1, set q0 to 0 and add M back to A (restore A).
If the sign of A is 0, set q0 to 1 and no restoring is done.
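• A direct Python sketch of the restoring procedure above (A, Q and M as in Figure 9.21; n-bit positive operands):

    def restoring_divide(dividend, divisor, n):
        A, Q, M = 0, dividend, divisor
        for _ in range(n):
            A = (A << 1) | (Q >> (n - 1))        # 1) shift A and Q left one position
            Q = (Q << 1) & ((1 << n) - 1)
            A = A - M                            # 2) subtract M from A
            if A < 0:                            # 3) sign of A is 1: q0 = 0 and restore A
                A = A + M
            else:                                #    sign of A is 0: q0 = 1, no restoring
                Q = Q | 1
        return Q, A                              # quotient in Q, remainder in A

    print(restoring_divide(0b10101, 0b00100, 5)) # 21 / 4 -> (5, 1), as in Problem 6 below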
NORMALIZATION
• When the binary point is placed to the right of the first (nonzero) significant bit, the number is
said to be normalized.
• If a number is not normalized, it can always be put in normalized form by shifting the fraction and
adjusting the exponent. As computations proceed, a number that does not fall in the representable range
of normal numbers might be generated.
• In single precision, it requires an exponent less than -126 (underflow) or greater than +127
(overflow). Both are exceptions that need to be considered.
SPECIAL VALUES
• The end values 0 and 255 of the excess-127 exponent E’ are used to represent special values.
• When E’=0 and the mantissa fraction m is zero, the value exact 0 is represented.
• When E’=255 and M=0, the value ∞ is represented, where ∞ is the result of dividing a normal
number by zero.
• When E'=0 and M≠0, denormal numbers are represented. Their value is ±0.M × 2^-126.
• When E'=255 and M≠0, the value represented is called Not a Number (NaN). A NaN is the result of
performing an invalid operation such as 0/0 or √-1.
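• The cases above can be summarized with a small Python sketch that interprets the three single-precision fields (written for illustration only; Python floats are used to show the resulting values):

    def decode_single(sign, e_biased, mantissa23):
        if e_biased == 0:
            if mantissa23 == 0:
                return 0.0                                         # exact zero
            return (-1)**sign * (mantissa23 / 2**23) * 2.0**-126   # denormal: 0.M x 2^-126
        if e_biased == 255:
            return float('nan') if mantissa23 else (-1)**sign * float('inf')
        return (-1)**sign * (1 + mantissa23 / 2**23) * 2.0**(e_biased - 127)  # normal: 1.M x 2^(E'-127)

    print(decode_single(0, 137, 0b00111010110010000000000))        # 1259.125 (see Problem 8 below)
    print(decode_single(0, 0, 0), decode_single(1, 255, 0))        # 0.0 and -inf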
Problem 1:
Represent the decimal values 5, -2, 14, -10, 26, -19, 51 and -43 as signed 7-bit numbers in the
following binary formats:
(a) sign-and-magnitude
(b) 1’s-complement
(c) 2’s-complement
Solution:
The three binary representations are given as:
Problem 2:
(a) Convert the following pairs of decimal numbers to 5-bit 2’s-complement numbers, then add them.
State whether or not overflow occurs in each case.
a) 5 and 10 b) 7 and 13
c) –14 and 11 d) –5 and 7
e) –3 and –8
(b) Repeat part (a) for the subtract operation, where the second number of each pair is to be
subtracted from the first number. State whether or not overflow occurs in each case.
Solution:
(a)
(b) To subtract the second number, form its 2's-complement and add it to the first number.
Problem 3:
Perform following operations on the 6-bit signed numbers using 2's complement representation
system. Also indicate whether overflow has occurred.
Solution:
Problem 4:
Perform signed multiplication of following 2’s complement numbers using Booth’s algorithm.
(a) A=010111 and B=110110 (b) A=110011 and B=101100
(c) A=110101 and B=011011 (d) A=001111 and B=001111
(e) A=10100 and B=10101 (f) A=01110 and B=11000
Solution:
Problem 5:
Perform signed multiplication of following 2’s complement numbers using bit-pair recoding method.
(a) A=010111 and B=110110 (b) A=110011 and B=101100
(c) A=110101 and B=011011 (d) A=001111 and B=001111
Solution:
Problem 6:
Given A=10101 and B=00100, perform A/B using restoring division algorithm.
Solution:
Problem 7:
Given A=10101 and B=00101, perform A/B using non-restoring division algorithm.
Solution:
Problem 8:
Represent 1259.12510 in single precision and double precision formats
Solution:
Step 1: Convert decimal number to binary format
1259(10) =10011101011(2)
Fractional Part
0.125(10) =0.001
Binary number = 10011101011+0.001
= 10011101011.001
Step 2: Normalize the number
10011101011.001 = 1.0011101011001 x 2^10
Step 3: Single precision format:
For a given number S=0, E=10 and M=0011101011001
Bias for single precision format is = 127
E’= E+127 = 10+127 = 137(10)
= 10001001(2)
Number in single precision format is given as
0 10001001 00111010110010000000000
Step 4: Double precision format:
Bias for double precision is 1023, so E' = E + 1023 = 10 + 1023 = 1033(10) = 10000001001(2)
Number in double precision format is given as
0 10000001001 0011101011001 followed by 0s (mantissa padded with 0s to 52 bits)
COMPUTER ORGANIZATION & ARCHITECTURE MODULE 5
PERFORMING AN ARITHMETIC OR LOGIC OPERATION
• The ALU performs arithmetic operations on the 2 operands applied to its A and B inputs.
• One of the operands is output of MUX;
And, the other operand is obtained directly from processor-bus.
• The result (produced by the ALU) is stored temporarily in register Z.
• The sequence of operations for [R3] ← [R1] + [R2] is as follows:
1) R1out, Yin
2) R2out, SelectY, Add, Zin
3) Zout, R3in
• Instruction execution proceeds as follows:
Step 1 --> Contents from register R1 are loaded into register Y.
Step2 --> Contents from Y and from register R2 are applied to the A and B inputs of ALU;
Addition is performed &
Result is stored in the Z register.
Step 3 --> The contents of Z register is stored in the R3 register.
• The signals are activated for the duration of the clock cycle corresponding to that step. All other
signals are inactive.
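• Illustrative Python sketch (register names and values are assumed) that mimics the three control steps listed above for [R3] ← [R1] + [R2]:

    regs = {'R1': 25, 'R2': 17, 'R3': 0, 'Y': 0, 'Z': 0}

    regs['Y'] = regs['R1']                 # Step 1: R1out, Yin
    regs['Z'] = regs['Y'] + regs['R2']     # Step 2: R2out, SelectY, Add, Zin
    regs['R3'] = regs['Z']                 # Step 3: Zout, R3in

    print(regs['R3'])                      # 42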
CONTROL-SIGNALS OF MDR
• The MDR register has 4 control-signals (Figure 7.4):
1) MDRin & MDRout control the connection to the internal processor data bus &
2) MDRinE & MDRoutE control the connection to the memory Data bus.
• MAR register has 2 control-signals.
1) MARin controls the connection to the internal processor address bus &
2) MARout controls the connection to the memory address bus.
FETCHING A WORD FROM MEMORY
• To fetch instruction/data from memory, processor transfers required address to MAR.
At the same time, processor issues Read signal on control-lines of memory-bus.
• When requested-data are received from memory, they are stored in MDR. From MDR, they are
transferred to other registers.
• The response time of each memory access varies (based on cache miss, memory-mapped I/O). To
accommodate this, MFC is used. (MFC Memory Function Completed).
• MFC is a signal sent from addressed-device to the processor. MFC informs the processor that the
requested operation has been completed by addressed-device.
• Consider the instruction Move (R1),R2. The sequence of steps is (Figure 7.5):
1) R1out, MARin, Read ;desired address is loaded into MAR & Read command is issued.
2) MDRinE, WMFC ;load MDR from memory-bus & Wait for MFC response from memory.
3) MDRout, R2in ;load R2 from MDR.
where WMFC = control-signal that causes the processor's control circuitry to wait for the arrival of
the MFC signal.
EXECUTION OF A COMPLETE INSTRUCTION
• Consider the instruction Add (R3),R1 which adds the contents of a memory-location pointed by R3 to
register R1. Executing this instruction requires the following actions:
1) Fetch the instruction.
2) Fetch the first operand.
3) Perform the addition &
4) Load the result into R1.
BRANCHING INSTRUCTIONS
• Control sequence for an unconditional branch instruction is as follows:
MULTIPLE BUS ORGANIZATION
• Disadvantage of Single-bus organization: Only one data-word can be transferred over the bus in
a clock cycle. This increases the steps required to complete the execution of the instruction
Solution: To reduce the number of steps, most processors provide multiple internal-paths. Multiple
paths enable several transfers to take place in parallel.
• As shown in fig 7.8, three buses can be used to connect registers and the ALU of the processor.
• All general-purpose registers are grouped into a single block called the Register File.
• Register-file has 3 ports:
1) Two output-ports allow the contents of 2 different registers to be simultaneously placed on
buses A & B.
2) Third input-port allows data on bus C to be loaded into a third register during the same
clock-cycle.
• Buses A and B are used to transfer source-operands to A & B inputs of ALU.
• The result is transferred to destination over bus C.
• Incrementer Unit is used to increment PC by 4.
COMPLETE PROCESSOR
• This has separate processing-units to deal with integer data and floating-point data.
Integer Unit → To process integer data (Figure 7.14).
Floating-Point Unit → To process floating-point data.
• Data-Cache is inserted between these processing-units & main-memory.
The integer and floating unit gets data from data cache.
• Instruction-Unit fetches instructions
→ from an instruction-cache or
→ from main-memory when desired instructions are not already in cache.
• Processor is connected to system-bus &
hence to the rest of the computer by means of a Bus Interface.
• Using separate caches for instructions & data is common practice in many processors today.
• A processor may include several units of each type to increase the potential for concurrent
operations.
• The 80486 processor has a single 8-Kbyte cache for both instructions and data,
whereas the Pentium processor has two separate 8-Kbyte caches, one for instructions and one for data.
Note:
To execute instructions, the processor must have some means of generating the control-signals. There
are two approaches for this purpose:
1) Hardwired control and 2) Microprogrammed control.
HARDWIRED CONTROL
• Hardwired control is a method of control unit design (Figure 7.11).
• The control-signals are generated by using logic circuits such as gates, flip-flops, decoders etc.
• Decoder/Encoder Block is a combinational-circuit that generates required control-outputs
depending on state of all its inputs.
• Instruction Decoder
It decodes the instruction loaded in the IR.
If IR is an 8-bit register, then the instruction decoder generates 2^8 = 256 output lines, one for each
instruction.
It consists of a separate output-lines INS1 through INSm for each machine instruction.
According to code in the IR, one of the output-lines INS1 through INSm is set to 1, and all
other lines are set to 0.
• Step-Decoder provides a separate signal line for each step in the control sequence.
• Encoder
It gets the input from instruction decoder, step decoder, external inputs and condition codes.
It uses all these inputs to generate individual control-signals: Yin, PCout, Add, End and so on.
For example (Figure 7.12), Zin = T1 + T6.ADD + T4.BR
;This signal is asserted during time-slot T1 for all instructions,
during T6 for an Add instruction, &
during T4 for an unconditional branch instruction (see the sketch at the end of this section).
• When RUN=1, counter is incremented by 1 at the end of every clock cycle.
When RUN=0, counter stops counting.
• After execution of each instruction, end signal is generated. End signal resets step counter.
• Sequence of operations carried out by this machine is determined by wiring of logic circuits, hence
the name “hardwired”.
• Advantage: Can operate at high speed.
• Disadvantages:
1) Since no. of instructions/control-lines is often in hundreds, the complexity of control unit is
very high.
2) It is costly and difficult to design.
3) The control unit is inflexible because it is difficult to change the design.
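• Illustrative Python sketch of the encoder equation Zin = T1 + T6.ADD + T4.BR mentioned above (the way the time-step and instruction are represented here is an assumption for illustration only):

    def z_in(T, instruction):
        """T: current time-step number; instruction: 'ADD', 'BR' or any other mnemonic."""
        return int(T == 1
                   or (T == 6 and instruction == 'ADD')
                   or (T == 4 and instruction == 'BR'))

    print(z_in(1, 'SUB'), z_in(6, 'ADD'), z_in(4, 'BR'), z_in(5, 'ADD'))  # 1 1 1 0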
HARDWIRED CONTROL VS MICROPROGRAMMED CONTROL
Definition:
Hardwired → Control-signals are generated by using gates, flip-flops, decoders and other digital circuits.
Microprogrammed → Control-signals are generated by using a memory called the control store (CS), which contains the control-signals.
Speed:
Hardwired → Fast. Microprogrammed → Slow.
Control functions:
Hardwired → Implemented in hardware. Microprogrammed → Implemented in software.
Flexibility:
Hardwired → Not flexible; to accommodate new system specifications or new instructions, a redesign is required.
Microprogrammed → More flexible; new system specifications or new instructions can be accommodated easily.
Ability to handle large or complex instruction sets:
Hardwired → Difficult. Microprogrammed → Easier.
Ability to support operating systems & diagnostic features:
Hardwired → Very difficult. Microprogrammed → Easy.
Design process:
Hardwired → Complicated. Microprogrammed → Orderly and systematic.
Applications:
Hardwired → Mostly RISC microprocessors. Microprogrammed → Mainframes, some microprocessors.
Instruction-set size:
Hardwired → Usually under 100 instructions. Microprogrammed → Usually over 100 instructions.
ROM size:
Hardwired → Not applicable. Microprogrammed → 2K to 10K of 20-400 bit microinstructions.
Chip area efficiency:
Hardwired → Uses the least area. Microprogrammed → Uses more area.
MICROPROGRAMMED CONTROL
• Microprogramming is a method of control unit design (Figure 7.16).
• Control-signals are generated by a program similar to machine language programs.
• Control Word (CW) is a word whose individual bits represent various control-signals (like Add, PCin).
• Each of the control-steps in control sequence of an instruction defines a unique combination of 1s &
0s in CW.
• Individual control-words in microroutine are referred to as microinstructions (Figure 7.15).
• A sequence of CWs corresponding to control-sequence of a machine instruction constitutes the
microroutine.
• The microroutines for all instructions in the instruction-set of a computer are stored in a special
memory called the Control Store (CS).
• Control-unit generates control-signals for any instruction by sequentially reading CWs of
corresponding microroutine from CS.
• µPC is used to read CWs sequentially from CS. (µPC Microprogram Counter).
• Every time a new instruction is loaded into IR, the output of the Starting Address Generator is loaded into µPC.
• Then, µPC is automatically incremented by clock;
causing successive microinstructions to be read from CS.
Hence, control-signals are delivered to various parts of processor in correct sequence.
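The sequencing just described can be pictured as a simple fetch-and-issue loop over the control store. The C sketch below is only a software analogy; the control-word width, store size and the End-bit position are assumptions made for illustration:

#include <stdint.h>
#include <stdio.h>

#define CS_SIZE 1024                    /* assumed control-store size                  */
#define END_BIT (1ull << 63)            /* assumed position of the End signal          */

typedef uint64_t control_word_t;        /* one bit per control-signal (assumed width)  */

static control_word_t control_store[CS_SIZE];   /* microroutines for all instructions */
static unsigned start_addr[256];                 /* starting-address generator table   */

static void issue_signals(control_word_t cw)     /* stand-in for driving control lines */
{
    printf("control word: %016llx\n", (unsigned long long)cw);
}

void run_microroutine(unsigned ir)
{
    unsigned upc = start_addr[ir & 0xFF];          /* uPC loaded when IR changes        */
    for (;;) {
        control_word_t cw = control_store[upc++]; /* uPC incremented every clock cycle */
        issue_signals(cw);                        /* signals delivered in sequence     */
        if (cw & END_BIT)                         /* End microinstruction reached      */
            break;
    }
}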
Advantages
• It simplifies the design of the control unit. Thus it is both cheaper and less error-prone to implement.
• Control functions are implemented in software rather than hardware.
• The design process is orderly and systematic.
• More flexible; it can be changed to accommodate new system specifications or to correct design
errors quickly and cheaply.
• Complex functions such as floating-point arithmetic can be realized efficiently.
Disadvantages
• A microprogrammed control unit is somewhat slower than a hardwired control unit, because time is
required to access the microinstructions from the control memory (control store).
• The flexibility is achieved at some extra hardware cost due to the control memory and its access
circuitry.
ORGANIZATION OF MICROPROGRAMMED CONTROL UNIT TO SUPPORT CONDITIONAL
BRANCHING
• Drawback of previous Microprogram control:
It cannot handle the situation when the control unit is required to check the status of the
condition codes or external inputs to choose between alternative courses of action.
Solution:
Use conditional branch microinstruction.
• In case of conditional branching, microinstructions specify which of the external inputs or condition-
codes should be checked as the condition for the branch to take place.
• Starting and Branch Address Generator Block loads a new address into µPC when a
microinstruction instructs it to do so (Figure 7.18).
• To allow implementation of a conditional branch, inputs to this block consist of
→ external inputs and condition-codes &
→ contents of IR.
• µPC is incremented every time a new microinstruction is fetched from microprogram memory except
in following situations:
1) When a new instruction is loaded into IR, µPC is loaded with starting-address of microroutine
for that instruction.
2) When a Branch microinstruction is encountered and branch condition is satisfied, µPC is
loaded with branch-address.
3) When an End microinstruction is encountered, µPC is loaded with address of first CW in
microroutine for instruction fetch cycle.
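The three exceptions above amount to a small selection rule applied on every clock cycle when choosing the next value of µPC. A C sketch (all parameter names are illustrative, not from the text):

/* Sketch of the next-address choice for the microprogram counter (uPC). */
unsigned next_upc(unsigned upc,
                  int new_instr_in_ir,  unsigned starting_addr,
                  int branch_uinst,     int branch_taken, unsigned branch_addr,
                  int end_uinst,        unsigned fetch_routine_addr)
{
    if (new_instr_in_ir)                 /* case 1: new instruction loaded into IR */
        return starting_addr;
    if (branch_uinst && branch_taken)    /* case 2: branch condition satisfied     */
        return branch_addr;
    if (end_uinst)                       /* case 3: End microinstruction           */
        return fetch_routine_addr;       /*         back to instruction fetch      */
    return upc + 1;                      /* otherwise: sequential microinstruction */
}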
MICROINSTRUCTIONS
• A simple way to structure microinstructions is to assign one bit position to each control-signal
required in the CPU.
• There are 42 signals and hence each microinstruction will have 42 bits.
• Drawbacks of this simple bit-per-signal approach:
1) Assigning individual bits to each control-signal results in long microinstructions because
the number of required signals is usually large.
2) Available bit-space is poorly used because
only a few bits are set to 1 in any given microinstruction.
• Solution: Signals can be grouped because
1) Most signals are not needed simultaneously.
2) Many signals are mutually exclusive. E.g. only 1 function of ALU can be activated at a time.
For ex: Gating signals: IN and OUT signals (Figure 7.19).
Control-signals: Read, Write.
ALU signals: Add, Sub, Mul, Div, Mod.
• Grouping control-signals into fields requires a little more hardware because
decoding-circuits must be used to decode bit patterns of each field into individual control-signals.
• Advantage: This method results in a smaller control-store (only 20 bits are needed to store the
patterns for the 42 signals).
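The saving can be illustrated with two alternative microinstruction layouts in C. The field widths below are only an example of the idea of grouping mutually exclusive signals (for instance, a 4-bit ALU field can encode up to 16 ALU operations); they are not the actual encoding of Figure 7.19:

#include <stdint.h>

/* Unencoded format: one bit per control-signal, i.e. 42 bits per microinstruction. */
typedef struct {
    uint64_t signals;          /* bits 0..41, one bit per control-signal            */
} unencoded_uinst_t;

/* Grouped format: mutually exclusive signals share an encoded field; small
   decoders expand each field back into individual control-signals.                 */
typedef struct {
    unsigned gate_in  : 4;     /* which register's IN gate is enabled               */
    unsigned gate_out : 4;     /* which register is gated onto the bus              */
    unsigned alu_op   : 4;     /* Add, Sub, Mul, Div, ... (one at a time)           */
    unsigned mem_ctrl : 2;     /* Read, Write, or none                              */
    unsigned misc     : 6;     /* WMFC, End and other independent signals           */
} grouped_uinst_t;             /* about 20 bits instead of 42                       */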
TECHNIQUES OF GROUPING OF CONTROL-SIGNALS
• The grouping of control-signals can be done by using either
1) Vertical organization or
2) Horizontal organization.
MICROPROGRAM SEQUENCING
• The task of microprogram sequencing is done by microprogram sequencer.
• Two important factors must be considered while designing the microprogram sequencer:
1) The size of the microinstruction &
2) The address generation time.
• The size of the microinstruction should be minimum so that the size of control memory required to
store microinstructions is also less.
• This reduces the cost of control memory.
• With less address generation time, a microinstruction can be executed in less time, resulting in better
throughput.
• During execution of a microprogram the address of the next microinstruction to be executed has 3
sources:
1) Determined by instruction register.
2) Next sequential address &
3) Branch.
• Microinstructions can be shared using microinstruction branching.
• Two concerns arise here:
1) Having a separate microroutine for each machine instruction (and each addressing-mode
combination) results in a large total number of microinstructions and a large control-store.
2) Sharing microinstructions through branching reduces the size of the control-store, but execution
time becomes longer because of the time needed to carry out the required branches.
• Consider the instruction Add src,Rdst, which adds the source-operand to the contents of Rdst and
places the sum in Rdst.
• Let source-operand can be specified in following addressing modes (Figure 7.20):
a) Indexed
b) Autoincrement
c) Autodecrement
d) Register indirect &
e) Register direct
• Each box in the chart corresponds to a microinstruction that controls the transfers and operations
indicated within the box.
• Each microinstruction is located at the address indicated by the octal number written beside its box (e.g. 001, 002).
BRANCH ADDRESS MODIFICATION USING BIT-ORING
• The branch address is determined by ORing a particular bit or bits with the current address of the
microinstruction.
• E.g.: If the current address is 170 and the branch address is 171, then the branch address can be
generated by ORing 1 into the low-order bit of the current address.
• Consider the point labeled in the figure. At this point, it is necessary to choose between direct and
indirect addressing modes.
• If indirect-mode is specified in the instruction, then the microinstruction in location 170 is performed
to fetch the operand from the memory.
If direct-mode is specified, this fetch must be bypassed by branching immediately to location 171.
• The most efficient way to bypass microinstruction 170 is to have bit-ORing of
→ current address 170 &
→ branch address 171.
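In software terms the bypass reduces to OR-ing a single bit into the address, as described above; a C sketch (octal constants, flag name illustrative):

/* Sketch of branch-address modification by bit-ORing.
   Location 0170 (octal) fetches the operand for the indirect mode; for the
   direct mode it is bypassed by forcing the low-order address bit to 1. */
unsigned next_microaddress(unsigned current, int indirect_mode)
{
    unsigned next = current;          /* e.g. 0170 */
    if (!indirect_mode)
        next |= 01;                   /* OR in bit 0: 0170 -> 0171 */
    return next;
}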
Detailed Examination of Add (Rsrc)+,Rdst
• Consider Add (Rsrc)+,Rdst, which adds the operand at the memory-location pointed to by Rsrc to the
contents of Rdst, stores the sum in Rdst, and finally increments Rsrc by 4 (i.e. auto-increment mode).
• In bits 10 and 9, the bit-patterns 11, 10, 01 and 00 denote the indexed, auto-decrement, auto-increment
and register modes respectively. For each of these modes, bit 8 is used to specify the indirect version.
• The processor has 16 registers that can be used for addressing purposes; each specified using a 4-
bit-code (Figure 7.21).
• There are 2 stages of decoding:
1) The microinstruction field must be decoded to determine that an Rsrc or Rdst register is
involved.
2) The decoded output is then used to gate the contents of the Rsrc or Rdst fields in the IR into
a second decoder, which produces the gating-signals for the actual registers R0 to R15.
MICROINSTRUCTIONS WITH NEXT-ADDRESS FIELDS
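In this scheme each microinstruction carries the address of its successor, so ordinary sequencing and branching are handled uniformly and no separate µPC-increment path is needed. A minimal C sketch of such a format (the field sizes are assumptions for illustration):

#include <stdint.h>

/* Sketch of a microinstruction with an explicit next-address field.
   Widths are assumed: 42 control bits plus a control-store address. */
typedef struct {
    uint64_t control_bits;     /* one bit per control-signal (42 used)               */
    uint16_t next_address;     /* control-store address of the next microinstruction */
} next_addr_uinst_t;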
PREFETCHING MICROINSTRUCTIONS
• Disadvantage of Microprogrammed Control: Slower operating-speed because of the time it takes
to fetch microinstructions from the control-store.
Solution: Faster operation is achieved if the next microinstruction is pre-fetched while the
current one is being executed. (The prefetched microinstruction is occasionally the wrong one,
e.g. after a branch, and must then be discarded and refetched, which slightly complicates the control circuitry.)
Emulation
• The main function of microprogrammed control is to provide a means for simple, flexible and
relatively inexpensive execution of machine instructions.
• Its flexibility in using a machine's resources allows diverse classes of instructions to be implemented.
• Suppose we add to the instruction-repertoire of a given computer M1 an entirely new set of
instructions that is in fact the instruction-set of a different computer M2.
• Programs written in the machine language of M2 can then be run on computer M1, i.e. M1
emulates M2.
• Emulation allows us to replace obsolete equipment with more up-to-date machines.
• If the replacement computer fully emulates the original one, then no software changes have to be
made to run existing programs.
• Emulation is easiest when the machines involved have similar architectures.
Problem 1:
Why is the Wait-for-memory-function-completed step needed for reading from or writing to the main
memory?
Solution:
The WMFC step is needed to synchronize the operation of the processor and the main memory.
Problem 2:
For the single-bus organization, write the complete control sequence for the instruction: Move (R1), R2
Solution:
1) PCout, MARin, Read, Select4, Add, Zin
2) Zout, PCin, Yin, WMFC
3) MDRout, IRin
4) R1out, MARin, Read
5) MDRinE, WMFC
6) MDRout, R2in, End
Problem 3:
Write the sequence of control steps required for the single bus organization in each of the following
instructions:
a) Add the immediate number NUM to register R1.
b) Add the contents of memory-location NUM to register R1.
c) Add the contents of the memory-location whose address is at memory-location NUM to
register R1.
Assume that each instruction consists of two words. The first word specifies the operation and the
addressing mode, and the second word contains the number NUM.
Solution:
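A possible control sequence for part (a), following the single-bus conventions used earlier in these notes (a sketch only, assuming the second word of the instruction is fetched like any other word; not an official solution):
1) PCout, MARin, Read, Select4, Add, Zin
2) Zout, PCin, Yin, WMFC
3) MDRout, IRin
4) PCout, MARin, Read, Select4, Add, Zin
5) Zout, PCin, Yin, WMFC
6) MDRout, Yin
7) R1out, SelectY, Add, Zin
8) Zout, R1in, End
Parts (b) and (c) differ only in how the operand reaches the processor: (b) uses NUM as a memory address for one further Read, and (c) performs two memory Reads (address, then operand) before the addition.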
Problem 4:
Show the control steps for the Branch-on-Negative instruction for a processor with the three-bus
organization of the data path.
Solution:
MICROWAVE OVEN
• Microwave-oven is one of the examples of embedded-system.
• This appliance is based on magnetron power-unit that generates the microwaves used to heat food.
• When turned-on, the magnetron generates its maximum power-output.
Lower power-levels can be obtained by turning the magnetron on & off for controlled time-intervals.
• Cooking Options include:
→ Manual selection of the power-level and cooking-time.
→ Manual selection of the sequence of different cooking-steps.
→ Automatic defrosting of food by specifying the weight.
• Display (or Monitor) can show following information:
→ Time-of-day clock.
→ Decrementing clock-timer while cooking.
→ Information-messages to the user.
• I/O Capabilities include:
→ Input-keys that comprise a 0 to 9 number pad.
→ Function-keys such as Start, Stop, Reset, Power-level etc.
→ Visual output in the form of an LCD.
→ Small speaker that produces the beep-tone.
• Computational Tasks executed are:
→ Maintaining the time-of-day clock.
→ Determining the actions needed for the various cooking-options.
→ Generating the control-signals needed to turn on/off devices.
→ Generating display information.
• Non-volatile ROM is used to store the program required to implement the desired actions.
So, the program will not be lost when the power is turned off (Figure 10.1).
• Most important requirement: The microcontroller must have sufficient I/O capability.
Parallel I/O Ports are used for dealing with the external I/O signals.
Basic I/O Interfaces are used to connect to the rest of the system.
DIGITAL CAMERA
• Digital Camera is one of the examples of embedded system.
• An array of Optical Sensors is used to capture images (Figure 10.2).
• The optical-sensors convert light into electrical charge.
HOME TELEMETRY (DISPLAY TELEPHONE)
• Home Telemetry is one of the examples of embedded system.
• The display-telephone has an embedded processor which enables a remote access to other devices in
the home.
• Display telephone can perform following functions:
1) Communicate with a computer-controlled home security-system.
2) Set a desired temperature to be maintained by an air conditioner.
3) Set start-time, cooking-time & temperature for food in the microwave-oven.
4) Read the electricity, gas, and water meters.
• All of this is easily implementable if each of these devices is controlled by a microcontroller.
• A link (wired or wireless) has to be provided between
1) Device microcontroller &
2) Microprocessor in the telephone.
• Using signaling from a remote location to observe/control state of device is referred to as telemetry.
MICROCONTROLLER CHIPS FOR EMBEDDED APPLICATIONS
• Processor Core may be a basic version of a commercially available microprocessor (Figure 10.3).
• A well-known, popular microprocessor architecture must be chosen. This is because the design of new
products is facilitated by
→ numerous CAD tools
→ good examples &
→ large amount of knowledge/experience.
• Memory-Unit must be included on the microcontroller-chip.
• The memory-size must be sufficient to satisfy the memory-requirements found in small applications.
• Some memory should be of RAM type to hold the data that change during computations.
Some memory should be of Read-Only type to hold the software.
This is because an embedded system usually does not include a magnetic-disk.
• A field-programmable type of ROM storage must be provided to allow cost-effective use.
For example: EEPROM and Flash memory.
• I/O ports are provided for both parallel and serial interfaces.
• Parallel and Serial Interfaces allow easy implementation of standard I/O connections.
• Timer Circuit can be used
→ to generate control-signals at programmable time intervals &
→ for event-counting purposes.
• An embedded system may include some analog devices.
• ADC & DAC are used to convert analog signals into digital representations, and vice versa.
PARALLEL I/O INTERFACE
• Each parallel port has an associated 8-bit DDR (Data Direction Register) (Figure 10.4).
• DDR can be used to configure individual data lines as either input or output.
• If the data direction flip-flop contains a 0, then Port pin PAi is treated as an input (Figure 10.5).
If the data direction flip-flop contains a 1, then Port pin PAi is treated as an output.
• Activation of the control-signal Read_Port places the logic value on the port-pin onto the data line Di.
Activation of the control-signal Write_Port places the value loaded into the output data flip-flop onto the port-pin.
• Addressable Registers are (Figure 10.6):
1) Input registers (PAIN for port A, PBIN for port B)
2) Output registers (PAOUT for port A, PBOUT for port B)
3) Direction registers (PADIR for port A, PBDIR for port B)
4) Status-register (PSTAT) &
5) Control register (PCONT).
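As a rough illustration of how these registers might be used from software, the C sketch below configures port A with its upper four pins as outputs and lower four as inputs. Only the register names come from the notes; the memory-mapped addresses are invented for the example:

#include <stdint.h>

/* Hypothetical memory-mapped register addresses -- for illustration only. */
#define PADIR  (*(volatile uint8_t *)0xFFFFFFE0)   /* data-direction register */
#define PAOUT  (*(volatile uint8_t *)0xFFFFFFE1)   /* output register         */
#define PAIN   (*(volatile uint8_t *)0xFFFFFFE2)   /* input register          */

void port_a_demo(void)
{
    PADIR = 0xF0;                      /* 1 = output: PA7-PA4 outputs, PA3-PA0 inputs */
    PAOUT = 0x50;                      /* drive a pattern on the output pins          */

    uint8_t switches = PAIN & 0x0F;    /* read the four input pins                    */
    (void)switches;                    /* value would be used by the application      */
}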
SERIAL I/O INTERFACE
• The serial interface provides the UART capability to transfer data (Figure 10.7).
(UART Universal Asynchronous Receiver/Transmitter).
• Double buffering is
→ used in both the transmit- and receive-paths.
→ needed to handle bursts in I/O transfers correctly.
• Input data are read from the Receive-buffer.
Output data are loaded into the Transmit-buffer.
• Status Register (SSTAT) provides information about the current status of
i) Receive-units and
ii) Transmit-units.
• Bit SSTAT0 = 1 When there are new data in the receive-buffer.
Bit SSTAT0 = 0 When the processor accepts the data by reading the receive-buffer.
• SSTAT1 = 1 When the data in transmit-buffer are accepted by the connected-device.
SSTAT1 = 0 When the processor writes data into transmit-buffer.
(SSTAT0 & SSTAT1 similar to SIN & SOUT)
• SSTAT2 = 1 if an error occurs during the receive process.
• The status-register also contains the interrupt flags.
• SSTAT4 =1 When the receive-buffer becomes full and the receiver-interrupt is enabled.
SSTAT5 = 1 When the transmit-buffer becomes empty & the transmitter-interrupt is enabled.
• Control Register (SCONT) is used to hold the interrupt-enable bits.
• If a bit in SCONT6−4 is set to 1,
then the corresponding interrupt is enabled.
Otherwise, the corresponding interrupt is disabled.
• The control register also indicates how the transmit clock is generated.
• If SCONT0 = 0.
Then, the transmit clock is the same as the system (processor) clock.
Otherwise, a lower frequency transmit clock is obtained using a clock-dividing circuit.
• Clock-divisor register (DIV) divides system-clock signal to generate the serial transmission clock.
• The counter generates a clock signal whose frequency is equal to:
(Frequency of system clock) / (Contents of DIV register)
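Putting the SSTAT bits together, a polled echo loop might look like the C sketch below. The bit meanings follow the description above; the register names RBUF/TBUF and all addresses are assumptions made for the example:

#include <stdint.h>

/* Hypothetical memory-mapped register addresses -- for illustration only. */
#define RBUF   (*(volatile uint8_t *)0xFFFFFFE4)   /* receive-buffer         */
#define TBUF   (*(volatile uint8_t *)0xFFFFFFE5)   /* transmit-buffer        */
#define SSTAT  (*(volatile uint8_t *)0xFFFFFFE6)   /* serial status register */

void serial_echo(void)
{
    for (;;) {
        while ((SSTAT & 0x01) == 0)    /* wait until SSTAT0 = 1: new data received    */
            ;
        uint8_t ch = RBUF;             /* reading the receive-buffer clears SSTAT0    */

        while ((SSTAT & 0x02) == 0)    /* wait until SSTAT1 = 1: transmit-buffer free */
            ;
        TBUF = ch;                     /* writing the transmit-buffer clears SSTAT1   */
    }
}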
COUNTER/TIMER
• A 32-bit down-counter-circuit is provided for use as either a counter or a timer.
• Basic operation of the circuit involves
→ loading a starting value into the counter and
→ then decrementing the counter-contents using either
i) Internal system clock or
ii) External clock signal.
• The circuit can be programmed to raise an interrupt when the counter-contents reach 0.
• Counter/Timer Register (CNTM) can be loaded with an initial value (Figure 10.9).
• The initial value is then transferred into the counter-circuit.
• The current contents of the counter can be read by accessing memory-address FFFFFFD4.
• Control Register (CTCON) is used to specify the operating mode of the counter/timer circuit.
• The control register provides a mechanism for
→ starting & stopping the counting-process &
→ enabling interrupts when the counter-contents are decremented to 0.
• Status Register (CTSTAT) reflects the state of the circuit.
• There are 2 modes: 1) Counter mode 2) Timer mode.
Counter Mode
• CTCON7 = 0 When the counter mode is selected.
• The starting value is loaded into the counter by writing it into register CNTM.
• The counting-process begins when bit CTCON0 is set to 1 by a program.
• Once counting starts, bit CTCON0 is automatically cleared to 0.
• The counter is decremented by pulses on the external counter input line.
• Upon reaching 0, the counter-circuit
→ sets the status flag CTSTAT0 to 1 &
→ raises an interrupt if the corresponding interrupt-enable bit has been set to 1.
• The next clock pulse causes the counter to reload the starting value.
• The starting value is held in register CNTM, and counting continues.
• The counting-process is stopped by setting bit CTCON1 to 1.
Timer Mode
• CTCON7 = 1 When the timer mode is selected.
• This mode can be used to generate periodic interrupts.
• It is also suitable for generating a square-wave signal.
• The process starts as explained above for the counter mode.
• As the counter counts down, the value on the output line is held constant.
• Upon reaching 0, the counter is reloaded automatically with the starting value, and the
output signal on the line is inverted.
• Thus, the period of the output signal is twice the starting counter value multiplied by
the period of the controlling clock pulse.
• In the timer mode, the counter is decremented by the system clock.
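A C sketch of starting the circuit in each mode, following the bit meanings described above. Only the counter-value address FFFFFFD4 is given in the notes; the other addresses and the exact bit layout of CTCON are assumptions for illustration:

#include <stdint.h>

/* FFFFFFD4 comes from the notes; CNTM and CTCON addresses are assumed. */
#define CNTM   (*(volatile uint32_t *)0xFFFFFFD0)   /* initial-value register            */
#define CNT    (*(volatile uint32_t *)0xFFFFFFD4)   /* current counter value (read-only) */
#define CTCON  (*(volatile uint8_t  *)0xFFFFFFD8)   /* control register                  */

void start_counter_mode(uint32_t start_value)
{
    CNTM  = start_value;        /* starting value, reloaded each time 0 is reached   */
    CTCON = 0x01;               /* CTCON7 = 0: counter mode; CTCON0 = 1: start       */
}

void start_timer_mode(uint32_t start_value)
{
    CNTM  = start_value;        /* output period = 2 * start_value * clock period    */
    CTCON = 0x81;               /* CTCON7 = 1: timer mode;   CTCON0 = 1: start       */
}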
3. Distributed Memory Systems
• All memory-modules serve as private memories for processors that are directly connected to
them.
• A processor cannot access a remote-memory without the cooperation of the remote-
processor.
• This cooperation takes place in the form of messages exchanged by the processors.
• Such systems are often called Distributed-Memory Systems (Figure 12.4).