
A top-level view of computer

function and interconnection


Computer Organization (2022/2023)
Eng. Hossam Mady
Teaching Assistant and Researcher at Aswan Faculty of Engineering
Important Registers
• The CPU exchanges data with memory. For this purpose, it
typically makes use of two internal (to the CPU) registers: a
memory address register (MAR), which specifies the address in
memory for the next read or write, and a memory buffer
register (MBR), which contains the data to be written into
memory or receives the data read from memory.
• Similarly, an I/O address register (I/OAR) specifies a particular
I/O device. An I/O buffer register (I/OBR) is used for the
exchange of data between an I/O module and the CPU.
Memory & I/O Module

• A memory module consists of a set of locations, defined by sequentially numbered addresses. Each location contains a binary number that can be interpreted as either an instruction or data.
• An I/O module transfers data from external devices to CPU and
memory, and vice versa. It contains internal buffers for
temporarily holding these data until they can be sent on.
Memory Write Operation
• Three basic steps are needed in order for the CPU to perform a write operation into a specified memory location:
• The word to be stored into the memory location is first loaded by the CPU into a specified register, called the memory buffer register (MBR).
• The address of the location into which the word is to be stored is loaded by the CPU into a specified register, called the memory address register (MAR).
• A signal, called write, is issued by the CPU indicating that the word stored in the MBR is to be stored in the memory location whose address is loaded in the MAR.
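The three write steps can be sketched as a toy simulation. All names here (`Memory`, `cpu_write`) are illustrative assumptions, not part of any real instruction set:

```python
# Toy model of a CPU writing a word to memory via the MAR and MBR.
# Names and structure are illustrative, not a real ISA.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size  # sequentially addressed locations

def cpu_write(mem, address, word):
    mbr = word      # step 1: load the word into the MBR
    mar = address   # step 2: load the target address into the MAR
    # step 3: the "write" signal -- memory stores the MBR at the MAR address
    mem.cells[mar] = mbr

mem = Memory(256)
cpu_write(mem, 42, 0b1010)
print(mem.cells[42])  # 10
```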
Memory Read Operation

• Similar to the write operation, three basic steps are needed in order to perform a memory read operation:
• The address of the location from which the word is to be read is loaded into the MAR.
• A signal, called read, is issued by the CPU indicating that the word whose address is in the MAR is to be read into the MBR.
• After some time, corresponding to the memory delay in reading the specified word, the required word is loaded by the memory into the MBR, ready for use by the CPU.
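The read steps can be sketched the same way; again the names are illustrative, and memory is modeled as a simple Python list:

```python
# Toy model of a CPU reading a word from memory via the MAR and MBR.
# Illustrative names only; memory is a plain Python list.

memory = [0] * 256
memory[7] = 99  # a word previously stored at address 7

def cpu_read(mem, address):
    mar = address    # step 1: load the address into the MAR
    # step 2: the "read" signal is issued
    mbr = mem[mar]   # step 3: after the memory delay, the word lands in the MBR
    return mbr

print(cpu_read(memory, 7))  # 99
```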
Opcode and Operands

• Information involved in any operation performed by the CPU needs to be addressed. In computer terminology, such information is called the operand.
• Any instruction issued by the processor must carry at least two
types of information. These are the operation to be performed,
encoded in what is called the op-code field, and the address
information of the operand on which the operation is to be
performed, encoded in what is called the address field.
Computer Function

• The basic function performed by a computer is execution of a program, which consists of a set of instructions stored in memory.
• The processor does the actual work by executing instructions
specified in the program.
• In its simplest form, instruction processing consists of two steps: The
processor reads (fetches) instructions from memory one at a time and
executes each instruction. Program execution consists of repeating
the process of instruction fetch and instruction execution.
Computer Function

• The processing required for a single instruction is called an instruction cycle. Using the simplified two-step description given previously, the two steps are referred to as the fetch cycle and the execute cycle.
• At the beginning of each instruction cycle, the processor fetches
an instruction from memory. In a typical processor, a register
called the program counter (PC) holds the address of the
instruction to be fetched next.
Basic Functions

• Unless told otherwise, the processor always increments the PC after each instruction fetch so that it will fetch the next instruction in sequence.
• The fetched instruction is loaded into a register in the processor
known as the instruction register (IR). The instruction contains
bits that specify the action the processor is to take.
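The fetch-execute loop described above can be sketched in a few lines of Python. The three-instruction program and its opcodes are invented for illustration:

```python
# Minimal fetch-execute loop: the PC selects the next instruction,
# the IR holds the fetched instruction. Opcodes here are invented.

memory = [
    ("LOAD", 5),   # acc <- 5
    ("ADD", 3),    # acc <- acc + 3
    ("HALT", 0),
]

pc = 0   # program counter: address of the next instruction to fetch
acc = 0  # a single accumulator register

while True:
    ir = memory[pc]        # fetch cycle: instruction -> IR
    pc += 1                # increment PC for the next fetch
    opcode, operand = ir   # decode: op-code field and address field
    if opcode == "LOAD":   # execute cycle
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # 8
```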
Basic Functions

• The processor interprets the instruction and performs the required action. In general, these actions fall into four categories:
• Processor-memory: Data may be transferred from processor to memory or from memory
to processor.
• Processor-I/O: Data may be transferred to or from a peripheral device by transferring
between the processor and an I/O module.
• Data processing: The processor may perform some arithmetic or logic operation on data.
• Control: An instruction may specify that the sequence of execution be altered.

• An instruction’s execution may involve a combination of these actions.


Interconnection Structures

• A computer consists of a set of components or modules of three basic types (processor, memory, I/O) that communicate with each other. In effect, a computer is a network of basic modules. Thus, there must be paths for connecting the modules.
• The collection of paths connecting the various modules is called
the interconnection structure. The design of this structure will
depend on the exchanges that must be made among modules.
Interconnection Structures

• The interconnection structure must support the following types of transfers:
• Memory to processor.
• Processor to memory.
• I/O to processor.
• Processor to I/O.
• I/O to or from memory.
Bus Interconnection

• The bus was the dominant means of computer system component interconnection for decades.
• For general-purpose computers, it has gradually given way to
various point-to-point interconnection structures, which now
dominate computer system design. However, bus structures are
still commonly used for embedded systems, particularly
microcontrollers.
Bus Interconnection

• A bus is a communication pathway connecting two or more devices. A key characteristic of a bus is that it is a shared transmission medium.
• Multiple devices connect to the bus, and a signal transmitted by
any one device is available for reception by all other devices
attached to the bus. If two devices transmit during the same time
period, their signals will overlap and become garbled. Thus, only
one device at a time can successfully transmit.
Bus Interconnection

• Computer systems contain a number of different buses that provide pathways between components at various levels of the computer system hierarchy.
• A bus that connects major computer components (processor,
memory, I/O) is called a system bus.
Bus Interconnection

• Although there are many different bus designs, on any bus the
lines can be classified into three functional groups (Figure 3.16):
data, address, and control lines.
Data Bus
• The data lines provide a path for moving data among system
modules. These lines, collectively, are called the data bus.
• The data bus may consist of 32, 64, 128, or even more separate lines,
the number of lines being referred to as the width of the data bus.
• The width of the data bus is a key factor in determining overall system
performance.
• For example, if the data bus is 32 bits wide and each instruction is 64
bits long, then the processor must access the memory module twice
during each instruction cycle.
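The arithmetic behind that example is simply a ceiling division; the sketch below is not tied to any particular machine:

```python
from math import ceil

def accesses_per_instruction(instruction_bits, bus_width_bits):
    """Memory accesses needed to fetch one instruction over the data bus."""
    return ceil(instruction_bits / bus_width_bits)

print(accesses_per_instruction(64, 32))  # 2 accesses per 64-bit instruction
print(accesses_per_instruction(64, 64))  # 1: a wider bus halves the accesses
```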
Data Bus

• Internal data paths are used to move data between registers and
between register and ALU. External data paths link registers to
memory and I/O modules.
Address Bus

• The address lines are used to designate the source or destination of the data on the data bus.
• For example, if the processor wishes to read a word (8, 16, or 32
bits) of data from memory, it puts the address of the desired
word on the address lines.
• Clearly, the width of the address bus determines the maximum
possible memory capacity of the system.
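The relationship is simply a power of two: an n-bit address bus can distinguish 2^n locations. A quick sketch:

```python
def max_memory_locations(address_bus_width):
    """An n-bit address bus can distinguish 2**n distinct locations."""
    return 2 ** address_bus_width

print(max_memory_locations(16))  # 65536 locations (64K)
print(max_memory_locations(32))  # 4294967296 locations (4G)
```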
Control Bus

• The control lines are used to control the access to and the use of the data and address lines, because the data and address lines are shared by all components.
• Control signals transmit both command and timing information among system modules. Timing signals indicate the validity of data and address information. Command signals specify operations to be performed.
Point-to-Point Interconnect

• At higher and higher data rates, it becomes increasingly difficult to perform the synchronization and arbitration functions in a timely fashion.
• Compared to the shared bus, the point-to-point interconnect
has lower latency, higher data rate, and better scalability.
• Intel’s Quick Path Interconnect (QPI), introduced in 2008, is an important and representative example of the point-to-point interconnect approach.
Quick Path Interconnect (QPI)
• The following are significant characteristics of QPI and other point-to-
point interconnect schemes:
• Multiple direct connections: Multiple components within the system enjoy direct
pairwise connections to other components. This eliminates the need for arbitration
found in shared transmission systems.
• Layered protocol architecture: As found in network environments, such as TCP/IP-
based data networks, these processor-level interconnects use a layered protocol
architecture, rather than the simple use of control signals found in shared bus
arrangements.
• Packetized data transfer: Data are not sent as a raw bit stream. Rather, data are sent
as a sequence of packets, each of which includes control headers and error control
codes.
Quick Path Interconnect (QPI)

• QPI is defined as a four-layer protocol architecture encompassing the following layers:
• Physical: Consists of the actual wires carrying the signals. The unit of transfer
at the Physical layer is 20 bits, which is called a Phit (physical unit).
• Link: Responsible for reliable transmission and flow control. The Link layer’s
unit of transfer is an 80-bit Flit (flow control unit).
• Routing: Provides the framework for directing packets through the fabric.
• Protocol: The high-level set of rules for exchanging packets of data between devices. A packet is composed of an integral number of Flits.
QPI Physical Layer

• The QPI port consists of 84 individual links grouped as follows. Each data path consists of a pair of wires that transmits data one bit at a time; the pair is referred to as a lane. There are 20 data lanes in each direction (transmit and receive), plus a clock lane in each direction. The 20-bit unit is referred to as a phit.
• The lanes in each direction are grouped into four quadrants of 5 lanes each.
QPI Physical Layer
• In a typical implementation, the transmitter injects a small
current into one wire or the other, depending on the logic level to
be sent. The current passes through a resistor at the receiving
end, and then returns in the opposite direction along the other
wire. The receiver senses the polarity of the voltage across the
resistor to determine the logic level (0 or 1).
• Another function performed by the physical layer is that it
manages the translation between 80-bit flits and 20-bit phits
using a technique known as multilane distribution.
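The flit-to-phit translation can be sketched as bit slicing: an 80-bit flit becomes four 20-bit phits. The slicing order below is an illustrative assumption, not QPI's actual lane mapping:

```python
# Split an 80-bit flit into four 20-bit phits and reassemble it.
# The slicing order is an illustrative assumption, not QPI's real mapping.

FLIT_BITS, PHIT_BITS = 80, 20

def flit_to_phits(flit):
    mask = (1 << PHIT_BITS) - 1
    return [(flit >> (i * PHIT_BITS)) & mask
            for i in range(FLIT_BITS // PHIT_BITS)]

def phits_to_flit(phits):
    flit = 0
    for i, phit in enumerate(phits):
        flit |= phit << (i * PHIT_BITS)
    return flit

flit = 0x1234_5678_9ABC_DEF0_1234  # an arbitrary 80-bit value
phits = flit_to_phits(flit)
print(len(phits))                  # 4 phits per flit
assert phits_to_flit(phits) == flit
```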
QPI Link Layer

• The QPI link layer performs two key functions: flow control and
error control. These functions operate on the level of the flit (flow
control unit).
• Each flit consists of a 72-bit message payload and an 8-bit error
control code called a cyclic redundancy check (CRC).
• The flow control function is needed to ensure that a sending QPI
entity does not overwhelm a receiving QPI entity by sending data
faster than the receiver can process the data and clear buffers for
more incoming data.
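The 72+8 bit flit layout can be illustrated with a generic CRC-8. The polynomial used below (0x07, i.e. x^8 + x^2 + x + 1) is a common generic choice, not the code QPI actually specifies:

```python
# Build a flit: 72-bit payload + 8-bit CRC. The CRC-8 polynomial here
# is a generic choice for illustration, not QPI's actual error code.

def crc8(data_bytes, poly=0x07):
    crc = 0
    for byte in data_bytes:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def make_flit(payload):
    assert payload < (1 << 72)                 # 72-bit message payload
    payload_bytes = payload.to_bytes(9, "big") # 72 bits = 9 bytes
    return payload_bytes + bytes([crc8(payload_bytes)])

def check_flit(flit):
    return crc8(flit[:9]) == flit[9]           # recompute and compare CRC

flit = make_flit(0xDEAD_BEEF)
print(len(flit) * 8)   # 80 bits on the wire
assert check_flit(flit)
```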
QPI Link Layer

• To control the flow of data, QPI makes use of a credit scheme. During initialization, a sender is given a set number of credits to send flits to a receiver.
• The error control function at the link layer detects and recovers
from such bit errors, and so isolates higher layers from
experiencing bit errors.
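A credit scheme of this kind can be sketched as follows; the class names and buffer sizes are illustrative, not taken from the QPI specification:

```python
# Credit-based flow control sketch: the sender may transmit only while
# it holds credits; the receiver returns a credit as it drains each
# flit from its buffer. Names and sizes are illustrative.

from collections import deque

class Receiver:
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.initial_credits = buffer_slots

    def accept(self, flit):
        self.buffer.append(flit)

    def drain_one(self):
        self.buffer.popleft()
        return 1  # one credit returned to the sender

class Sender:
    def __init__(self, credits):
        self.credits = credits

    def send(self, receiver, flit):
        if self.credits == 0:
            return False          # must wait: receiver buffer may be full
        self.credits -= 1
        receiver.accept(flit)
        return True

rx = Receiver(buffer_slots=2)
tx = Sender(credits=rx.initial_credits)
assert tx.send(rx, "flit0") and tx.send(rx, "flit1")
assert not tx.send(rx, "flit2")   # out of credits: sender cannot overwhelm rx
tx.credits += rx.drain_one()      # receiver frees a slot, credit flows back
assert tx.send(rx, "flit2")
```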
PCI Express

• Compared with other common bus specifications, PCI delivers better system performance for high-speed I/O subsystems.
• As with the system bus discussed in the preceding sections, the
bus-based PCI scheme has not been able to keep pace with the
data rate demands of attached devices.
• Accordingly, a new version, known as PCI Express (PCIe) has
been developed. PCIe, as with QPI, is a point-to-point
interconnect scheme intended to replace bus-based schemes
such as PCI.
PCI Express

• Figure 3.21 shows a typical configuration that supports the use of PCIe.
• A root complex device, also referred to as a chipset or a host bridge, connects the processor and memory subsystem to the PCI Express switch fabric, which comprises one or more PCIe devices and PCIe switches.
• The root complex acts as a buffering device, to deal with
difference in data rates between I/O controllers and memory and
processor components.
PCI Express
• The root complex also translates between PCIe transaction
formats and the processor and memory signal and control
requirements.
• The chipset can connect to:
• Switch: The switch manages multiple PCIe streams.
• PCIe endpoint: An I/O device or controller that implements PCIe, such as a Gigabit Ethernet switch.
• PCIe/PCI bridge: Allows older PCI devices to be connected to PCIe-based systems.
PCI Express
• As with QPI, PCIe interactions are defined using a protocol
architecture. The PCIe protocol architecture encompasses the
following layers:
• Physical: Consists of the actual wires carrying the signals.
• Data link: Is responsible for reliable transmission and flow control. Data packets
generated and consumed by the DLL are called Data Link Layer Packets (DLLPs).
• Transaction: Generates and consumes data packets used to implement load/store data transfer mechanisms, and also manages the flow control of those packets between the two components on a link. Data packets generated and consumed by the TL are called Transaction Layer Packets (TLPs).
PCIe Physical Layer

• Each PCIe port consists of a number of bidirectional lanes.
• A PCIe port can provide 1, 4, 8, 16, or 32 lanes.
• As with QPI, PCIe uses a multilane distribution technique.
• Each block of 128 bits is encoded into a unique 130-bit codeword for transmission; this is referred to as 128b/130b encoding (the extra 2 bits form a sync character used for synchronization).
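The cost of those 2 sync bits works out to roughly 1.5% coding overhead:

```python
# Coding efficiency of 128b/130b: 128 payload bits per 130 wire bits.
payload_bits, codeword_bits = 128, 130
efficiency = payload_bits / codeword_bits
print(f"{efficiency:.4f}")   # 0.9846, i.e. about 1.5% overhead
```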
PCIe Transaction Layer and Data Link Layer

• The transaction layer (TL) receives read and write requests from
the software above the TL and creates request packets for
transmission to a destination via the link layer.
• The purpose of the PCIe data link layer is to ensure reliable
delivery of packets across the PCIe link. The DLL participates in
the formation of TLPs and also transmits DLLPs.
