Computer Architecture and Organization

Introduction to the computer system

A computer system is a complex and interconnected set of hardware and software components designed to process information. It plays a crucial role in our modern world, influencing almost every aspect of our daily lives. Let's break down the key components and concepts associated with a computer system:

1. Hardware:
 Central Processing Unit (CPU): Often referred to as the brain of the
computer, the CPU performs calculations and executes instructions.
 Memory (RAM): Random Access Memory is volatile memory that the CPU
uses to store data and instructions temporarily while the computer is running.
 Storage: This includes devices like hard drives and solid-state drives,
providing non-volatile storage for long-term data retention.
 Input Devices: These allow users to interact with the computer, such as
keyboards, mice, and touchscreens.
 Output Devices: Devices like monitors and printers display or produce results
from the computer's processed information.
 Motherboard: The main circuit board that connects and facilitates
communication between various hardware components.
2. Software:
 Operating System (OS): This is the core software that manages hardware
resources and provides essential services for other software applications.
Examples include Windows, macOS, and Linux.
 Applications: Software programs designed for specific tasks, such as word
processors, web browsers, and games.
 Device Drivers: Software that allows the operating system to communicate
with and control hardware devices.
 Utilities: Programs that perform system maintenance tasks, like antivirus
software and disk cleanup tools.
3. Data:
 Binary Code: Computers process data in the form of binary code,
represented by 0s and 1s. These binary digits, or bits, are the fundamental
units of digital information.
 Data Representation: Information is stored and processed in different
formats, such as text, numbers, images, and videos.
4. Communication:
 Input and Output (I/O): The process of exchanging data between the
computer and its external environment through input and output devices.
 Networks: Computers can be connected to form networks, enabling them to
communicate and share resources. The Internet is a global example of a
computer network.
5. System Software and Application Software:
 System Software: Includes the operating system and utilities that manage
the computer system's hardware and provide a platform for running
applications.
 Application Software: Programs designed to perform specific tasks or
functions, catering to the needs of users.

Understanding these components and their interactions is fundamental to grasping the functioning of a computer system. The synergy between hardware and software enables computers to execute diverse tasks efficiently, making them indispensable in various fields.
Computer architecture
Computer architecture refers to the design and organization of a computer
system, encompassing its hardware and the way in which its components
interact to execute instructions. It defines the system's logical structure and
functional organization, specifying how data is stored, processed, and
communicated within the system. Here are key aspects of computer
architecture:

1. Instruction Set Architecture (ISA):


 ISA defines the set of instructions that a computer's CPU can
execute. It includes operations like arithmetic, logic, data movement,
and control flow.
 Different types of ISAs include Reduced Instruction Set Computing
(RISC) and Complex Instruction Set Computing (CISC).
2. Processor Organization:
 This involves the internal structure of the CPU, including components
like the Arithmetic Logic Unit (ALU), control unit, and registers.
 The control unit manages the execution of instructions, and the ALU
performs arithmetic and logic operations.
3. Memory Hierarchy:
 Memory hierarchy involves different levels of memory with varying
speeds and sizes, such as registers, cache, RAM (main memory),
and secondary storage (hard drives or SSDs).
 Caches are small, high-speed memory units that store frequently
accessed data to reduce the latency of memory access.
4. Input/Output (I/O) System:
 Describes how the computer communicates with external devices. It
includes buses, controllers, and interfaces for devices like keyboards,
mice, displays, and storage.
5. System Interconnection:
 This covers how different components within a computer system
communicate. Buses and interconnects facilitate data transfer
between the CPU, memory, and other peripherals.
6. Parallelism and Pipelining:
 Computer architectures often leverage parallel processing and
pipelining to improve performance.
 Parallel processing involves executing multiple tasks simultaneously,
while pipelining divides the instruction execution into stages to
overlap different operations.
7. Performance Optimization:
 Architects aim to optimize performance by considering factors like
clock speed, instruction throughput, and memory access times.
 Power efficiency is also a concern, especially in mobile and battery-
powered devices.
8. Multiprocessing and Multithreading:
 Multiprocessing involves using multiple processors to perform tasks
concurrently, enhancing overall system performance.
 Multithreading allows multiple threads of execution within a single
process, enabling better utilization of CPU resources.
9. Instruction Pipelining:
 Instruction pipelining breaks down instruction execution into stages,
allowing multiple instructions to be processed simultaneously. This
improves throughput and efficiency.
10. Virtualization:
 Virtualization allows the creation of virtual machines, enabling
multiple operating systems to run on the same physical hardware
concurrently.

Understanding computer architecture is essential for computer engineers, system designers, and programmers, as it provides insights into how hardware components interact and how software instructions are executed at the hardware level. Advances in computer architecture contribute significantly to the ongoing improvement of computing systems in terms of speed, efficiency, and capabilities.
What is RAM?
RAM (Random Access Memory) is the hardware in a computing device where
the operating system (OS), application programs and data in current use are
kept so they can be quickly reached by the device's processor. RAM is the
main memory in a computer. It is much faster to read from and write to
than other kinds of storage, such as a hard disk drive (HDD), solid-state drive
(SSD) or optical drive.

Random Access Memory is volatile. That means data is retained in RAM as long as the computer is on, but it is lost when the computer is turned off. When the computer is rebooted, the OS and other files are reloaded into RAM, usually from an HDD or SSD.

Function of RAM
Because of its volatility, RAM can't store permanent data. RAM can be
compared to a person's short-term memory, and a hard disk drive to a
person's long-term memory. Short-term memory is focused on immediate
work, but it can only keep a limited number of facts in view at any one time.
When a person's short-term memory fills up, it can be refreshed with facts
stored in the brain's long-term memory.
A computer also works this way. If RAM fills up, the computer's processor
must repeatedly go to the hard disk to overlay the old data in RAM with new
data. This process slows the computer's operation.

How does RAM work?

The term random access as applied to RAM comes from the fact that any storage location, also known as any memory address, can be accessed directly. Originally, the term Random Access Memory was used to distinguish regular core memory from offline memory.

Offline memory typically referred to magnetic tape from which a specific piece
of data could only be accessed by locating the address sequentially, starting at
the beginning of the tape. RAM is organized and controlled in a way that
enables data to be stored and retrieved directly to and from specific locations.

Other types of storage -- such as the hard drive and CD-ROM -- are also accessed directly or randomly, but the term random access isn't used to describe these other types of storage.

RAM is similar in concept to a set of boxes in which each box can hold a 0 or a
1. Each box has a unique address that is found by counting across the
columns and down the rows. A set of RAM boxes is called an array, and each
box is known as a cell.

To find a specific cell, the RAM controller sends the column and row address
down a thin electrical line etched into the chip. Each row and column in a RAM
array has its own address line. Any data that's read flows back on a separate
data line.
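As a rough mental model only (not how a real DRAM chip is implemented), the cell array can be sketched in C as a two-dimensional grid addressed by row and column; the names and sizes below are hypothetical:

#include <stdio.h>

#define ROWS 4
#define COLS 4

/* Toy model: each cell holds a single bit (0 or 1). */
static int cells[ROWS][COLS];

/* "Select" a cell by its row and column address and read it back. */
int read_cell(int row, int col) {
    return cells[row][col];
}

void write_cell(int row, int col, int bit) {
    cells[row][col] = bit;
}

int main(void) {
    write_cell(2, 3, 1);                          /* store a 1 in row 2, column 3 */
    printf("cell(2,3) = %d\n", read_cell(2, 3));  /* prints 1 */
    return 0;
}

In a real module the row and column addresses travel on address lines and the stored bit flows back on a data line, as described above.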

RAM is physically small and stored in microchips. It's also small in terms of the
amount of data it can hold. A typical laptop computer may come with 8
gigabytes of RAM, while a hard disk can hold 10 terabytes.
A hard drive, on the other hand, stores data on the magnetized surface of
what looks like a vinyl record. Alternatively, an SSD stores data in memory
chips that, unlike RAM, are nonvolatile. They don't depend on having constant
power and won't lose data once the power is turned off. RAM microchips are
gathered together into memory modules. These plug into slots in a computer's
motherboard. A bus, or a set of electrical paths, is used to connect the
motherboard slots to the processor.

Most PCs enable users to add RAM modules up to a certain limit. Having
more RAM in a computer cuts down on the number of times the processor
must read data from the hard disk, an operation that takes longer than reading
data from RAM. RAM access time is in nanoseconds, while storage memory
access time is in milliseconds.

System Bus Design

Definition:
The electrically conducting path along which data is transmitted inside any digital electronic device. A computer bus consists of a set of parallel conductors, which may be conventional wires, copper tracks on a PRINTED CIRCUIT BOARD, or microscopic aluminum trails on the surface of a silicon chip. Each wire carries just one bit, so the number of wires determines the largest data WORD the bus can transmit: a bus with eight wires can carry only 8-bit data words and hence defines the device as an 8-bit device.
 The bus is a communication channel.
 The characteristic of the bus is shared transmission media.
 The limitation of a bus is only one transmission at a time.
 A bus used to communicate between the major components of a
computer is called a System bus.
The computer system bus contains 3 categories of lines used to provide communication between the CPU, memory and IO, named as:
1. Address lines (AL)
2. Data lines (DL)
3. Control lines (CL)
1. Address Lines:
 Used to carry the address to memory and IO.
 Unidirectional.
 Based on the width of the address bus we can determine the capacity of the main memory.
Example: with 16 address lines, the CPU can address 2^16 = 65,536 (64 K) distinct memory locations.
2. Data Lines:
 Used to carry the binary data between the CPU, memory and IO.
 Bidirectional.
 Based on the width of the data bus we can determine the word length of the CPU.
 Based on the word length we can determine the performance of a CPU.
Example: a CPU with a 32-bit data bus transfers 32-bit words, so its word length is 32 bits.
3. Control Lines:
 Used to carry the control signals and timing signals.
 Control signals indicate the type of operation.
 Timing signals are used to synchronize the memory and IO operations with the CPU clock.
 Typical Control Lines may include Memory Read/Write, IO Read/Write,
Bus Request/Grant, etc.
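As a quick sketch of the width/capacity relationship described above (the bus widths below are hypothetical, chosen only for illustration):

#include <stdio.h>

int main(void) {
    int address_lines = 16;   /* hypothetical address-bus width */
    int data_lines = 32;      /* hypothetical data-bus width */

    unsigned long capacity = 1UL << address_lines;   /* 2^16 = 65,536 locations */
    printf("%d address lines -> %lu addressable locations\n",
           address_lines, capacity);
    printf("%d data lines -> %d-bit word length\n", data_lines, data_lines);
    return 0;
}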

Information = Bits + Context

Bits & Bytes

A bit is a "binary digit" and represents the smallest unit of data measurement. Bits are mostly grouped into bytes, with each byte containing exactly 8 bits. A byte is often treated as the smallest addressable unit of data and is used to store data and instructions. As each bit is a binary digit and can be either 0 or 1, one byte can represent 256 different combinations of 0s and 1s. Therefore, adding more bits and bytes gives more and more possible combinations with which to represent different data and instructions.
Whether you talk about bits or bytes depends on the context. Bits are mostly used when referring to network and download speeds. You've probably seen your telecom provider offering an internet speed of, e.g., 100 Mbit/s. This means that 100 megabits (1 megabit = approx. 1,000,000 bits) can be uploaded or downloaded per second. Bytes, on the other hand, are used when talking about memory and storage. A USB stick with 1 gigabyte of storage holds approximately 1 billion bytes.
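A minimal C sketch of the bit/byte distinction, converting the advertised link speed above from megabits per second to megabytes per second (decimal prefixes assumed):

#include <stdio.h>

int main(void) {
    double megabits_per_second = 100.0;                        /* advertised link speed */
    double megabytes_per_second = megabits_per_second / 8.0;   /* 8 bits per byte */
    printf("%.0f Mbit/s is roughly %.1f MB/s\n",
           megabits_per_second, megabytes_per_second);          /* about 12.5 MB/s */
    return 0;
}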

Information = Bits + Context

Up until now, we have talked about the first part of the equation: the bits. We know by now that everything inside a machine comes down to long strings of 0s and 1s, which are somehow encoded to give them a meaning. But how does this work? It depends on the context. Given a specific context, the bits can be decoded accordingly. The following example illustrates this.
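As a quick warm-up before that example, here is a minimal C sketch of the "same bits, different meaning" idea, assuming the ASCII character encoding:

#include <stdio.h>

int main(void) {
    unsigned char byte = 0x41;             /* the bit pattern 01000001 */
    printf("As a character: %c\n", byte);  /* 'A' under the ASCII encoding */
    printf("As a number:    %d\n", byte);  /* 65 in decimal */
    return 0;
}

The same eight bits are printed twice; only the context (character versus number) changes what they mean.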

Example: Color in a pixel of a picture

Have you ever wondered how a computer can produce colors? Just as with everything else, it is done by putting the correct sequence of 0s and 1s in the correct place and by choosing the correct encoding. When we talk about colors, we often use the hexadecimal number system. The hexadecimal number system is a base-16 number system, which means that a single digit can take 16 different values instead of 2 as in the binary system. The digits in a base-16 system run from 0-9 and A-F. Putting these digits together gives a certain color code. Each color is a combination of the colors red, green, and blue.
If a computer has to decode the bit string 111111111111111111111111 in the context of colors, it corresponds to the hex string FFFFFF, which mixes red, green, and blue at full intensity and produces the color white.
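A minimal sketch of this decoding in C, assuming the common 24-bit RGB layout with 8 bits per channel:

#include <stdio.h>

int main(void) {
    unsigned int color = 0xFFFFFF;                /* 24 ones -> hex FFFFFF -> white */
    unsigned int red   = (color >> 16) & 0xFF;    /* top 8 bits */
    unsigned int green = (color >> 8)  & 0xFF;    /* middle 8 bits */
    unsigned int blue  = color & 0xFF;            /* bottom 8 bits */
    printf("R=%u G=%u B=%u\n", red, green, blue); /* prints R=255 G=255 B=255 */
    return 0;
}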

Computer Language Translator and its Types

A translator is a computer program that translates a program written in a given programming language into a functionally equivalent program in a different language.

Depending on the translator, this may mean changing or simplifying the flow of the program without changing its core. This makes a program that works the same as the original.

Types of Language Translators

There are mainly three types of translators that are used to translate different programming languages into machine-equivalent code:

1. Assembler
2. Compiler
3. Interpreter

Assembler

An assembler translates assembly language into machine code.

Assembly language consists of mnemonics for machine op-codes, so assemblers perform a 1:1 translation from mnemonic to machine instruction. For example, LDA #4 converts to 0001001000100100.

In contrast, one instruction in a high-level language will translate to one or more instructions at the machine level.
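To illustrate only the 1:1 lookup idea (the mnemonics and opcode values below are hypothetical, not a real instruction set), the core of an assembler can be pictured as a table lookup from mnemonic to opcode bits:

#include <stdio.h>
#include <string.h>

/* Hypothetical mnemonic-to-opcode table; a real assembler also
   encodes operands and addressing modes. */
struct entry { const char *mnemonic; unsigned opcode; };

static const struct entry table[] = {
    { "LDA", 0x1 },   /* load accumulator   */
    { "ADD", 0x2 },   /* add to accumulator */
    { "STA", 0x3 },   /* store accumulator  */
};

static int lookup(const char *mnemonic) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return (int)table[i].opcode;
    return -1;  /* unknown mnemonic */
}

int main(void) {
    printf("LDA -> opcode %d\n", lookup("LDA"));  /* prints 1 */
    return 0;
}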

The Benefits of Using Assembler

Here is a list of the advantages of using an assembler:

 Because of the 1:1 relationship, translation from assembly language to machine code is very fast.
 Assembly code is often very efficient (and therefore fast) because it is a low-level language.
 Assembly code is fairly easy to understand due to the use of English-like mnemonics.

The Drawbacks of Using Assembler

Assembly language is written for a certain instruction set and/or processor.

Assembly tends to be optimized for the hardware it is designed for, meaning it is often incompatible with different hardware.

Lots of assembly code is needed to do a relatively simple task, and complex programs require lots of programming time.

Compiler

A compiler is a computer program that translates code written in a high-level language into a low-level language: machine code.

The most common reason for translating source code is to create an executable program (converting from high-level language into machine language).

Advantages of using a compiler

Below is a list of the advantages of using a compiler:

 Source code is not included; therefore, compiled code is more secure than interpreted code.
 A compiler tends to produce faster code than interpreting the source code.
 Because the compiler generates an executable file, the program can be run without the need for the source code.

Disadvantages of using a compiler

Below is a list of the disadvantages of using a compiler:

 Before a final executable file can be created, object code must be generated; this can be a time-consuming process.
 The source code must be 100% syntactically correct for the executable file to be produced.

Interpreter

An interpreter program executes other programs directly, running through the program code and executing it line-by-line. As it analyses every line, an interpreter is slower than running compiled code, but it can take less time to interpret program code than to compile and then run it. This is very useful when prototyping and testing code.

Interpreters are written for multiple platforms; this means code written once can be immediately run on different systems without having to recompile for each. Examples of this include flash-based web programs that will run on your PC, Mac, gaming console, and mobile phone.
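To make the line-by-line idea concrete, here is a minimal sketch of an interpreter for a made-up two-statement language (the statement names and behaviour are hypothetical):

#include <stdio.h>
#include <string.h>

/* Hypothetical language: "print <text>" prints text, "rem <comment>" is ignored. */
static void execute_line(const char *line) {
    if (strncmp(line, "print ", 6) == 0)
        printf("%s\n", line + 6);
    /* anything else (e.g. "rem ...") is treated as a comment and skipped */
}

int main(void) {
    /* The "program" being interpreted, one statement per line. */
    const char *program[] = {
        "rem this line is a comment",
        "print Hello from the interpreter",
        "print Each line is analysed and executed in turn",
    };
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++)
        execute_line(program[i]);   /* analyse and run one line at a time */
    return 0;
}

Each line is translated and executed at run time, which is exactly why interpretation is flexible but slower than running pre-compiled machine code.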

Advantages of using an interpreter

Here is a list of some of the main advantages of using an interpreter:

 Easier to debug (check errors) than compiled code.
 It is easier to create multi-platform code, as each different platform would have an interpreter to run the same code.
 Useful for prototyping software and testing basic program logic.

Disadvantages of using an interpreter

And here is a list of some of the main disadvantages of using an interpreter:

 Source code is required for the program to be executed, and this source code can be read, making it insecure.
 Because translation happens at run time, line by line, interpreted programs are generally slower than compiled programs.

Compilation process in C
What is compilation?
Compilation is the process of converting source code into object code. It is done with the help of the compiler. The compiler checks the source code for syntactical or structural errors, and if the source code is error-free, it generates the object code.

The C compilation process converts the source code taken as input into object code or machine code. The compilation process can be divided into four steps, i.e., preprocessing, compiling, assembling, and linking.

The preprocessor takes the source code as input and removes all the comments from it. It also takes each preprocessor directive and interprets it. For example, if the #include <stdio.h> directive is present in the program, then the preprocessor interprets the directive and replaces it with the content of the 'stdio.h' file.

The following are the phases through which our program passes before being
transformed into an executable form:

o Preprocessor
o Compiler
o Assembler
o Linker

Preprocessor
The source code is the code which is written in a text editor and the source code file
is given an extension ".c". This source code is first passed to the preprocessor, and
then the preprocessor expands this code. After expanding the code, the expanded
code is passed to the compiler.

Compiler
The code which is expanded by the preprocessor is passed to the compiler. The
compiler converts this code into assembly code. Or we can say that the C compiler
converts the pre-processed code into assembly code.
Assembler
The assembly code is converted into object code by an assembler. The name of the object file generated by the assembler is the same as that of the source file. The extension of the object file in DOS is '.obj', and in UNIX the extension is '.o'. If the name of the source file is 'hello.c', then the name of the object file would be 'hello.obj' (or 'hello.o' on UNIX).

Linker
Mainly, all programs written in C use library functions. These library functions are pre-compiled, and the object code of these library files is stored with a '.lib' (or '.a') extension. The main job of the linker is to combine the object code of the library files with the object code of our program. Sometimes our program refers to functions defined in other files; the linker plays a very important role here as well, linking the object code of those files to our program. Therefore, we conclude that the job of the linker is to link the object code of our program with the object code of the library files and other files. The output of the linker is the executable file. The name of the executable file is the same as that of the source file; only the extension differs. In DOS, the extension of the executable file is '.exe', and in UNIX, the executable file is typically named 'a.out'. For example, if we are using the printf() function in a program, then the linker adds its associated code to the output file.

Let's understand through an example.

hello.c

#include <stdio.h>
int main()
{
    printf("Hello javaTpoint");
    return 0;
}

The following steps are taken to build and execute the above program:

o Firstly, the input file, i.e., hello.c, is passed to the preprocessor, and the preprocessor
converts the source code into expanded source code. The extension of the expanded
source code would be hello.i.
o The expanded source code is passed to the compiler, and the compiler converts this
expanded source code into assembly code. The extension of the assembly code
would be hello.s.
o This assembly code is then sent to the assembler, which converts the assembly code
into object code.
o After the creation of the object code, the linker creates the executable file. The loader will then load the executable file for execution.
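Assuming a GCC toolchain on a UNIX-like system, the four stages can also be run one at a time; the command sequence below is one possible way to build hello.c, not the only one:

gcc -E hello.c -o hello.i    # preprocessing: expand #include directives and macros
gcc -S hello.i -o hello.s    # compiling: generate assembly code
gcc -c hello.s -o hello.o    # assembling: generate object code
gcc hello.o -o hello         # linking: combine object code and libraries into an executable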

How processors read and interpret instructions stored in memory

Basics of Cache Memory

The speed of the main memory is very low in comparison with the speed of modern processors. For good performance, the processor cannot spend much of its time waiting to access instructions and data in main memory. Hence, it is important to devise a scheme that reduces the time needed to access the necessary information. Since the speed of the main memory unit is limited by electronic and packaging constraints, the solution must be sought in a different architectural arrangement. An efficient solution is to use a fast cache memory, which essentially makes the main memory appear to the processor to be faster than it really is. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
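As a rough illustration of why this works (a sketch of the locality idea, not a benchmark of any particular machine), the two loops below touch exactly the same data, but the row-by-row loop reuses cached data far more often than the column-by-column loop and therefore typically runs much faster:

#include <stdio.h>

#define N 1024

static int matrix[N][N];   /* about 4 MB of data, far larger than a typical L1 cache */

int main(void) {
    long sum = 0;

    /* Cache-friendly: consecutive elements of a row sit next to each other
       in memory, so most accesses hit data already brought into the cache. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += matrix[i][j];

    /* Cache-unfriendly: each access jumps a whole row ahead in memory,
       so far more accesses miss the cache and wait on main memory. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += matrix[i][j];

    printf("sum = %ld\n", sum);
    return 0;
}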
