CEAT Microprocessor DELAJ Modules MidtermA
MODULE
in
Microprocessor
(CPE 323L/L)
Page 1 of 48
NORTHWESTERN UNIVERSITY,INC
College of Engineering, Architecture & Technology
Laoag City, Ilocos Norte
Module 1
Functional Operations of Microprocessor
Learning Outcomes:
The students will be able to understand the functional operations of both the microprocessor and the microcontroller.
Learning Objectives:
This module provides you with an understanding of the functional operations of the microprocessor and microcontroller, and of their instruction sets.
Engage
Activity 1: Microprocessor
Have you engaged with microprocessors already? We have finished covering the foundations of the microprocessor.
Now let us move deeper into our discussion of the microprocessor.
Explore
Activity 2: Research
You are to browse, research, and learn the functional operations of the
microprocessor.
Unit I
Functional Operations of Microprocessor
Types of microprocessors:
Complex instruction set microprocessor (CISC) –
These processors are designed to minimize the number of instructions per program, ignoring the number of cycles per instruction. The compiler is used to translate a high-level language to assembly-level language because the length of the code is relatively short, and extra RAM is used to store the instructions. These processors can do tasks like downloading, uploading, and recalling data from memory. Apart from these tasks, this type of microprocessor can perform complex mathematical calculations in a single command.
Example: IBM 370/168, VAX 11/780
Superscalar microprocessor –
These processors can perform many tasks at a time. They can be used for ALUs and multiplier-like arrays. They have multiple operation units and perform tasks by executing several instructions in parallel.
Disadvantages of microprocessors –
1. Overheating occurs due to overuse
2. Performance depends on the size of the data
3. Larger board size than microcontrollers
4. Most microprocessors do not support floating-point operation
ELABORATE
3. It has a Program Counter (PC) register that stores the address of the next
instruction. Based on the value of the PC, the microprocessor jumps from one
location to another and makes decisions.
80186/80188: 6MHz
80286: 8MHz
32-bit Microprocessor –
INTEL 80386: 16MHz to 33MHz
PENTIUM: 66MHz
64-bit Microprocessor –
INTEL CORE-2: 1.2GHz to 3GHz
Example:
1. IBM RS6000
2. MC88100
3. DEC Alpha 21064
4. DEC Alpha 21164
5. DEC Alpha 21264
Explicitly Parallel Instruction Computing (EPIC) –
EPIC or Explicitly Parallel Instruction Computing permits computers to
execute instructions in parallel using compilers. It allows complex
instruction execution without using higher clock frequencies. EPIC
encodes its instructions into 128-bit bundles. Each bundle contains three
instructions, each encoded in 41 bits, and a 5-bit template field (which
contains information about the types of instructions in the bundle and
which instructions can be executed in parallel).
Example:
1. IA-64 (Intel Architecture-64)
https://www.geeksforgeeks.org/introduction-of-microprocessor/
Cache memory is a special, very high-speed memory. It is used to speed up the system and synchronize it with the high-speed CPU. Cache memory is costlier than main memory or disk memory, but more economical than CPU registers. It is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the main memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main-memory locations. There are various independent caches in a CPU, which store instructions and data.
Levels of memory:
Level 1 or Registers –
Registers hold the data that is immediately required by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory –
This is the fastest memory, with the shortest access time, where data is temporarily stored for faster retrieval.
Level 3 or Main memory –
This is the memory on which the computer currently works. It is small in size, and once power is off the data no longer stays in this memory.
Level 4 or Secondary memory –
This is external memory, which is not as fast as main memory, but where data stays permanently.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has occurred. On a cache miss, the cache allocates a new entry and copies in the data from main memory; the request is then fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.
Hit ratio = hits / (hits + misses) = no. of hits / total accesses
We can improve cache performance by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
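As a quick illustration, the hit-ratio formula can be turned into a few lines of code (a minimal sketch; the function name and the numbers are illustrative, not from the module):

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses), i.e. hits / total accesses."""
    total = hits + misses
    return hits / total if total else 0.0

# 75 hits out of 100 total accesses gives a hit ratio of 0.75
ratio = hit_ratio(75, 25)
```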
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory
which are as follows: Direct mapping, Associative mapping, and Set-Associative
mapping. These are explained below.
1. Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main
memory into only one possible cache line. In direct mapping, each memory
block is assigned to a specific line in the cache. If a line is already occupied
by a memory block when a new block needs to be loaded, the old block is
trashed. The address space is split into two parts, an index field and a tag
field. The cache stores the tag field, while the rest is stored in main memory.
Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as
consisting of three fields. The least significant w bits identify a unique word or
byte within a block of main memory. In most contemporary machines, the address
is at the byte level. The remaining s bits specify one of the 2^s blocks of main
memory. The cache logic interprets these s bits as a tag of s-r bits (the most
significant portion) and a line field of r bits. This latter field identifies one of
the m = 2^r lines of the cache.
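The tag/line/word split described above can be sketched in code (a hedged illustration; the function name and the bit widths in the example are chosen for demonstration, not taken from the module):

```python
def split_address(addr, r, w):
    """Split a main-memory address into (tag, line, word) fields for a
    direct-mapped cache with 2**r lines and 2**w words per block."""
    word = addr & ((1 << w) - 1)         # least significant w bits: word in block
    line = (addr >> w) & ((1 << r) - 1)  # next r bits: cache line (i = j mod 2**r)
    tag = addr >> (w + r)                # remaining s - r bits: the tag
    return tag, line, word

# 8-bit address, 4 cache lines (r = 2), 4 words per block (w = 2)
tag, line, word = split_address(0b10110110, r=2, w=2)
```

Note that the line field equals the block number modulo the number of lines, matching i = j modulo m above.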
2. Associative Mapping –
In this type of mapping, associative memory is used to store the content and
addresses of the memory word. Any block can go into any line of the cache. This
means that the word-id bits are used to identify which word in the block is needed,
but the tag becomes all of the remaining bits. This enables the placement of any
word at any place in the cache memory. It is considered to be the fastest and the
most flexible mapping form.
3. Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping in which the
drawbacks of direct mapping are removed. Set-associative mapping addresses
the problem of possible thrashing in the direct mapping method. It does this by
saying that, instead of having exactly one line that a block can map to in the
cache, we group a few lines together, creating a set. A block in memory can
then map to any one of the lines of a specific set. Set-associative mapping
allows each word that is present in the cache to have two or more words in
main memory for the same index address. Set-associative cache mapping
combines the best of the direct and associative cache mapping techniques.
In this case, the cache consists of a number of sets, each of which consists of a
number of lines. The relationships are
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
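A small sketch of these relationships (the names and numbers are illustrative):

```python
def set_for_block(j, v):
    """i = j mod v: main-memory block j maps to cache set i."""
    return j % v

# m = v * k: 8 cache lines organized as v = 4 sets of k = 2 lines each
v, k = 4, 2
m = v * k
# block 13 maps to set 13 mod 4 = 1, and may occupy either of that set's 2 lines
chosen_set = set_for_block(13, v)
```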
Types of Cache –
Primary Cache –
A primary cache is always located on the processor chip. This cache is small
and its access time is comparable to that of processor registers.
Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the level 2 cache is
also housed on the processor chip.
Locality of reference –
Since the size of cache memory is smaller than that of main memory, which
part of main memory should be given priority and loaded into the cache is
decided based on locality of reference.
Types of locality of reference:
1. Spatial locality of reference –
This says that there is a good chance that an element will be found in close
proximity to the reference point, and that the next reference is likely to land
even closer to that point. For this reason, when a word is referenced, the
complete block containing it is loaded into the cache, since neighboring
words are likely to be referenced next.
2. Temporal locality of reference –
This says that recently used items are likely to be used again soon, so a
least recently used (LRU) algorithm is applied: recently referenced words
are kept in the cache, and the least recently used ones are replaced first.
Cache is close to the CPU and faster than main memory, but at the same time it is smaller than main memory. Cache organization is about mapping data in memory to a location in the cache.
A Simple Solution:
One way to do this mapping is to take the last few bits of the long memory address to form a small cache address, and place the data at that address.
Problems With the Simple Solution:
The problem with this approach is that we lose the information in the high-order bits and have no way to find out which higher-order bits the stored lower-order bits belong to.
Solution is Tag:
To handle the above problem, more information is stored in the cache to tell which block of memory the data belongs to; this extra information is called the tag.
The above arrangement is a direct-mapped cache, and it has the following problem. We have discussed above that the last few bits of a memory address are used as the address in the cache, and the remaining bits are stored as the tag. Now imagine that the cache is very small, with 2-bit addresses, and suppose we use the last two bits of the main memory address to decide the cache location (as shown in the diagram below). Then, if a program accesses 2, 6, 2, 6, 2, …, every access causes a miss, as 2 and 6 have to be stored in the same location in the cache.
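The thrashing pattern above can be reproduced with a small simulation (a sketch under simplifying assumptions: one word per line, and the last two address bits used as the index):

```python
def simulate_direct_mapped(accesses, index_bits=2):
    """Count (hits, misses) in a direct-mapped cache holding one word per
    line, indexed by the last `index_bits` bits of each address."""
    lines = {}                  # line index -> tag currently stored there
    hits = misses = 0
    for addr in accesses:
        line = addr & ((1 << index_bits) - 1)
        tag = addr >> index_bits
        if lines.get(line) == tag:
            hits += 1
        else:
            misses += 1
            lines[line] = tag   # the old block in this line is evicted
    return hits, misses

# 2 (binary 10) and 6 (binary 110) share the same two low bits,
# so they keep evicting each other: every access is a miss
hits, misses = simulate_direct_mapped([2, 6, 2, 6, 2, 6])
```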
Source:
https://www.youtube.com/watch?v=sg4CmZ-p8rU
Caches are faster memories that are built to deal with the Processor-Memory gap in data read operations, i.e. the time difference between a data read operation in a CPU register and one in the main memory. A data read operation in registers is generally 100 times faster than in the main memory, and the gap keeps increasing substantially as we go down the memory hierarchy.
Caches are installed between the CPU registers and the main memory to bridge this time gap in data reading. They serve as a temporary staging area for a subset of the data and instructions stored in the relatively slow main memory. Since the size of the cache is small, only the data frequently used by the processor during the execution of a program is stored in it. Caching this frequently used data eliminates the need to bring it from the slower main memory again and again, which takes hundreds of CPU cycles.
The idea of caching the useful data centers around a fundamental property of
computer programs known as locality. Programs with good locality tend to access the
same set of data items over and over again from the upper levels of the memory
hierarchy (i.e. cache) and thus run faster.
Example: The run time of different matrix multiplication kernels that perform the
same number of arithmetic operations, but have different degrees of locality, can vary
by a factor of 20!
Types of Locality:
Temporal locality –
Temporal locality states that the same data objects are likely to be reused multiple
times by the CPU during the execution of a program. Once a data object has been
written into the cache on the first miss, a number of subsequent hits on that object
can be expected. Since the cache is faster than the storage at the next lower level
like the main memory, these subsequent hits can be served much faster than the
original miss.
Spatial locality –
It states that if a data object is referenced once, then there is a high probability that its neighboring data objects will also be referenced in the near future. Memory blocks usually contain multiple data objects. Because of spatial locality, we can expect that the cost of copying a block after a miss will be amortized by subsequent references to other objects within that block.
Importance of Locality –
Locality in programs has an enormous impact on the design and performance of hardware and software systems. In modern computing systems, the locality-based advantages are not confined to the architecture alone; operating systems and application programs are also built in a manner that exploits locality to the full extent.
In operating systems, the principle of locality allows the system to use main memory as a cache of the most recently referenced chunks of the virtual address space, and likewise of recently used disk blocks in disk file systems.
Similarly, application programs like web browsers exploit temporal locality by caching recently referenced documents on a local disk. High-volume web servers hold recently requested documents in a front-end disk cache that satisfies requests for these documents without any intervention of the server.
Cache Friendly Code –
Programs with good locality generally run faster, as they have a lower cache miss rate than ones with bad locality. In good programming practice, cache performance is always counted as one of the important factors in the analysis of the performance of a program. The basic approach to making code cache friendly is:
Make the frequently used cases fast: programs often spend most of their time in a few core functions, and these functions in turn spend most of their time in loops. So these loops should be designed in a way that gives them good locality.
Multiple loops: if a program consists of multiple loops, then minimize the cache misses in the inner loop to improve the performance of the code.
Example-1: The above can be understood through simple examples of multi-dimensional array code. Consider a sum_array() function which sums the elements of a two-dimensional array in row-major order:
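The module's code figure for this function is not reproduced here; the following is a sketch of the same row-major traversal, written in Python rather than the C of the original example:

```python
def sum_array_rows(a):
    """Sum a 2-D array in row-major order, the order in which C lays
    the elements out in memory, so consecutive accesses are adjacent."""
    total = 0
    for i in range(len(a)):          # walk row by row
        for j in range(len(a[0])):   # inner loop touches consecutive addresses
            total += a[i][j]
    return total

a = [[1, 2, 3, 4], [5, 6, 7, 8]]
total = sum_array_rows(a)  # 36
```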
Assume the cache has a block size of 4 words, with a word size of 4 bytes, and is initially empty. Since C stores arrays in row-major order, the references will result in the following pattern of hits and misses, independent of the cache organization.
The block containing w[0]–w[3] is loaded into the cache from memory: the reference to w[0] is a miss, but the next three references are all hits. The reference to w[4] causes another miss, as a new block is loaded into the cache; the next three references are hits, and so on. In general, three out of four references will hit, which is the best that can be done with a cold cache. Thus, the hit ratio is 3/4 * 100 = 75%.
Example-2: Now, consider a sum_array() function that sums the elements of a two-dimensional array in column-major order.
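Again, the code figure is not reproduced; a sketch of the column-major version, in Python rather than C:

```python
def sum_array_cols(a):
    """Sum a 2-D array in column-major order: the inner loop jumps a
    whole row per step, so accesses are far apart in memory."""
    total = 0
    for j in range(len(a[0])):   # walk column by column
        for i in range(len(a)):  # inner loop strides across rows
            total += a[i][j]
    return total

a = [[1, 2, 3, 4], [5, 6, 7, 8]]
total = sum_array_cols(a)  # same sum as the row-major version, worse locality
```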
C stores arrays in row-major order, but in this case the array is being accessed in column-major order, so locality is spoiled. The references will be made in the order a[0][0], a[1][0], a[2][0], and so on. As the cache size is small, each reference will be a miss due to the poor locality of the program. Hence, the hit ratio will be 0. A poor hit ratio will eventually decrease the performance of a program and lead to slower execution. In programming, these types of practices should be avoided.
Conclusion –
In real-life application programs and programming realms, optimized cache performance gives a good speedup to a program, even if the runtime complexity of the program is high. A good example is quicksort: though it has a worst-case complexity of O(n²), it is the most popular sorting algorithm, and one of the important factors is its better cache performance compared with many other sorting algorithms. Code should be written in a way that exploits the cache to the best extent for faster execution.
Both the CPU cache and the TLB are hardware used in microprocessors, but what's the difference, especially when someone says that the TLB is also a type of cache?
First things first. A CPU cache is a fast memory that is used to improve the latency of fetching information from main memory (RAM) to the CPU registers. So the CPU cache sits between main memory and the CPU. This cache stores information temporarily so that the next access to the same information is faster. A CPU cache that is used to store executable instructions is called an Instruction Cache (I-Cache). A CPU cache that is used to store data is called a Data Cache (D-Cache). So the I-Cache and D-Cache speed up fetching time for instructions and data, respectively. A modern processor contains both an I-Cache and a D-Cache. For completeness, let us discuss the D-Cache hierarchy as well. The D-Cache is typically organized in a hierarchy, i.e. Level 1 data cache, Level 2 data cache, etc. It should be noted that the L1 D-Cache is faster, smaller, and costlier than the L2 D-Cache. But the basic idea of the 'CPU cache' is to speed up instruction/data fetch time from main memory to the CPU.
A Translation Lookaside Buffer (TLB) is required only if virtual memory is used by a processor. In short, the TLB speeds up the translation of virtual addresses to physical addresses by storing page-table entries in faster memory. In fact, the TLB also sits between the CPU and main memory. Precisely speaking, the TLB is used by the MMU when a virtual address needs to be translated to a physical address. By keeping this mapping of virtual to physical addresses in fast memory, access to the page table improves. It should be noted that the page table (which itself is stored in RAM) keeps track of where virtual pages are stored in physical memory. In that sense, the TLB can also be considered a cache of the page table.
But the scope of operation of the TLB and the CPU cache is different. The TLB is about speeding up address translation for virtual memory, so that the page table needn't be accessed for every address. The CPU cache is about speeding up main-memory access latency, so that RAM isn't always accessed by the CPU. TLB operation comes at the time of address translation by the MMU, while CPU cache operation comes at the time of memory access by the CPU. In fact, any modern processor deploys all of the I-Cache, L1 & L2 D-Cache, and TLB.
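The division of labor described above can be sketched in a few lines (a toy model: dictionaries stand in for the hardware TLB and the in-memory page table; all names are illustrative):

```python
def translate(vpn, tlb, page_table):
    """Translate a virtual page number to a frame number: consult the
    TLB first, and walk the (slower) page table only on a TLB miss."""
    if vpn in tlb:                  # TLB hit: no page-table access needed
        return tlb[vpn], "tlb hit"
    frame = page_table[vpn]         # TLB miss: read the page table in RAM
    tlb[vpn] = frame                # cache the translation for next time
    return frame, "tlb miss"

page_table = {0: 5, 1: 9, 2: 3}    # virtual page -> physical frame
tlb = {}
first = translate(1, tlb, page_table)   # miss: loads the mapping into the TLB
second = translate(1, tlb, page_table)  # hit: served from the TLB
```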
Memory Interleaving
Virtual Memory
Now again assume 16 data words are to be transferred to the four modules, but this time consecutive data words are placed in consecutive modules. That is, data word 10 is placed in module 1, data word 20 in module 2, and so on.
The least significant bits (LSB) of the address select the module, and the most significant bits (MSB) give the address of the data word within that module.
For example, to get the data word 90, the processor provides the address 1000. The last two bits, 00, indicate that the data is in module 00 (module 1), and 10 is the address of 90 within module 00 (module 1).
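The address split in this example can be sketched as follows (a minimal illustration; four modules, hence two LSB module bits, is the assumption carried over from the example):

```python
def decode_interleaved(addr, module_bits=2):
    """Low-order interleaving: the LSBs select the module and the MSBs
    give the word's address inside that module."""
    module = addr & ((1 << module_bits) - 1)
    offset = addr >> module_bits
    return module, offset

# binary 1000 (the address of data word 90 in the text):
# module = 00, offset = 10 (binary), i.e. module 0, word 2
module, offset = decode_interleaved(0b1000)
```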
Memories are made up of registers. Each register in the memory is one storage location, also called a memory location. Memory locations are identified using addresses. The total number of bits a memory can store is its capacity.
A word is a group of bits that a memory unit stores as a unit of binary information. A word with a group of 8 bits is called a byte.
A memory unit consists of data lines, address selection lines, and control lines that specify the direction of transfer. The block diagram of a memory unit is shown below:
Data lines provide the information to be stored in memory. The control inputs specify the direction of transfer. The k address lines specify the word chosen.
When there are k address lines, 2^k memory words can be accessed.
RAM and ROM, the different types of RAM, cache memory, and secondary memory are discussed in the sections that follow.
2^n = N
where n is the number of address lines and N is the total memory in bytes.
There will be 2^n words.
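As a small check of this relation (the function name is illustrative):

```python
def address_lines(N):
    """Smallest n with 2**n >= N; exact (2**n == N) when N is a power of two."""
    n = 0
    while (1 << n) < N:
        n += 1
    return n

# a memory of 1024 bytes needs 10 address lines, since 2**10 = 1024
n = address_lines(1024)
```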
2D Memory organization –
In 2D organization, memory is divided in the form of rows and columns (a matrix). Each row contains a word. In this memory organization there is a decoder, a combinational circuit with n input lines and 2^n output lines. One of the output lines selects the row addressed by the contents of the MAR, and the word in that row is then read or written through the data lines.
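The decoder's behavior can be sketched as follows (a functional model, not a gate-level design; the names are illustrative):

```python
def decode(address_bits):
    """n-to-2**n decoder: exactly one of the 2**n output lines goes high,
    namely the one whose index matches the n-bit address."""
    n = len(address_bits)
    index = int("".join(str(b) for b in address_bits), 2)
    return [1 if i == index else 0 for i in range(2 ** n)]

# a 2-bit address of 10 (binary) selects row 2 of a 4-row memory
row_select = decode([1, 0])
```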
1. SRAM :
The SRAM memories consist of circuits capable of retaining the stored information as
long as the power is applied. That means this type of memory requires constant power.
SRAM memories are used to build Cache Memory.
SRAM Memory Cell: Static memories (SRAM) are memories that consist of circuits
capable of retaining their state as long as power is on; when power is lost, so is the
data, which makes this type of memory volatile. The figure below shows a cell
diagram of SRAM. A latch is formed by two inverters connected as shown in the
figure. Two transistors T1 and T2 are used for connecting the latch with the two bit
lines. The purpose of these transistors is to act as switches that can be opened or
closed under the control of the word line, which is controlled by the address decoder.
When the word line is at the 0 level, the transistors are turned off and the latch
retains its information. For example, the cell is in state 1 if the logic value at point A
is 1 and at point B is 0. This state is retained as long as the word line is not activated.
For the read operation, the word line is activated by the address input to the address
decoder. The activated word line closes both transistors (switches) T1 and T2, and
the bit values at points A and B are transmitted to their respective bit lines. The
sense/write circuit at the end of the bit lines sends the output to the processor.
For the write operation, the address provided to the decoder activates the word line
to close both switches. The bit value that is to be written into the cell is then provided
through the sense/write circuit, and the signals on the bit lines are stored in the cell.
2. DRAM :
DRAM stores binary information in the form of electric charges on capacitors. The
stored charge on the capacitors tends to leak away over time, so the capacitors must
be periodically refreshed to retain the data. Main memory is generally made up of
DRAM chips.
DRAM Memory Cell: Though SRAM is very fast, it is expensive because each of its
cells requires several transistors. A relatively less expensive RAM is DRAM, due to
the use of one transistor and one capacitor in each cell, as shown in the figure below,
where C is the capacitor and T is the transistor. Information is stored in a DRAM cell
in the form of a charge on the capacitor, and this charge needs to be periodically
refreshed.
For storing information in this cell, transistor T is turned on and an appropriate
voltage is applied to the bit line. This causes a known amount of charge to be stored
in the capacitor. After the transistor is turned off, the capacitor begins to discharge
due to its own leakage. Hence, the information stored in the cell can be read correctly
only if it is read before the charge on the capacitor drops below some threshold value.
Types of DRAM :
There are mainly 5 types of DRAM:
1. Asynchronous DRAM (ADRAM) –
The DRAM described above is the asynchronous type DRAM. The timing of the
memory device is controlled asynchronously. A specialized memory controller
circuit generates the necessary control signals to control the timing. The CPU must
take into account the delay in the response of the memory.
Random Access Memory (RAM) is used to store the programs and data being used
by the CPU in real-time. The data on the random access memory can be read, written,
and erased any number of times. RAM is a hardware element where the data being
currently used is stored. It is a volatile memory. Types of RAM:
1. Static RAM (SRAM), which stores a bit of data using the state of a six-transistor
memory cell.
2. Dynamic RAM (DRAM), which stores a bit of data using a pair of a transistor and
a capacitor, which together constitute a DRAM memory cell.
Read Only Memory (ROM) is a type of memory where the data has been
prerecorded. Data stored in ROM is retained even after the computer is turned off,
i.e., it is non-volatile. Types of ROM:
1. Programmable ROM, where the data is written after the memory chip has been
created. It is non-volatile.
2. Erasable Programmable ROM, where the data on this non-volatile memory chip
can be erased by exposing it to high-intensity UV light.
3. Electrically Erasable Programmable ROM, where the data on this non-volatile
memory chip can be electrically erased using field electron emission.
4. Mask ROM, in which the data is written during the manufacturing of the memory
chip.
The following table differentiates RAM and ROM:
Use – RAM is used to store the data that has to be currently processed by the CPU; ROM stores the instructions required during the bootstrap of the computer.
Size and Capacity – RAM has a large size with higher capacity; ROM has a small size with less capacity.
Evaluate
Reference:
https://www.geeksforgeeks.org/introduction-of-microprocessor/
Congratulations!
You are done with the
Prelim Term!